Spring Cloud Dataflow - Set max-connection-pool for Composed Task Runner
I've encountered an issue in Spring Cloud Dataflow when running multiple composed tasks at once.
The Hikari DataSource uses a maximum pool size of 10 connections by default. When running, for example, 10 composed tasks at once, that means 100 connections, plus the connections required by every task inside each composed task.
I tried running the Composed Task Runner locally with spring.datasource.hikari.maximum-pool-size=1 and it worked.
Is there any way to set this property for every Composed Task Runner by default? I did not find any documentation about configuring composed tasks in this way.
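For reference, the direction I'm currently exploring (hedged, because I haven't found it documented for the Composed Task Runner specifically) is the Data Flow server's common application properties for tasks, which the server passes to every task application it launches. If the Composed Task Runner is launched the same way, a server-side setting along these lines should cap its pool:

# SCDF server configuration (assumption: common task application properties
# also reach the Composed Task Runner when a composed task is launched)
spring.cloud.dataflow.applicationProperties.task.spring.datasource.hikari.maximum-pool-size=1
# or, as an environment variable on the server:
SPRING_CLOUD_DATAFLOW_APPLICATIONPROPERTIES_TASK_SPRING_DATASOURCE_HIKARI_MAXIMUM_POOL_SIZE=1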
See also questions close to this topic
-
best way to add a spring boot web application with JSPs to a docker container?
So I have a Spring Boot web application that uses JSPs, and I'm supposed to put it in a container.
My question is: what is the best way? I've tried copying the project and running it inside the container using mvnw, like so:
Dockerfile:
FROM openjdk:8-jdk-alpine
ADD . /input-webapp
WORKDIR /input-webapp
EXPOSE 8080:8080
ENTRYPOINT ./mvnw spring-boot:run
This works, but it takes a long time fetching the dependencies and feels messy.
I've also tried packaging it into a jar, copying only the jar, and running it:
Dockerfile:
FROM openjdk:8-jdk-alpine
ADD target/input-webapp-0.0.1-SNAPSHOT.jar input-webapp-0.0.1-SNAPSHOT.jar
EXPOSE 8080:8080
ENTRYPOINT ["java","-jar","input-webapp-0.0.1-SNAPSHOT.jar"]
But this way it can't see the JSPs, or at least I think that is the problem, since I get a 404.
So is there a better way? Can I copy the JSPs alongside the jar to make it work? Thanks.
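If switching to war packaging is acceptable, here is a sketch of what I'd try next (JSPs are known to have limitations inside an executable jar, while an executable war built by the Spring Boot Maven plugin still runs with java -jar, so the JSPs travel inside the archive):

# hypothetical Dockerfile, assumes the project is repackaged as a war
FROM openjdk:8-jdk-alpine
# the war name is an assumption based on the artifact id above
ADD target/input-webapp-0.0.1-SNAPSHOT.war /app.war
EXPOSE 8080
ENTRYPOINT ["java","-jar","/app.war"]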
-
build spring boot (mvnw) with docker can not use cache
Spring Boot Docker: Experimental Features
Docker 18.06 comes with some “experimental” features, including a way to cache build dependencies. To switch them on, you need a flag in the daemon (dockerd) and an environment variable when you run the client. With the experimental features, you get different output on the console, but you can see that a Maven build now only takes a few seconds instead of minutes, provided the cache is warm.
My Dockerfile cannot use the cache.
Dockerfile:
# syntax=docker/dockerfile:experimental
FROM openjdk:8-jdk-alpine as build
WORKDIR /workspace/app
COPY mvnw .
COPY .mvn .mvn
COPY pom.xml .
COPY src src
RUN --mount=type=cache,target=/root/.m2 ./mvnw install -DskipTests -s .mvn/wrapper/settings.xml
RUN mkdir -p target/extracted && java -Djarmode=layertools -jar target/*.jar extract --destination target/extracted

FROM openjdk:8-jre-alpine
ENV TZ Asia/Shanghai
RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone
RUN addgroup -S spring && adduser -S spring -G spring
USER spring:spring
ARG EXTRACTED=/workspace/app/target/extracted
ARG JAVA_OPTS="-Xmx100m -Xms100m"
COPY --from=build ${EXTRACTED}/dependencies/ ./
COPY --from=build ${EXTRACTED}/spring-boot-loader/ ./
COPY --from=build ${EXTRACTED}/snapshot-dependencies/ ./
COPY --from=build ${EXTRACTED}/application/ ./
ENTRYPOINT ["sh", "-c","java ${JAVA_OPTS} org.springframework.boot.loader.JarLauncher"]
run shell
DOCKER_BUILDKIT=1 docker build -t org/spring-boot .
Every build takes many minutes.
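As a fallback that only relies on ordinary layer caching (a sketch, not the cache-mount approach from the guide): resolving the dependencies right after copying pom.xml means that layer is reused as long as pom.xml is unchanged, even if the BuildKit cache mount is not picked up.

# build stage relying on plain layer caching instead of --mount=type=cache
FROM openjdk:8-jdk-alpine as build
WORKDIR /workspace/app
COPY mvnw .
COPY .mvn .mvn
COPY pom.xml .
# this layer stays cached until pom.xml changes
RUN ./mvnw dependency:go-offline -s .mvn/wrapper/settings.xml
COPY src src
RUN ./mvnw install -DskipTests -s .mvn/wrapper/settings.xml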
-
How can I delete a row by its SKU instead of its ID?
I'm trying to delete a row using the SKU of the product. I'm using Spring Boot and Angular. I got an error when I added the SKU to my button like this:
(click)="onDeleteProductBySku(deleteClick?.sku)"
It says that Property 'sku' does not exist on type '(product: Product) => void'.
On my command prompt, I got this error. How can I solve this problem?

Error: product/product.component.html:50:109 - error TS2339: Property 'sku' does not exist on type '(product: Product) => void'.
50 <button class="btn btn-outline-danger btn-sm me-2" (click)="onDeleteProductBySku(deleteClick?.sku)">Delete</button>
product/product.component.ts:12:16
12 templateUrl: './product.component.html',
   ~~~~~~~~~~~~~~~~~~~~~~~~~~
Error occurs in the template of component ProductComponent.
ProductsController.java --> This works in Postman.
// Delete a product record using sku
// http://localhost:8080/products/deletebysku?sku=12345678
@DeleteMapping("/products/deletebysku")
@ResponseBody
private void deleteProductBySku(@RequestParam String sku) {
    productsService.deleteProductBySku(sku);
}
product.component.ts
public deleteProduct!: Product;

public onDeleteProductBySku(sku: string): void {
  this.productServive.deleteProductBySku(sku).subscribe(
    (response: void) => {
      this.messageShow();
      console.log(response);
      this.getAllProduct();
    },
    (error: HttpErrorResponse) => {
      this.errorMessage(error.message);
    }
  );
}

public deleteClick(product: Product) {
  this.deleteProduct = product;
  console.log("delete by sku");
}
product.service.ts
public deleteProductBySku(sku: string): Observable<void> {
  return this.http.delete<void>(`${this.apiServerUrl}/products/deletebysku?sku=${sku}`);
}
product.component.html
<button class="btn btn-outline-danger btn-sm me-2" (click)="onDeleteProductBySku(deleteClick?.sku)">Delete</button>
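For context, the compiler complains because deleteClick is a method, not an object with an sku property. A sketch of one way around it, assuming the button is rendered inside an *ngFor over the product list (the names products and product are assumptions):

<!-- 'products' and the loop variable 'product' are assumed names -->
<tr *ngFor="let product of products">
  <button class="btn btn-outline-danger btn-sm me-2"
          (click)="onDeleteProductBySku(product.sku)">Delete</button>
</tr>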
-
spring boot transactional integration for event management
We have recently put a REST Spring Boot MVC application into production. We are using JPA with Postgresql in order to achieve some nonfunctional requirements: we need a Recovery Point Objective (RPO) of 0, and a short Recovery Time Objective (RTO). Our context also requires atomicity and consistency in the execution of business services, as provided by a relational RDBMS. Postgresql provides all that for us, and is one of the databases in the portfolio of our customer.
The application has an important part of processing which is done asynchronously. For that part, we have developed a custom event and batch module that integrates into the JPA - Postgresql ecosystem. Moreover, the use of a unique database transaction for job/event management and business logic provides a simple, consistent, and robust architecture over which we can develop application business logic.
This pattern in which we use the same transaction for both event management and business logic applies to the following use cases:
- A given business service may generate some events. In this situation, consistency and atomicity between the changes the business service makes to the business model and the registration of the corresponding events are important to us.
- The asynchronous execution of an event might trigger/execute a business service. In this situation, we also need atomicity and consistency between the side effects of this business service on the persistent data model and the management of the execution of the triggering event.
This provides us a simple and sound model, with the desired RPO and RTO properties.
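To make the pattern concrete, here is a minimal sketch of what we mean by using the same transaction for the business change and the event registration (entity and repository names are hypothetical):

import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class OrderService {

    private final OrderRepository orderRepository;       // hypothetical JPA repository
    private final EventRecordRepository eventRepository; // hypothetical repository for the event table

    public OrderService(OrderRepository orderRepository, EventRecordRepository eventRepository) {
        this.orderRepository = orderRepository;
        this.eventRepository = eventRepository;
    }

    // A single database transaction: if registering the event fails,
    // the business change is rolled back as well, and vice versa.
    @Transactional
    public void confirmOrder(Long orderId) {
        Order order = orderRepository.findById(orderId).orElseThrow();
        order.setStatus(OrderStatus.CONFIRMED);                             // business change
        eventRepository.save(new EventRecord("ORDER_CONFIRMED", orderId));  // event registration
    }
}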
We would like to evolve from our custom model to one based on a sound and stable open-source solution for event management that keeps atomicity, consistency, and the RPO and RTO as required. In this sense, we are analyzing the use of Spring Batch in this scenario. Would Spring Batch be suitable for our scenario, in which, apart from job execution, we also have some intensive event-execution peaks?
Is there any other open-source product that would integrate seamlessly into our Spring Boot ecosystem?
-
Spring Batch FlatFileItemWriter - output customization
I need to produce a CSV file with the following requirements:
- Every field is surrounded by double quotes
- Double quotes are escaped with backslash
- Backslash is escaped with backslash
Input:
- Field1
- Field2With\Backslash"DoubleQuotes"And|Pipe
- Field3
Expected output:
"Field1"|"Field2With\\Backslash\"DoubleQuotes\"And|Pipe"|"Field3"
Is it possible to obtain such an output?
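I don't know of a built-in escaping option for this, but FlatFileItemWriter delegates line rendering to a LineAggregator, so a custom one along these lines (item type simplified to a String array here) should produce the expected output:

import java.util.Arrays;
import java.util.stream.Collectors;

import org.springframework.batch.item.file.transform.LineAggregator;

// Sketch: escape backslashes and double quotes with a backslash, wrap every
// field in double quotes, and join the fields with a pipe.
public class EscapingQuotingLineAggregator implements LineAggregator<String[]> {

    @Override
    public String aggregate(String[] fields) {
        return Arrays.stream(fields)
                .map(f -> f.replace("\\", "\\\\")    // escape backslash first
                           .replace("\"", "\\\""))   // then escape double quotes
                .map(f -> "\"" + f + "\"")
                .collect(Collectors.joining("|"));
    }
}

It would be wired in with writer.setLineAggregator(new EscapingQuotingLineAggregator());.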
-
I have an hourly cron job fetching an external REST API with a fixed window of at most 6 hours; instead of using the last successful run, it always takes the full 6 hours
I have an hourly cron job that calls an external REST API and fetches details for a fixed window of at most 6 hours, even though a run might only need the last 2 or 3 hours. The problem is that instead of starting from the last successful run, it always uses the fixed 6-hour window. How can I start from the last successful run?
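The usual approach, regardless of the scheduler, is to persist a watermark. A rough sketch (the repository and client names are made up) where the job reads the end time of the last successful run, fetches from there, and only advances the watermark when the run succeeds:

import java.time.Duration;
import java.time.Instant;

public void runHourlyJob() {
    Instant now = Instant.now();
    // Fall back to the fixed 6-hour window only when there is no previous successful run.
    Instant from = jobStateRepository.findLastSuccessfulRun()   // hypothetical store (DB row, file, ...)
                                     .orElse(now.minus(Duration.ofHours(6)));

    externalApiClient.fetchDetails(from, now);                  // hypothetical REST client call

    // Advance the watermark only after the fetch succeeded.
    jobStateRepository.saveLastSuccessfulRun(now);
}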
-
Commit/rollback transactions manually with multiple datasources in Micronaut
I need to commit/rollback transactions manually with multiple datasources in micronaut-data. But when the second transaction is committed (db2TransactionManager.commit(db2Transaction)), it throws the exception below. How can I solve this problem, or is there another way? I also tried defining @Repository and @Transactional etc., but then there is the other problem that the transactions are committed separately.

java.lang.IllegalStateException: Transaction synchronization is not active
    at io.micronaut.transaction.support.TransactionSynchronizationManager.getSynchronizations(TransactionSynchronizationManager.java:334)
    at io.micronaut.transaction.support.TransactionSynchronizationUtils.triggerBeforeCompletion(TransactionSynchronizationUtils.java:100)
    at io.micronaut.transaction.support.AbstractSynchronousTransactionManager.triggerBeforeCompletion(AbstractSynchronousTransactionManager.java:1011)
    at io.micronaut.transaction.support.AbstractSynchronousTransactionManager.processCommit(AbstractSynchronousTransactionManager.java:863)
    at io.micronaut.transaction.support.AbstractSynchronousTransactionManager.commit(AbstractSynchronousTransactionManager.java:807)
DemoService.java
@Singleton
public class DemoService {

    private final SynchronousTransactionManager<Connection> db1TransactionManager;
    private final SynchronousTransactionManager<Connection> db2TransactionManager;
    private final SynchronousTransactionManager<Connection> db3TransactionManager;

    @Inject
    public DemoService(@Named("db1") SynchronousTransactionManager<Connection> db1TransactionManager,
                       @Named("db2") SynchronousTransactionManager<Connection> db2TransactionManager,
                       @Named("db3") SynchronousTransactionManager<Connection> db3TransactionManager) {
        this.db1TransactionManager = db1TransactionManager;
        this.db2TransactionManager = db2TransactionManager;
        this.db3TransactionManager = db3TransactionManager;
    }

    public void processRecords() {
        TransactionStatus<Connection> db1Transaction = db1TransactionManager.getTransaction(null);
        db1TransactionManager.executeWrite(status -> insertDb1(status.getConnection()));

        TransactionStatus<Connection> db2Transaction = db2TransactionManager.getTransaction(null);
        db2TransactionManager.executeWrite(status -> insertDb2(status.getConnection()));

        TransactionStatus<Connection> db3Transaction = db3TransactionManager.getTransaction(null);
        db3TransactionManager.executeWrite(status -> insertDb3(status.getConnection()));

        db1TransactionManager.commit(db1Transaction);
        db2TransactionManager.commit(db2Transaction);
        db3TransactionManager.commit(db3Transaction);
    }

    private int insertDb1(Connection db1Connection) throws SQLException {
        String sql = "INSERT INTO tbl_demo(dt) VALUES (SYSTIMESTAMP)";
        PreparedStatement ps = db1Connection.prepareStatement(sql);
        return ps.executeUpdate();
    }

    private int insertDb2(Connection db2Connection) throws SQLException {
        String sql = "INSERT INTO tbl_demo(dt) VALUES (SYSTIMESTAMP)";
        PreparedStatement ps = db2Connection.prepareStatement(sql);
        return ps.executeUpdate();
    }

    private int insertDb3(Connection db3Connection) throws SQLException {
        String sql = "INSERT INTO tbl_demo(dt) VALUES (SYSTIMESTAMP)";
        PreparedStatement ps = db3Connection.prepareStatement(sql);
        return ps.executeUpdate();
    }
}
application.yml
micronaut:
  application:
    name: demo-app

datasources:
  db1:
    url: jdbc:oracle:thin:@db1.demo.com:1521:db1
    driverClassName: oracle.jdbc.OracleDriver
    username: test
    password: test
    schema-generate: none
    dialect: ORACLE
    auto-commit: false
    maximum-pool-size: 3
    minimum-idle: 1
    idle-timeout: 5
  db2:
    url: jdbc:oracle:thin:@db2.demo.com:1522:db2
    driverClassName: oracle.jdbc.OracleDriver
    username: test
    password: test
    schema-generate: none
    dialect: ORACLE
    auto-commit: false
    maximum-pool-size: 3
    minimum-idle: 1
    idle-timeout: 5
  db3:
    url: jdbc:oracle:thin:@db3.demo.com:1522:db3
    driverClassName: oracle.jdbc.OracleDriver
    username: test
    password: test
    schema-generate: none
    dialect: ORACLE
    auto-commit: false
    maximum-pool-size: 3
    minimum-idle: 1
    idle-timeout: 5

netty:
  default:
    allocator:
      max-order: 3
pom.xml
...
<properties>
    <packaging>jar</packaging>
    <jdk.version>11</jdk.version>
    <release.version>11</release.version>
    <micronaut.version>3.4.2</micronaut.version>
    <micronaut.data.version>3.3.0</micronaut.data.version>
    <exec.mainClass>com.demo.Application</exec.mainClass>
    <micronaut.runtime>netty</micronaut.runtime>
</properties>
...
<dependency>
    <groupId>io.micronaut.data</groupId>
    <artifactId>micronaut-data-jdbc</artifactId>
    <scope>compile</scope>
</dependency>
<dependency>
    <groupId>io.micronaut.sql</groupId>
    <artifactId>micronaut-jdbc-hikari</artifactId>
    <scope>compile</scope>
</dependency>
...
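One observation, offered as a hypothesis rather than a confirmed fix: processRecords opens a transaction with getTransaction(...) and then calls executeWrite(...), which manages its own transaction scope and may clean up the synchronization that the later manual commit(...) still expects. A sketch of the alternative, doing the JDBC work directly on the connection of the manually opened transaction (note there is no XA here, so a failing commit on one datasource cannot undo a commit that already succeeded on another):

public void processRecords() throws SQLException {
    TransactionStatus<Connection> db1Tx = db1TransactionManager.getTransaction(null);
    TransactionStatus<Connection> db2Tx = db2TransactionManager.getTransaction(null);
    TransactionStatus<Connection> db3Tx = db3TransactionManager.getTransaction(null);
    try {
        // use the connections owned by the manually opened transactions,
        // instead of opening a second scope via executeWrite(...)
        insertDb1(db1Tx.getConnection());
        insertDb2(db2Tx.getConnection());
        insertDb3(db3Tx.getConnection());

        db1TransactionManager.commit(db1Tx);
        db2TransactionManager.commit(db2Tx);
        db3TransactionManager.commit(db3Tx);
    } catch (Exception e) {
        // best-effort rollback; rolling back an already-completed transaction
        // throws, which safeRollback simply ignores
        safeRollback(db1TransactionManager, db1Tx);
        safeRollback(db2TransactionManager, db2Tx);
        safeRollback(db3TransactionManager, db3Tx);
        throw e;
    }
}

private void safeRollback(SynchronousTransactionManager<Connection> manager,
                          TransactionStatus<Connection> tx) {
    try {
        manager.rollback(tx);
    } catch (Exception ignored) {
        // transaction already completed or connection gone; nothing more to do
    }
}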
-
Problem while filling a RadGridView with a BindingList of objects
I need some help on a little problem; I want to use a component which inherits from a RadGridView from Telerik.
I want to fill it with a BindingList of RowService, a class I created. The code looks like this:

public class RowService
{
    public String ServiceName { get; set; }
    public String Description { get; set; }
    public ServiceControllerStatus ServiceStatus { get; set; }
    public String State { get; set; }
    public ServiceStartModeEx StartMode { get; set; }
    public String StartType { get; set; }
    public String StartName { get; set; }
    public String PathName { get; set; }
    public String Version { get; set; }
    public String NewVersion { get; set; }
}
The columns have the same name as the attributes of this class.
I add my different RowService instances to a BindingList called bSourceServices. When I try to assign it to my grid with the following line, the grid stays empty:

plkUcGridView1.DataSource = bSourceServices;
Here is a picture of what I obtain
My grid is empty, even though bSourceServices isn't (I checked with a foreach and it contains what it should; the list isn't empty). Thanks to anyone who can help me understand this problem!
-
Hikari connection pool with Oracle on Azure
We are using the Hikari connection pool to connect to Oracle on Azure. We tried increasing the timeout value to 12000000 after consecutive connection timeout errors with a value of 600000.
The error is intermittent:

2022-04-06 08:32:18 java.lang.Exception: Error while getting connection from connection pool for:#### Connection is not available, request timed out after 300001ms.

Whatever timeout value we use, we get this error intermittently.
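A connection timeout that persists no matter how large the timeout is often means connections are not being returned to the pool rather than the pool being too small. A hedged sketch of Hikari settings (Spring-style property names assumed here) that help confirm or rule that out:

# log a warning with a stack trace when a connection is held longer than 60 s
spring.datasource.hikari.leak-detection-threshold=60000
# fail fast instead of queueing requests for minutes
spring.datasource.hikari.connection-timeout=30000
# keep the pool size modest so a leak surfaces quickly
spring.datasource.hikari.maximum-pool-size=10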
-
JDBC Connection not being released
I'm calling a stored procedure below. Everything works fine, but I have observed (by visualizing connections on the DB) that for some reason the application does not release the connection after executing the code below. Every time this code is executed, a new connection is created, until the limit is reached and it is no longer able to acquire a JDBC connection.
NOTE: the em (entity manager) is autowired here:
Session session = em.unwrap(Session.class);
try {
    ProcedureCall call = session.createStoredProcedureCall("sp_name");
    Output outputs = call.getOutputs().getCurrent();
    List<Object[]> resultList = ((ResultSetOutput) outputs).getResultList();
    session.close();
    return resultList;
} catch (Exception e) {
    session.close();
    return null;
}
Below is the HikariCP configuration
spring.datasource.hikari.maximumPoolSize=10
spring.datasource.hikari.minimumIdle=2
spring.datasource.hikari.idleTimeout=15000
spring.datasource.hikari.maxLifetime=600000
spring.datasource.hikari.connectionTimeout=30000
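One thing that may be worth trying (an assumption about the cause, not a confirmed fix): the ProcedureOutputs returned by call.getOutputs() holds the underlying CallableStatement, and Hibernate exposes a release() method on it; releasing it in a finally block hands the JDBC resources back even on the success path. The Session obtained from em.unwrap(Session.class) is the container-managed one, so in this sketch it is left to the framework rather than closed manually.

Session session = em.unwrap(Session.class);
ProcedureCall call = session.createStoredProcedureCall("sp_name");
ProcedureOutputs outputs = call.getOutputs();
try {
    Output current = outputs.getCurrent();
    return ((ResultSetOutput) current).getResultList();
} catch (Exception e) {
    return null;
} finally {
    outputs.release(); // releases the CallableStatement so the pooled connection can go back
}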
-
How to use 1 configuration for all aliases of an SCDF task in a flow
I have an SCDF composite task flow with multiple tasks. At the end of the flow OR when any task in the flow fails, a final "analyzer" task must be launched to create and send a report about the flow processing.
The composite task looks like this:
task1 'FAILED' -> analyzer && task2 'FAILED' -> analyzer && task3 '*' -> analyzer
When I launch this task flow, I get the following error message because the task analyzer is used multiple times:

duplicate app name. Use a label to ensure uniqueness
To prevent this, I can alias the analyzer task for every use. Then I get:

task1 'FAILED' -> task1-analyzer:analyzer && task2 'FAILED' -> task2-analyzer:analyzer && task3 '*' -> task3-analyzer:analyzer
This works fine, BUT with this definition my analyzer task fails because it is missing its configuration properties. It seems that I now need to configure each alias of the analyzer task individually, even when the configuration is the same for all of them. For example,
deployer.my-composite.analyzer.kubernetes...
becomes
deployer.my-composite.task1-analyzer.kubernetes...
deployer.my-composite.task2-analyzer.kubernetes...
deployer.my-composite.task3-analyzer.kubernetes...
Is it possible to configure all aliases of a task with just one configuration? I have flows with about 10 processing tasks and it is quite annoying to repeat the same properties 10 times.
Or is there a simpler way to express the conditional flow so that I don't need 10 aliases of my analyzer task?
-
Spring Cloud Data Flow Scaling with Apache Kafka
I've followed many Spring tutorials on scaling (here, and here), but still haven't found the secret knock that keeps all consumers busy. The intent is to have 5 (for example) sink processes constantly busy. The documentation implies that for a simple stream of source | processor | sink, I would use the following stream properties:

deployer.sink.count=5
app.processor.producer.partitionKeyExpression=headers['X-PartitionKey']?:''
app.sink.spring.cloud.stream.kafka.binder.autoAddPartitions=true
app.sink.spring.cloud.stream.kafka.binder.minPartitionCount=5
Naturally, the processor adds a header field called X-PartitionKey that ends up being unique enough that it should balance adequately.
What I find is that only 2 or 3 ever receive messages, and the remainder sit idle. It feels like the first few watch multiple partitions, and the others just sit in stand-by saying Found no committed offset for partition. I can see by using kowl that the messages are getting unique partitions, but the load never gets balanced to reflect that. Am I missing configuration somewhere? Is it a Kafka binder issue?
Update #1
I've noticed that each instance isn't getting a unique clientId. Not sure if this is relevant.
Instance 1 - [Consumer clientId=consumer-mystream-2, groupId=mystream]
Instance 2 - [Consumer clientId=consumer-mystream-3, groupId=mystream]
Instance 3 - [Consumer clientId=consumer-mystream-3, groupId=mystream]
Instance 4 - [Consumer clientId=consumer-mystream-3, groupId=mystream]
Instance 5 - [Consumer clientId=consumer-mystream-2, groupId=mystream]
Update #2
Using the below 3 properties, all 5 instances are busy. This method seems to bypass SCSt and uses a MessageKey instead of PartitionKey. Not perfect yet, but better.
deployer.sink.count=5
app.processor.spring.cloud.stream.kafka.binder.messageKeyExpression=headers['X-PartitionKey']?:''
app.sink.spring.cloud.stream.kafka.binder.autoAddPartitions=true
Update #3
Adding spring.cloud.stream.bindings.output.producer.partition-count equal to the actual number of Kafka partitions seems to have resolved the issue. If your deployer.*.count property is less than the actual number of partitions, the excess partitions will not be assigned messages, but consumers will be assigned to them anyway and may sit idle.
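For completeness, this is the combined set of properties the updates above converge on, hedged because it is not spelled out whether the final test paired the partition count with the original partitionKeyExpression or the messageKeyExpression from Update #2; the point from Update #3 is keeping the producer's partition count equal to the real number of Kafka partitions:

deployer.sink.count=5
app.processor.producer.partitionKeyExpression=headers['X-PartitionKey']?:''
app.processor.spring.cloud.stream.bindings.output.producer.partition-count=5
app.sink.spring.cloud.stream.kafka.binder.autoAddPartitions=true
app.sink.spring.cloud.stream.kafka.binder.minPartitionCount=5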