Infinispan connection refused
I'm trying to set up a very simple Infinispan cross-site (x-site) setup, composed of two Keycloak instances that embed Infinispan. I created the following entry in my XML configuration:
<outbound-socket-binding name="remote-cache">
<remote-destination host="${remote.cache.host:localhost}" port="${remote.cache.port:11222}"/>
</outbound-socket-binding>
However, when I start my app (with the option -Dremote.cache.host=<remote_site_ip>) it fails with a CacheException "Unable to start cache loaders", caused by a "Connection refused" exception.
I have two questions:
- How do I bootstrap my cluster? Isn't a connection refused expected during the setup phase, since the two sites are not started at exactly the same time?
- I don't see anywhere that the application is listening on port 11222, and the port doesn't appear in netstat. How do I configure this?
Any help would be greatly appreciated,
thanks.
See also questions close to this topic
-
What causes a synchronization lock to get stuck?
I have this simple code; it's just a test case:

try {
    synchronized (this) {
        while (number != 4) {
            System.out.println("Waiting...");
            this.wait();
        }
        number = 4;
        this.notifyAll();
    }
} catch (InterruptedException e) {}

From what I know about the wait() method, once wait is invoked, whatever comes after it should eventually run. However, I can't seem to get the wait to end in this case. I have tried placing a second synchronized block, but that doesn't seem to work.
Do you know what could be causing the wait to hang? I looked up deadlocking, but that seems to be an entirely different matter.
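For comparison, a minimal sketch of a working wait()/notifyAll() pair (names here are illustrative, not from the original post): wait() only returns after another thread calls notifyAll() on the same monitor and the condition changes. In the snippet above no other thread ever notifies, so the waiter blocks forever.

```java
public class WaitNotifyDemo {
    int number = 0;
    private final Object lock = new Object();

    // Blocks until some other thread sets number to 4 and notifies.
    void waitForFour() throws InterruptedException {
        synchronized (lock) {
            while (number != 4) {
                lock.wait();   // releases the lock while waiting
            }
        }
    }

    // Runs in a *different* thread: changes the condition, then notifies.
    void setFour() {
        synchronized (lock) {
            number = 4;
            lock.notifyAll();  // wakes the waiter; its loop condition now holds
        }
    }

    public static void main(String[] args) throws InterruptedException {
        WaitNotifyDemo demo = new WaitNotifyDemo();
        Thread waiter = new Thread(() -> {
            try { demo.waitForFour(); } catch (InterruptedException ignored) {}
        });
        waiter.start();
        Thread.sleep(100);     // give the waiter time to block (demo only)
        demo.setFour();
        waiter.join();
        System.out.println("number = " + demo.number);
    }
}
```

The key point: the wait and the notify must happen on the same monitor object but in two different threads; a single thread that waits can never notify itself.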
-
Why does the Cloud Spanner library for Java perform `ExecuteStreamingSql` instead of `ExecuteSql` when it runs a read query?
I don't have any trouble with it, but I'd like to know the difference between ExecuteStreamingSql and ExecuteSql, and to understand why the library uses ExecuteStreamingSql.
-
How to find smallest missing number in range with given array in Dart or Java
Imagine that you have a range from 1 to 6 and an array arr = [1, 2, 3]. How do you write a nice algorithm where the function returns the smallest possible missing number in this range according to the data in the array? Meaning:
input: arr[1, 2, 3] output: 4 is smallest missing
or
input: arr[2, 3, 4] output: 1 is smallest missing
- In case there are no missing numbers, the function can return 7; that's okay.
- In case the array is somehow empty, you may return 1.
I tried my best with some code from www.geeksforgeeks.com, but it didn't help me. It's okay if you write the code in Java, but Dart is the language I'm working in.
Thanks a lot in advance for being helpful!
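One straightforward sketch in Java (the same approach ports directly to Dart): mark which values of the range appear in the array, then scan for the first gap. Class and method names are my own, not from the question.

```java
public class SmallestMissing {

    // Returns the smallest value in [1, max] that is absent from arr,
    // or max + 1 if every value in the range is present.
    static int smallestMissing(int[] arr, int max) {
        boolean[] seen = new boolean[max + 1];  // seen[i] == true if i occurs in arr
        for (int v : arr) {
            if (v >= 1 && v <= max) {
                seen[v] = true;                 // ignore values outside the range
            }
        }
        for (int i = 1; i <= max; i++) {
            if (!seen[i]) {
                return i;                       // first gap found
            }
        }
        return max + 1;                         // no gap: range fully covered
    }

    public static void main(String[] args) {
        System.out.println(smallestMissing(new int[]{1, 2, 3}, 6));          // 4
        System.out.println(smallestMissing(new int[]{2, 3, 4}, 6));          // 1
        System.out.println(smallestMissing(new int[]{}, 6));                 // 1
        System.out.println(smallestMissing(new int[]{1, 2, 3, 4, 5, 6}, 6)); // 7
    }
}
```

This runs in O(n + max) time and O(max) space; for a small fixed range like 1..6 that is effectively constant.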
-
How to find out which Jakarta EE implementations are used by WildFly?
WildFly is a Jakarta EE compatible application server. This means that all of the Jakarta EE APIs have to be implemented by the server. I am interested in the concrete implementations that WildFly is using for a specific version of WildFly. What is the best way to create a table with columns Jakarta EE API - Implementation of WildFly (or is there some online resource listing this)? I have looked deeply into the WildFly documentation, but so far without success.
-
AEAD not supported
I'm using Wildfly 21 and I configured the AEAD ciphers, but a security scan still complains that AEAD is not supported. I wonder if there is a way to tell Wildfly to only use the server-side ciphers, or am I missing another configuration somewhere? Here's the relevant part of my standalone.xml:
<server name="default-server">
    <http-listener name="default" socket-binding="http" redirect-socket="https" enable-http2="true"/>
    <https-listener name="https" socket-binding="https" security-realm="ApplicationRealm"
        enabled-cipher-suites="ECDHE-ECDSA-AES256-GCM-SHA384,ECDHE-RSA-AES256-GCM-SHA384,ECDHE-ECDSA-CHACHA20-POLY1305,ECDHE-RSA-CHACHA20-POLY1305,ECDHE-ECDSA-AES128-GCM-SHA256,ECDHE-RSA-AES128-GCM-SHA256,ECDHE-ECDSA-AES256-SHA384,ECDHE-RSA-AES256-SHA384,ECDHE-ECDSA-AES128-SHA256,ECDHE-RSA-AES128-SHA256"
        enabled-protocols="TLSv1.2" enable-http2="false"/>
    <host name="default-host" alias="localhost">
        <http-invoker security-realm="ApplicationRealm"/>
    </host>
</server>
I would really appreciate any help you could give me. Thanks.
-
How to redirect application path in wildfly?
Excuse me, I'm new to this. I have developed an application with Maven, and when I run it in my WildFly it opens at the following path: "127.0.0.1:8080/myapp-1.0.0" and everything runs perfectly. But I simply want x.x.x.x/ to point to x.x.x.x:8080/myapp-x.x.x.
I don't know if it is a configuration in standalone.xml that does the redirect, or something more complicated.
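One standard WildFly mechanism for this (a sketch, assuming a plain WAR deployment; not specific to this asker's setup) is to override the web application's context root in a WEB-INF/jboss-web.xml file inside the WAR, so the app is served at the server root instead of under the artifact name:

```
<?xml version="1.0" encoding="UTF-8"?>
<!-- WEB-INF/jboss-web.xml: deploy this application at the server root -->
<jboss-web>
    <context-root>/</context-root>
</jboss-web>
```

This makes the app answer at x.x.x.x:8080/; serving it on port 80 (so that plain x.x.x.x/ works) is a separate change to the http socket-binding or to a front-end proxy.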
-
How do I get Keycloak to connect to MySQL DB?
I've been crawling a number of sites like this one trying to get Keycloak working with a MySQL persistence layer. I am using Docker, but with my own images, so Keycloak pulls passwords and other sensitive data from a secrets manager instead of environment variables or Docker secrets. Other than that, the images are pretty close to stock.
Anyway, I have a MySQL 8 container up and running, and from within the Keycloak 12.0.3 container I can connect to the MySQL container fine:
# mysql -h mysql -u keycloak --password=somethingtochangelater -D keycloak -e "SHOW DATABASES;"
mysql: [Warning] Using a password on the command line interface can be insecure.
+--------------------+
| Database           |
+--------------------+
| information_schema |
| keycloak           |
+--------------------+
So there are no connectivity problems between the instances, and that username/password has access to the keycloak database fine.
I then ran several commands to configure the Keycloak instance (Keycloak is installed at /opt/myco/bin/keycloak):

/opt/myco/bin/keycloak/bin/standalone.sh &

# Pausing for server startup
sleep 20

# Add mysql module - JDBC driver unpacked at /opt/myco/bin/keycloak-install/mysql-connector-java-8.0.23/mysql-connector-java-8.0.23.jar
/opt/myco/bin/keycloak/bin/jboss-cli.sh --connect --command="module add --name=com.mysql --dependencies=javax.api,javax.transaction.api --resources=/opt/myco/bin/keycloak-install/mysql-connector-java-8.0.23/mysql-connector-java-8.0.23.jar --module-root-dir=/opt/myco/bin/keycloak/modules/system/layers/keycloak/"

# Removing h2 datasource
/opt/myco/bin/keycloak/bin/jboss-cli.sh --connect --command="/subsystem=datasources/data-source=KeycloakDS:remove"

# Adding MySQL datasource
/opt/myco/bin/keycloak/bin/jboss-cli.sh --connect --command="/subsystem=datasources/jdbc-driver=mysql:add(driver-name=mysql,driver-module-name=com.mysql,driver-class-name=com.mysql.cj.jdbc.Driver)"

# TODO - add connection pooling options here...

# Configuring data source
/opt/myco/bin/keycloak/bin/jboss-cli.sh --connect --command="data-source add --name=KeycloakDS --jndi-name=java:jboss/datasources/KeycloakDS --enabled=true --password=somethingtochangelater --user-name=keycloak --driver-name=com.mysql --use-java-context=true --connection-url=jdbc:mysql://mysql:3306/keycloak?useSSL=false&characterEncoding=UTF-8"

# Testing connection
/opt/myco/bin/keycloak/bin/jboss-cli.sh --connect --command="/subsystem=datasources/data-source=KeycloakDS:test-connection-in-pool"

# Creating admin user
/opt/myco/bin/keycloak/bin/add-user-keycloak.sh -r master -u "admin" -p "somethingelse"

# Shutting down initial server
/opt/myco/bin/keycloak/bin/jboss-cli.sh --connect command=":shutdown"
This all appears to run fine. Note especially that test-connection-in-pool has no problems:

{
    "outcome" => "success",
    "result" => [true],
    "response-headers" => {"process-state" => "reload-required"}
}
However, when I go to start the server back up again, it crashes with several exceptions, starting with:
22:31:52,484 FATAL [org.keycloak.services] (ServerService Thread Pool -- 56) Error during startup: java.lang.RuntimeException: Failed to connect to database
    at org.keycloak.keycloak-model-jpa@12.0.3//org.keycloak.connections.jpa.DefaultJpaConnectionProviderFactory.getConnection(DefaultJpaConnectionProviderFactory.java:377)
    at org.keycloak.keycloak-model-jpa@12.0.3//org.keycloak.connections.jpa.updater.liquibase.lock.LiquibaseDBLockProvider.lazyInit(LiquibaseDBLockProvider.java:65)
    ...
It keeps going, though I suspect that exception is ultimately the fatal one, and it eventually dies with:
22:31:53,114 ERROR [org.jboss.as.controller.management-operation] (ServerService Thread Pool -- 40) WFLYCTL0190: Step handler org.jboss.as.controller.AbstractAddStepHandler$1@33063168 for operation add at address [
    ("subsystem" => "jca"),
    ("workmanager" => "default"),
    ("short-running-threads" => "default")
] failed -- java.util.concurrent.RejectedExecutionException: java.util.concurrent.RejectedExecutionException
    at org.jboss.threads@2.4.0.Final//org.jboss.threads.RejectingExecutor.execute(RejectingExecutor.java:37)
    at org.jboss.threads@2.4.0.Final//org.jboss.threads.EnhancedQueueExecutor.rejectShutdown(EnhancedQueueExecutor.java:2029)
    ...
The module at /opt/myco/bin/keycloak/modules/system/layers/keycloak/com/mysql/main has the jar file and module.xml:

# ls
module.xml  mysql-connector-java-8.0.23.jar
# cat module.xml
<?xml version='1.0' encoding='UTF-8'?>
<module xmlns="urn:jboss:module:1.1" name="com.mysql">
    <resources>
        <resource-root path="mysql-connector-java-8.0.23.jar"/>
    </resources>
    <dependencies>
        <module name="javax.api"/>
        <module name="javax.transaction.api"/>
    </dependencies>
The standalone.xml file looks reasonable to me:
...
<subsystem xmlns="urn:jboss:domain:datasources:6.0">
    <datasources>
        ...
        <datasource jndi-name="java:jboss/datasources/KeycloakDS" pool-name="KeycloakDS" enabled="true" use-java-context="true">
            <connection-url>jdbc:mysql://mysql:3306/keycloak?useSSL=false&characterEncoding=UTF-8</connection-url>
            <driver>com.mysql</driver>
            <security>
                <user-name>keycloak</user-name>
                <password>somethingtochangelater</password>
            </security>
        </datasource>
        <drivers>
            <driver name="h2" module="com.h2database.h2">
                <xa-datasource-class>org.h2.jdbcx.JdbcDataSource</xa-datasource-class>
            </driver>
            <driver name="mysql" module="com.mysql">
                <driver-class>com.mysql.cj.jdbc.Driver</driver-class>
            </driver>
        </drivers>
    </datasources>
...
So... does anyone have any idea what's going on? What else do I need to do to get Keycloak talking properly to MySQL? Is there anything else I can do to debug the issue?
-
Keycloak Admin CLI - Updating a realm with JSON file
Objective:
Our objective is to update the entire realm given a JSON file.
Problem:
The issue at hand is that we cannot seem to update the realm entirely, including the client changes.
Actions taken:
Option 1: Based on the Keycloak Admin CLI documentation, a Keycloak realm can be updated from a JSON file using the following command:

kcadm.sh update realms/demorealm -f demorealm.json

However, when making an update to a property within the clients section of the JSON file (i.e. a client's description), the change is not reflected within the Keycloak realm.
We also took a look at kcadm.sh help update. We tried to utilize the merge flag ("Merge new values with existing configuration on the server. Merge is automatically enabled unless --file is specified"). We do have a file specified and therefore tried to enable merging explicitly using the flag, but with no success: the clients did not change as expected.
Option 2: We have tried the partial import command found in the Keycloak documentation:

$ kcadm.sh create partialImport -r demorealm -s ifResourceExists=OVERWRITE -o -f demorealm.json

With ifResourceExists set to OVERWRITE, it accurately changes clients. However, it alters other realm configuration, such as assigned user roles. For example: after manually creating a new user via the Keycloak UI and setting roles for the user, the roles are lost after running the command with the OVERWRITE flag set. Setting ifResourceExists to SKIP does not properly update values for a client, as the client is skipped altogether.
Question: Is it possible, either with a different command or different flags, to update a Keycloak realm in its entirety with a single Keycloak admin command? Neither Option 1 nor Option 2 listed above worked for us. We want to avoid making individual "update client" calls when updating the realm.
Notes:
We have properly authenticated and confirmed that changes made at the realm level are reflected in Keycloak.
-
user setup in keycloak with organization information
I am looking for the recommended approach to create and manage users with an organization name and ID in Keycloak (through an HTML form).
I read the following documentation but cannot find a straightforward way to manage users there with an organization name and org ID: https://www.keycloak.org/docs/latest/authorization_services/
The approach I used was custom attributes, but I am not sure whether that is the recommended approach.
Step-1: For every user, create a custom attribute "OrgId" with a value unique to that organization, let's say 1.
Step-2: For the client that the user belongs to, define a protocol mapper "OrgId".
Step-3: Create a table for organizations in our system, and add an organization entry there when the first user for that organization is created.
Problem I am trying to solve: We need to keep track of various actions that users belonging to an organization are doing; for example, we need to track which organization bought what type of products from our system.
-
How to have highly available Moodle in Kubernetes?
I want to set up highly available Moodle in K8s (on-prem). I'm using Bitnami Moodle with Helm charts.
After a successful Moodle installation, it works. But when a K8s node goes down, the Moodle web page displays/reverts/redirects to the Moodle installation web page. It's like a loop.
Persistent storage is rook-ceph. The Moodle PVC is ReadWriteMany, while the MySQL one is ReadWriteOnce.
The following command was used to deploy Moodle.
helm install moodle --set global.storageClass=rook-cephfs,replicaCount=3,persistence.accessMode=ReadWriteMany,allowEmptyPassword=false,moodlePassword=Moodle123,mariadb.architecture=replication bitnami/moodle
Any help on this is appreciated.
Thanks.
-
High-Availability not working in Hadoop cluster
I am trying to move my non-HA namenode to HA. After setting up all the configurations for the JournalNode by following the Apache Hadoop documentation, I was able to bring the namenodes up. However, the namenodes crash immediately, throwing the following error.

ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: Failed to start namenode.
java.io.IOException: There appears to be a gap in the edit log. We expected txid 43891997, but got txid 45321534.

I tried to recover the edit logs, initialize the shared edits, etc., but nothing works. I am not sure how to fix this problem without formatting the namenode, since I do not want to lose any data.
Any help is greatly appreciated. Thanks in advance.
-
Apache Kafka Consume from Slave/ISR node
I understand the concept of master/slave and data replication in Kafka, but I don't understand why consumers and producers are always routed to the master node when writing to or reading from a partition, instead of being able to read from any ISR (in-sync replica)/slave.
The way I think about it, if all consumers are directed to one single master node, then more hardware is required to handle read/write operations from large consumer groups/producers.
Is it possible to read and write on slave nodes, or will consumers/producers always reach out to the master node of that partition?
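For background (hedged: this assumes Apache Kafka 2.4 or newer, where KIP-392 added "follower fetching"; the property names below are from my understanding of the configuration reference and should be verified against it): consumers can be served by a follower replica when the broker is given a rack-aware replica selector and the consumer declares its location. Producers still always write to the partition leader. A sketch:

```
# Broker config (server.properties): let brokers pick a replica near the consumer.
replica.selector.class=org.apache.kafka.common.replica.RackAwareReplicaSelector
broker.rack=us-east-1a

# Consumer config: declare the consumer's location; if it matches a replica's
# broker.rack, fetches can be served by that (possibly follower) replica.
client.rack=us-east-1a
```

Without these settings the default behavior is exactly what the question describes: all reads and writes go to the leader.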
-
How to set InfinispanQueryBuilder parameters in Apache Camel Route using rest DSL
I need to query Infinispan using parameters passed to a REST service. The application is based on Apache Camel Quarkus. The following code works fine as long as the InfinispanQueryBuilder parameters are hard-coded. I can't figure out how to pass the path params. For the record, I have checked that they are there.

@ApplicationScoped
public class IbanDetailsRouter extends RouteBuilder {

    @Override
    public void configure() throws Exception {
        restConfiguration().bindingMode(RestBindingMode.json);

        rest("/sepaplus").get("/iban/{isoCountryCode}/{ibanNationalId}")
            .outType(Iban.class)
            .to("direct:ibanDetails");

        from("direct:ibanDetails")
            .log("country code = ${header.isoCountryCode} - iban national id = ${header.ibanNationalId}")
            .setHeader(InfinispanConstants.OPERATION, constant(InfinispanOperation.QUERY))
            .setHeader(InfinispanConstants.QUERY_BUILDER, constant(new InfinispanQueryBuilder() {
                @Override
                public Query build(QueryFactory queryFactory) {
                    return queryFactory.create("FROM sepaplus.Iban iban WHERE iban.ibanIsoCountryCode = :ibanIsoCountryCode "
                            + "AND iban.ibanNationalId = :ibanNationalId")
                        .setParameter("ibanIsoCountryCode", "CH")
                        .setParameter("ibanNationalId", "08910");
                }
            }))
            .to("infinispan:sepaplus-cache")
            .log("Instance: ${body.get(0).getInstitutionName()}")
            .transform().simple("${body.get(0)}");
    }
}
Many thanks for the help
-
Replicated cache with a NON_XA transaction fails to synchronize cache data upon Wildfly v.21 (same with v.22) second cluster node start
Works perfectly on Wildfly v.18 (Infinispan 9.4.16) but not on v.21 or v.22 (Infinispan 11.0.4).
A standard Wildfly v.22 (same with v.21) configuration using standalone-full-ha.xml running as a cluster of two nodes.
Cache is configured as:
<cache-container name="opencell">
    <transport lock-timeout="60000"/>
    <replicated-cache name="opencell-cft-cache" statistics-enabled="true">
        <transaction mode="NON_XA"/>
    </replicated-cache>
</cache-container>
Cache is accessed this way:
@Resource(lookup = "java:jboss/infinispan/cache/opencell/opencell-cft-cache")
private Cache<CacheKeyStr, Map<String, CustomFieldTemplate>> cftsByAppliesTo;
Java version: OpenJDK 64-Bit Server VM 18.9 (build 11+28, mixed mode)
Infinispan fails to synchronize the caches when starting a second cluster node because of a missing transaction.
When the transaction mode is changed to NONE, a second cluster node starts with no errors.
16:13:46,141 ERROR [org.infinispan.interceptors.impl.InvocationContextInterceptor] (thread-18,ejb,opencell-node2) ISPN000136: Error executing command PutKeyValueCommand on Cache 'opencell-cft-cache', writing keys [/JobInstance_GenericWorkflowJob]: org.infinispan.commons.CacheException: javax.transaction.InvalidTransactionException: WFTXN0002: Transaction is not a supported instance: TransactionImpl{xid=Xid{formatId=2, globalTransactionId=0000000000000001,branchQualifier=0000000000000001}, status=ACTIVE}
    at org.infinispan@11.0.4.Final//org.infinispan.transaction.impl.TransactionTable.enlist(TransactionTable.java:227)
    at org.infinispan@11.0.4.Final//org.infinispan.interceptors.impl.TxInterceptor.enlist(TxInterceptor.java:423)
    at org.infinispan@11.0.4.Final//org.infinispan.interceptors.impl.TxInterceptor.handleWriteCommand(TxInterceptor.java:387)
    at org.infinispan@11.0.4.Final//org.infinispan.interceptors.impl.TxInterceptor.visitPutKeyValueCommand(TxInterceptor.java:227)
    at org.infinispan@11.0.4.Final//org.infinispan.commands.write.PutKeyValueCommand.acceptVisitor(PutKeyValueCommand.java:63)
    at org.infinispan@11.0.4.Final//org.infinispan.interceptors.BaseAsyncInterceptor.invokeNext(BaseAsyncInterceptor.java:59)
    at org.infinispan@11.0.4.Final//org.infinispan.statetransfer.TransactionSynchronizerInterceptor.visitCommand(TransactionSynchronizerInterceptor.java:41)
    at org.infinispan@11.0.4.Final//org.infinispan.interceptors.BaseAsyncInterceptor.invokeNextAndHandle(BaseAsyncInterceptor.java:190)
    at org.infinispan@11.0.4.Final//org.infinispan.statetransfer.StateTransferInterceptor.handleTxWriteCommand(StateTransferInterceptor.java:259)
    at org.infinispan@11.0.4.Final//org.infinispan.statetransfer.StateTransferInterceptor.handleWriteCommand(StateTransferInterceptor.java:249)
    at org.infinispan@11.0.4.Final//org.infinispan.statetransfer.StateTransferInterceptor.visitPutKeyValueCommand(StateTransferInterceptor.java:96)
    at org.infinispan@11.0.4.Final//org.infinispan.commands.write.PutKeyValueCommand.acceptVisitor(PutKeyValueCommand.java:63)
    at org.infinispan@11.0.4.Final//org.infinispan.interceptors.BaseAsyncInterceptor.invokeNextAndFinally(BaseAsyncInterceptor.java:155)
    at org.infinispan@11.0.4.Final//org.infinispan.interceptors.impl.CacheMgmtInterceptor.updateStoreStatistics(CacheMgmtInterceptor.java:249)
    at org.infinispan@11.0.4.Final//org.infinispan.interceptors.impl.CacheMgmtInterceptor.visitPutKeyValueCommand(CacheMgmtInterceptor.java:210)
    at org.infinispan@11.0.4.Final//org.infinispan.commands.write.PutKeyValueCommand.acceptVisitor(PutKeyValueCommand.java:63)
    at org.infinispan@11.0.4.Final//org.infinispan.interceptors.BaseAsyncInterceptor.invokeNextAndExceptionally(BaseAsyncInterceptor.java:128)
    at org.infinispan@11.0.4.Final//org.infinispan.interceptors.impl.InvocationContextInterceptor.visitCommand(InvocationContextInterceptor.java:90)
    at org.infinispan@11.0.4.Final//org.infinispan.interceptors.BaseAsyncInterceptor.invokeNext(BaseAsyncInterceptor.java:61)
    at org.infinispan@11.0.4.Final//org.infinispan.interceptors.DDAsyncInterceptor.handleDefault(DDAsyncInterceptor.java:53)
    at org.infinispan@11.0.4.Final//org.infinispan.interceptors.DDAsyncInterceptor.visitPutKeyValueCommand(DDAsyncInterceptor.java:59)
    at org.infinispan@11.0.4.Final//org.infinispan.commands.write.PutKeyValueCommand.acceptVisitor(PutKeyValueCommand.java:63)
    at org.infinispan@11.0.4.Final//org.infinispan.interceptors.DDAsyncInterceptor.visitCommand(DDAsyncInterceptor.java:49)
    at org.infinispan@11.0.4.Final//org.infinispan.interceptors.impl.AsyncInterceptorChainImpl.invokeAsync(AsyncInterceptorChainImpl.java:226)
    at org.infinispan@11.0.4.Final//org.infinispan.statetransfer.StateConsumerImpl.invokePut(StateConsumerImpl.java:739)
    at org.infinispan@11.0.4.Final//org.infinispan.statetransfer.StateConsumerImpl.doApplyState(StateConsumerImpl.java:676)
    at org.infinispan@11.0.4.Final//org.infinispan.statetransfer.StateConsumerImpl.applyChunk(StateConsumerImpl.java:644)
    at org.infinispan@11.0.4.Final//org.infinispan.statetransfer.StateConsumerImpl.applyStateIteration(StateConsumerImpl.java:618)
    at org.infinispan@11.0.4.Final//org.infinispan.statetransfer.StateConsumerImpl.applyState(StateConsumerImpl.java:597)
    at org.infinispan@11.0.4.Final//org.infinispan.commands.statetransfer.StateResponseCommand.invokeAsync(StateResponseCommand.java:80)
    at org.infinispan@11.0.4.Final//org.infinispan.remoting.inboundhandler.BasePerCacheInboundInvocationHandler.invokeCommand(BasePerCacheInboundInvocationHandler.java:115)
    at org.infinispan@11.0.4.Final//org.infinispan.remoting.inboundhandler.BaseBlockingRunnable.invoke(BaseBlockingRunnable.java:100)
    at org.infinispan@11.0.4.Final//org.infinispan.remoting.inboundhandler.BaseBlockingRunnable.lambda$runAsync$0(BaseBlockingRunnable.java:91)
    at java.base/java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:859)
    at java.base/java.util.concurrent.CompletableFuture.uniWhenCompleteStage(CompletableFuture.java:883)
    at java.base/java.util.concurrent.CompletableFuture.whenComplete(CompletableFuture.java:2251)
    at org.infinispan@11.0.4.Final//org.infinispan.remoting.inboundhandler.BaseBlockingRunnable.runAsync(BaseBlockingRunnable.java:74)
    at org.infinispan@11.0.4.Final//org.infinispan.remoting.inboundhandler.BaseBlockingRunnable.run(BaseBlockingRunnable.java:41)
    at org.infinispan@11.0.4.Final//org.infinispan.remoting.inboundhandler.BasePerCacheInboundInvocationHandler.handleRunnable(BasePerCacheInboundInvocationHandler.java:163)
    at org.infinispan@11.0.4.Final//org.infinispan.remoting.inboundhandler.TxPerCacheInboundInvocationHandler.handle(TxPerCacheInboundInvocationHandler.java:89)
    at org.infinispan@11.0.4.Final//org.infinispan.remoting.inboundhandler.GlobalInboundInvocationHandler.handleCacheRpcCommand(GlobalInboundInvocationHandler.java:167)
    at org.infinispan@11.0.4.Final//org.infinispan.remoting.inboundhandler.GlobalInboundInvocationHandler.handleFromCluster(GlobalInboundInvocationHandler.java:113)
    at org.infinispan@11.0.4.Final//org.infinispan.remoting.transport.jgroups.JGroupsTransport.processRequest(JGroupsTransport.java:1378)
    at org.infinispan@11.0.4.Final//org.infinispan.remoting.transport.jgroups.JGroupsTransport.processMessage(JGroupsTransport.java:1302)
    at org.infinispan@11.0.4.Final//org.infinispan.remoting.transport.jgroups.JGroupsTransport.access$300(JGroupsTransport.java:131)
    at org.infinispan@11.0.4.Final//org.infinispan.remoting.transport.jgroups.JGroupsTransport$ChannelCallbacks.up(JGroupsTransport.java:1445)
    at org.jgroups@4.2.5.Final//org.jgroups.JChannel.up(JChannel.java:784)
    at org.jgroups@4.2.5.Final//org.jgroups.fork.ForkProtocolStack.up(ForkProtocolStack.java:135)
    at org.jgroups@4.2.5.Final//org.jgroups.stack.Protocol.up(Protocol.java:306)
    at org.jgroups@4.2.5.Final//org.jgroups.protocols.FORK.up(FORK.java:142)
    at org.jgroups@4.2.5.Final//org.jgroups.protocols.FRAG3.up(FRAG3.java:160)
    at org.jgroups@4.2.5.Final//org.jgroups.protocols.FlowControl.up(FlowControl.java:351)
    at org.jgroups@4.2.5.Final//org.jgroups.protocols.FlowControl.up(FlowControl.java:359)
    at org.jgroups@4.2.5.Final//org.jgroups.protocols.pbcast.GMS.up(GMS.java:868)
    at org.jgroups@4.2.5.Final//org.jgroups.protocols.pbcast.STABLE.up(STABLE.java:243)
    at org.jgroups@4.2.5.Final//org.jgroups.protocols.UNICAST3.deliverMessage(UNICAST3.java:1049)
    at org.jgroups@4.2.5.Final//org.jgroups.protocols.UNICAST3.addMessage(UNICAST3.java:772)
    at org.jgroups@4.2.5.Final//org.jgroups.protocols.UNICAST3.handleDataReceived(UNICAST3.java:753)
    at org.jgroups@4.2.5.Final//org.jgroups.protocols.UNICAST3.up(UNICAST3.java:405)
    at org.jgroups@4.2.5.Final//org.jgroups.protocols.pbcast.NAKACK2.up(NAKACK2.java:592)
    at org.jgroups@4.2.5.Final//org.jgroups.protocols.VERIFY_SUSPECT.up(VERIFY_SUSPECT.java:132)
    at org.jgroups@4.2.5.Final//org.jgroups.protocols.FailureDetection.up(FailureDetection.java:186)
    at org.jgroups@4.2.5.Final//org.jgroups.protocols.FD_SOCK.up(FD_SOCK.java:254)
    at org.jgroups@4.2.5.Final//org.jgroups.protocols.MERGE3.up(MERGE3.java:281)
    at org.jgroups@4.2.5.Final//org.jgroups.protocols.Discovery.up(Discovery.java:300)
    at org.jgroups@4.2.5.Final//org.jgroups.protocols.TP.passMessageUp(TP.java:1385)
    at org.jgroups@4.2.5.Final//org.jgroups.util.SubmitToThreadPool$SingleMessageHandler.run(SubmitToThreadPool.java:87)
    at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
    at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
    at org.jboss.as.clustering.common@21.0.2.Final//org.jboss.as.clustering.context.ContextReferenceExecutor.execute(ContextReferenceExecutor.java:49)
    at org.jboss.as.clustering.common@21.0.2.Final//org.jboss.as.clustering.context.ContextualExecutor$1.run(ContextualExecutor.java:70)
    at java.base/java.lang.Thread.run(Thread.java:834)
Caused by: javax.transaction.InvalidTransactionException: WFTXN0002: Transaction is not a supported instance: TransactionImpl{xid=Xid{formatId=2, globalTransactionId=0000000000000001,branchQualifier=0000000000000001}, status=ACTIVE}
    at org.wildfly.transaction.client@1.1.13.Final//org.wildfly.transaction.client.ContextTransactionManager.resume(ContextTransactionManager.java:148)
    at org.infinispan@11.0.4.Final//org.infinispan.transaction.impl.TransactionTable.enlist(TransactionTable.java:219)
    ... 71 more
-
SerializableBiFunction does not work on a replicated Infinispan cache in Wildfly 22 - class not found during lambda deserialization
Running Wildfly v.22 on a cluster with a replicated cache.
We use functions to manipulate Infinispan cache data.
On Wildfly v.18, cache.compute() was working fine, but on Wildfly v.22 the same code gives an error that the lambda function cannot be deserialized on a node, because the JobCacheContainerProvider class is not found in the org.jboss.as.clustering.infinispan module. org.jboss.as.clustering.infinispan is the default value for the module attribute in Wildfly's cache container definition.
Any suggestions on how to use cache.compute() without having to create a module just for the lambda function definitions? Or how to specify the currently deployed application as a module?
public class JobCacheContainerProvider implements Serializable {

    @Resource(lookup = "java:jboss/infinispan/cache/opencell/opencell-running-jobs")
    private Cache<CacheKeyLong, JobExecutionStatus> runningJobsCache;

    public void addUpdateJobInstance(JobInstance jobInstance) {
        SerializableBiFunction<? super CacheKeyLong, JobExecutionStatus, JobExecutionStatus> remappingFunction =
            (jobInstIdFullKey, jobExecutionStatusOld) -> {
                if (jobExecutionStatusOld != null) {
                    return jobExecutionStatusOld;
                } else {
                    return new JobExecutionStatus(jobInstance.getId(), jobInstance.getCode());
                }
            };
        runningJobsCache.compute(new CacheKeyLong(currentUser.getProviderCode(), jobInstance.getId()), remappingFunction);
    }
This is the exception:
Caused by: java.lang.ClassNotFoundException: org.meveo.cache.JobCacheContainerProvider from [Module "org.jboss.as.clustering.infinispan" version 22.0.1.Final from local module loader @252dc8c4 (finder: local module finder @43045f9f (roots: C:\andrius\programs\wildfly-22.0.1.Final\modules,C:\andrius\programs\wildfly-22.0.1.Final\modules\system\layers\base,C:\andrius\programs\wildfly-22.0.1.Final\modules\system\add-ons\keycloak))]
    at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:490)
    at org.infinispan@11.0.8.Final//org.infinispan.marshall.exts.ThrowableExternalizer.readGenericThrowable(ThrowableExternalizer.java:282)
    at org.infinispan@11.0.8.Final//org.infinispan.marshall.exts.ThrowableExternalizer.readObject(ThrowableExternalizer.java:259)
    at org.infinispan@11.0.8.Final//org.infinispan.marshall.exts.ThrowableExternalizer.readObject(ThrowableExternalizer.java:42)
    at org.infinispan@11.0.8.Final//org.infinispan.marshall.core.GlobalMarshaller.readWithExternalizer(GlobalMarshaller.java:728)
    at org.infinispan@11.0.8.Final//org.infinispan.marshall.core.GlobalMarshaller.readNonNullableObject(GlobalMarshaller.java:709)
    at org.infinispan@11.0.8.Final//org.infinispan.marshall.core.GlobalMarshaller.readNullableObject(GlobalMarshaller.java:358)
    at org.infinispan@11.0.8.Final//org.infinispan.marshall.core.BytesObjectInput.readObject(BytesObjectInput.java:32)
    at org.infinispan@11.0.8.Final//org.infinispan.remoting.responses.ExceptionResponse$Externalizer.readObject(ExceptionResponse.java:49)
    at org.infinispan@11.0.8.Final//org.infinispan.remoting.responses.ExceptionResponse$Externalizer.readObject(ExceptionResponse.java:41)
    at org.infinispan@11.0.8.Final//org.infinispan.marshall.core.GlobalMarshaller.readWithExternalizer(GlobalMarshaller.java:728)
    at org.infinispan@11.0.8.Final//org.infinispan.marshall.core.GlobalMarshaller.readNonNullableObject(GlobalMarshaller.java:709)
    at org.infinispan@11.0.8.Final//org.infinispan.marshall.core.GlobalMarshaller.readNullableObject(GlobalMarshaller.java:358)
    at org.infinispan@11.0.8.Final//org.infinispan.marshall.core.GlobalMarshaller.objectFromObjectInput(GlobalMarshaller.java:192)
    at org.infinispan@11.0.8.Final//org.infinispan.marshall.core.GlobalMarshaller.objectFromByteBuffer(GlobalMarshaller.java:221)
    at org.infinispan@11.0.8.Final//org.infinispan.remoting.transport.jgroups.JGroupsTransport.processResponse(JGroupsTransport.java:1394)