ActiveMQ Artemis: most of the addresses and queues disappeared after HA failover
Ubuntu 18.04, Artemis 2.14
I've been experimenting with HA. Usually I can shut down the primary and the secondary comes alive with all the addresses and queues. But today I shut the primary down and the secondary came to life with only a few of the addresses and queues: some addresses appeared with no queues, some with only one, but most were missing entirely.
I started the primary broker again and HA switched back, but still without all the objects. They're all set up the same, for the most part.
I created the objects (addresses and queues) through the console, then used the data tools to export them from the journal and import them into this instance, which I'm preparing to run as the production instance.
What would cause the objects to disappear? Should I instead define them in the broker.xml file?
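For reference, a minimal sketch of defining them statically in broker.xml so they are recreated from configuration on every start instead of depending on the journal contents (the address and queue names here are placeholders, not taken from the actual setup):

<!-- broker.xml (inside <core>): statically defined addresses and queues -->
<addresses>
   <address name="orders">
      <anycast>
         <queue name="orders"/>
      </anycast>
   </address>
   <address name="events">
      <multicast>
         <queue name="events.audit"/>
      </multicast>
   </address>
</addresses>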
See also questions close to this topic
-
HBase regions not failing over correctly, stuck in "OPENING" RIT
I am using hbase-2.2.3 to set up a small cluster with 3 nodes (both Hadoop and HBase are in HA mode):
node1: NN, JN, ZKFC, ZK, HMaster, HRegionServer
node2: NN, JN, ZKFC, DN, ZK, Backup HMaster, HRegionServer
node3: DN, JN, ZK, HRegionServer
When I reboot node3, it causes regions-in-transition (some regions are stuck in OPENING). In the master log, I can see: master.HMaster: Not running balancer because 5 region(s) in transition
Anyone know how to fix this issue? Great thanks
-
Is there a Redis pub/sub replacement option with high availability and redundancy, or perhaps p2p messaging?
I have an app, horizontally scaled across hundreds of servers, which uses Redis pub/sub, and it works just fine.
The Redis server is a single point of failure, though. Whenever Redis fails (and it does happen sometimes), our application falls into an inconsistent state and has to go through a recovery process that takes time. During this time the entire app is hardly usable.
Is there any messaging system/framework, similar to Redis pub/sub, but with redundancy and high availability, so that if one instance fails the others will continue to deliver the messages exchanged between application hosts?
Or, better, is there any distributed messaging system in which app instances exchange messages in a peer-to-peer manner, so that there is no single point of failure?
-
Apache Flink high availability not working as expected
I tried to test high availability by bringing down a Task Manager along with the Job Manager and the YARN Node Manager at the same time. I thought YARN would automatically reassign that application to another node, but that's not happening. How can this be achieved?
-
Load Balancing ActiveMQ Artemis in JBoss EAP 7.2.0
We are developing an application using Spring Boot and Apache Camel that reads a message from ActiveMQ Artemis, does some transformation, and sends it to ActiveMQ Artemis. Our application is deployed as a war file on an on-premise JBoss EAP 7.2.0. Both the source and target applications are remote to our application and are also deployed on JBoss EAP 7.2.0. The remote queues to which Camel connects are ActiveMQ Artemis queues created in JBoss and accessed using the http-remoting protocol. This setup was working when there was only one node of each application.
Now we are scaling the source and target applications to 3 nodes each (i.e. they will be deployed on multiple JBoss servers). The front-ends of the source and target applications are configured and accessed through a load balancer.
Can we configure the load balancer to access the source and target brokers from the Camel layer? There will be 3 source and 3 target brokers. Or is clustering the brokers the only option in this case?
We are thinking of load balancing between the queues and not clustering. Suppose we have three queues q1, q2, and q3 with corresponding brokers b1, b2, and b3. I will configure the load balancer URL in the Camel layer like http-remoting://<load-balancer-url>:<port> (much like we do while load balancing HTTP API requests). Any message coming in will hit the load balancer, and the load balancer will decide which queue to route the message to.
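For comparison, if the three brokers were standalone Artemis instances, the clustering alternative mentioned above would be wired up in each broker's broker.xml roughly as sketched below; the host names, port, and cluster name are placeholders, and brokers embedded in JBoss EAP are configured through the messaging-activemq subsystem rather than broker.xml.

<!-- broker.xml on b1 (sketch): connectors to the other members plus a cluster connection -->
<connectors>
   <connector name="b1">tcp://b1-host:61616</connector>
   <connector name="b2">tcp://b2-host:61616</connector>
   <connector name="b3">tcp://b3-host:61616</connector>
</connectors>
<cluster-connections>
   <cluster-connection name="camel-cluster">
      <connector-ref>b1</connector-ref>
      <message-load-balancing>ON_DEMAND</message-load-balancing>
      <max-hops>1</max-hops>
      <static-connectors>
         <connector-ref>b2</connector-ref>
         <connector-ref>b3</connector-ref>
      </static-connectors>
   </cluster-connection>
</cluster-connections>
-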
Method ActiveMQServerControl.listNetworkTopology() result explanation
I created two servers
artemis create serverN
Made a cluster of 2 servers, taking server0 and server1 configurations from the examples/features/clustered/clustered-static-discovery/src/main/resources/activemq.
Started the servers
artemis run
The method ActiveMQServerControl.listNetworkTopology(), like the Cluster Info view in the web console, shows that only 1 server is live, namely the one that was launched second. Is that how it should be? I thought this method should display the data of all servers in the cluster (2 live servers).
Artemis version: 2.17.0
-
Make Artemis Slave Replication Use SSL
In Artemis, when using replication to keep master/slave pairs synchronized, the data is replicated to the slave over a 'connection'.
I want to ensure this replication connection is encrypted. I suspect this is done by using SSL on the connectors section of broker.xml, but the guides/official docs don't explicitly state how it is done. Yes, I could go wading through the source code, play with settings, and try to sniff the traffic, but I thought asking here might be a bit easier.
Let's assume I have just a master/slave pair for now (I know that's not great for split brain, but let's keep it simple) and that I'm using static connection lists, since UDP is not allowed in my data center. I have the following setup:
<connectors xmlns="urn:activemq:core">
   <connector name="master">tcp://master:61616?sslEnabled=true;keyStorePath=/d1/usr/dltuser/keystore/qcsp6ab2001.jks;keyStorePassword=1q2w3e4r;needClientAuth=true;trustStorePath=/d1/usr/dltuser/keystore/qcsp6ab2001_trust.jks;trustStorePassword=1q2w3e4r</connector>
   <connector name="slave">tcp://slave:61616?sslEnabled=true;keyStorePath=/d1/usr/dltuser/keystore/qcsp6ab2001.jks;keyStorePassword=1q2w3e4r;needClientAuth=true;trustStorePath=/d1/usr/dltuser/keystore/qcsp6ab2001_trust.jks;trustStorePassword=1q2w3e4r</connector>
</connectors>

<cluster-connections>
   <cluster-connection name="amq-cluster">
      <connector-ref>master</connector-ref>
      <retry-interval>500</retry-interval>
      <retry-interval-multiplier>1.1</retry-interval-multiplier>
      <max-retry-interval>5000</max-retry-interval>
      <initial-connect-attempts>-1</initial-connect-attempts>
      <reconnect-attempts>-1</reconnect-attempts>
      <forward-when-no-consumers>false</forward-when-no-consumers>
      <max-hops>1</max-hops>
      <static-connectors>
         <connector-ref>master</connector-ref>
         <connector-ref>slave</connector-ref>
      </static-connectors>
   </cluster-connection>
</cluster-connections>

<ha-policy>
   <replication>
      <master>
         <check-for-live-server>true</check-for-live-server>
         <!-- which master/slave group this broker is part of; master and slave must match -->
         <group-name>nft-group-1</group-name>
         <!-- does the broker initiate a quorum vote if the connection to the slave fails? -->
         <vote-on-replication-failure>true</vote-on-replication-failure>
         <!-- how many votes should the backup initiate when requesting a quorum? -->
         <vote-retries>5</vote-retries>
         <!-- how long should the broker wait between vote retries -->
         <vote-retry-wait>5000</vote-retry-wait>
         <cluster-name>amq-cluster</cluster-name>
      </master>
   </replication>
</ha-policy>
From my understanding, the connectors will be used when forming the master/slave pair, and then the replication will be done over SSL using the configuration from the connectors section. Is this the case?
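A related point worth checking: the replication traffic terminates on the other broker's acceptor, so the connection is only actually encrypted if that acceptor has SSL enabled too. A minimal sketch, reusing the keystore paths from the configuration above (the acceptor name is a placeholder, not from the original setup):

<!-- broker.xml on each broker (sketch): TLS-enabled acceptor matching the SSL connectors above -->
<acceptors xmlns="urn:activemq:core">
   <acceptor name="netty-ssl">tcp://0.0.0.0:61616?sslEnabled=true;keyStorePath=/d1/usr/dltuser/keystore/qcsp6ab2001.jks;keyStorePassword=1q2w3e4r;needClientAuth=true;trustStorePath=/d1/usr/dltuser/keystore/qcsp6ab2001_trust.jks;trustStorePassword=1q2w3e4r</acceptor>
</acceptors>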