Hazelcast IMDG clustering with multiple instances
How can I run two Hazelcast instances on two different machines within the same cluster? Can someone please help me?
I want to know how to configure multicast and TCP in the network element for testing high availability. I tried searching in many articles but could not find any relevant configuration.
1 answer
-
answered 2020-08-10 19:42
Rafał Leszko
If you have two machines with static IPs, then the simplest way is to use the TCP/IP configuration.
Assuming the IP addresses of your machines are 1.2.3.4 and 5.6.7.8, and you use the default Hazelcast port 5701, you should have the following configuration:

<multicast enabled="false" />
<tcp-ip enabled="true">
    <member-list>
        <member>1.2.3.4</member>
        <member>5.6.7.8</member>
    </member-list>
</tcp-ip>
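If you prefer to configure the members programmatically rather than via XML, the equivalent Java setup is roughly the sketch below (same example IPs and default port as above; the class name is only illustrative). Run the same configuration on both machines; each member lists both IPs.

import com.hazelcast.config.Config;
import com.hazelcast.config.JoinConfig;
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;

public class TcpIpMember {
    public static void main(String[] args) {
        Config config = new Config();
        JoinConfig join = config.getNetworkConfig().getJoin();
        // Disable multicast discovery and use the explicit TCP/IP member list instead.
        join.getMulticastConfig().setEnabled(false);
        join.getTcpIpConfig()
            .setEnabled(true)
            .addMember("1.2.3.4")
            .addMember("5.6.7.8");
        // Start the member; once both machines run this, they form one cluster.
        HazelcastInstance instance = Hazelcast.newHazelcastInstance(config);
    }
}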
See also questions close to this topic
-
Need REST API call to find latest databricks runtime of LTS type
I need a way to find the latest supported (LTS) runtime, as explained at https://docs.microsoft.com/en-us/azure/databricks/runtime/dbr#runtime-versioning.
This will help me configure a developer-friendly cluster on demand.
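For reference, the Databricks Clusters API exposes the available runtime versions at GET /api/2.0/clusters/spark-versions. A rough sketch using a Java 11+ HttpClient follows; the workspace URL and token are placeholders, the string matching on "LTS" is deliberately naive (a JSON library would be more robust), and picking the newest of the matching versions is left to the caller.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class LatestLtsRuntime {
    public static void main(String[] args) throws Exception {
        String workspaceUrl = "https://<your-workspace>.azuredatabricks.net"; // placeholder
        String token = System.getenv("DATABRICKS_TOKEN");                      // placeholder

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(workspaceUrl + "/api/2.0/clusters/spark-versions"))
                .header("Authorization", "Bearer " + token)
                .GET()
                .build();

        String body = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString())
                .body();

        // LTS runtimes carry "LTS" in their human-readable name; print those names.
        Pattern ltsName = Pattern.compile("\"name\"\\s*:\\s*\"([^\"]*LTS[^\"]*)\"");
        Matcher m = ltsName.matcher(body);
        while (m.find()) {
            System.out.println(m.group(1));
        }
    }
}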
-
How to create a configuration file (json) for the entire solution in visual studio?
I have a solution with many projects; I would like to have a shared configuration (for example all have the same connection string). During deployment I don't want to make the same modifications in 10 places.
To make matters worse, the development folders (bin\Debug\netcoreapp3.1) differ from the deployment folders, so I cannot use a file at a relative path.
Edit: Of course I could just create a config file at a fixed path, e.g. /etc/MyApp/config.json (Linux) or c:%appdata%\MyApp\config.json (Windows), or a registry entry.
-
Nginx proxy_pass problem with Nginx docker container (tries to GET an html)
I have implemented a couple of microservices, a web frontend and a REST backend server, and I would like to "connect them" through a reverse proxy in order to avoid those nasty CORS problems.
It is important to say that this is my very first time using Nginx, and of course, using it as a reverse proxy. As a consequence, I thought that using the official nginx image would greatly help me.
My Dockerfile has this contents:
FROM nginx:1.19.7
COPY configuration files etc ...
And my (simplified for the question) docker-compose, looks like this:
version: "3.3" services: web_server: build: ./frontend container_name: web_server_container ports: - "5000:5000" reverse_proxy: build: ./proxy container_name: nginx_reverse_proxy ports: - 8080:80 links: - web_server
web_server contains a Flask webserver serving HTML files on port 5000. At / it serves the index.
With this definition of services, I use "docker-compose up --build" to start them all.
I have entered the running reverse_proxy container and changed nginx.conf, adding this:

server {
    listen 80 default_server;
    server_name localhost;

    access_log /var/log/nginx/proxy.log;
    error_log /var/log/nginx/proxy_error.log;

    location / {
        proxy_pass http://web_server_container:5000/;
    }
}
After this, I check the configuration with nginx -t (and nginx -T) and reload it with service nginx reload (and restart, and nginx -s reload ... and everything the web says to ensure the new config has been loaded).
My problem is that, when I access http://localhost:8080/ from a browser, it goes to the "Welcome to nginx" page instead of the one at "/" in web_server_container. However, from inside the container, if I use curl http://webserver_container:5000 it properly redirects me to the page at "/" in the frontend.
Also, if I change the configuration to:
server {
    listen 80 default_server;
    server_name localhost;

    access_log /var/log/nginx/proxy.log;
    error_log /var/log/nginx/proxy_error.log;

    location /web {
        proxy_pass http://web_server_container:5000/;
    }
}
I would expect that, if I go to http://localhost:8080/web in the browser, it redirects me to the web server container, but it does not. However, I can get the contents from http://localhost:5000 without problem. Also, the docker-compose output shows:
nginx_reverse_proxy | 2021/03/01 16:52:56 [error] 25#25: *64 open() "/usr/share/nginx/html/web" failed (2: No such file or directory), client: 172.19.0.1, server: localhost, request: "GET /web HTTP/1.1", host: "localhost:8080"
nginx_reverse_proxy | 2021/03/01 16:53:32 [error] 25#25: *66 open() "/usr/share/nginx/html/web" failed (2: No such file or directory), client: 172.19.0.1, server: localhost, request: "GET /web HTTP/1.1", host: "localhost:8080"
So, instead of redirecting, it looks like it is trying to show "web.html". It is weird, because the "server" block is being executed (I know this because the changes to the log names are reflected in the actual files).
In both cases, the logs are empty, which is really weird too (maybe it has to do with the way the image was built).
I also tried this answer, with no luck.
Does anybody have a clue of what could be happening?
Thank you!
-
ColdFusion 2018 Standard 2 node cluster with J2EE shared sessions for failover
Why do we want to configure this setup?
We would like to have a Blue/Green zero downtime setup for our CF2018 App.
We currently have a basic CF Server Install (IIS + CF2018) in one server that connects to another Server for the DB (we are using CF2018 Standard).
Our app uses J2EE sessions.
There are posts that explain how to use the External Session Storage feature included in CF (Redis), but that won't work with J2EE sessions; the CF admin interface won't allow it.
How can I set up 2 servers in a cluster (behind a load balancer) with J2EE session failover functionality by using CF2018 Standard Edition?
-
Cassandra cluster vs cassandra ring
If I have one Cassandra cluster set up across 5 data centers, 3 of which are private DCs and 2 of which are public (Azure) DCs, can I say I have 5 rings, or is this 1 cluster and 1 ring?
Can someone help me understand the term "ring" in this context?
-
Hbase fail to create table in cloudera
I am a beginner in Hadoop. I am facing a problem when I try to create a simple table in HBase. These are the errors:
21/02/26 11:36:38 ERROR client.ConnectionManager$HConnectionImplementation: Can't get connection to ZooKeeper: KeeperErrorCode = ConnectionLoss for /hbase
21/02/26 11:36:56 ERROR zookeeper.RecoverableZooKeeper: ZooKeeper exists failed after 4 attempts
21/02/26 11:36:56 ERROR zookeeper.ZooKeeperWatcher: hconnection-0x4844cdb60x0, quorum=quickstart.cloudera:2181, baseZNode=/hbase Received unexpected KeeperException, re-throwing exception -
Why is Hazelcast IMap journal not producing all expected events?
The minimal working example below produces events at a high rate, which then update an IMap. The IMap in turn produces update events from its journal.
public class FastIMapExample {

    private static final int NUMBER_OF_GROUPS = 10;
    private static final int NUMBER_OF_EVENTS = 1000;

    public static void main(String[] args) {
        JetInstance jet = Jet.newJetInstance();
        IMap<Long, Long> groups = jet.getMap("groups");

        Pipeline p1 = Pipeline.create();
        p1.readFrom(fastStreamOfLongs(NUMBER_OF_EVENTS))
          .withoutTimestamps()
          .writeTo(Sinks.mapWithUpdating(groups,
                  event -> event % NUMBER_OF_GROUPS,
                  (oldState, event) -> increment(oldState)
          ));

        Pipeline p2 = Pipeline.create();
        p2.readFrom(Sources.mapJournal(groups, START_FROM_OLDEST))
          .withIngestionTimestamps()
          .map(x -> x.getKey() + " -> " + x.getValue())
          .writeTo(Sinks.logger());

        jet.newJob(p2);
        jet.newJob(p1).join();
    }

    private static StreamSource<Long> fastStreamOfLongs(int numberOfEvents) {
        return SourceBuilder
                .stream("fast-longs", ctx -> new AtomicLong(0))
                .<Long>fillBufferFn((num, buf) -> {
                    long val = num.getAndIncrement();
                    if (val < numberOfEvents) buf.add(val);
                })
                .build();
    }

    private static long increment(Long x) {
        return x == null ? 1 : x + 1;
    }
}
Example output:
3 -> 7
3 -> 50
3 -> 79
7 -> 42
...
6 -> 100
0 -> 82
9 -> 41
9 -> 100
I was expecting to see precisely 1000 events describing each update. Instead I see about 50-80 events. (It seems that the output contains all the latest updates (i.e. "-> 100") from each group, but otherwise it is a random subset.)
When NUMBER_OF_GROUPS equals NUMBER_OF_EVENTS (or when event generation is artificially slowed down), I receive all 1000 updates.
Is this behaviour expected? Is it possible to receive all update events from the fast source?
-
How to set retention to evict cache for HazelcastInstance using ClientConfig?
I'm using the hazelcast-spring Maven package to enable caching, but I'd like to set a retention policy that evicts the cache every day. I don't know how to set that retention.
Here's my configuration class:
@Configuration
public class CacheClientConfig {

    @Autowired
    private CacheConfigProperties cacheConfigProperties;

    @Bean
    public HazelcastInstance hazelcastInstance() {
        ClientConfig config = new ClientConfig();
        config.getConnectionStrategyConfig().setAsyncStart(false);
        config.getNetworkConfig().setConnectionAttemptLimit(cacheConfigProperties.getConnectionAttemptLimit());
        config.getNetworkConfig().setConnectionAttemptPeriod(cacheConfigProperties.getConnectionAttemptPeriod());
        config.getConnectionStrategyConfig().setReconnectMode(ClientConnectionStrategyConfig.ReconnectMode.ASYNC);
        config.getGroupConfig().setName(cacheConfigProperties.getUser()).setPassword(cacheConfigProperties.getPassword());
        cacheConfigProperties.getServerAddress().stream().forEach(
                address -> config.getNetworkConfig().addAddress(address)
        );

        SerializerConfig slaSerializerConfig = new SerializerConfig()
                .setTypeClass(ServiceLevelAgreement.class)
                .setClass(SlaSerializer.class);
        config.getSerializationConfig().addSerializerConfig(slaSerializerConfig);

        SerializerConfig slaPeriodSerializerConfig = new SerializerConfig()
                .setTypeClass(ServiceLevelAggrementPeriod.class)
                .setClass(SlaPeriodSerializer.class);
        config.getSerializationConfig().addSerializerConfig(slaPeriodSerializerConfig);

        HazelcastInstance instance = HazelcastClient.newHazelcastClient(config);
        return instance;
    }

    @Bean
    public CacheManager cacheManager() {
        return new HazelcastCacheManager(hazelcastInstance());
    }

    @Bean
    public KeyGenerator keyGenerator() {
        return new SimpleKeyGenerator();
    }
}
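Note that map TTL/eviction is generally a member-side map setting rather than something set through ClientConfig. A minimal member-side sketch follows, assuming Hazelcast 3.x (to match the getGroupConfig() call above); the map name "sla-cache" is only illustrative. Alternatively, a TTL can be passed per entry with IMap.put(key, value, ttl, timeUnit).

import com.hazelcast.config.Config;
import com.hazelcast.config.EvictionPolicy;
import com.hazelcast.config.MapConfig;
import com.hazelcast.core.Hazelcast;

public class MemberSideTtlConfig {
    public static void main(String[] args) {
        Config config = new Config();
        // Entries in this map expire 24 hours after they were last written.
        MapConfig mapConfig = new MapConfig("sla-cache")
                .setTimeToLiveSeconds(24 * 60 * 60)
                .setEvictionPolicy(EvictionPolicy.LRU);
        config.addMapConfig(mapConfig);
        Hazelcast.newHazelcastInstance(config);
    }
}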
-
Does Hazelcast Jet allow to query accumulator values from rollingAggregate?
The minimal working example below presents some (simplified) processing where:
- each event is processed individually (no windowing),
- each event belongs to a certain group,
- each event updates a group state, which then is used to generate some output value.
public class RollingExample {

    public static void main(String[] args) {
        JetInstance jet = Jet.newJetInstance();

        Pipeline p = Pipeline.create();
        p.readFrom(TestSources.itemStream(10))
         .withoutTimestamps()
         .groupingKey(event -> event.sequence() % 10) // simulates 10 groups
         .rollingAggregate(AggregateOperation
                 .withCreate(DoubleAccumulator::new) // group state simulated by a single double value
                 .andAccumulate((DoubleAccumulator acc, SimpleEvent event) -> acc.accumulate(event.sequence())) // update group state with given event
                 .andCombine(DoubleAccumulator::combine)
                 .andDeduct(DoubleAccumulator::deduct)
                 .andExportFinish(DoubleAccumulator::export))
         .map(x -> x.getKey() + " -> " + x.getValue()) // map group state to some output value
         .writeTo(Sinks.logger());

        jet.newJob(p).join();
    }
}
Given the above example, is it possible to query individual group states from outside, e.g. using a Hazelcast client? By "query" I mean programmatically getting the current value of the accumulator for a given group.
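One possible workaround (a sketch, assuming that materializing the group states into a map is acceptable rather than querying the accumulators directly): a keyed rollingAggregate stage emits Map.Entry<key, state> items, so they can be written to an IMap with Sinks.map and read by any client. The map name "group-states" and the class name are only illustrative.

public class QueryableRollingExample {

    public static void main(String[] args) {
        JetInstance jet = Jet.newJetInstance();

        Pipeline p = Pipeline.create();
        p.readFrom(TestSources.itemStream(10))
         .withoutTimestamps()
         .groupingKey(event -> event.sequence() % 10)
         .rollingAggregate(AggregateOperation
                 .withCreate(DoubleAccumulator::new)
                 .andAccumulate((DoubleAccumulator acc, SimpleEvent event) -> acc.accumulate(event.sequence()))
                 .andCombine(DoubleAccumulator::combine)
                 .andDeduct(DoubleAccumulator::deduct)
                 .andExportFinish(DoubleAccumulator::export))
         // The rolling aggregate emits Map.Entry<key, state>, so the latest state
         // per group can be kept in a queryable IMap instead of only being logged.
         .writeTo(Sinks.map("group-states"));

        jet.newJob(p);

        // Any Jet/Hazelcast client connected to the cluster can then read the
        // current accumulator value of a given group, e.g. group 3 (the value
        // keeps changing while the job runs).
        JetInstance client = Jet.newJetClient();
        Object stateOfGroup3 = client.getMap("group-states").get(3L);
        System.out.println("group 3 -> " + stateOfGroup3);
    }
}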
-
How to have highly available Moodle in Kubernetes?
I want to set up highly available Moodle in K8s (on-prem). I'm using Bitnami Moodle with Helm charts.
After a successful Moodle installation, it works. But when a K8s node goes down, the Moodle web page displays/reverts/redirects to the Moodle installation web page. It's like a loop.
The persistent storage is rook-ceph. The Moodle PVC is ReadWriteMany, while the MySQL PVC is ReadWriteOnce.
The following command was used to deploy Moodle.
helm install moodle --set global.storageClass=rook-cephfs,replicaCount=3,persistence.accessMode=ReadWriteMany,allowEmptyPassword=false,moodlePassword=Moodle123,mariadb.architecture=replication bitnami/moodle
Any help on this is appreciated.
Thanks.
-
High-Availability not working in Hadoop cluster
I am trying to move my non-HA namenode to HA. After setting up all the configuration for the JournalNodes by following the Apache Hadoop documentation, I was able to bring the namenodes up. However, the namenodes crash immediately and throw the following error.
ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: Failed to start namenode. java.io.IOException: There appears to be a gap in the edit log. We expected txid 43891997, but got txid 45321534.
I tried to recover the edit logs, initialize the shared edits, etc., but nothing works. I am not sure how to fix this problem without formatting the namenode, since I do not want to lose any data.
Any help is greatly appreciated. Thanks in advance.
-
Apache Kafka Consume from Slave/ISR node
I understand the concept of master/slave and data replication in Kafka, but I don't understand why consumers and producers are always routed to the master node of a partition when writing/reading, instead of being able to read from any ISR (in-sync replica)/slave.
The way I think about it, if all consumers are redirected to one single master node, then more hardware is required to handle the read/write operations of large consumer groups/producers.
Is it possible to read from and write to slave nodes, or will consumers/producers always reach out to the master node of that partition?
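For what it's worth, writes always go to the partition leader, but since Kafka 2.4 (KIP-392) consumers can fetch from follower replicas when the brokers enable a replica selector. A minimal consumer-side sketch, assuming the brokers are configured with replica.selector.class=org.apache.kafka.common.replica.RackAwareReplicaSelector and that the broker address, topic name, and rack label are illustrative:

import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class NearestReplicaConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "broker1:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "example-group");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringDeserializer");
        // Tell the cluster which "rack" this consumer lives in; with the broker-side
        // RackAwareReplicaSelector enabled, fetches may be served by an in-sync
        // follower in the same rack instead of the partition leader.
        props.put(ConsumerConfig.CLIENT_RACK_CONFIG, "rack-a");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("example-topic"));
            consumer.poll(Duration.ofSeconds(1))
                    .forEach(record -> System.out.println(record.value()));
        }
    }
}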