Tomcat primary/secondary setup across multiple AWS EC2 instances
I have a Java application WAR file deployed inside a Tomcat (primary) server on an AWS EC2 instance. This WAR typically performs some backend tasks and does NOT directly interact with any end users. I have created a similar Tomcat (secondary) server inside another EC2 instance and deployed the same WAR file.
How can I set up the primary & secondary Tomcat instances such that the secondary EC2's Tomcat becomes active automatically in case the primary EC2's Tomcat goes down?
1 answer
-
answered 2020-12-02 13:39
Hiran Chaudhuri
The issue is not with Tomcat. You can bring up multiple instances, and they either all act on their own or can even share session information when running as a cluster. Such a cluster targets an active/active setup.
However, unless your application is aware of that and supports that use case, your only option is to run one Tomcat (including your WAR) at a time. So you are looking for an active/passive setup and may need additional software, as listed in implementations.
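For illustration only: one way to approximate active/passive without extra infrastructure is a small watchdog on the secondary instance that probes the primary and starts the local Tomcat only when the primary stops answering. This is a minimal sketch, assuming a health endpoint, hostname, and Tomcat install path that are all hypothetical; in production a proven tool (keepalived, Pacemaker, or an AWS load balancer health check plus Auto Scaling) is the safer route.

import subprocess
import time
import urllib.request

PRIMARY_HEALTH_URL = "http://primary.internal:8080/myapp/health"  # hypothetical endpoint
TOMCAT_STARTUP = "/opt/tomcat/bin/startup.sh"                     # hypothetical path
FAILURES_BEFORE_TAKEOVER = 3   # consecutive failed probes before taking over
POLL_SECONDS = 10

def primary_is_up():
    try:
        with urllib.request.urlopen(PRIMARY_HEALTH_URL, timeout=5) as resp:
            return resp.status == 200
    except OSError:
        return False

def main():
    failures = 0
    while True:
        failures = 0 if primary_is_up() else failures + 1
        if failures >= FAILURES_BEFORE_TAKEOVER:
            # Primary looks dead: become active by starting the local Tomcat.
            subprocess.run([TOMCAT_STARTUP], check=False)
            break  # a real watchdog would keep monitoring and step down again
        time.sleep(POLL_SECONDS)

if __name__ == "__main__":
    main()

Note the hard problem such a script does not solve: fencing. If the primary only appeared dead (network partition), both Tomcats may end up active at once, which is exactly why the answer points at dedicated cluster software.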
See also questions close to this topic
-
What to test with JUnit
Premise: I am about to try my hand at testing for the first time, on a university project into which I was thrown without the slightest experience, and I find myself facing a gigantic problem: how and what should I test?
Mine is a simple app that communicates with a movie database and has an MVVM structure.
These are my packages: activities, adapters (for recycler views), fragments, models, repositories, request, response, utils, viewmodels
My question is: should I test everything? For example, do activities or fragments have to be tested with Espresso, or also with JUnit?
-
How to make a regex for a decimal that has a defined length (comma included)
I am looking for a regex for a 15-character decimal. In the SWIFT documentation, the format is written as 3!a15d, where 3!a means [a-zA-Z]{3} and 15d means a decimal of up to 15 characters in length, comma included.
I tried the regex below:
([A-Z]{3}[0-9]{1,14}[,][0-9]{1})|([A-Z]{3}[0-9]{1,13}[,][0-9]{1,2})|([0-9]{1,12}[,][0-9]{1,3})|([0-9]{1,11}[,][0-9]{1,4})|([0-9]{1,10}[,][0-9]{1,5})|([0-9]{1,9}[,][0-9]{1,6})|([0-9]{1,8}[,][0-9]{1,7})|([0-9]{1,7}[,][0-9]{1,8})|([0-9]{1,6}[,][0-9]{1,9})|([0-9]{1,5}[,][0-9]{1,10})|([0-9]{1,4}[,][0-9]{1,11})|([0-9]{1,3}[,][0-9]{1,12})|([0-9]{1,2}[,][0-9]{1,13})|[0-9]{1}[,][0-9]{1,14}
But it didn't work. Do you have any tips to help me?
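One tip, for what it's worth: instead of enumerating every possible digit split, you can cap the total length with a lookahead and keep a single branch for the shape. A sketch of that idea in Python (the 3-letter prefix follows the 3!a part of the spec; the sample values are made up):

import re

# 3 letters, then a decimal of at most 15 characters (comma included),
# with at least one digit on each side of a single comma.
pattern = re.compile(r"^[A-Z]{3}(?=[0-9,]{3,15}$)[0-9]+,[0-9]+$")

print(bool(pattern.match("EUR123,45")))            # True
print(bool(pattern.match("EUR123456789012,45")))   # True  (exactly 15 characters)
print(bool(pattern.match("EUR1234567890123,45")))  # False (16 characters)
print(bool(pattern.match("EUR12345")))             # False (no comma)

The lookahead (?=[0-9,]{3,15}$) only checks the remaining length; the trailing [0-9]+,[0-9]+$ then enforces exactly one comma with digits on both sides. The same construct works in most PCRE-style regex engines.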
-
Server throwing broken pipe exception
I am working on a client-server application. The client side uses ExoPlayer to request a video file from the server (a Struts-based server). When ExoPlayer requests the video source for the first time, Struts throws IOException: Broken pipe, but the video file loads successfully on the second try. The server throws this exception every time the client restarts.
In some posts I saw people say that the client is closing the connection, but here ExoPlayer requests the source and I don't know how to increase the timeout for its connection. Please help me figure out the problem.
Error:
Jan 26, 2021 3:35:24 PM com.opensymphony.xwork2.interceptor.ExceptionMappingInterceptor error
SEVERE: java.io.IOException: Broken pipe
org.apache.catalina.connector.ClientAbortException: java.io.IOException: Broken pipe
    at org.apache.catalina.connector.OutputBuffer.realWriteBytes(OutputBuffer.java:351)
    at org.apache.catalina.connector.OutputBuffer.flushByteBuffer(OutputBuffer.java:776)
    at org.apache.catalina.connector.OutputBuffer.doFlush(OutputBuffer.java:298)
    at org.apache.catalina.connector.OutputBuffer.close(OutputBuffer.java:251)
    at org.apache.catalina.connector.CoyoteOutputStream.close(CoyoteOutputStream.java:157)
    at org.apache.struts2.dispatcher.StreamResult.doExecute(StreamResult.java:305)
    at org.apache.struts2.dispatcher.StrutsResultSupport.execute(StrutsResultSupport.java:193)
    at com.opensymphony.xwork2.DefaultActionInvocation.executeResult(DefaultActionInvocation.java:372)
    at com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:276)
    at org.apache.struts2.interceptor.DeprecationInterceptor.intercept(DeprecationInterceptor.java:41)
    at com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:245)
    at org.apache.struts2.interceptor.debugging.DebuggingInterceptor.intercept(DebuggingInterceptor.java:256)
    at com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:245)
    at com.opensymphony.xwork2.interceptor.DefaultWorkflowInterceptor.doIntercept(DefaultWorkflowInterceptor.java:168)
    at com.opensymphony.xwork2.interceptor.MethodFilterInterceptor.intercept(MethodFilterInterceptor.java:98)
    at com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:245)
    at com.opensymphony.xwork2.validator.ValidationInterceptor.doIntercept(ValidationInterceptor.java:265)
    at org.apache.struts2.interceptor.validation.AnnotationValidationInterceptor.doIntercept(AnnotationValidationInterceptor.java:76)
    at com.opensymphony.xwork2.interceptor.MethodFilterInterceptor.intercept(MethodFilterInterceptor.java:98)
    at com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:245)
    at com.opensymphony.xwork2.interceptor.ConversionErrorInterceptor.intercept(ConversionErrorInterceptor.java:138)
    at com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:245)
    at com.opensymphony.xwork2.interceptor.ParametersInterceptor.doIntercept(ParametersInterceptor.java:229)
    at com.opensymphony.xwork2.interceptor.MethodFilterInterceptor.intercept(MethodFilterInterceptor.java:98)
    at com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:245)
    at com.opensymphony.xwork2.interceptor.ParametersInterceptor.doIntercept(ParametersInterceptor.java:229)
    at com.opensymphony.xwork2.interceptor.MethodFilterInterceptor.intercept(MethodFilterInterceptor.java:98)
    at com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:245)
    at com.opensymphony.xwork2.interceptor.StaticParametersInterceptor.intercept(StaticParametersInterceptor.java:191)
    at com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:245)
    at org.apache.struts2.interceptor.MultiselectInterceptor.intercept(MultiselectInterceptor.java:73)
    at com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:245)
    at org.apache.struts2.interceptor.DateTextFieldInterceptor.intercept(DateTextFieldInterceptor.java:125)
    at com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:245)
    at org.apache.struts2.interceptor.CheckboxInterceptor.intercept(CheckboxInterceptor.java:91)
    at com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:245)
    at org.apache.struts2.interceptor.FileUploadInterceptor.intercept(FileUploadInterceptor.java:253)
    at com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:245)
    at com.opensymphony.xwork2.interceptor.ModelDrivenInterceptor.intercept(ModelDrivenInterceptor.java:100)
    at com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:245)
    at com.opensymphony.xwork2.interceptor.ScopedModelDrivenInterceptor.intercept(ScopedModelDrivenInterceptor.java:141)
    at com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:245)
    at com.opensymphony.xwork2.interceptor.ChainingInterceptor.intercept(ChainingInterceptor.java:145)
    at com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:245)
    at com.opensymphony.xwork2.interceptor.PrepareInterceptor.doIntercept(PrepareInterceptor.java:171)
    at com.opensymphony.xwork2.interceptor.MethodFilterInterceptor.intercept(MethodFilterInterceptor.java:98)
    at com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:245)
    at com.opensymphony.xwork2.interceptor.I18nInterceptor.intercept(I18nInterceptor.java:140)
    at com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:245)
    at org.apache.struts2.interceptor.ServletConfigInterceptor.intercept(ServletConfigInterceptor.java:164)
    at com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:245)
    at com.opensymphony.xwork2.interceptor.AliasInterceptor.intercept(AliasInterceptor.java:193)
    at com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:245)
    at com.opensymphony.xwork2.interceptor.ExceptionMappingInterceptor.intercept(ExceptionMappingInterceptor.java:189)
    at com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:245)
    at org.apache.struts2.impl.StrutsActionProxy.execute(StrutsActionProxy.java:54)
    at org.apache.struts2.dispatcher.Dispatcher.serviceAction(Dispatcher.java:575)
    at org.apache.struts2.dispatcher.ng.ExecuteOperations.executeAction(ExecuteOperations.java:81)
    at org.apache.struts2.dispatcher.ng.filter.StrutsPrepareAndExecuteFilter.doFilter(StrutsPrepareAndExecuteFilter.java:99)
    at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)
    at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
    at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:202)
    at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:96)
    at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:526)
    at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:139)
    at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:92)
    at org.apache.catalina.valves.AbstractAccessLogValve.invoke(AbstractAccessLogValve.java:678)
    at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:74)
    at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:343)
    at org.apache.coyote.http11.Http11Processor.service(Http11Processor.java:408)
    at org.apache.coyote.AbstractProcessorLight.process(AbstractProcessorLight.java:66)
    at org.apache.coyote.AbstractProtocol$ConnectionHandler.process(AbstractProtocol.java:860)
    at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1587)
    at org.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:49)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)
    at java.lang.Thread.run(Thread.java:748)
Caused by: java.io.IOException: Broken pipe
    at sun.nio.ch.FileDispatcherImpl.write0(Native Method)
    at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:47)
    at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:93)
    at sun.nio.ch.IOUtil.write(IOUtil.java:65)
    at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:471)
    at org.apache.tomcat.util.net.NioChannel.write(NioChannel.java:140)
    at org.apache.tomcat.util.net.NioBlockingSelector.write(NioBlockingSelector.java:101)
    at org.apache.tomcat.util.net.NioSelectorPool.write(NioSelectorPool.java:152)
    at org.apache.tomcat.util.net.NioEndpoint$NioSocketWrapper.doWrite(NioEndpoint.java:1261)
    at org.apache.tomcat.util.net.SocketWrapperBase.doWrite(SocketWrapperBase.java:793)
    at org.apache.tomcat.util.net.SocketWrapperBase.writeBlocking(SocketWrapperBase.java:563)
    at org.apache.tomcat.util.net.SocketWrapperBase.write(SocketWrapperBase.java:501)
    at org.apache.coyote.http11.Http11OutputBuffer$SocketOutputBuffer.doWrite(Http11OutputBuffer.java:538)
    at org.apache.coyote.http11.filters.ChunkedOutputFilter.doWrite(ChunkedOutputFilter.java:112)
    at org.apache.coyote.http11.Http11OutputBuffer.doWrite(Http11OutputBuffer.java:190)
    at org.apache.coyote.Response.doWrite(Response.java:601)
    at org.apache.catalina.connector.OutputBuffer.realWriteBytes(OutputBuffer.java:339)
    ... 77 more
-
Lambda doesn't copy an object to another bucket when a delete-object event triggers
The Lambda below works just fine with object-create event triggers but doesn't copy the object on a delete-object event. Turning versioning on (to switch to deletes with markers instead of permanent ones) doesn't change it. The Lambda role has the arn:aws:iam::aws:policy/AmazonS3FullAccess and arn:aws:iam::aws:policy/AWSLambda_FullAccess policies attached. What is the problem with this function?
import json
import boto3

# boto3 S3 initialization
s3_client = boto3.client("s3")


def lambda_handler(event, context):
    source_bucket_name = event['Records'][0]['s3']['bucket']['name']
    destination_bucket_name = source_bucket_name + '-glacier'
    print(f'Copying from {source_bucket_name} to {destination_bucket_name}')
    print("Event :", event)

    # Filename of an object (with path)
    file_key_name = event['Records'][0]['s3']['object']['key']

    # Copy Source Object
    copy_source_object = {'Bucket': source_bucket_name, 'Key': file_key_name}

    # S3 copy object operation
    s3_client.copy_object(CopySource=copy_source_object,
                          Bucket=destination_bucket_name,
                          Key=file_key_name,
                          StorageClass='GLACIER')

    return {
        'statusCode': 200,
        'body': json.dumps('Hello from S3 events Lambda!')
    }
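A note on why the delete case likely fails, hedged since the bucket's event configuration isn't shown: by the time an s3:ObjectRemoved:* event fires, the current object (or version) is already gone, so copy_object on the bare key has nothing to copy. With versioning on, the delete only adds a delete marker and the data survives as a noncurrent version, which has to be addressed by VersionId. Event keys also arrive URL-encoded. A sketch of a delete-aware handler (the bucket naming follows the code above; everything else is an assumption):

import urllib.parse
import boto3

s3_client = boto3.client("s3")

def lambda_handler(event, context):
    record = event['Records'][0]
    source_bucket = record['s3']['bucket']['name']
    destination_bucket = source_bucket + '-glacier'
    # S3 event keys are URL-encoded (a space arrives as '+')
    key = urllib.parse.unquote_plus(record['s3']['object']['key'])

    if record['eventName'].startswith('ObjectRemoved'):
        # The current version is gone; find the newest surviving version.
        versions = s3_client.list_object_versions(
            Bucket=source_bucket, Prefix=key).get('Versions', [])
        versions = [v for v in versions if v['Key'] == key]
        if not versions:
            return {'statusCode': 404, 'body': 'no surviving version to copy'}
        copy_source = {'Bucket': source_bucket, 'Key': key,
                       'VersionId': versions[0]['VersionId']}  # newest first
    else:
        copy_source = {'Bucket': source_bucket, 'Key': key}

    s3_client.copy_object(CopySource=copy_source,
                          Bucket=destination_bucket,
                          Key=key,
                          StorageClass='GLACIER')
    return {'statusCode': 200, 'body': 'copied ' + key}

If the Lambda never runs at all on deletes, the trigger itself is the suspect: the ObjectRemoved event types have to be selected in the bucket notification separately from ObjectCreated.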
-
Serverless Configuration warning at ‘functions.createUser.events[1].cognitoUserPool.pool’: should be string
I'm trying to take an exported output value from an SST App Stack and attach it to a function event. However, when I do so it runs into a configuration validation error.
Configuration warning at ‘functions.createUser.events[1].cognitoUserPool.pool’: should be string
My yml file is:
createUser:
  handler: createUser.handler
  description: function to create a new user
  events:
    - http:
        path: user
        method: post
        cors: true
        authorizer: aws_iam
    - cognitoUserPool:
        pool: !ImportValue ${self:custom.sstApp}-UserPoolName
        trigger: PreSignUp
The exported value from my SST app:
new CfnOutput(this, 'UserPoolName', {
  value: userPool.userPoolProviderName,
  exportName: app.logicalPrefixedName('UserPoolName'),
});
Can someone explain how I can access the string value from what looks to be an array exported from the SST app?
-
Is there a way to prevent or save Realm JS' realm-object-server files from being deleted on AWS Elastic Beanstalk deployment?
Is there a way to prevent or save Realm JS' realm-object-server files from being deleted when deploying to AWS Elastic Beanstalk? It ends up taking around 25 minutes to re-download everything. If it's not possible, is there some way to improve it? I'm not sure whether some unnecessary duplication is happening, because Realm.Sync.addListener() downloads into a separate folder, "realm-object-server/listener" (/realms), while opening realms creates "realm-object-server/{guid}". These are the default paths.
I'm using Node.js 12 running on Amazon Linux 2, still on Legacy Realm, planning to migrate to MongoDB Realm soon.
-
Spring boot embedded tomcat threadpool configuration
The application is built using Spring Boot. My application uses application.yaml for external config. But when I try to add the config below to application.yaml, the application fails to start with an error saying tomcat is not valid. However, I have tried the equivalent config with application.properties in another application, and it works there.
server:
  port:8080
  tomcat:
    max-threads:500
    accept-count:500
    max-connections:10000
    min-spare-threads:500
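Assuming the file really looks like the snippet above, the first thing to check is YAML syntax: a mapping needs a space after each colon (port: 8080, not port:8080), and tomcat must be indented under server. Note also that from Spring Boot 2.3 on, the thread-pool keys moved under server.tomcat.threads.*, which could explain the same values working in another (older) application's application.properties. A corrected sketch:

server:
  port: 8080
  tomcat:
    accept-count: 500
    max-connections: 10000
    threads:            # Spring Boot 2.3+; on 2.2 and earlier use
      max: 500          #   tomcat.max-threads / tomcat.min-spare-threads
      min-spare: 500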
-
Why did Jenkins lose its connection with the Tomcat server?
My Jenkins has lost its connection with the Tomcat server. I have also added the private key in the Jenkins credentials.
This is my Jenkinsfile for the 'Deploy-to-Tomcat' stage:
steps {
  sshagent(['tomcat']) {
    sh 'scp -o StrictHostKeyChecking=no target/*.war ubuntu@35.239.69.247:/home/nat/prod/apache-tomcat-9.0.41/webapps/webapp.war'
  }
}
This is the error when I try to build the pipeline in Jenkins:
+ scp -o StrictHostKeyChecking = no target/WebApp.war ubuntu@35.239.69.247:/home/nat/prod/apache-tomcat-9.0.41/webapps/webapp.war
command-line line 0: missing argument.
lost connection
script returned exit code 1
Error:
$ ssh-agent -k
unset SSH_AUTH_SOCK;
unset SSH_AGENT_PID;
echo Agent pid 139377 killed;
[ssh-agent] Stopped.
I also ran chmod 777 on webapps. I am following this link, https://www.youtube.com/watch?v=dSMSHGoHVJY&list=PLjNII-Jkdjfz5EXWlGMBRk63PC8uJsHMo&index=7, to deploy to Tomcat.
I hope anyone who knows can answer my question on how to deploy to Tomcat. The source code I used to test the pipeline is from https://github.com/cehkunal/webapp.git. Thank you.
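A detail worth noticing in the log: the command that actually ran was scp -o StrictHostKeyChecking = no, with spaces around the =. scp then sees -o StrictHostKeyChecking as an option keyword with no value, fails with "command-line line 0: missing argument.", and "lost connection" follows from that; no SSH connection was ever attempted, so the key and Tomcat may well be fine. Whatever produced the step (it may differ from the Jenkinsfile shown) needs the option written without spaces:

steps {
  sshagent(['tomcat']) {
    // the whole option must be one token: no spaces around '='
    sh 'scp -o StrictHostKeyChecking=no target/*.war ubuntu@35.239.69.247:/home/nat/prod/apache-tomcat-9.0.41/webapps/webapp.war'
  }
}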
-
Custom template resolver not working when trying to use Thymeleaf with React JS
I'm trying to set up a Spring Boot + React JS application and want both to run inside the same WAR archive. Thanks to several tutorials on the internet, I found that the best way seems to be to add Thymeleaf as a dependency and then create a custom template resolver which points to the React JS build directory. Somehow this is not working.
The error I get:
org.thymeleaf.exceptions.TemplateInputException: Error resolving template [index], template might not exist or might not be accessible by any of the configured Template Resolvers
My web config with the custom resolvers:
@Controller
@EnableWebMvc
@ComponentScan
public class SpringWebConfig implements ApplicationContextAware, WebMvcConfigurer {

    private ApplicationContext applicationContext;

    @Autowired
    private SpringResourceTemplateResolver templateResolver;

    @Autowired
    private SpringTemplateEngine templateEngine;

    public SpringWebConfig() {
        super();
    }

    @Override
    public void setApplicationContext(ApplicationContext applicationContext) throws BeansException {
        this.applicationContext = applicationContext;
    }

    @Override
    public void addCorsMappings(CorsRegistry registry) {
        registry.addMapping("/api/**")
                .allowedOrigins("*")
                .allowCredentials(false).maxAge(3600);
    }

    @Override
    public void addResourceHandlers(ResourceHandlerRegistry registry) {
        registry.addResourceHandler("/static/**", "/favicon.ico", "manifest.json")
                .addResourceLocations("classpath:/public/static/",
                                      "classpath:/public/favicon.ico",
                                      "classpath:/public/manifest.json");
    }

    @Bean
    public SpringResourceTemplateResolver templateResolver() {
        SpringResourceTemplateResolver templateResolver = new SpringResourceTemplateResolver();
        templateResolver.setApplicationContext(this.applicationContext);
        templateResolver.setPrefix("/WEB-INF/classes/public/");
        templateResolver.setSuffix(".html");
        templateResolver.setTemplateMode(TemplateMode.HTML);
        templateResolver.setCacheable(true);
        return templateResolver;
    }

    @Bean
    public SpringTemplateEngine templateEngine() {
        SpringTemplateEngine templateEngine = new SpringTemplateEngine();
        templateEngine.setTemplateResolver(templateResolver);
        templateEngine.setEnableSpringELCompiler(true);
        templateEngine.addDialect(new Java8TimeDialect());
        return templateEngine;
    }

    @Bean
    public ThymeleafViewResolver viewResolver() {
        ThymeleafViewResolver viewResolver = new ThymeleafViewResolver();
        viewResolver.setTemplateEngine(templateEngine);
        return viewResolver;
    }
}
The frontend controller:
@Controller
public class FrontendController {

    @GetMapping("**")
    public String index() {
        return "index";
    }
}
The structure in the Tomcat webapp folder (I've added the WAR as ROOT):
What am I doing wrong? Or is there a better way to get Spring Boot and React JS running inside the same WAR file? I only took the Thymeleaf option because of the many posts about it on the internet.
-
(nodejs, ec2) yarn package install error on EC2 Ubuntu 16.04
I installed Node using nvm, and running yarn install on EC2 does not proceed past the message below. Even if I wait a long time, the completion message never appears. What's the problem?
[7/10] ⠈ iconv: CXX(target) Release/obj.target/iconv/src/binding.o
[6/10] ⠈ phantomjs-prebuilt: Copying extracted folder /tmp/phantomjs/ph
[-/10] ⠈ waiting...
[9/10] ⠁ java: CXX(target) Release/obj.target/nodejavabridge_bindings/s
[-/10] ⠁ waiting...
node version: 10.16.3
java version: 1.8.0_201
linux version: 16.04
-
Unable to connect to remote MySQL from a Node.js app on Amazon EC2
I am trying to connect to a remote MySQL database using the following Node source code:
var http = require('http');
var mysql = require('mysql');
const util = require('util');
const express = require('express');

const config = {
    host: "18.223.7.150",
    user: "****",
    password: "****",
    database: "*****"
};

function makeDb(config) {
    const connection = mysql.createConnection(config);
    return {
        query(sql, args) {
            return util.promisify(connection.query)
                .call(connection, sql, args);
        },
        close() {
            return util.promisify(connection.end).call(connection);
        }
    };
}

const db = makeDb(config);

let app = express();
let router = express.Router();

router.post('/sync/:id', async function (req, res) {
    var customerSql = "select * from customers where id=?";
    let customer = await db.query(customerSql, req.params.id);
    res.json({customer});
});
...
It works well on localhost.
It also works well when I use an Amazon RDS database from the EC2 instance.
And when I use PHP with the same database config on the same EC2 instance, it works well too.
But from Node.js it doesn't work.
The remote MySQL database lives on a shared server.
I can connect to that remote database using the mysql client on the EC2 instance.
Very strange. What is wrong? Can you help me?
-
After creating/cloning one of my instances on AWS (EC2), Nginx returns 502 Bad Gateway
The thing is, I just created a new instance in my AWS (EC2) console. At first I tried:
Cloning my project, by creating a new image
Creating/launching an instance directly.
But when I did these two steps, my current instance (the one I cloned) started returning a 502 Bad Gateway error from Nginx. So I decided to revert all my steps above, deleting and terminating those things.
One of my friends said that it's the REVERSE PROXY, but I'm still not sure how or why.
Now my server has stopped running for that project only.
-
Configure nginx reverse proxy to be highly available
I have an EC2 instance configured to act as a reverse proxy in my dev environment. I want to avoid a single point of failure and make it highly available in production.
This is the current config:
upstream backend {
    server 10.x.x.x:9091;
}

server {
    listen 7001;
    server_name 172.28.25.29;

    location / {
        include proxy_params;
        proxy_pass http://backend;
    }
}
Is it possible to create another instance and add it to the servers section, or do I have to use an application load balancer?
-
MySQL Router (InnoDB Cluster) is not balancing reads
I have set up an InnoDB Cluster with 3 nodes in single-primary mode and placed multiple MySQL Routers (one per app server) in front of my cluster to act as gateway/load balancer. While general SQL operations go smoothly, I've noticed the load is not balanced at all, despite using round-robin (RW) and round-robin-with-fallback (R) as my routing strategies.
My test consists of a simple Node app with a MySQL connection pool of 50 that executes a SELECT in a loop, over and over. I get results like these:
- sql1: 1-60 queries/s
- sql2 (primary): 1K queries/s
- sql3: 1-50 queries/s
So it seems all of my nodes receive at least some of the queries, but the primary receives nearly all of the load. Am I doing something wrong here? Shouldn't I get more balanced reads with round-robin?
EDIT: after running show processlist; on all nodes, it seems only the primary is actually handling real queries. The other nodes are only processing queries from the MySQL Router user...
More details:
The Router is 8.0.21 (not using .22 due to a major bug) and the Server is 8.0.22.
All the routers are automatically and successfully bootstrapped at launch:
mysqlrouter --bootstrap "bootstrap_user@bootstrap_host:3306" --name "router1" --report-host "router1" --account "router_user" --account-create never --user=mysqlrouter --directory /tmp/mysqlrouter --force --conf-use-gr-notifications
Here's my router config. /tmp is used because the router runs inside a container that might be stopped and re-created at any time; when that happens it gets bootstrapped all over again.
# File automatically generated during MySQL Router bootstrap
# Some values such as the routing_strategy are edited after the automatic bootstrap

[DEFAULT]
name=router1
user=mysqlrouter
logging_folder=
runtime_folder=/tmp/mysqlrouter/run
data_folder=/tmp/mysqlrouter/data
keyring_path=/tmp/mysqlrouter/data/keyring
master_key_path=/tmp/mysqlrouter/mysqlrouter.key
connect_timeout=15
read_timeout=30
dynamic_state=/tmp/mysqlrouter/data/state.json

[logger]
level = INFO

[metadata_cache:my_cluster]
cluster_type=gr
router_id=1
user=cluster_router
metadata_cluster=my_cluster
ttl=0.5
auth_cache_ttl=-1
auth_cache_refresh_interval=2
use_gr_notifications=1

[routing:my_cluster_rw]
bind_address=0.0.0.0
bind_port=3306
destinations=metadata-cache://my_cluster/?role=PRIMARY
routing_strategy=round-robin
protocol=classic

[routing:my_cluster_ro]
bind_address=0.0.0.0
bind_port=3307
destinations=metadata-cache://my_cluster/?role=SECONDARY
routing_strategy=round-robin-with-fallback
protocol=classic
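One thing stands out in this config, offered as a hypothesis rather than a diagnosis: the RW route (PRIMARY only) listens on port 3306, while the round-robin-with-fallback RO route (SECONDARY nodes) listens on 3307. A connection pool pointed at the default port 3306 therefore sends every statement, SELECTs included, to the primary, which matches the numbers above; the router never inspects queries to split reads from writes, so the application has to open read connections against the RO port itself. A minimal check, assuming the mysql-connector-python package and a hypothetical router host:

import mysql.connector  # pip install mysql-connector-python

# Port 3306 -> RW route -> always the PRIMARY.
rw = mysql.connector.connect(host="router1", port=3306,
                             user="app", password="...")

# Port 3307 -> RO route -> round-robins across SECONDARY nodes.
ro = mysql.connector.connect(host="router1", port=3307,
                             user="app", password="...")

cur = ro.cursor()
cur.execute("SELECT @@hostname")   # should vary across new RO connections
print(cur.fetchone())

Note that round-robin happens per connection, not per query: a pool of 50 long-lived connections will stick to whichever secondaries they landed on when they were opened.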
-
Kubeadm join: Fails while creating HA cluster with multiple master nodes
I have 5 VMs in my GCP project, of which three are supposed to be master1, master2, and master3, and the other two are worker nodes (worker1 & worker2). I have created a TCP load balancer (LB) to enable load balancing for the master nodes. The LB has two sections: i) frontend, ii) backend. In the backend I have defined all the master IPs, and in the frontend I generated a static public IP and set 6443 as the LB port.
On master1, I successfully ran the kubeadm init command as follows:
kubeadm init --control-plane-endpoint="<LB_IP>:6443" --apiserver-advertise-address=10.128.0.2 --pod-network-cidr=10.244.0.0/16
where 10.128.0.2 is master1's internal IP and 10.244.0.0/16 is the network CIDR for kube-flannel.
kubeadm init runs successfully and prints two kubeadm join commands: one to join a new control plane and the other to join a new worker node.
You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

  kubeadm join LB_IP:6443 --token znnlha.6Gfn1vlkunwpz36b \
    --discovery-token-ca-cert-hash sha256:dc8834a2a5b4ada38a1ab9831e4cae67e9d64cb585458a194018f3ba5a82ac4U \
    --control-plane

Then you can join any number of worker nodes by running the following on each as root:

  kubeadm join LB_IP:6443 --token znnlha.6sfn1vlkunwpz36A \
    --discovery-token-ca-cert-hash sha256:dc8834a2a5b4ada38a1ab9831e4cae68e9d64cb585458a194018f3ba5a82ac4e
I am not using --upload-certs to transfer the certificates from one control plane to another; I am doing that manually. But when I run the above kubeadm join command to add a new control plane on one of my other master nodes, say master2, I get an error like the following:
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
error execution phase preflight: unable to fetch the kubeadm-config ConfigMap: failed to get config map: Get "https://LB_IP:6443/api/v1/namespaces/kube-system/configmaps/kubeadm-config?timeout=10s": dial tcp LB_IP:6443: connect: connection refused