Can two load balancers have the same SSL endpoint and certificate?
One of my applications is running behind a load balancer on a server in the east region. I have created a replica of the same application and deployed it on a server in the west region.
My question is: can I achieve high availability using two load balancers? Something like:
- The application running in the EAST region behind load balancer LB-1 (primary).
- If we shut down the above, the application running in the WEST region behind LB-2 should become active.
My thoughts:
- Replication of code on deployment: write a Jenkins script that triggers a deploy of the app to the WEST region whenever a deployment is done to the EAST region.
- Checking the health of the primary server/application: write a cron job that checks whether the server in the EAST region is down (a rough sketch follows this list).
- If it is down, then:
  a. Using the load balancer PATCH API, remove the mapping of the load balancer in the EAST region.
  b. Using the load balancer PATCH API, update the mapping of the load balancer in the WEST region to match the previous EAST region mappings.
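For illustration, here is a minimal Python sketch of the cron-driven check and failover described above. The health URL, the load balancer API endpoints, and the mapping payload shape are all hypothetical placeholders; the real PATCH API of your load balancer provider will differ.

# A minimal sketch of the cron-driven failover described above. The health
# URL, load balancer API endpoints and mapping payload shape are hypothetical
# placeholders; substitute the real PATCH API of your load balancer provider.
import requests

EAST_HEALTH_URL = "https://app-east.example.com/health"  # hypothetical
LB_API = "https://api.example.com/loadbalancers"         # hypothetical
TOKEN = "change-me"                                      # e.g. read from a secrets store
EAST_MAPPINGS = [{"input_uri": "/", "app": "my-app"}]    # hypothetical mapping shape

def east_is_down():
    """True when the primary (EAST) application fails its health check."""
    try:
        return requests.get(EAST_HEALTH_URL, timeout=5).status_code != 200
    except requests.RequestException:
        return True

def failover_to_west():
    headers = {"Authorization": f"Bearer {TOKEN}"}
    # a. Remove the mapping of the load balancer in the EAST region.
    requests.patch(f"{LB_API}/lb-1", json={"mappings": []},
                   headers=headers, timeout=10).raise_for_status()
    # b. Update the WEST load balancer to match the previous EAST mappings.
    requests.patch(f"{LB_API}/lb-2", json={"mappings": EAST_MAPPINGS},
                   headers=headers, timeout=10).raise_for_status()

if __name__ == "__main__":
    if east_is_down():
        failover_to_west()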
Are these feasible?
1 answer
-
answered 2020-08-22 22:45
aled
Note that each Dedicated Load Balancer has a unique DNS host name in CloudHub, and the certificate subject's common name attribute must match the host name to avoid SSL/TLS validation errors in the clients.
If you intend to fail over transparently for the clients, meaning that subsequent requests go through LB-2, then you should have a DNS CNAME record that resolves to LB-1, and on failover you repoint it to LB-2. If you don't have a DNS CNAME record to point to the other dedicated load balancer, then you need to change the clients' URL to point to LB-2, and you need to be sure that the certificate has a Subject Alternative Name with the LB-2 host name, so it is valid for both.
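As a quick sanity check of which host names a deployed certificate actually covers (for example, that a Subject Alternative Name for the LB-2 host is present), here is a minimal sketch using only the Python standard library; "lb-1.example.com" is a placeholder host name.

# A minimal sketch using only the standard library to list which DNS names
# a live certificate covers. "lb-1.example.com" is a placeholder.
import socket
import ssl

def san_dns_names(host, port=443):
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    # subjectAltName is a tuple of (type, value) pairs, e.g. ("DNS", "lb-2...")
    return [value for kind, value in cert.get("subjectAltName", ()) if kind == "DNS"]

if __name__ == "__main__":
    print(san_dns_names("lb-1.example.com"))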
See also questions close to this topic
-
Azure DevOps deployment fails with the error message "The operation was canceled."
My Azure DevOps pipeline tasks complete without issues except for the final deployment step:
Job Issues - 1 Error The job running on agent XXXX ran longer than the maximum time of 00:05:00 minutes. For more information, see https://go.microsoft.com/fwlink/?linkid=2077134
The build logs state the operation was canceled:
2021-03-02T20:50:00.4223027Z Folders: 695
2021-03-02T20:50:00.4223319Z Files: 10645
2021-03-02T20:50:00.4223589Z Size: 672611102
2021-03-02T20:50:00.4223851Z Compressed: 249144045
2021-03-02T20:50:03.6023001Z ##[warning]Unable to apply transformation for the given package. Verify the following.
2021-03-02T20:50:03.6032907Z ##[warning]1. Whether the Transformation is already applied for the MSBuild generated package during build. If yes, remove the <DependentUpon> tag for each config in the csproj file and rebuild.
2021-03-02T20:50:03.6034584Z ##[warning]2. Ensure that the config file and transformation files are present in the same folder inside the package.
2021-03-02T20:50:04.5268038Z Initiated variable substitution in config file : C:\azagent\A2\_work\_temp\temp_web_package_3012195912183888\Areas\Admin\sitemap.config
2021-03-02T20:50:04.5552027Z Skipped Updating file: C:\azagent\A2\_work\_temp\temp_web_package_3012195912183888\Areas\Admin\sitemap.config
2021-03-02T20:50:04.5553082Z Initiated variable substitution in config file : C:\azagent\A2\_work\_temp\temp_web_package_3012195912183888\web.config
2021-03-02T20:50:04.5642868Z Skipped Updating file: C:\azagent\A2\_work\_temp\temp_web_package_3012195912183888\web.config
2021-03-02T20:50:04.5643366Z XML variable substitution applied successfully.
2021-03-02T20:51:00.8934630Z ##[error]The operation was canceled.
2021-03-02T20:51:00.8938641Z ##[section]Finishing: Deploy IIS Website/App:
When I examine the deployment stages, I notice one of my tasks takes quite a while for what should be a fairly simple operation:
The file transform portion takes over half of the allotted 5 minutes. Could this be the issue?
steps:
- task: FileTransform@1
  displayName: 'File Transform: '
  inputs:
    folderPath: '$(System.DefaultWorkingDirectory)/_site.com/drop/Release/Nop.Web.zip'
    fileType: json
    targetFiles: '**/dataSettings.json'
It may be inefficient, but the FileTransform log shows a significant amount of time spent after the variable has been substituted. I'm not sure what's causing the long delay; the logs don't account for the time between the successful substitution and the end of the task:
2021-03-02T23:04:44.3796910Z Folders: 695
2021-03-02T23:04:44.3797285Z Files: 10645
2021-03-02T23:04:44.3797619Z Size: 672611002
2021-03-02T23:04:44.3797916Z Compressed: 249143976
2021-03-02T23:04:44.3970596Z Applying JSON variable substitution for **/App_Data/dataSettings.json
2021-03-02T23:04:45.2396016Z Applying JSON variable substitution for C:\azagent\A2\_work\_temp\temp_web_package_0182869515217865\App_Data\dataSettings.json
2021-03-02T23:04:45.2399264Z Substituting value on key DataConnectionString with (string) value: ***
2021-03-02T23:04:45.2446986Z JSON variable substitution applied successfully.
2021-03-02T23:07:25.4881687Z ##[section]Finishing: File Transform:
-
Is it possible to deploy NodeJS app without SSH access?
According to that answer, hosting that allows deploying NodeJS apps
will generally give you (at a minimum) shell access (via SSH) which you can use to run the Node.JS application
But is it possible to deploy a NodeJS application without SSH access? On my hosting plan I have only FTP access, and I was wondering whether I can do that or should change hosting provider.
-
Error While Deploying Django with Jenkins
I have created a new job in Jenkins. After the build, I am getting an error in the console output.
Using base prefix '/usr'
New python executable in /var/www/project/venv/bin/python3
Not overwriting existing python script /var/www/project/venv/bin/python (you must use /var/www/project/venv/bin/python3)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/virtualenv.py", line 2327, in <module>
    main()
  File "/usr/lib/python2.7/site-packages/virtualenv.py", line 712, in main
    symlink=options.symlink)
  File "/usr/lib/python2.7/site-packages/virtualenv.py", line 926, in create_environment
    install_distutils(home_dir)
  File "/usr/lib/python2.7/site-packages/virtualenv.py", line 1482, in install_distutils
    mkdir(distutils_path)
  File "/usr/lib/python2.7/site-packages/virtualenv.py", line 323, in mkdir
    os.makedirs(path)
  File "/usr/lib64/python3.6/os.py", line 220, in makedirs
    mkdir(name, mode)
FileExistsError: [Errno 17] File exists: '/var/www/project/venv/lib64/python3.6/distutils'
Running virtualenv with interpreter /bin/python3
Jenkins Exec Commands:
rm -rf /var/www/project/*.*
touch /var/www/project/debug.log
cp -rf /home/centos/build/* /var/www/project/
cd /var/www/project
virtualenv -p python3 venv
source venv/bin/activate
pip3 install -r requirements.txt
sudo ln -sf /opt/project/.env /var/www/project/mysite/
python3 mysite/manage.py collectstatic --noinput
python3 mysite/manage.py makemigrations --noinput
python3 mysite/manage.py migrate --noinput
sudo chown -R centos.centos /var/www/project/*
sudo /usr/sbin/service httpd restart
-
Error: The required anti-forgery form field "__RequestVerificationToken" is not present, while doing concurrent submit
I am facing the error 'The required anti-forgery form field "__RequestVerificationToken" is not present' when concurrent users submit the form at the same time. There are 3 servers on Google Cloud Platform which host the website through a load balancer. The "machineKey" in the config is the same on all the servers.
Could you please help?
-
nginx load balancing: wait (but don't time out) until load falls
I have an ML inference server that is able to process about 100 requests per second and if it goes higher it breaks and times out.
Now the actual load sometimes jumps to about 200 requests per second. The clients are fine with waiting for ~2 seconds for the response but are not fine with requests timing out.
I'm thinking of putting up a reverse proxy that can somehow stall the requests when the load is high, but keep them alive and then forward them to the inference server when the load drops. I'm not even sure it's possible and I hope I made my problem clear.
Any advice or suggestions on how to solve this?
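For what it's worth, any reverse proxy that caps in-flight upstream requests and queues the excess (rather than rejecting it) gives this behaviour; nginx can approximate it with limit_req and a burst queue. Below is a minimal illustrative sketch of the idea as an asyncio proxy in Python; the inference URL, the listen port, and the 100-request cap are assumptions, not a production recommendation.

# A minimal illustrative sketch (Python asyncio + aiohttp), not nginx config:
# cap concurrent upstream requests with a semaphore and queue the excess
# instead of rejecting it. INFERENCE_URL, the port and the cap are assumptions.
import asyncio
from aiohttp import ClientSession, web

MAX_INFLIGHT = 100                       # roughly the inference server's safe capacity
INFERENCE_URL = "http://127.0.0.1:9000"  # hypothetical inference server

async def handle(request: web.Request) -> web.Response:
    body = await request.read()
    async with request.app["sem"]:       # excess requests wait here, still alive
        async with request.app["session"].request(
            request.method, INFERENCE_URL + request.path_qs, data=body
        ) as upstream:
            return web.Response(status=upstream.status, body=await upstream.read())

async def on_startup(app: web.Application) -> None:
    app["session"] = ClientSession()
    app["sem"] = asyncio.Semaphore(MAX_INFLIGHT)

async def on_cleanup(app: web.Application) -> None:
    await app["session"].close()

app = web.Application()
app.on_startup.append(on_startup)
app.on_cleanup.append(on_cleanup)
app.router.add_route("*", "/{tail:.*}", handle)

if __name__ == "__main__":
    web.run_app(app, port=8080)

Note that queued clients still hold open connections, so this only works while the backlog stays small enough to drain within the ~2 seconds the clients will wait.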
-
HTTPS load balancer redirect from https:443 to http:8080
I am having a little trouble here. I have a Tomcat server in my instance group, accessible through port 8080; for example, I can reach my web page with this link: http://www.pegachucho.tk:8080/java-web-project/ . I successfully made a standard HTTP load balancer to reach my instances, but when I tried to create the HTTPS load balancer I couldn't reach my page; for example, https://www.pegachucho.tk:443 didn't work. I want to access my LB via HTTPS and redirect to port 8080 of my instances. Do you have any ideas? When I try to curl the IP, this message appears:
- Trying 35.190.31.24...
- TCP_NODELAY set
- Connected to 35.190.31.24 (35.190.31.24) port 443 (#0)
- schannel: SSL/TLS connection with 35.190.31.24 port 443 (step 1/3)
- schannel: checking server certificate revocation
- schannel: using IP address, SNI is not supported by OS.
- schannel: sending initial handshake data: sending 156 bytes...
- schannel: sent initial handshake data: sent 156 bytes
- schannel: SSL/TLS connection with 35.190.31.24 port 443 (step 2/3)
- schannel: failed to receive handshake, need more data
- schannel: SSL/TLS connection with 35.190.31.24 port 443 (step 2/3)
- schannel: encrypted data got 7
- schannel: encrypted data buffer: offset 7 length 4096
- schannel: next InitializeSecurityContext failed: SEC_E_ILLEGAL_MESSAGE (0x80090326) - This error usually occurs when a fatal SSL/TLS alert is received (e.g. handshake failed). More detail may be available in the Windows System event log.
- Closing connection 0
- schannel: shutting down SSL/TLS connection with 35.190.31.24 port 443
- schannel: clear security context handle
curl: (35) schannel: next InitializeSecurityContext failed: SEC_E_ILLEGAL_MESSAGE (0x80090326) - This error usually occurs when a fatal SSL/TLS alert is received (e.g. handshake failed). More detail may be available in the Windows System event log.
-
How to have highly available Moodle in Kubernetes?
I want to set up highly available Moodle in K8s (on-prem). I'm using Bitnami Moodle with Helm charts.
After a successful Moodle installation it works. But when a K8s node goes down, the Moodle web page redirects back to the Moodle installation page, in a loop.
Persistent storage is rook-ceph. The Moodle PVC is ReadWriteMany, while the MySQL PVC is ReadWriteOnce.
The following command was used to deploy Moodle.
helm install moodle --set global.storageClass=rook-cephfs,replicaCount=3,persistence.accessMode=ReadWriteMany,allowEmptyPassword=false,moodlePassword=Moodle123,mariadb.architecture=replication bitnami/moodle
Any help on this is appreciated.
Thanks.
-
High-Availability not working in Hadoop cluster
I am trying to move my non-HA namenode to HA. After setting up all the configurations for the JournalNodes by following the Apache Hadoop documentation, I was able to bring the namenodes up. However, the namenodes crash immediately, throwing the following error.
ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: Failed to start namenode. java.io.IOException: There appears to be a gap in the edit log. We expected txid 43891997, but got txid 45321534.
I tried to recover the edit logs, initialize the shared edits, etc., but nothing works. I am not sure how to fix this problem without formatting the namenode, since I do not want to lose any data.
Any help is greatly appreciated. Thanks in advance.
-
Apache Kafka Consume from Slave/ISR node
I understand the concept of master/slave and data replication in Kafka, but I don't understand why consumers and producers are always routed to the master node of a partition when writing/reading, instead of being able to read from any ISR (in-sync replica)/slave.
The way I think about it, if all consumers are directed to one single master node, then more hardware is required to handle the read/write operations of large consumer groups/producers.
Is it possible to read and write on slave nodes, or will consumers/producers always reach out to the master node of that partition?
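For what it's worth, since Apache Kafka 2.4 (KIP-392, "fetch from follower"), consumers can read from the nearest in-sync replica when the brokers configure a rack-aware replica selector and the consumer sets client.rack; producers must still write to the partition leader. A minimal sketch with the confluent-kafka Python client, where the broker address, group id, rack id, and topic are placeholders:

# A minimal sketch, assuming Kafka >= 2.4 with a rack-aware replica selector
# (replica.selector.class) configured on the brokers; the bootstrap address,
# group id, rack id and topic are placeholders.
from confluent_kafka import Consumer

consumer = Consumer({
    "bootstrap.servers": "broker1:9092",
    "group.id": "my-group",
    "client.rack": "rack-a",          # match the broker.rack of the nearby replica
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["my-topic"])

try:
    while True:
        msg = consumer.poll(1.0)      # fetches may now be served by a follower
        if msg is None:
            continue
        if msg.error():
            print(msg.error())
            continue
        print(msg.value())
finally:
    consumer.close()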
-
Anypoint Platform Apps/Schedulers
Is there a way to get details of all the apps in an Anypoint Platform business group? For example, if there are 3 apps available in Runtime Manager, I am looking for details like the ones below (screenshot not included).
I know there are CloudHub APIs which can get the details, but is there a custom API? If yes, can it be integrated with a reporting tool like PowerBI to create a live dashboard?
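As a sketch of the CloudHub API route the question mentions, the applications endpoint can be polled and its JSON fed to a reporting tool. The endpoint path, headers, and response fields below are my assumptions from the public CloudHub API and should be verified against the current Anypoint documentation; the token and environment id are placeholders.

# A sketch against the public CloudHub REST API; endpoint path, headers and
# response fields are assumptions to verify against the Anypoint docs.
import requests

BASE = "https://anypoint.mulesoft.com"

def list_applications(token, env_id):
    resp = requests.get(
        f"{BASE}/cloudhub/api/v2/applications",
        headers={
            "Authorization": f"Bearer {token}",
            "X-ANYPNT-ENV-ID": env_id,  # selects the environment to list
        },
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    for app in list_applications("<access-token>", "<environment-id>"):
        print(app.get("domain"), app.get("status"))

The JSON output could then be pulled into PowerBI on a schedule (for example via a Web data source) to build a live dashboard.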
Please advise.
-
Unable to deploy to cloudhub mule4 application with Gradle
I am trying to build my Mule application and deploy it to CloudHub with Gradle (the Gradle version I'm using is 4.10.2). For this I have added a build.gradle file to my project (contents not included).
When I run the Gradle build command, it succeeds.
But when I execute the Gradle deploy command (
gradle deploy --info
), it fails with an error (output not included). Any help to resolve this issue would be appreciated.
-
Unable to call external API from VPN/VPC in Mulesoft
I am trying to call an external API using an HTTP request from my Mule application. When the Mule application is deployed locally, I am able to access the external API. However, when I deploy the application to CloudHub with a VPN/VPC, the call to the external API fails with a timeout error.
As per my understanding, since I am able to access the external API from my local machine, the API's port is open and there is no restriction on its side. I could not figure out why it is not working from the VPN/VPC.
I also checked the firewall rules in CloudHub and nothing seems to be wrong.
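One way to narrow this down is a small reachability probe run from a machine inside the VPC (or the equivalent DNS-then-connect logic in a test flow), to separate DNS failures from blocked TCP connections. A minimal sketch using only the Python standard library; the host and port are placeholders for the external API.

# A small reachability probe (standard library only) to run from inside the
# VPC; the host and port are placeholders for the external API.
import socket

def can_reach(host, port, timeout=5.0):
    try:
        addr = socket.gethostbyname(host)        # DNS resolution inside the VPC
    except socket.gaierror as exc:
        return f"DNS failed: {exc}"
    try:
        with socket.create_connection((addr, port), timeout=timeout):
            return f"TCP connect to {addr}:{port} OK"
    except OSError as exc:
        return f"TCP connect failed: {exc}"      # typical symptom of a route/firewall block

if __name__ == "__main__":
    print(can_reach("api.example.com", 443))

A timeout here, with DNS succeeding, usually points at VPC routing or security rules rather than the API itself.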
Can someone help me with this?
Thanks in advance.