How can we map a common IP to all worker IPs in Docker Swarm?
As we know, in Docker Swarm we create multiple workers and one manager. The container runs on multiple workers, so we can access it in the browser by entering a worker node's IP and port (e.g. ip:80), and we can reach another worker node by entering its IP and port. But what if I want to use one common IP to reach the container, so that if any one of the nodes goes down my site does not go down and requests are served by another running worker?
worker1: 192.168.99.100:80
worker2: 192.168.99.100:80
worker3: 192.168.99.100:80
I want one common IP, so that if any one node goes down the site does not go down.
1 answer
-
answered 2022-05-04 17:58
matic1123
You basically have two ways of doing this:
- You can put an HTTP proxy in front of the Docker Swarm (Traefik, Nginx, Caddy, ...). The proxy health-checks the nodes, and if any node goes down it removes that node's IP from rotation until it comes back up.
- You can use keepalived; with this approach you point the domain to a virtual VRRP IP, which then "floats" between the nodes.
I know a very good ops person whose company uses keepalived without any issues or complications. In our company we decided to go with the proxy because we also route other "traffic" over it to different systems (legacy, ...), and since we have VMware's top licence with Veeam we can handle real-time replication (in case a VM goes down and such) with that.
So both methods are proven and tested :)
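For illustration, a minimal keepalived sketch of the VRRP approach; the interface name, priorities, and the virtual IP 192.168.99.200 are assumptions, not values from the question. Since Swarm's routing mesh publishes the service port on every node, clients only ever need this one floating IP.

# /etc/keepalived/keepalived.conf on one node (state BACKUP and a lower priority on the others)
vrrp_instance SWARM_VIP {
    state MASTER
    interface eth0                # assumed NIC name
    virtual_router_id 51
    priority 100                  # e.g. 90 / 80 on the backup nodes
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass swarmvip
    }
    virtual_ipaddress {
        192.168.99.200            # the single "common" IP clients use
    }
}

If the node holding the virtual IP fails, VRRP moves the address to the next-highest-priority node, so 192.168.99.200:80 keeps answering without clients noticing.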
See also questions close to this topic
-
best way to add a spring boot web application with JSPs to a docker container?
So I have a Spring Boot web application that uses JSPs, and I'm supposed to put it in a container.
My question is: what is the best way? I've tried copying the whole project into the image and running it there with the Maven wrapper, like so:

Dockerfile:
FROM openjdk:8-jdk-alpine
ADD . /input-webapp
WORKDIR /input-webapp
EXPOSE 8080:8080
ENTRYPOINT ./mvnw spring-boot:run
This works, but it takes a long time downloading the dependencies and feels messy.
I've also tried packaging it into a jar, copying only the jar, and running it:

Dockerfile:
FROM openjdk:8-jdk-alpine
ADD target/input-webapp-0.0.1-SNAPSHOT.jar input-webapp-0.0.1-SNAPSHOT.jar
EXPOSE 8080:8080
ENTRYPOINT ["java","-jar","input-webapp-0.0.1-SNAPSHOT.jar"]
But this way it can't see the JSPs (or at least I think that is the problem), since I get a 404.
So is there a better way? Can I copy the JSPs alongside the jar to make it work? Thanks.
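For what it's worth, here is a sketch of a multi-stage build that keeps dependency downloads in a cached layer while still shipping only the packaged artifact; the builder image tag is an assumption. Note also that Spring Boot documents known limitations for JSPs in executable jars, so the 404 may require switching to war packaging rather than changing the Dockerfile.

FROM maven:3.8-openjdk-8 AS build        # assumed builder image tag
WORKDIR /workspace
COPY pom.xml .
RUN mvn -q dependency:go-offline         # cached while pom.xml is unchanged
COPY src src
RUN mvn -q package -DskipTests

FROM openjdk:8-jdk-alpine
COPY --from=build /workspace/target/input-webapp-0.0.1-SNAPSHOT.jar app.jar
EXPOSE 8080
ENTRYPOINT ["java","-jar","app.jar"]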
-
build spring boot (mvnw) with docker can not use cache
From "Spring Boot Docker", Experimental Features:
Docker 18.06 comes with some "experimental" features, including a way to cache build dependencies. To switch them on, you need a flag in the daemon (dockerd) and an environment variable when you run the client. With the experimental features, you get different output on the console, but you can see that a Maven build now only takes a few seconds instead of minutes, provided the cache is warm.
My Dockerfile does not use the cache.
Dockerfile:

# syntax=docker/dockerfile:experimental
FROM openjdk:8-jdk-alpine as build
WORKDIR /workspace/app
COPY mvnw .
COPY .mvn .mvn
COPY pom.xml .
COPY src src
RUN --mount=type=cache,target=/root/.m2 ./mvnw install -DskipTests -s .mvn/wrapper/settings.xml
RUN mkdir -p target/extracted && java -Djarmode=layertools -jar target/*.jar extract --destination target/extracted

FROM openjdk:8-jre-alpine
ENV TZ Asia/Shanghai
RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone
RUN addgroup -S spring && adduser -S spring -G spring
USER spring:spring
ARG EXTRACTED=/workspace/app/target/extracted
ARG JAVA_OPTS="-Xmx100m -Xms100m"
COPY --from=build ${EXTRACTED}/dependencies/ ./
COPY --from=build ${EXTRACTED}/spring-boot-loader/ ./
COPY --from=build ${EXTRACTED}/snapshot-dependencies/ ./
COPY --from=build ${EXTRACTED}/application/ ./
ENTRYPOINT ["sh", "-c","java ${JAVA_OPTS} org.springframework.boot.loader.JarLauncher"]
Build command:
DOCKER_BUILDKIT=1 docker build -t org/spring-boot .
Every build takes many minutes.
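A sketch of one variation sometimes used with this layout (an assumption layered on the Dockerfile above, not a confirmed fix): resolving dependencies before COPY src means the expensive download layer is only rebuilt when pom.xml changes, and the BuildKit cache mount then only has to speed up the final compile.

# build stage fragment: split dependency resolution from compilation (sketch)
COPY mvnw .
COPY .mvn .mvn
COPY pom.xml .
RUN --mount=type=cache,target=/root/.m2 ./mvnw -q dependency:go-offline -s .mvn/wrapper/settings.xml
COPY src src
RUN --mount=type=cache,target=/root/.m2 ./mvnw -q install -DskipTests -s .mvn/wrapper/settings.xml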
-
PyQt5 doesn't work on Docker: ImportError: libsmime3.so: cannot open shared object file: No such file or directory
I have a Dockerfile with PyQt5 installed, like below:
FROM ubuntu:20.04
ENV DEBIAN_FRONTEND=noninteractive
RUN adduser --quiet --disabled-password qtuser && usermod -a -G audio qtuser
RUN apt-get update -y \
    && apt-get install alsa -y \
    && apt-get install -y python3-pyqt5 \
    && apt-get install python3-pip -y && \
    pip3 install pyqtwebengine
WORKDIR /htmltopdf
I built my image like this
docker build -t html-to-pdf .
Then I ran my image like this
docker run --rm -v "$(pwd)":/htmltopdf -u qtuser -it html-to-pdf python3 htmlToPdfnew.py --url https://www.w3schools.com/howto/howto_css_register_form.asp
But I'm getting the error below:
Traceback (most recent call last):
  File "htmlToPdfnew.py", line 2, in <module>
    from PyQt5 import QtWidgets, QtWebEngineWidgets
ImportError: libsmime3.so: cannot open shared object file: No such file or directory
I do NOT get that error on my PC.
Below is my Python code:
import sys
from PyQt5 import QtWidgets, QtWebEngineWidgets
from PyQt5.QtCore import QUrl, QTimer
from PyQt5.QtGui import QPageLayout, QPageSize
from PyQt5.QtWidgets import QApplication
import argparse


def main():
    url = ''
    parser = argparse.ArgumentParser(description="Just an example",
                                     formatter_class=argparse.ArgumentDefaultsHelpFormatter)
    parser.add_argument("--url", help="Type url")
    args = parser.parse_args()
    config = vars(args)
    url = config['url']

    app = QtWidgets.QApplication(sys.argv)
    loader = QtWebEngineWidgets.QWebEngineView()
    loader.setZoomFactor(1)
    layout = QPageLayout()
    layout.setPageSize(QPageSize(QPageSize.A4Extra))
    layout.setOrientation(QPageLayout.Portrait)
    loader.load(QUrl(url))
    loader.page().pdfPrintingFinished.connect(lambda *args: QApplication.exit())

    def emit_pdf(finished):
        QTimer.singleShot(2000, lambda: loader.page().printToPdf("test.pdf", pageLayout=layout))

    loader.loadFinished.connect(emit_pdf)
    sys.exit(app.exec_())


if __name__ == '__main__':
    main()
So how do I resolve this error?
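For what it's worth, libsmime3.so is shipped by the NSS libraries (the libnss3 package on Ubuntu), which QtWebEngine loads at import time; a sketch of the likely fix is adding that package to the image (further missing libraries may still surface afterwards):

RUN apt-get update -y \
    && apt-get install -y alsa libnss3 python3-pyqt5 python3-pip \
    && pip3 install pyqtwebengine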
-
Output the Tuesday 6 weeks in the future in Python?
UPDATE: post edited to add answer to end of post
Core Question
Using Python, how do I output the date of the Tuesday that occurs 6 weeks after a certain date range?
Context
I work at a SaaS company in a customer facing role. Whenever I do an implementation for a client, the client receives a survey email on the Tuesday that occurs in the 6th week after our initial interaction.
To know which Tuesday to be extra nice on, we currently have to reference a chart that says if the interaction falls in date range x, then the client receives their survey solicitation on Tuesday y.
An example of this would be: if the interaction happened sometime within Apr. 18 - Apr. 22, then the survey goes out on May 31.
I would prefer for this to be done without having to hard code the date ranges and their corresponding Tuesdays into my program (just because I'm lazy and don't want to update the dates manually as the months go by), but I'm open to that solution if that's how it has to be. :)
Code Attempt
I can use datetime to output a particular date x weeks from today's date, but I'm not sure how to get from here to what I want to do.
import time
from datetime import datetime, timedelta

time1 = (time.strftime("%m/%d/%Y"))  # current date
time2 = ((datetime.now() + timedelta(weeks=6)).strftime('%m/%d/%Y'))  # current date + six weeks
print(time1)
print((datetime.now() + timedelta(weeks=6)).strftime('%m/%d/%Y'))
Disclaimer: I am a beginner and although I did search for an answer to this question before posting, I may not have known the right terms to use. If this is a duplicate question, I would be thrilled to be pointed in the right direction. :)
~~~UPDATED ANSWER~~~
Thanks to @Mandias for getting me on the right track. I was able to use week numbers to achieve my desired result.
from datetime import datetime, timedelta, date

today = date.today()  # get today's date
todays_week = today.isocalendar()[1]  # get the current week number based on today's date
survey_week = todays_week + 6  # add 6 weeks to the current week number
todays_year = int(today.strftime("%Y"))  # get today's calendar year and turn it from a str to an int
survey_week_tuesday = date.fromisocalendar(todays_year, survey_week, 2)  # (year, week, day of week) 2 is for Tuesday

print("Current Week Number:")
print(todays_week)
print("Current Week Number + 6 Weeks:")
print(todays_week + 6)
print("Today's Year:")
print(todays_year)
print("The Tuesday on the 6th week from the current week (i.e. survey tuesday):")
print(survey_week_tuesday.strftime('%m-%d-%Y'))  # using strftime to format the survey date into MM-DD-YYYY format because that's what we use here even though DD-MM-YYYY makes more sense
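One caveat worth noting (an observation about the code above, not part of the original answer): todays_week + 6 can exceed the number of ISO weeks in the year near December, in which case date.fromisocalendar raises ValueError. A sketch that sidesteps this by working with dates directly:

from datetime import date, timedelta

today = date.today()
target = today + timedelta(weeks=6)                              # same weekday, six weeks out
survey_tuesday = target - timedelta(days=target.weekday() - 1)   # Tuesday of that ISO week (Monday=0)
print(survey_tuesday.strftime('%m-%d-%Y'))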
-
insert bulk documents into mongo db
I need to insert multiple docs into MongoDB at once. I cannot directly import a CSV file or use insertMany, since there are nested objects inside each document. For the outer object, one key's value changes every time while the rest stay the same, and I need to generate random values for two of the other keys. For the inner object, the values change every time. This seems complicated to me, and if anyone could help me break down the problem statement and automate it to avoid the tedious manual work, it'd be very helpful. I'm using Studio 3T and Node.js to code.
{ "_id" : ObjectId("626f6f7b4199350845746a54"), "isApproved" : false, "msgStatus" : false, "name" : "IN", "createdBy" : "BAA0704", "customerId" : "HH00012", "villageId" : "1848", "ans" : { "responseID" : "5f440bc3-c76c-411a-b1e4-6a25a5f2aba3", "submittedTime" : "31-03-2022 16:45", "syncedTime" : "31-03-2022 16:45", "formRevisionSubmittedIn" : "2", "tags" : "NA", "timeSpent" : "0:16:12", "name" : "shruthi", "villagePopulation" : "10000", "age" : "28", "bankAccount" : "yes", "familyMembers" : "7", "maritalStatusYes" : "Yes", "maritalStatusNo" : "No", "kids" : "3", "socialMediaHandles" : "facebook" }, "createdAt" : ISODate("2022-05-02T13:22:19.630+0000"), "updatedAt" : ISODate("2022-05-02T13:22:19.630+0000"), "__v" : NumberInt(4325), "nId" : NumberInt(11)
-
CFG of w != w^R
Given
L={w in {a,b}* | w != w^R}
I want to find its CFG. Please do not tell me the answer for that.
What is the intuition for solving this kind of question?
I tried doing it for about 1 hour, with no luck.
Thanks!
-
How can I import .cer in SAP Commerce Cloud
I am using SAP Commerce Cloud (in the public cloud), and I am trying to import a .cer file so I can make REST calls to an API gateway.
I read about importing it in Java, using the command line to add it to a keystore.
But, I don't know how to do it in the SAP Commerce Cloud
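For reference, the command-line import mentioned above usually looks like the sketch below on a plain JVM (the alias, file name, and keystore path are placeholders); whether SAP Commerce Cloud exposes an equivalent step is exactly what this question is asking.

# placeholders throughout; imports the certificate into a JDK 8 truststore
keytool -importcert -alias api-gateway -file api-gateway.cer \
        -keystore $JAVA_HOME/jre/lib/security/cacerts -storepass changeit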
-
How to incorporate the planetlab workload for load balancing for CloudAnalyst?
I have set up a dynamic load balancing policy. I would like to test it with the PlanetLab workload traces in the CloudAnalyst tool. How can I do this?
-
How would one create different environments in Commercetools?
I am trying to set up Commercetools with a CI/CD pipeline. It is my first time working with microservices and cloud architectures.
With a monolithic code base you can have development, QA and production environments - how would one go about this with Commercetools? Would you set up two or three projects? Should the projects share the same microservices, or would you set up multiple of those as well? If not, then I suppose you would do end-to-end testing in production, and that can't be right?
I am not interested in how to setup microservices, I am interested in how to set up the project that performs changes to the Commercetools API.
Thanks for any help.
-
(Terraform) Error 400: Invalid request: instance name (pg_instance)., invalid
On GCP, I'm trying to create a Cloud SQL instance with this Terraform code below:
resource "google_sql_database_instance" "postgres" { name = "pg_instance" database_version = "POSTGRES_13" region = "asia-northeast1" deletion_protection = false settings { tier = "db-f1-micro" disk_size = 10 } } resource "google_sql_user" "users" { name = "postgres" instance = google_sql_database_instance.postgres.name password = "admin" }
But I got this error:
Error: Error, failed to create instance pg_instance: googleapi: Error 400: Invalid request: instance name (pg_instance)., invalid
Are there any mistakes in my Terraform code?
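A likely cause, based on Cloud SQL's naming rules rather than anything stated in the question: instance names may only contain lowercase letters, digits, and hyphens, so the underscore in pg_instance is rejected. A minimal sketch of the change:

resource "google_sql_database_instance" "postgres" {
  name                = "pg-instance" # hyphen instead of underscore
  database_version    = "POSTGRES_13"
  region              = "asia-northeast1"
  deletion_protection = false

  settings {
    tier      = "db-f1-micro"
    disk_size = 10
  }
}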
-
How does Kubernetes and Terraform work seamlessly together and what role do they each undertake?
I am a bit confused about the individual roles of Kubernetes and Terraform when using them both on a project.
Until very recently, I had a very clear understanding of both their purposes and everything made sense to me. But then I heard, in one of Nana's videos on Terraform, that Terraform is also very advanced in orchestration, and I got confused.
Here's my current understanding of both these tools:
Kubernetes: Orchestration software that controls many docker containers working together seamlessly. Kubernetes makes sure that new containers are deployed based on the desired infrastructure defined in configuration files (written with the help of a tool like Terraform, as IaC).
Terraform: Tool for provisioning, configuring, and managing infrastructure as IaC.
So, when we say that Terraform is a good tool for orchestration, do we mean that it's a good tool for orchestrating infrastructure states or docker containers as well?
I hope someone can clear that out for me!
-
Automate Azure Devops (FTP Upload) and Git to upload on Remote Server
The current setup is as below
- Version Control - Git
- Repos and Branch hosted on - Azure DevOps
- Codebase - External server
The dev team clones the Azure repo into a local Git project, and any staged changes are committed via Git and pushed to a specific branch on Azure DevOps. In this setup we want to upload the changes to external FTP servers and avoid manual uploads. I am currently trying to use the Azure DevOps FTP Upload task (https://docs.microsoft.com/en-us/azure/devops/pipelines/tasks/utility/ftp-upload?view=azure-devops), but I am facing issues; the YAML script is below.
trigger:
- main

pool:
  vmImage: 'ubuntu-latest'

variables:
  phpVersion: 7.4
  webAppName: 'Test Project'
  buildConfiguration: 'Release'
  vmImageName: 'ubuntu-latest'

steps:
- publish: $(System.DefaultWorkingDirectory)/AzureRepoName
  artifact: Test Project Deploy
- task: FtpUpload@2
  displayName: 'FTP Upload'
  inputs:
    credentialsOption: inputs
    serverUrl: 'ftps://00.00.00.00:22'
    username: ftp-username
    password: ftp-password
    rootDirectory: '$(System.DefaultWorkingDirectory)/AzureRepoName'
    remoteDirectory: '/home/public_html'
    clean: false
    cleanContents: false
    preservePaths: true
    trustSSL: true
PROBLEM
The following error occurs when I commit something (for test purposes).
Starting: PublishPipelineArtifact
==============================================================================
Task         : Publish Pipeline Artifacts
Description  : Publish (upload) a file or directory as a named artifact for the current run
Version      : 1.199.0
Author       : Microsoft Corporation
Help         : https://docs.microsoft.com/azure/devops/pipelines/tasks/utility/publish-pipeline-artifact
==============================================================================
Artifact name input: Test Project Deploy
##[error]Path does not exist: /home/vsts/work/1/s/AzureRepoName
Finishing: PublishPipelineArtifact
I want any staged change that is committed to the main branch on Azure DevOps to be automatically deployed to the remote FTP server.
Thanks
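One hedged observation: on hosted agents a single repository is checked out directly into $(System.DefaultWorkingDirectory) (which is /home/vsts/work/1/s), so appending the repository name produces a path that only exists if the checkout step uses a custom path. A sketch of the two path inputs without the extra folder:

steps:
- publish: $(System.DefaultWorkingDirectory)
  artifact: Test Project Deploy
- task: FtpUpload@2
  displayName: 'FTP Upload'
  inputs:
    credentialsOption: inputs
    serverUrl: 'ftps://00.00.00.00:22'
    username: ftp-username
    password: ftp-password
    rootDirectory: '$(System.DefaultWorkingDirectory)'
    remoteDirectory: '/home/public_html'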
-
invalid mount config for type "bind": bind source path does not exist: /container/tdarr-server/server
I am trying to use an NFS share so that Docker Swarm has a single endpoint server for the drive. The NFS share itself works, since I can create files on it, but when I use it in a stack I get a bind source path error. The NFS share is mounted at /container on both machines, so each machine finds it at the same location. Here is what I have as volumes in my docker-compose file:
volumes:
  - /container/tdarr-server/server:/app/server
  - /container/tdarr-server/configs:/app/configs
  - /container/tdarr-server/logs:/app/logs
  - /container/plex/media:/media
  - /container/tdarr-server/transcode:/transcode
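For comparison, a sketch of the named-volume alternative often used in swarm stacks, where each node mounts the NFS export itself instead of depending on a host path existing locally; the server address 192.168.1.10 and the image reference are assumptions.

volumes:
  tdarr_server:
    driver: local
    driver_opts:
      type: nfs
      o: "addr=192.168.1.10,rw"
      device: ":/container/tdarr-server/server"

services:
  tdarr:
    image: ghcr.io/haveagitgat/tdarr:latest # assumed image
    volumes:
      - tdarr_server:/app/server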
-
add nodes to docker swarm from different servers
I am new to Docker Swarm. I read the documentation and googled the topic, but the results were vague. Is it possible to add worker or manager nodes from distinct and separate virtual private servers?
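For context, a sketch of the standard join flow (the addresses are placeholders). The practical hurdle across separate VPS providers is networking: the nodes must reach each other on 2377/tcp (cluster management), 7946/tcp+udp (node discovery), and 4789/udp (overlay traffic).

# on the manager
docker swarm init --advertise-addr <manager-public-ip>
docker swarm join-token worker        # prints the exact join command and token

# on each remote VPS
docker swarm join --token <token> <manager-public-ip>:2377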
-
In Docker Swarm mode, is there a way to restart a Docker service, just before it reaches a specific threshold of cpu or memory?
The story goes like this!
- I have used Docker Swarm mode as an orchestration tool!
- I have deployed my live_ofbiz service in swarm mode!
- The live_ofbiz service contains bloated containers, with an image of size 1.25 GB.
- Somewhere or other, the container is leaking memory.
- I have limited the memory usage of the service to 6 GB (docker stats).
- With about 150 daily users on the application, the container is bound to die after 10-12 days, leaving a downtime of 2 minutes when the 6 GB memory limit is reached!
So my question is: is there any way to set a threshold limit of 6 GB such that, in the meantime, a new container launches itself and replaces live_ofbiz's running container (just like docker service update --image ofbiz:$SAME_OLD_IMAGE_VERSION live_ofbiz, but in an automated fashion)?
A possible solution could be a cron job that detects the memory limit being reached and triggers the update command via shell, but I would refrain from using a cron job due to some restrictions!
I'd like to know if Docker Swarm mode / Services configuration provides such a solution by default or not!
Thank you in advance! :)
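Not an answer to the "before the threshold" part, but for reference, a sketch of the knobs a stack file does provide: a hard memory limit plus a restart policy, so that when the limit is hit and the task is OOM-killed, the scheduler replaces it automatically. The image tag and reservation value are assumptions; the service name follows the question.

services:
  live_ofbiz:
    image: ofbiz:1.0            # assumed tag
    deploy:
      resources:
        limits:
          memory: 6G            # task is killed once it exceeds this
        reservations:
          memory: 2G            # assumed scheduling reservation
      restart_policy:
        condition: on-failure   # the killed task is rescheduled automatically
        delay: 5s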