In Docker Swarm mode, is there a way to restart a Docker service just before it reaches a specific threshold of CPU or memory?
The story goes like this:

- I have used Docker Swarm mode as an orchestration tool.
- I have deployed my live_ofbiz service in Swarm mode.
- The live_ofbiz service contains bloated containers, built from an image 1.25 GB in size.
- Somewhere or other, the container is leaking memory.
- I have limited the memory usage of the service to 6 GB (verified via docker stats).
- With about 150 daily users on the application, the container is bound to die after 10-12 days, when the 6 GB memory limit is reached, leaving a downtime of about 2 minutes.
So my question is: is there any way I can set a threshold limit of 6 GB, and in the meantime have a new container launch itself and replace live_ofbiz's running container (just like docker service update --image ofbiz:$SAME_OLD_IMAGE_VERSION live_ofbiz, but in an automated fashion)?
A possible solution could be adding a cron job that identifies when the memory limit is being approached, and then triggers the update command via shell. But I would refrain from using a cron job due to some restrictions.
I'd like to know whether Docker Swarm mode / service configuration provides such a solution by default.
Thank you in advance! :)
1 answer
-
answered 2022-05-04 18:01
matic1123
As far as I know, unless you have your own API/cron/monitoring system that watches the service and then performs the needed actions, Docker Swarm does not have a way to do this. You can, however, run more than one replica of the "troublesome" service, so that whenever Swarm decides to kill the "bad" one, at least one replica is still running; that would reduce downtime to almost zero, or zero.
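For what it's worth, the cron-style watcher the question alludes to can be sketched in a few lines of shell. This is only an illustration, not a tested production setup: the service name and the 90% threshold are assumptions taken from the question, and `docker service update --force` simply forces a rolling replacement of the tasks with the same image.

```shell
#!/bin/sh
# Hypothetical watcher (run periodically): restart the service's tasks
# before the 6 GB memory limit is hit. SERVICE and THRESHOLD_PCT are
# assumptions, not values from a real deployment.
SERVICE="live_ofbiz"
THRESHOLD_PCT=90   # act at 90% of the limit, before the OOM kill

# True (exit 0) when usage percent has reached the threshold percent.
over_threshold() {
    [ "$1" -ge "$2" ]
}

# Current memory usage of a container as a whole-number percent,
# e.g. docker stats prints "93.50%", this returns "93".
current_pct() {
    docker stats --no-stream --format '{{.MemPerc}}' "$1" \
        | cut -d'%' -f1 | cut -d'.' -f1
}

# Only act when docker is available and a matching container is running,
# so the sketch stays self-contained.
if command -v docker >/dev/null 2>&1; then
    cid="$(docker ps -q -f "name=${SERVICE}" | head -n1)"
    if [ -n "$cid" ] && over_threshold "$(current_pct "$cid")" "$THRESHOLD_PCT"; then
        # Forces a rolling replacement of the tasks with the same image.
        docker service update --force "$SERVICE"
    fi
fi
```

Scheduled every few minutes, this would replace the container proactively instead of waiting for the OOM kill; combined with more than one replica, the replacement can overlap the old task, which is what keeps the downtime near zero.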
See also questions close to this topic
-
best way to add a spring boot web application with JSPs to a docker container?
So I have a Spring Boot web application that uses JSPs, and I'm supposed to put it in a container.

My question is: what is the best way? I've tried copying the whole project and running it there with the Maven wrapper, with this Dockerfile:

FROM openjdk:8-jdk-alpine
ADD . /input-webapp
WORKDIR /input-webapp
EXPOSE 8080:8080
ENTRYPOINT ./mvnw spring-boot:run

which works, but it takes a long time getting the dependencies and feels messy.

And I've tried packaging it into a jar, copying only the jar, and running that:

FROM openjdk:8-jdk-alpine
ADD target/input-webapp-0.0.1-SNAPSHOT.jar input-webapp-0.0.1-SNAPSHOT.jar
EXPOSE 8080:8080
ENTRYPOINT ["java","-jar","input-webapp-0.0.1-SNAPSHOT.jar"]

but this way it can't see the JSPs, or at least I think that is the problem, as I get a 404.

So is there a better way? Can I copy the JSPs alongside the jar to make it work? Thanks
-
build spring boot (mvnw) with docker can not use cache
Spring Boot Docker Experimental Features:

"Docker 18.06 comes with some “experimental” features, including a way to cache build dependencies. To switch them on, you need a flag in the daemon (dockerd) and an environment variable when you run the client. With the experimental features, you get different output on the console, but you can see that a Maven build now only takes a few seconds instead of minutes, provided the cache is warm."
My Dockerfile cannot use the cache.

Dockerfile:

# syntax=docker/dockerfile:experimental
FROM openjdk:8-jdk-alpine as build
WORKDIR /workspace/app
COPY mvnw .
COPY .mvn .mvn
COPY pom.xml .
COPY src src
RUN --mount=type=cache,target=/root/.m2 ./mvnw install -DskipTests -s .mvn/wrapper/settings.xml
RUN mkdir -p target/extracted && java -Djarmode=layertools -jar target/*.jar extract --destination target/extracted

FROM openjdk:8-jre-alpine
ENV TZ Asia/Shanghai
RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone
RUN addgroup -S spring && adduser -S spring -G spring
USER spring:spring
ARG EXTRACTED=/workspace/app/target/extracted
ARG JAVA_OPTS="-Xmx100m -Xms100m"
COPY --from=build ${EXTRACTED}/dependencies/ ./
COPY --from=build ${EXTRACTED}/spring-boot-loader/ ./
COPY --from=build ${EXTRACTED}/snapshot-dependencies/ ./
COPY --from=build ${EXTRACTED}/application/ ./
ENTRYPOINT ["sh", "-c","java ${JAVA_OPTS} org.springframework.boot.loader.JarLauncher"]

Shell command to run the build:

DOCKER_BUILDKIT=1 docker build -t org/spring-boot .

Every build takes many minutes.
-
PyQT5 doesn't work on docker ImportError: libsmime3.so: cannot open shared object file: No such file or directory
I have a Dockerfile with PyQT installed like below
FROM ubuntu:20.04
ENV DEBIAN_FRONTEND=noninteractive
RUN adduser --quiet --disabled-password qtuser && usermod -a -G audio qtuser
RUN apt-get update -y \
 && apt-get install alsa -y \
 && apt-get install -y python3-pyqt5 \
 && apt-get install python3-pip -y && \
    pip3 install pyqtwebengine
WORKDIR /htmltopdf
I built my image like this
docker build -t html-to-pdf .
Then I ran my image like this
docker run --rm -v "$(pwd)":/htmltopdf -u qtuser -it html-to-pdf python3 htmlToPdfnew.py --url https://www.w3schools.com/howto/howto_css_register_form.asp
But I'm getting the error below:
Traceback (most recent call last):
  File "htmlToPdfnew.py", line 2, in <module>
    from PyQt5 import QtWidgets, QtWebEngineWidgets
ImportError: libsmime3.so: cannot open shared object file: No such file or directory
I do NOT get that error on my PC.

Below is my Python code:
import sys
from PyQt5 import QtWidgets, QtWebEngineWidgets
from PyQt5.QtCore import QUrl, QTimer
from PyQt5.QtGui import QPageLayout, QPageSize
from PyQt5.QtWidgets import QApplication
import argparse

def main():
    url = ''
    parser = argparse.ArgumentParser(description="Just an example",
                                     formatter_class=argparse.ArgumentDefaultsHelpFormatter)
    parser.add_argument("--url", help="Type url")
    args = parser.parse_args()
    config = vars(args)
    url = config['url']
    app = QtWidgets.QApplication(sys.argv)
    loader = QtWebEngineWidgets.QWebEngineView()
    loader.setZoomFactor(1)
    layout = QPageLayout()
    layout.setPageSize(QPageSize(QPageSize.A4Extra))
    layout.setOrientation(QPageLayout.Portrait)
    loader.load(QUrl(url))
    loader.page().pdfPrintingFinished.connect(lambda *args: QApplication.exit())

    def emit_pdf(finished):
        QTimer.singleShot(2000, lambda: loader.page().printToPdf("test.pdf", pageLayout=layout))

    loader.loadFinished.connect(emit_pdf)
    sys.exit(app.exec_())

if __name__ == '__main__':
    main()
So how do I resolve this error?
-
shell script : container vs host
A for loop starting from the 2nd argument behaves differently in a container and on the host.

#!/bin/bash
for i in "${@:2}"
do
    echo $i
done

Call:

script.sh 129 5 6 7

Output in the container (Alpine:latest), skipping the first 2 characters:

9 5 6 7

Output on the host (Debian GNU/Linux), skipping the 1st argument completely:

5 6 7
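For comparison, here is a portable sketch of the same loop. "${@:2}" is a bash-ism; a non-bash /bin/sh (such as BusyBox ash on Alpine) appears to apply the ":2" as a substring expansion instead, which would explain the first two characters being dropped above. Using shift is POSIX and behaves the same in both shells.

```shell
#!/bin/sh
# Portable version: print all arguments starting from the 2nd one.
print_from_second() {
    shift                 # drop the first positional parameter
    for i in "$@"; do
        echo "$i"
    done
}

print_from_second 129 5 6 7   # prints 5, 6, 7, one per line
```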
-
React error: Container "tripMap" not found. Suddenly not working despite no changes being made
I'm working on a React project with the Trimble Maps API. I've run into an issue with the map container not being found. It was working previously, but when I restarted the local host the error occurred. I'm not sure exactly what's causing the problem.

My first guess would be that it's trying to render before the map container is created, causing the error, but I'm not sure where the issue is located.

Below is my code. Thanks for any help you can provide.
StyleMenu.jsx (this is the file the map is created in, plus a menu that lets you change the style on click; this was working previously but now it's broken without any changes):

import React, { Component, useState } from "react"
import TrimbleMaps, { MapMouseEvent } from "@trimblemaps/trimblemaps-js";
import { MenuIcon, PlusIcon, MinusIcon } from "./IconSVG";

// creation of the map
export const tripMap = new TrimbleMaps.Map({
    container: "tripMap", // container that can't be found, despite it being found previously
    center: new TrimbleMaps.LngLat(-74.87136635072328, 39.64996988267103),
    zoom: 7,
    style: 'transportation' // value I need to update with the onChange function below
})

class StyleMenu extends Component {
    constructor(props) {
        super(props);
        this.state = { value: 'transportation', showing: false }
    }

    // what should set the style of the map; was working previously,
    // but the container cannot be found now
    onChange = e => {
        this.setState({ value: e.target.value });
        tripMap.setStyle(e.target.value);
    }

    render() {
        const { value } = this.state;
        const { showing } = this.state;
        console.log(this.state.value);
        return (
            <div>
                <button className="menu-btn" onClick={() => this.setState({ showing: !showing })}><MenuIcon /></button>
                {showing ?
                    <div className="menu-container">
                        <div className="menu-header">
                            <p>MAP STYLES</p>
                        </div>
                        <div className="menu-list">
                            <form>
                                <label className="btn-transportation">
                                    <span className="icon-transportation"></span>
                                    <input type="radio" value="transportation" checked={value === "transportation"} onChange={this.onChange} />
                                    <div className="transportation-text"> Day </div>
                                </label>
                                <br/>
                                <label className="btn-transportation-dark">
                                    <span className="icon-transportation-dark"></span>
                                    <input type="radio" value="transportation_dark" checked={value === "transportation_dark"} onChange={this.onChange} />
                                    <div className="transportation-dark-text"> Night </div>
                                </label>
                                <br/>
                                <label className="btn-satellite">
                                    <span className="icon-satellite"></span>
                                    <input type="radio" value="satellite" checked={value === "satellite"} onChange={this.onChange} />
                                    <div className="satellite-text"> Satellite </div>
                                </label>
                                <br/>
                                <label className="btn-terrain">
                                    <span className="icon-terrain"></span>
                                    <input type="radio" value="terrain" checked={value === "terrain"} onChange={this.onChange} />
                                    <div className="terrain-text"> Terrain </div>
                                </label>
                                <br/>
                                <label className="btn-basic">
                                    <span className="icon-basic"></span>
                                    <input type="radio" value="basic" checked={value === "basic"} onChange={this.onChange} />
                                    <div className="basic-text"> Basic </div>
                                </label>
                                <br/>
                            </form>
                        </div>
                    </div>
                    : null}
                <button className="menu-btn-2"><PlusIcon /></button>
                <button className="menu-btn-3"><MinusIcon /></button>
            </div>
        )
    }
}
export default StyleMenu;
Map.jsx:

import React, { useEffect } from "react"
import TrimbleMaps from "@trimblemaps/trimblemaps-js";
import { Box } from "@mui/material";
import { TripManagementClient, ShipmentDataPayload } from "../services/TripManagementClient";
import StyleMenu, { tripMap } from "./StyleMenu";
import { valueToPercent } from "@mui/base";

export function Map({ height, coords }) {
    useEffect(() => {
        setupMap()
    }, []);

    /**
     * Creates an array of TrimbleMaps.LngLat objects from the `coords` collections prop
     *
     * @returns Array<TimbleMaps.LngLat>
     */
    const parseStops = () => {
        const trimbleStops = []
        coords.forEach(coord => {
            trimbleStops.push(new TrimbleMaps.LngLat(coord.long, coord.lat));
        });
        return trimbleStops;
    }

    const setupMap = () => {
        TrimbleMaps.APIKey = ""; // hidden for security
        <tripMap/> // reference the map created in the StyleMenu.jsx file
        const driverIcon = new TrimbleMaps.Marker().setLngLat([-74.415108, 39.362129]).addTo(tripMap);
        const endIcon = new TrimbleMaps.Marker().setLngLat([-75.165222, 39.952583]).addTo(tripMap);
        // All map customization and updates
        tripMap.on('load', function() {
            // updates the map after it's initialized and loaded in the application;
            // most customizations done post-initialization are placed here
            console.log("map loaded") // prints "map loaded" to the console
        });
        return tripMap;
    }

    return (
        <><StyleMenu /><Box id="tripMap" sx={{ height: height }}></Box></>
    )
}
App.js (the map and style menu are joined and rendered here; below is the code relevant to the map):

return (
    <div>
        <ThemeProvider theme={theme}>
            {payload &&
                <div>
                    <Header customer={payload.orderNumber} />
                    <Map height={300} coords={ getStops(payload) } /> {/* map rendering */}
                    <TripDetails {...payload} />
                </div>
            }
        </ThemeProvider>
    </div>
);
}
export default App;
-
Quarkus docker container failed to run / connect to the DB
In my project, using Quarkus, Angular and a PostgreSQL DB, when I run the backend and the frontend in dev mode, I can connect to the DB (a PostgreSQL image running in a Docker container), create new rows in the tables, and everything works fine. Of course, the Quarkus Dockerfile is auto-generated. Here is the application.properties file I typed (inside the Quarkus project):

quarkus.datasource.db-kind=postgresql
quarkus.datasource.username=username
quarkus.datasource.password=pwd
quarkus.datasource.jdbc.url=jdbc:postgresql://localhost:5432/db-mcs-thirdparty
quarkus.flyway.migrate-at-start=true
quarkus.flyway.baseline-on-migrate=true
quarkus.flyway.out-of-order=false
quarkus.flyway.baseline-version=1

And this is the docker-compose.yml file I placed inside the backend (Quarkus) folder:

version: '3.8'
services:
  db:
    container_name: pg_container
    image: postgres:latest
    restart: always
    environment:
      POSTGRES_USER: username
      POSTGRES_PASSWORD: pwd
      POSTGRES_DB: db-mcs-thirdparty
    ports:
      - "5432:5432"
  pgadmin:
    container_name: pgadmin4_container
    image: dpage/pgadmin4
    restart: always
    environment:
      PGADMIN_DEFAULT_EMAIL: usernamepgadmin
      PGADMIN_DEFAULT_PASSWORD: pwdpgadmin
    ports:
      - "5050:80"

But when I build a Quarkus Docker image and try to run it in a container, it fails, even though the Angular container runs well, as does the DB. Here are the error logs I get after running the container:
Starting the Java application using /opt/jboss/container/java/run/run-java.sh ...
__  ____  __  _____   ___  __ ____  ______
 --/ __ \/ / / / _ | / _ \/ //_/ / / / __/
 -/ /_/ / /_/ / __ |/ , _/ ,< / /_/ /\ \
--\___\_\____/_/ |_/_/|_/_/|_|\____/___/

2022-05-06 12:58:31,967 WARN  [io.agr.pool] (agroal-11) Datasource '<default>': The connection attempt failed.
2022-05-06 12:58:32,015 ERROR [io.qua.run.Application] (main) Failed to start application (with profile prod): java.net.UnknownHostException: db
    at java.base/java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:229)
    at java.base/java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
    at java.base/java.net.Socket.connect(Socket.java:609)
    at org.postgresql.core.PGStream.createSocket(PGStream.java:241)
    at org.postgresql.core.PGStream.<init>(PGStream.java:98)
    at org.postgresql.core.v3.ConnectionFactoryImpl.tryConnect(ConnectionFactoryImpl.java:109)
    at org.postgresql.core.v3.ConnectionFactoryImpl.openConnectionImpl(ConnectionFactoryImpl.java:235)
    at org.postgresql.core.ConnectionFactory.openConnection(ConnectionFactory.java:49)
    at org.postgresql.jdbc.PgConnection.<init>(PgConnection.java:223)
    at org.postgresql.Driver.makeConnection(Driver.java:400)
    at org.postgresql.Driver.connect(Driver.java:259)
    at io.agroal.pool.ConnectionFactory.createConnection(ConnectionFactory.java:210)
    at io.agroal.pool.ConnectionPool$CreateConnectionTask.call(ConnectionPool.java:513)
    at io.agroal.pool.ConnectionPool$CreateConnectionTask.call(ConnectionPool.java:494)
    at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
    at io.agroal.pool.util.PriorityScheduledExecutor.beforeExecute(PriorityScheduledExecutor.java:75)
    at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1126)
    at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
    at java.base/java.lang.Thread.run(Thread.java:829)
So I replaced "localhost" in

quarkus.datasource.jdbc.url=jdbc:postgresql://localhost:5432/db-mcs-thirdparty

with the IP address, then with the DB's name; I even tried to put the username and the password in that same line, etc., but it didn't work.

I even stopped all the running containers (DB, frontend) and tried to run only the Quarkus container; the same thing happens. For the ports that I used, you can check the attached image: the used ports.

How should I resolve this issue? Thank you in advance.
-
(Terraform) Error 400: Invalid request: instance name (pg_instance)., invalid
On GCP, I'm trying to create a Cloud SQL instance with this Terraform code below:
resource "google_sql_database_instance" "postgres" {
  name                = "pg_instance"
  database_version    = "POSTGRES_13"
  region              = "asia-northeast1"
  deletion_protection = false

  settings {
    tier      = "db-f1-micro"
    disk_size = 10
  }
}

resource "google_sql_user" "users" {
  name     = "postgres"
  instance = google_sql_database_instance.postgres.name
  password = "admin"
}
But I got this error:
Error: Error, failed to create instance pg_instance: googleapi: Error 400: Invalid request: instance name (pg_instance)., invalid
Are there any mistakes in my Terraform code?
-
How do Kubernetes and Terraform work seamlessly together, and what role do they each undertake?
I am a bit confused about the individual roles of Kubernetes and Terraform when using them both on a project.
Until very recently, I had a very clear understanding of both their purposes and everything made sense to me. But, then I heard in one of Nana's videos on Terraform, that Terraform was also very advanced in orchestration and I got confused.
Here's my current understanding of both these tools:
Kubernetes: Orchestration software that controls many docker containers working together seamlessly. Kubernetes makes sure that new containers are deployed based on the desired infrastructure defined in configuration files (written with the help of a tool like Terraform, as IaC).
Terraform: Tool for provisioning, configuring, and managing infrastructure as IaC.
So, when we say that Terraform is a good tool for orchestration, do we mean that it's a good tool for orchestrating infrastructure states, or for orchestrating Docker containers as well?
I hope someone can clear that out for me!
-
Automate Azure Devops (FTP Upload) and Git to upload on Remote Server
The current setup is as below:

- Version control: Git
- Repos and branches hosted on: Azure DevOps
- Codebase: external server

The dev team clones the Azure repo into a local Git project, and any staged changes are committed via Git and pushed to a specific branch of Azure DevOps. In this setup, we want to upload the changes to external FTP servers and avoid manual uploads. I am currently trying to use the Azure DevOps FTP Upload task (https://docs.microsoft.com/en-us/azure/devops/pipelines/tasks/utility/ftp-upload?view=azure-devops), but am facing issues; the YAML script is below:
trigger:
- main

pool:
  vmImage: 'ubuntu-latest'

variables:
  phpVersion: 7.4
  webAppName: 'Test Project'
  buildConfiguration: 'Release'
  vmImageName: 'ubuntu-latest'

steps:
- publish: $(System.DefaultWorkingDirectory)/AzureRepoName
  artifact: Test Project Deploy
- task: FtpUpload@2
  displayName: 'FTP Upload'
  inputs:
    credentialsOption: inputs
    serverUrl: 'ftps://00.00.00.00:22'
    username: ftp-username
    password: ftp-password
    rootDirectory: '$(System.DefaultWorkingDirectory)/AzureRepoName'
    remoteDirectory: '/home/public_html'
    clean: false
    cleanContents: false
    preservePaths: true
    trustSSL: true
PROBLEM
The following error occurs when I commit something (for test purposes).
Starting: PublishPipelineArtifact
==============================================================================
Task         : Publish Pipeline Artifacts
Description  : Publish (upload) a file or directory as a named artifact for the current run
Version      : 1.199.0
Author       : Microsoft Corporation
Help         : https://docs.microsoft.com/azure/devops/pipelines/tasks/utility/publish-pipeline-artifact
==============================================================================
Artifact name input: Test Project Deploy
##[error]Path does not exist: /home/vsts/work/1/s/AzureRepoName
Finishing: PublishPipelineArtifact
I want any staged change that is committed to the main branch on Azure DevOps to be automatically deployed to the remote FTP server.
Thanks
-
invalid mount config for type "bind": bind source path does not exist: /container/tdarr-server/server
I am trying to use an NFS share to use Docker Swarm with a single endpoint server for a drive. The NFS share itself does work, as I can create files on it; however, when trying to use it in a stack I get a bind source path error. My NFS share is mounted at /container on both machines, so each machine can find it at the same location. Here is what I have as volumes in my docker-compose file:

volumes:
  - /container/tdarr-server/server:/app/server
  - /container/tdarr-server/configs:/app/configs
  - /container/tdarr-server/logs:/app/logs
  - /container/plex/media:/media
  - /container/tdarr-server/transcode:/transcode
-
add nodes to a Docker swarm from different servers

I am new to Docker Swarm. I have read the documentation and googled the topic, but the results were vague. Is it possible to add a worker or manager node from distinct and separate virtual private servers?
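Yes: Swarm nodes only need network reachability to each other, not shared hardware, so separate VPSs work as long as the swarm ports are open between them (2377/tcp for cluster management, 7946/tcp+udp for node communication, 4789/udp for overlay traffic). A sketch of the flow; the token and IP below are placeholders, not real values:

```shell
#!/bin/sh
# Sketch only: token and address are placeholders.
# On the manager VPS:
#   docker swarm init --advertise-addr <manager-public-ip>
#   docker swarm join-token worker     # prints the join command for workers
# On each worker VPS, run the printed command, which has this shape:

join_cmd() {
    # Build the worker join command from a token and the manager's address.
    printf 'docker swarm join --token %s %s:2377\n' "$1" "$2"
}

join_cmd "SWMTKN-1-placeholder-token" "203.0.113.10"
```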