Update node condition type in status
I am trying to patch (clear) Node conditions on a worker node in an OpenShift and/or Kubernetes cluster. The patch isn't working, and I am even considering workarounds such as updating the key directly in etcd.
The main problem is that I created new node conditions and later removed their source, but they still appear in the node's condition list even though they no longer exist and are no longer being updated by any controller.
$ oc describe node node1.example.com
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
ExampleToRemove False Wed, 13 Feb 2019 15:09:42 -0500 Wed, 13 Feb 2019 11:05:57 -0500 KubeletHasSufficientDisk kubelet has sufficient disk space available
MemoryPressure False Wed, 13 Feb 2019 15:09:42 -0500 Wed, 13 Feb 2019 11:05:57 -0500 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Wed, 13 Feb 2019 15:09:42 -0500 Wed, 13 Feb 2019 11:05:57 -0500 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Wed, 13 Feb 2019 15:09:42 -0500 Wed, 13 Feb 2019 11:05:57 -0500 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Wed, 13 Feb 2019 15:09:42 -0500
Does anyone know how to remove these stale conditions?
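For reference, a minimal sketch of how stale conditions can be removed by patching the node's status subresource directly. This assumes kubectl 1.24+ (which added the `--subresource` flag) and sufficient RBAC; the node name and the condition index are placeholders that must be verified first:

```shell
# Hypothetical node name and condition index -- verify both before patching.
NODE=node1.example.com
# JSON-patch that removes the condition at index 0 of .status.conditions:
PATCH='[{"op":"remove","path":"/status/conditions/0"}]'
if command -v kubectl >/dev/null 2>&1; then
  # List the condition types so you can find the index of the stale one:
  kubectl get node "$NODE" \
    -o jsonpath='{range .status.conditions[*]}{.type}{"\n"}{end}' || true
  # Patch the status subresource (requires kubectl >= 1.24):
  kubectl patch node "$NODE" --subresource=status --type=json -p "$PATCH" || true
fi
```

On older clusters without `--subresource` support, the same JSON-patch body can be sent with curl as a PATCH request to `/api/v1/nodes/<node>/status` with `Content-Type: application/json-patch+json`.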
See also questions close to this topic
-
Java Spring Webflux on Kubernetes: always [or-http-epoll-1], [or-http-epoll-2], [or-http-epoll-3], [or-http-epoll-4] despite configured resources
A small question regarding a Java 11 Spring Webflux 2.6.6+ web app, containerized and deployed on Kubernetes.
From the web app application logs, I am seeing things such as:
INFO [service,1bcce5941c742568,22c0ab2133c63a77] 11 --- [or-http-epoll-2] a.b.c.SomeClass : Some message from the reactive pipeline.
INFO [service,67cb40974712b3f4,15285d01bce9dfd5] 11 --- [or-http-epoll-4] a.b.c.SomeClass : Some message from the reactive pipeline.
INFO [service,5011dc5e09de30b7,f58687695bda20f2] 11 --- [or-http-epoll-3] a.b.c.SomeClass : Some message from the reactive pipeline.
INFO [service,8046bdde07b13261,5c30a56a4a603f4d] 11 --- [or-http-epoll-1] a.b.c.SomeClass : Some message from the reactive pipeline.
And always, I can only see
[or-http-epoll-1] [or-http-epoll-2] [or-http-epoll-3] [or-http-epoll-4]
which I think stands for [reactor-http-epoll-N].
The problem is, no matter how much CPU I allocate from Kubernetes, it is always those 4, no less, no more.
I tried:
resources:
  requests:
    cpu: 1
    memory: 1G
  limits:
    cpu: 2
    memory: 2G

resources:
  requests:
    cpu: 4
    memory: 4G
  limits:
    cpu: 6
    memory: 6G

resources:
  requests:
    cpu: 10
    memory: 10G
  limits:
    cpu: 10
    memory: 10G
But again, always only those 4.
I am having a hard time understanding what the problem is here, and why I am stuck with only and always 4 "or-http-epoll-" threads.
Thank you
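A possible explanation, stated as an assumption worth verifying against your reactor-netty version: Reactor Netty sizes its event loop to max(availableProcessors, 4) workers unless the `reactor.netty.ioWorkerCount` system property is set, and the number of processors the JVM sees inside a container depends on the cgroup CPU settings. A quick sketch to check what the default would be and how to override it:

```shell
# Compute what reactor-netty would pick by default from the visible CPUs
# (assumption: the default is max(availableProcessors, 4)):
CPUS=$(nproc)
WORKERS=$(( CPUS > 4 ? CPUS : 4 ))
echo "expected epoll workers: $WORKERS"
# To force a specific count, set the property at JVM startup, e.g.:
#   java -Dreactor.netty.ioWorkerCount=16 -jar app.jar
```

If `nproc` inside the pod reports 4 or fewer despite a higher resource limit, the JVM's container-CPU detection is the thing to investigate.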
-
How does Kubernetes and Terraform work seamlessly together and what role do they each undertake?
I am a bit confused about the individual roles of Kubernetes and Terraform when using them both on a project.
Until very recently, I had a very clear understanding of both their purposes and everything made sense to me. But then I heard in one of Nana's videos on Terraform that Terraform is also very capable at orchestration, and I got confused.
Here's my current understanding of both these tools:
Kubernetes: Orchestration software that controls many docker containers working together seamlessly. Kubernetes makes sure that new containers are deployed based on the desired infrastructure defined in configuration files (written with the help of a tool like Terraform, as IaC).
Terraform: Tool for provisioning, configuring, and managing infrastructure as IaC.
So, when we say that Terraform is a good tool for orchestration, do we mean that it's a good tool for orchestrating infrastructure states or docker containers as well?
I hope someone can clear that out for me!
- Disable Redhat Openshift Service on AWS
-
Readiness probe fails because mongosh --eval freezes
I installed the latest bitnami/mongodb chart for a standalone architecture. The Readiness and Liveness probes are failing because the statement mongosh --eval "db.adminCommand('ping')" does not terminate and freezes the shell. Full output is this:
1002180000@mongodb-6fb5b57d86-c9rh9:/$ mongosh --eval "db.adminCommand('ping')"
Current Mongosh Log ID: 6274eabd30405cdc76830f1a
Connecting to: mongodb://127.0.0.1:27017/?directConnection=true&serverSelectionTimeoutMS=2000&appName=mongosh+1.3.1
Using MongoDB: 5.0.8
Using Mongosh: 1.3.1
For mongosh info see: https://docs.mongodb.com/mongodb-shell/
------
The server generated these startup warnings when booting:
2022-05-06T09:29:15.814+00:00: You are running on a NUMA machine. We suggest launching mongod like this to avoid performance problems: numactl --interleave=all mongod [other options]
2022-05-06T09:29:15.814+00:00: /sys/kernel/mm/transparent_hugepage/enabled is 'always'. We suggest setting it to 'never'
------
{ ok: 1 }
If I use a local mongosh 1.3.1 with port-forwarding to my k8s cluster, everything works fine and the shell returns to the command prompt. However, mongosh also returns to the prompt if I append an exit() call:
mongosh --eval "db.adminCommand('ping'); exit();"
But for the probes I'd like to have the result of the ping command as the return code.
-
Kubernetes pod went down
I am pretty new to Kubernetes, so I don't have much idea. The other day a pod went down, and I was wondering whether I would be able to recover its tmp folder.
So basically I want to know: when a pod in Kubernetes goes down, does it lose access to the "/tmp" folder?
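For context: a container's writable filesystem, including /tmp, is ephemeral and is lost once the pod is deleted and rescheduled. If the data must survive pod restarts, it has to live on a persistent volume. A hedged sketch, where the pod, image, and claim names are all placeholders:

```yaml
# Sketch: mount a PersistentVolumeClaim at /tmp so its contents survive pod
# deletion. "tmp-pvc" is a placeholder claim you would create separately.
apiVersion: v1
kind: Pod
metadata:
  name: example
spec:
  containers:
    - name: app
      image: example/app:v1   # placeholder image
      volumeMounts:
        - name: tmp-data
          mountPath: /tmp
  volumes:
    - name: tmp-data
      persistentVolumeClaim:
        claimName: tmp-pvc
```

An emptyDir volume, by contrast, only survives container restarts within the same pod, not pod deletion.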
-
GCP GKE Google Kubernetes Engine The connection to the server localhost:8080 was refused
I am trying out GCP and GKE (Google Kubernetes Engine). 1) I created a cluster. 2) I opened Cloud Shell and ran the command "kubectl get nodes".
I get this error: "The connection to the server localhost:8080 was refused - did you specify the right host or port?"
How can I solve this? Thanks.
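The usual cause (an assumption, since the error is generic): kubectl has no kubeconfig entry yet, so it falls back to localhost:8080. Fetching the cluster credentials populates the kubeconfig; the cluster name, zone, and project below are placeholders:

```shell
# Placeholders: replace with your cluster name, zone (or --region), and project ID.
CLUSTER=my-cluster
ZONE=us-central1-a
PROJECT=my-project
if command -v gcloud >/dev/null 2>&1; then
  # Writes credentials for the cluster into ~/.kube/config:
  gcloud container clusters get-credentials "$CLUSTER" --zone "$ZONE" --project "$PROJECT" || true
  kubectl get nodes || true
fi
```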
-
OKD Single Node not reachable
I have a single-node OKD deployed on AWS. It had been working for months; now, all of a sudden, it stopped being reachable. The node is up and running but cannot be reached. It is almost as if the IP address of the box changed and the URL is pointing to the wrong IP.
I tried accessing the machine with kubectl but I get:
The connection to the server api.atlanta.maleable.us:6443 was refused - did you specify the right host or port?
I am at a loss. The domain is completely managed by Route 53. When I look at the DNS entries, I don't understand what the A records are pointing to. I see stuff like this:
atlanta-xs2h2-int-e371d06bfc4a500d.elb.ca-central-1.amazonaws.com.
However I don't know how to troubleshoot this. Any suggestion?
-
Does OpenShift Origin source code include Kubernetes source code?
I'm trying to read the OpenShift Origin source code. I know Origin includes more features than Kubernetes, and someone said Origin has built-in Kubernetes; is that true? I'm a beginner, and I did not find any Kubernetes code in the Origin project. If Origin and Kubernetes are in different projects, does it mean I need to deploy both of them on a server? So confused, can you help me? Thanks.
-
How to deal with dynamic libraries in openshift init containers
Right now I have a custom image for ffmpeg, which means I carry the burden of maintaining that image. I was thinking of a way to do this more efficiently and came across init containers. I think init containers are a good solution: I don't have to maintain a custom image, and the approach respects the separation of concerns principle.
In my deployment config, I made an init container where I retrieve the ffmpeg image from docker and copy it.
kind: DeploymentConfig
spec:
  initContainers:
    - name: ffmpeg
      image: 'docker.io/jrottenberg/ffmpeg:4.4-centos8'
      command:
        - cp
        - /usr/local/bin/ffmpeg
        - /opt/ffmpeg/
      resources: {}
      volumeMounts:
        - name: opt-ffmpeg
          mountPath: /opt/ffmpeg
      terminationMessagePath: /dev/termination-log
      terminationMessagePolicy: File
      imagePullPolicy: IfNotPresent
  containers:
    - name: example
      image: example/example:v1
      volumeMounts:
        - name: ffmpeg
          mountPath: /opt/ffmpeg
  volumes:
    - name: ffmpeg
      emptyDir: {}
The problem is that ffmpeg is missing a few libraries. Ffmpeg is looking for shared libraries and is not able to find them. I know that static libraries provide everything, but I haven't found a good static library to use yet.
sh-4.4$ ldd ffmpeg
    linux-vdso.so.1 (0x00007ffe5876a000)
    libnss_wrapper.so => /lib64/libnss_wrapper.so (0x00007f1f19b37000)
    libavdevice.so.58 => not found
    libavfilter.so.7 => not found
    libavformat.so.58 => not found
    libavcodec.so.58 => not found
    libavresample.so.4 => not found
    libpostproc.so.55 => not found
    libswresample.so.3 => not found
    libswscale.so.5 => not found
    libavutil.so.56 => not found
    libm.so.6 => /lib64/libm.so.6 (0x00007f1f197b5000)
    libpthread.so.0 => /lib64/libpthread.so.0 (0x00007f1f19595000)
    libc.so.6 => /lib64/libc.so.6 (0x00007f1f191d0000)
    libdl.so.2 => /lib64/libdl.so.2 (0x00007f1f18fcc000)
    /lib64/ld-linux-x86-64.so.2 (0x00007f1f19d46000)
Does anyone know a way to link dynamic libraries? Or does anyone have a suggestion how I can use ffmpeg without a custom image?
Thanks in advance.
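One way to keep the init-container approach is to copy the shared libraries alongside the binary and point the dynamic loader at them with LD_LIBRARY_PATH. This is a sketch, not tested against that image: /usr/local/lib is an assumption about where jrottenberg/ffmpeg keeps its shared objects, so verify with `ldd /usr/local/bin/ffmpeg` inside that image first:

```yaml
# Sketch: copy ffmpeg's shared libraries too, then let the main container's
# dynamic loader find them via LD_LIBRARY_PATH. /usr/local/lib is an assumed
# location inside the jrottenberg/ffmpeg image -- verify before relying on it.
initContainers:
  - name: ffmpeg
    image: 'docker.io/jrottenberg/ffmpeg:4.4-centos8'
    command:
      - sh
      - -c
      - cp -a /usr/local/bin/ffmpeg /usr/local/lib/. /opt/ffmpeg/
    volumeMounts:
      - name: ffmpeg
        mountPath: /opt/ffmpeg
containers:
  - name: example
    image: example/example:v1
    env:
      - name: LD_LIBRARY_PATH
        value: /opt/ffmpeg
    volumeMounts:
      - name: ffmpeg
        mountPath: /opt/ffmpeg
volumes:
  - name: ffmpeg
    emptyDir: {}
```

A statically linked ffmpeg build would avoid the library copy entirely, at the cost of finding a trustworthy static build.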
-
Route annotations from a yml file are not working in OpenShift
We are using OpenShift for the deployment, where we have 3 pods running behind the same service.
To achieve load balancing, we are trying to add annotations to the route.
Adding the annotations to the Route from the console works fine, but the same is not working when configured from the yml file.
Is anyone facing the same issue, or is there an available fix for this?
apiVersion: v1
kind: Route
metadata:
  annotations:
    haproxy.router.openshift.io/balance: roundrobin
    haproxy.router.openshift.io/disable_cookies: true
  name: frontend
spec:
  host: www.example.com
  path: "/test"
  to:
    kind: Service
    name: frontend
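One plausible cause, stated as an assumption since the thread does not confirm it: Kubernetes annotation values must be strings, so an unquoted `true` is parsed as a YAML boolean and the object can be rejected or the annotation silently dropped. Quoting the value is the usual fix:

```yaml
# Sketch: annotation values quoted so both parse as strings.
apiVersion: v1
kind: Route
metadata:
  name: frontend
  annotations:
    haproxy.router.openshift.io/balance: roundrobin
    haproxy.router.openshift.io/disable_cookies: "true"
spec:
  host: www.example.com
  path: "/test"
  to:
    kind: Service
    name: frontend
```

This would also explain why adding the annotation through the console works: the console submits the value as a string.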
-
Openshift build fails only in a specific project (with meteor error)
Using OpenShift 4.6, I'm trying to build an image with a BuildConfig, using a docker strategy and a binary source (a local directory). The BuildConfig is generated with the command:
oc new-build . name=build-name --strategy=docker --to=registry-url/image-name:tag --push-secret=secret-name --to-docker.
The build is started with the command:
oc start-build build-name --from-dir=. --wait --follow
The base image is pulled properly and the local Dockerfile starts executing. The Dockerfile involves usage of the meteor command, and that should be okay as the base image contains meteor in the /root directory and I'm running everything with --allow-superuser. When running any meteor command (even --version), the build fails with:
## There is an issue with `node-fibers` ##
`/root/.meteor/packages/meteor-tool/.2.2.0.1jauib.qcbe++os.linux.x86_64+web.browser+web.browser.legacy+web.cordova/mt-os.linux.x86_64/dev_bundle/lib/node_modules/fibers/bin/linux-x64-57-glibc/fibers.node` is missing.
Try running this to fix the issue:
/root/.meteor/packages/meteor-tool/.2.2.0.1jauib.qcbe++os.linux.x86_64+web.browser+web.browser.legacy+web.cordova/mt-os.linux.x86_64/dev_bundle/bin/node /root/.meteor/packages/meteor-tool/.2.2.0.1jauib.qcbe++os.linux.x86_64+web.browser+web.browser.legacy+web.cordova/mt-os.linux.x86_64/dev_bundle/lib/node_modules/fibers/build
SyntaxError: Invalid regular expression
Error: Missing binary. See message above.
at Object.<anonymous> (/root/.meteor/packages/meteor-tool/.2.2.0.1jauib.qcbe++os.linux.x86_64+web.browser+web.browser.legacy+web.cordova/mt-os.linux.x86_64/dev_bundle/lib/node_modules/fibers/fibers.js:20:8)
at Module._compile (module.js:635:30)
at Object.Module._extensions..js (module.js:646:10)
at Module.load (module.js:554:32)
at tryModuleLoad (module.js:497:12)
at Function.Module._load (module.js:489:3)
at Module.require (module.js:579:17)
at require (internal/module.js:11:18)
at Object.<anonymous> (/opt/meteor/dist/bundle/programs/server/boot.js:1:75)
at Module._compile (module.js:635:30)
The file fibers.node definitely exists: when I run echo $(ls -la) in the Dockerfile, it prints fibers.node. To complicate things more, this problem only occurs when building in one specific project (namespace). When running the same BuildConfig, with the same secrets, in different projects it works well. Unfortunately, I need this project.
What can be the cause of the faulty build in this specific namespace? Otherwise there are no problems with it, and it has more than sufficient resources and limits.
-
Getting logs/more information during start-build command execution
A Jenkins pipeline is building Docker images, using the OpenShift plugin(s) for this.
An example command:
openshift.selector(BUILD_CONFIG_NAME, "${appBcName}").startBuild("--from-dir=${artifactPath}", '--wait','--follow')
While this works smoothly most of the time, whenever this command fails due to some underlying platform issues, almost no information is seen in the Jenkins build job console:
[Pipeline] }
[start-build:buildconfig/amld5-car-reporting-spacetime-ubi-openshift-java-runtimejd] ............................................................
[start-build:buildconfig/amld5-car-reporting-spacetime-ubi-openshift-java-runtimejd] Uploading finished
[start-build:buildconfig/amld5-car-reporting-spacetime-ubi-openshift-java-runtimejd] Error from server (BadRequest): unable to wait for build amld5-car-reporting-spacetime-ubi-openshift-java-runtimejd-857 to run: timed out waiting for the condition
[Pipeline] }
ERROR: Error running start-build on at least one item: [buildconfig/amld5-car-reporting-spacetime-ubi-openshift-java-runtimejd];
{err=, verb=start-build, cmd=oc --server=https://api.scp-west-zone02-z01.net:6443 --certificate-authority=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt --namespace=sb-1166-amld5-car-service-se --token=XXXXX start-build buildconfig/amld5-car-reporting-spacetime-ubi-openshift-java-runtimejd --from-dir=./build/libs --wait --follow -o=name ,
out=Uploading directory "build/libs" as binary input for the build ...
............................................................
Uploading finished
Error from server (BadRequest): unable to wait for build amld5-car-reporting-spacetime-ubi-openshift-java-runtimejd-857 to run: timed out waiting for the condition
, status=1}
[Pipeline] // catchError
I need more verbosity and detailed error information. I checked the start-build command reference, and I thought --build-loglevel [0-5] might help here. When I used it, I got a warning that since I am using source type 'Binary' in the BuildConfig, logging isn't supported (seriously???):
NOTE: the selector returned when -F/--follow is supplied to startBuild() will be inoperative for the various selector operations. Consider removing those options from startBuild and using the logs() command to follow the build output.
[start-build:buildconfig/casc-docs-spacetime-ubi-openshift-java-runtimeadoptopenjdk] WARNING: Specifying --build-loglevel with binary builds is not supported.
[start-build:buildconfig/casc-docs-spacetime-ubi-openshift-java-runtimeadoptopenjdk] WARNING: Specifying environment variables with binary builds is not supported.
[start-build:buildconfig/casc-docs-spacetime-ubi-openshift-java-runtimeadoptopenjdk] Uploading directory "build/libs" as binary input for the build ...
[start-build:buildconfig/casc-docs-spacetime-ubi-openshift-java-runtimeadoptopenjdk] ..
How do I get more logs, info. while executing the start-build command?
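The NOTE in the warning output itself points at one workaround: start the build without --follow and stream the logs through a separate call instead. A sketch with a placeholder BuildConfig name:

```shell
# Placeholder buildconfig name -- substitute your own.
BC=amld5-car-reporting-spacetime-ubi-openshift-java-runtimejd
if command -v oc >/dev/null 2>&1; then
  # Start the build detached and capture its name (e.g. build/<bc>-858):
  BUILD=$(oc start-build "$BC" --from-dir=./build/libs -o name) || true
  # Follow the build's logs in a separate call:
  oc logs -f "$BUILD" || true
fi
```

Raising client-side verbosity with oc's global --loglevel flag (e.g. `oc --loglevel=6 start-build ...`) can also surface the underlying REST traffic and errors that the Jenkins plugin swallows.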