Helm chart available for highly available PostgreSQL on OpenShift?
I was looking for options to run Postgres HA in production on OpenShift.
I tried the incubator/patroni chart (after adding the ServiceAccount and SCC), but sometimes it runs properly and sometimes the leader lock is acquired by neither the master nor the replica instance of Postgres. There is also no way to create the schema automatically; it has to be created manually by exec'ing into the pod.
I also tried stable/postgresql, but there are still issues with that Helm chart when running it on OpenShift.
I saw some production-grade setups such as the Zalando Postgres Operator (with Patroni) and the Crunchy Postgres Operator, but I am not able to run a full highly available PostgreSQL setup through a single Helm chart. There are manual steps involved, like installing the pgo client and connecting it to psql.
So, is there any highly available PostgreSQL Helm chart that can be run in production on OpenShift with one or two commands, just by changing the values.yaml file?
1 answer
-
answered 2020-10-16 21:11
titou10
Bitnami has many interesting Helm charts, including one for PostgreSQL:
https://github.com/bitnami/charts/tree/master/bitnami/postgresql
For HA: https://github.com/bitnami/charts/tree/master/bitnami/postgresql-ha
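As a rough illustration of the two-command flow (a sketch, not an official recipe: the release name, namespace, and password are placeholders, and on OpenShift you may still need to adapt the chart's security context values to your SCCs):

helm repo add bitnami https://charts.bitnami.com/bitnami
helm install my-postgres bitnami/postgresql-ha \
  --namespace database --create-namespace \
  --set postgresql.password=changeme

Everything else (replica counts, Pgpool settings, persistence) can then be tuned through the chart's values.yaml.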
See also questions close to this topic
-
"error: operator does not exist: uuid - character varying" In node-postgres
I'm getting this error in node-Postgres
error: operator does not exist: uuid - character varying
    at Parser.parseErrorMessage (C:\Users\smith\Desktop\WaveFrame\node_modules\pg-protocol\dist\parser.js:278:15)
    at Parser.handlePacket (C:\Users\smith\Desktop\WaveFrame\node_modules\pg-protocol\dist\parser.js:126:29)
    at Parser.parse (C:\Users\smith\Desktop\WaveFrame\node_modules\pg-protocol\dist\parser.js:39:38)
    at Socket.<anonymous> (C:\Users\smith\Desktop\WaveFrame\node_modules\pg-protocol\dist\index.js:10:42)
    at Socket.emit (events.js:315:20)
    at addChunk (_stream_readable.js:309:12)
    at readableAddChunk (_stream_readable.js:284:9)
    at Socket.Readable.push (_stream_readable.js:223:10)
    at TCP.onStreamRead (internal/stream_base_commons.js:188:23) {
  length: 272,
  severity: 'ERROR',
  code: '42883',
  detail: undefined,
  hint: 'No operator matches the given name and argument types. You might need to add explicit type casts.',
  position: '47',
  internalPosition: undefined,
  internalQuery: undefined,
  where: undefined,
  schema: undefined,
  table: undefined,
  column: undefined,
  dataType: undefined,
  constraint: undefined,
  file: 'd:\pginstaller_13.auto\postgres.windows-x64\src\backend\parser\parse_oper.c',
  line: '731',
  routine: 'op_error'
}
Here is my code
const dayjs = require('dayjs');
const { v4: uuidv4 } = require('uuid');
const utc = require('dayjs/plugin/utc');
const timezone = require('dayjs/plugin/timezone');

dayjs.extend(utc);
dayjs.extend(timezone);
dayjs.tz.setDefault('America/New_York')

exports.actLog = async (author, action, target, reason, count) => {
    const unId = uuidv4();
    const oldDate = dayjs();
    const dateTime = dayjs(oldDate).add(30 - (oldDate.minute() % 30), "minutes").format('YYYY-MM-DD HH:mm:ss');
    pool.query(`INSERT INTO logging.moderation (author, id-author, action, target, id, time, reason, count)
        VALUES ('${author.username}#${author.discriminator}', '${author.id}', '${action}', '${target}', '${unId}', '${dateTime}', '${reason}')`,
        function (err, result) {
            if (err) { console.error(err) }
        })
}

exports.checkWarn = async (user, author, reason, callback) => {
    pool.query(`SELECT count FROM moderation.warnings WHERE id-target = ${user.id}`, async (err, result) => {
        if (err) {
            console.error(err)
            return err;
        } else {
            if (result.rows[0].count == 2 || 3) {
                callback(2)
                //this.actLog(author, 'warn', user, reason, result.rows[0].count)
            } else if (result.rows[0].count == 1) {
                callback(1)
                //this.actLog(author, 'warn', user, reason, result.rows[0].count)
            }
        }
    })
}

exports.addWarn = async (author, user) => {
    const unId = uuidv4();
    const oldDate = dayjs();
    const dateTime = dayjs(oldDate).add(30 - (oldDate.minute() % 30), "minutes").format('YYYY-MM-DD HH:mm:ss');
    pool.query(`SELECT count FROM moderation.warnings WHERE id-target = ${user.id}`, async (err, result) => {
        pool.query(`INSERT INTO moderation.warnings (author, id-author, target, id-target, id, time, count)
            VALUES ('${author.username}#${author.discriminator}', '${author.id}', '${user.username}#${user.discriminator}', '${user.id}', '${unId}', '${dateTime}')`,
            function (err, result) {
                if (err) { console.error(err) }
            })
    })
}
I'm fairly new to Postgres and most used to mysql.js, so I'm not sure what's causing this.
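For what it's worth, the shape of the message ("uuid - character varying") suggests the parser is reading an unquoted hyphenated column name such as id-target as a subtraction (id minus target). A minimal psql reproduction sketch (the table and column names here are made up for illustration):

psql -c 'CREATE TABLE t (id uuid, target varchar, "id-target" varchar);'
psql -c 'SELECT id-target FROM t;'    # parsed as id - target -> ERROR: operator does not exist: uuid - character varying
psql -c 'SELECT "id-target" FROM t;'  # quoted: treated as one identifier, no error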
-
How to make PHP's PostgreSQL functions log PHP errors for these PG errors which are normally ignored?
Normally, if I send a syntax error or many other kinds of errors with the pg_* functions from PHP to PostgreSQL, it will log both a PostgreSQL error and a PHP error. (As expected.) However, "some" errors (I'm not sure how it determines which are "unimportant") result in logging only in PostgreSQL, but not as a PHP error. This means that I'm unable to see which script file and line number caused the error, which makes it impossible to debug those errors since I have no idea where they occurred.
Here are the errors (at least the ones which happen to me daily) which are logged by PG but apparently not seen as errors by PHP (with their query in parentheses):
- there is no transaction in progress (simply COMMIT)
- there is already a transaction in progress (simply BEGIN)
I wonder:
- How do I make these also trigger a PHP error?
- Why are these "exempt" from triggering PHP errors?
(I do have my own error logger function in PHP, BTW, but it never seems to receive any errors for those specific kinds of PG errors.)
Could it be because they are considered "WARNING"s instead of "ERROR"s by PG? But even if so, how do I make PHP consider those errors?
To make it clear, this code:
pg_db_call('SELECT;R');
causes the PHP error:
ERROR: syntax error at or near "R"
to be logged, along with the PG ERROR:
syntax error at or near "R"
However, this code:
pg_db_call('BEGIN');
causes NO PHP error to be logged! Only the PG WARNING:
there is already a transaction in progress (for the BEGIN)
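(For what it's worth, PostgreSQL does classify this message as a WARNING rather than an ERROR, which is easy to confirm in a plain psql session:

psql -c 'BEGIN; BEGIN;'
# WARNING:  there is already a transaction in progress

That would be consistent with an error-level PHP hook never firing for it.)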
-
TypeError: __init__() takes 1 positional argument but 33 were given (FLASK)
I am developing in Python, using Flask and SQLAlchemy, but I get an error when submitting the form. This is my code:
from flask import Flask, render_template, request
from flask_sqlalchemy import SQLAlchemy
from datetime import datetime

app = Flask(__name__)

# DB configuration
app.config['SQLALCHEMY_DATABASE_URI'] = 'postgresql://postgres:password@localhost/db'
app.config['SQLALCHEMY_TRACK_MODIFICATIONS'] = False
db = SQLAlchemy(app)

class form_filled(db.Model):
    __tablename__ = 'form_filled'
    idregister = db.Column(db.Integer, primary_key=True)
    username = db.Column(db.Text())
    q1 = db.Column(db.Text())
    q2 = db.Column(db.Text())
    q3 = db.Column(db.Text())
    q4 = db.Column(db.Text())
    dano = db.Column(db.Text())
    q5 = db.Column(db.Text())
    q6 = db.Column(db.Text())
    q7 = db.Column(db.Text())
    q8 = db.Column(db.Text())
    q9 = db.Column(db.Text())
    q10 = db.Column(db.Text())
    q11 = db.Column(db.Text())
    q13 = db.Column(db.Text())
    q14 = db.Column(db.Text())
    q15 = db.Column(db.Text())
    q16 = db.Column(db.Text())
    q17 = db.Column(db.Text())
    q18 = db.Column(db.Text())
    q19 = db.Column(db.Text())
    q20 = db.Column(db.Text())
    q21 = db.Column(db.Text())
    q22 = db.Column(db.Text())
    q23 = db.Column(db.Text())
    q24 = db.Column(db.Text())
    q25 = db.Column(db.Text())
    q26 = db.Column(db.Text())
    q27 = db.Column(db.Text())
    q28 = db.Column(db.Text())
    q29 = db.Column(db.Text())
    q30 = db.Column(db.Text())
    registered_date = db.Column(db.DateTime)

    def __init__(self, username, q1, q2, q3, q4, dano, q5, q6, q7, q8, q9, q10,
                 q11, q12, q13, q14, q15, q16, q17, q18, q19, q20, q21, q22,
                 q23, q24, q25, q26, q27, q28, q29, q30, registered_date):
        #now = datetime.now()
        self.username = username
        self.q1 = q1
        self.q2 = q2
        self.q3 = q3
        self.q4 = q4
        self.dano = dano
        self.q5 = q5
        self.q6 = q6
        self.q7 = q7
        self.q8 = q8
        self.q9 = q9
        self.q10 = q10
        self.q11 = q11
        #self.q12 = q12
        self.q13 = q13
        self.q14 = q14
        self.q15 = q15
        self.q16 = q16
        self.q17 = q17
        self.q18 = q18
        self.q19 = q19
        self.q20 = q10
        self.q21 = q21
        self.q22 = q22
        self.q23 = q23
        self.q24 = q24
        self.q25 = q25
        self.q26 = q26
        self.q27 = q27
        self.q28 = q28
        self.q29 = q29
        self.q30 = q30
        self.registered_date = registered_date

@app.route('/')
def home():
    return render_template('home.html')

@app.route('/about')
def about():
    return render_template('about.html')

@app.route('/success', methods=['GET','POST'])
def success():
    if request.method=='POST':
        username=request.form.get("username")
        q1=request.form.get("q1", "")
        q2=request.form.get("q2", "")
        q3=request.form.get("q3", "")
        q4=request.form.get("q4", "")
        dano=request.form.get("tb1", "")
        q5=request.form.get("q5", "")
        q6=request.form.get("q6", "")
        q7=request.form.get("q7", "")
        q8=request.form.get("q8", "")
        q9=request.form.get("q9", "")
        q10=request.form.get("q10", "")
        q11=request.form.get("q11", "")
        q12=request.form.get("q13", "")
        q13=request.form.get("q14", "")
        q14=request.form.get("q15", "")
        q15=request.form.get("q16", "")
        q16=request.form.get("q17", "")
        q17=request.form.get("q18", "")
        q18=request.form.get("q19", "")
        q19=request.form.get("q20", "")
        q20=request.form.get("q21", "")
        q21=request.form.get("q21", "")
        q22=request.form.get("q22", "")
        q23=request.form.get("q23", "")
        q24=request.form.get("q24", "")
        q25=request.form.get("q25", "")
        q26=request.form.get("q26", "")
        q27=request.form.get("q27", "")
        q28=request.form.get("q28", "")
        q29=request.form.get("q29", "")
        q30=request.form.get("options", "")
        if (username=='' or q1=='' or q2=='' or q3=='' or q4=='' or q5=='' or q6=='' or
                q7=='' or q8=='' or q9=='' or q10=='' or q11=='' or q13=='' or q14=='' or
                q15=='' or q16=='' or q17=='' or q18=='' or q19=='' or q20=='' or q21=='' or
                q22=='' or q23=='' or q24=='' or q25=='' or q26=='' or q27=='' or q28=='' or
                q29==''):
            return render_template('home.html', message='Please, select all the options')
        else:
            data = form_filled(username, q1, q2, q3, q4, dano, q5, q6, q7, q8, q9, q10,
                               q11, q12, q13, q14, q15, q16, q17, q18, q19, q20, q21, q22,
                               q23, q24, q25, q26, q27, q28, q29, q30)
            db.session.add(data)
            db.session.Commit()
            #print(q1,q2,q3,q4,dano)
            #print(request.form)
            return render_template('success.html')

if __name__ == '__main__':
    app.run(debug=True)
I get this error:
TypeError: __init__() takes 1 positional argument but 33 were given
Additionally, I don't know how to insert a record with the current date (like a "date now"); I'm using PostgreSQL. If you would be so kind as to help me: I have been at this for a long time and cannot figure out what the problem is.
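On the side question about the current date: one common approach (a sketch, with the database and table names taken from the code above) is to let PostgreSQL fill the column itself via a default, so the INSERT can omit registered_date entirely:

psql -d db -c "ALTER TABLE form_filled ALTER COLUMN registered_date SET DEFAULT now();"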
-
My OpenShift cluster is unable to resolve the GitHub host for a private repository during build
I am getting the error below while trying to build an image using a private GitHub repository.
Cloning "git@github.com:XYZ/repo.git" ... WARNING: timed out waiting for git server, will wait 1m4s error: ssh: Could not resolve hostname github.com: Name or service not known fatal: Could not read from remote repository. Please make sure you have the correct access rights and the repository exists.
Please note: I am running my OpenShift cluster on a VM, and from the VM itself the Git server resolves successfully.
Any kind of help or idea would be appreciated. Thanks in advance.
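A few hedged diagnostic steps that may help narrow this down (the namespace and object names below are the OpenShift 4 defaults; adjust to your cluster):

# Can an ordinary pod resolve external names at all?
oc run dns-test --rm -it --restart=Never --image=busybox -- nslookup github.com
# Are the cluster DNS pods healthy, and do their logs show errors?
oc -n openshift-dns get pods
oc -n openshift-dns logs ds/dns-default -c dns | tail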
-
Vault: Enable HTTPS UI with OpenShift 4 and Helm 3
I'm looking for some help with enabling HTTPS for the Vault UI on OpenShift with Helm 3 and a self-signed certificate.
To do that, I use Helm 3 and a free OpenShift 4 cluster from Red Hat CodeReady Containers.
Currently, this is what I have done:
Add the HashiCorp repo:
helm repo add hashicorp https://helm.releases.hashicorp.com
Install the latest version of Vault:
[tim@Host-002 crc-linux-1.22.0-amd64]$ helm install vault hashicorp/vault \
>   --namespace vault \
>   --set "global.openshift=true" \
>   --set "server.dev.enabled=true"
Then I run
oc get pods
[tim@Host-002 crc-linux-1.22.0-amd64]$ oc get pods
NAME                                    READY   STATUS              RESTARTS   AGE
vault-0                                 0/1     ContainerCreating   0          20s
vault-agent-injector-7bfb9cffc5-4tl6s   0/1     ContainerCreating   0          21s
I run an interactive shell session in the vault-0 pod:
oc rsh vault-0
Then I initialize Vault:
/ $ vault operator init --tls-skip-verify -key-shares=1 -key-threshold=1
Unseal Key 1: iE1iU5bnEsRPSkx0Jd5LWx2NMy2YH6C8bG9+Zo6/VOs=
Initial Root Token: s.xVb0DvIMQRYam7oS2C0ZsHBC

Vault initialized with 1 key shares and a key threshold of 1. Please securely
distribute the key shares printed above. When the Vault is re-sealed,
restarted, or stopped, you must supply at least 1 of these keys to unseal it
before it can start servicing requests.

Vault does not store the generated master key. Without at least 1 key to
reconstruct the master key, Vault will remain permanently sealed!

It is possible to generate new unseal keys, provided you have a quorum of
existing unseal keys shares. See "vault operator rekey" for more information.
Export the token:
export VAULT_TOKEN=s.xVb0DvIMQRYam7oS2C0ZsHBC
Unseal Vault:
/ $ vault operator unseal --tls-skip-verify iE1iU5bnEsRPSkx0Jd5LWx2NMy2YH6C8bG9+Zo6/VOs=
Key             Value
---             -----
Seal Type       shamir
Initialized     true
Sealed          false
Total Shares    1
Threshold       1
Version         1.6.2
Storage Type    file
Cluster Name    vault-cluster-21448fb0
Cluster ID      e4d4649f-2187-4682-fbcb-4fc175d20a6b
HA Enabled      false
I check the pods:
[tim@Host-002 crc-linux-1.22.0-amd64]$ oc get pods
NAME                                    READY   STATUS    RESTARTS   AGE
vault-0                                 1/1     Running   0          35m
vault-agent-injector-7f5bc979b6-p5bw6   1/1     Running   0          35m
I'm able to get the UI without HTTPS. In the OpenShift console, I switch to the Administrator view, and this is what I've done:
- Networking > Routes > Create Route
- Name: vault-route
- Hostname: 192.168.130.11
- Path:
- Service: vault
- Target Port: 8200 -> 8200 (TCP)
Now, if I check the URL http://192.168.130.11/ui, the UI is available.
What I've done for the HTTPS access: I've created the directory /vault/certs in my /home and run:
openssl genrsa 2048 > ca-key.pem
openssl req -new -x509 -nodes -days 365000 -key ca-key.pem -out ca.pem
openssl req -newkey rsa:2048 -days 365000 -nodes -keyout server-key.pem -out server-req.pem
openssl rsa -in server-key.pem -out server-key.pem
openssl x509 -req -in server-req.pem -days 365000 -CA ca.pem -CAkey ca-key.pem -set_serial 01 -out server-cert.pem
For the information requested, I used:
Country Name (2 letter code) [AU]:XX
State or Province Name (full name) [Some-State]:XXX
Locality Name (eg, city) []:XXX
Organization Name (eg, company) [Internet Widgits Pty Ltd]:XXX
Organizational Unit Name (eg, section) []:XXX
Common Name (e.g. server FQDN or YOUR name) []:192.168.130.11
And:
[tim@localhost certs]$ openssl verify -CAfile ca.pem server-cert.pem
server-cert.pem: OK
To configure HTTPS:
[tim@Host-002 crc-linux-1.22.0-amd64]$ oc create secret tls vault-cert --cert=/home/vault/certs/server-cert.pem --key=/home/vault/certs/server-key.pem -n vault
secret/vault-cert created
[tim@Host-002 crc-linux-1.22.0-amd64]$ oc create secret generic pki-int-cert --from-file=ca.pem=/home/vault/certs/ca.pem -n vault
secret/pki-int-cert created
[tim@Host-002 crc-linux-1.22.0-amd64]$ oc edit statefulset.apps/vault
And I've updated the volumeMounts section like this:
volumeMounts:
- mountPath: /vault/data
  name: data
- mountPath: /vault/config
  name: config
- mountPath: /home/vault
  name: home
- mountPath: /vault/certs
  name: certs
  readOnly: true
And the volumes section like this:
volumes:
- configMap:
    defaultMode: 420
    name: vault-config
  name: config
- emptyDir: {}
  name: home
- name: certs
  projected:
    defaultMode: 420
    sources:
    - secret:
        name: pki-int-cert
    - secret:
        name: vault-cert
I kill the vault-0 pod so the changes are picked up, then check that the pod has access to the different secrets:
[tim@localhost certs]$ oc rsh vault-0
/ $ ls
bin    etc    lib    mnt    proc   run    srv    tmp    var
dev    home   media  opt    root   sbin   sys    usr    vault
/ $ cd vault/
/vault $ ls
certs   config  data    file    logs
/vault $ cd certs/
/vault/certs $ ls
ca.pem   tls.crt  tls.key
Then I've edited the vault-config ConfigMap like this:
[tim@Host-002 crc-linux-1.22.0-amd64]$ oc edit cm vault-config
apiVersion: v1
data:
  extraconfig-from-values.hcl: |-
    disable_mlock = true
    ui = true

    listener "tcp" {
      tls_cert_file = "/vault/certs/tls.crt"
      tls_key_file = "/vault/certs/tls.key"
      tls_client_ca_file = "/vaut/certs/ca.pem"
      address = "[::]:8200"
      cluster_address = "[::]:8201"
    }
    storage "file" {
      path = "/vault/data"
    }
And I kill my pod again so it picks up the new config.
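At this point it can be worth checking, from inside the pod, that the listener really serves TLS now; a quick sketch:

oc -n vault rsh vault-0
/ $ VAULT_ADDR=https://127.0.0.1:8200 vault status -tls-skip-verify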
After that, if I try to use the first route created, I get an error. So I've deleted the first route and recreated it with HTTPS (an equivalent oc command is sketched after this list):
- Networking > Routes > Create Route
- Name: vault-route
- Hostname: 192.168.130.11
- Path:
- Service: vault
- Target Port: 8200 -> 8200 (TCP)
- Secure route enabled
- TLS Termination: Passthrough
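For reference, an equivalent passthrough route can also be created from the CLI (a sketch, assuming the vault namespace):

oc -n vault create route passthrough vault-route --service=vault --port=8200 --hostname=192.168.130.11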
And if I try the URL https://192.168.130.11/ui, I get this error... I think I missed something, but I don't know what.
Can someone help me?
Thanks a lot!
-
Why Use Argo CD in Combination with Tekton
I am configuring a cloud-native OpenShift CI/CD process using Tekton. Tekton can trigger via events and can also deploy directly to a cluster. Given this functionality, I am confused about the ideal use case for Argo CD.
Argo CD appears to share very similar functionality with Tekton, except that it lacks the ability to run builds. If I can build and deploy apps entirely via Tekton, what advantage does Argo CD provide?
-
Remove helm resources when re-deploying with kubectl
I'm pretty new to using Helm. I have a use case where I had a few resources deployed using Helm charts. Now I want to deploy the exact same resources the regular kubectl way (kubectl apply). I believed the resources previously deployed via Helm would be deleted, since I use the same k8s manifests for the Helm chart, but I see a few labels such as app.kubernetes.io/managed-by: Helm still lingering. Do I need to explicitly clean the cluster?
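One likely cleanup path (a sketch; the release and namespace names are placeholders): kubectl apply does not remove Helm's release bookkeeping, so the release has to be uninstalled explicitly, or the leftovers deleted by label:

helm list --all-namespaces                # see what Helm still tracks
helm uninstall my-release -n my-namespace
# or, if the release record is already gone, remove leftovers by label:
kubectl delete all -l app.kubernetes.io/managed-by=Helm -n my-namespace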
Mount volume from filesystem into Grafana container
I'm trying to mount a local filesystem containing dashboard JSON files into the Grafana container via the Grafana Helm chart (in this case through Terraform, with the settings in values.yaml), but I'm not sure how to do that.
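One commonly used pattern with the grafana/grafana chart is its dashboard sidecar, which picks up any ConfigMap carrying the grafana_dashboard label; a sketch (the names are illustrative, and the same chart values can be set from Terraform's helm_release just as well):

kubectl create configmap my-dashboard --from-file=./dashboards/my-dashboard.json
kubectl label configmap my-dashboard grafana_dashboard=1
helm upgrade --install grafana grafana/grafana --set sidecar.dashboards.enabled=true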
-
EKS kube-scheduler
I am new to Kubernetes and completely new to setting it up in EKS.
I am trying to achieve sharing of a GPU between multiple pods. Going through a few documents and articles, I found that I should update the kube-scheduler configuration with parameters which would then allow me to make the necessary changes for enabling GPU sharing between pods.
Question: How do I update the kube-scheduler configuration in EKS? If updating the configuration is not possible, is there some other way I can set up a scheduler only for those pods which require a GPU (see the sketch below)?
How to have highly available Moodle in Kubernetes?
I want to set up highly available Moodle in K8s (on-prem). I'm using Bitnami Moodle with Helm charts.
After a successful Moodle installation, it works. But when a K8s node goes down, the Moodle web page reverts/redirects to the Moodle installation page, like a loop.
Persistent storage is rook-ceph. The Moodle PVC is ReadWriteMany, while the MySQL one is ReadWriteOnce.
The following command was used to deploy Moodle.
helm install moodle --set global.storageClass=rook-cephfs,replicaCount=3,persistence.accessMode=ReadWriteMany,allowEmptyPassword=false,moodlePassword=Moodle123,mariadb.architecture=replication bitnami/moodle
Any help on this is appreciated.
Thanks.
-
High-Availability not working in Hadoop cluster
I am trying to move my non-HA namenode to HA. After setting up all the configurations for the JournalNodes by following the Apache Hadoop documentation, I was able to bring the namenodes up. However, the namenodes crash immediately, throwing the following error.
ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: Failed to start namenode.
java.io.IOException: There appears to be a gap in the edit log. We expected txid 43891997, but got txid 45321534.
I tried to recover the edit logs, initialize the shared edits, etc., but nothing works. I am not sure how to fix this problem without formatting the namenode, since I do not want to lose any data.
Any help is greatly appreciated. Thanks in advance.
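For what it's worth, when one NameNode is still healthy and active, the usual non-destructive path is to rebuild the other NameNode's metadata from it rather than formatting. A heavily hedged sketch (Hadoop 3 daemon syntax, run on the NameNode that keeps crashing):

hdfs --daemon stop namenode
hdfs namenode -bootstrapStandby   # copies the namespace from the active NameNode
hdfs --daemon start namenode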
-
Apache Kafka: consume from a slave/ISR node
I understand the concept of master/slave and data replication in Kafka, but I don't understand why consumers and producers are always routed to the master node when writing to or reading from a partition, instead of being able to read from any ISR (in-sync replica)/slave.
The way I think about it, if all consumers are directed to one single master node, then more hardware is required to handle read/write operations from large consumer groups/producers.
Is it possible to read and write on slave nodes, or will consumers/producers always reach out to the master node of that partition?
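For reference, since Kafka 2.4 (KIP-392) consumers can fetch from followers when the brokers advertise a rack-aware replica selector and the client declares its rack; producers still always write to the leader. A sketch (broker, topic, and rack names are illustrative):

# Broker side, in server.properties:
#   broker.rack=us-east-1a
#   replica.selector.class=org.apache.kafka.common.replica.RackAwareReplicaSelector
# Consumer side:
kafka-console-consumer.sh --bootstrap-server broker:9092 --topic my-topic \
  --consumer-property client.rack=us-east-1a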