Linux and Windows operating systems for AlwaysOn
I have read that we have the ability to mix Linux and Windows operating systems in an AlwaysOn configuration. For example, you can run your primary on Windows and your secondary replicas on Linux. I wanted to know the major challenges/benefits of a setup like this (in terms of performance, auto scaling, costs, databases, etc.), and whether it is advisable to run an environment like this in prod. Please share your feedback on this. Thank you in advance.
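Worth noting for context: as far as Microsoft's documentation goes, mixed Windows/Linux replicas are supported only as a cluster-less configuration (CLUSTER_TYPE = NONE, intended for read-scale or migration), because no single cluster manager (WSFC or Pacemaker) spans both operating systems. A minimal sketch of that kind of setup, with hypothetical host names, and assuming mirroring endpoints on port 5022 already exist on both instances:

# Sketch only: cluster-less availability group spanning Windows and Linux.
# On the Windows primary:
sqlcmd -S win-primary -Q "
  CREATE AVAILABILITY GROUP [ag1]
  WITH (CLUSTER_TYPE = NONE)
  FOR REPLICA ON
    N'win-primary'     WITH (ENDPOINT_URL = N'tcp://win-primary:5022',
      AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT, FAILOVER_MODE = MANUAL,
      SEEDING_MODE = AUTOMATIC),
    N'linux-secondary' WITH (ENDPOINT_URL = N'tcp://linux-secondary:5022',
      AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT, FAILOVER_MODE = MANUAL,
      SEEDING_MODE = AUTOMATIC);"

# On the Linux secondary:
sqlcmd -S linux-secondary -Q "
  ALTER AVAILABILITY GROUP [ag1] JOIN WITH (CLUSTER_TYPE = NONE);
  ALTER AVAILABILITY GROUP [ag1] GRANT CREATE ANY DATABASE;"

The practical consequence for production is that failover is a manual step in this topology, so it is usually positioned for offloading reads or for migrating to Linux rather than as a full HA solution.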
See also questions close to this topic
-
flutter accessing shared folder/files on windows
I'm working on a project which requires access to shared folders/files using Flutter.
Are there packages/workarounds to do this? Either within the same network or outside it is fine. So far I have only found path_provider, which reads/writes on the local device. (I'm using Windows.)
-
Host a Git repo on Windows server using only OpenSSH/Git
Is it possible to host a Git repo that I can pull from/push to, and which supports LFS, using only Git for Windows?
I have tried and failed to get Git Servers such as Bonobo and Gitbucket approved through my IT department.
I am wondering if it is at all possible to have some sort of repo I can interact with solely using OpenSSH/Git for Windows. I cannot for the life of me find documentation on this... Any and all help is appreciated.
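For what it's worth, a minimal sketch assuming Windows' built-in OpenSSH Server is enabled and Git for Windows is on the server's PATH (paths and names below are hypothetical). Plain SSH is enough for push/pull; LFS is the catch, since it traditionally needs an HTTP(S) endpoint, although recent git-lfs releases add a pure-SSH transfer.

# On the Windows server: create a bare repository to serve as the remote
git init --bare C:/git/myproject.git

# On any client: clone/push/pull over OpenSSH (note the drive letter in the path)
git clone ssh://user@winserver/C:/git/myproject.git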
-
(Python) Is there a way to show warnings before closing the CMD?
I'm trying to create a "persistent" Python program. Is there a way to prevent the CMD window from closing, or at least to show a warning message before it closes?
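One common sketch for the double-click case: launch the script from a small batch file so the console stays open after the program exits or crashes (myscript.py is a hypothetical name).

@echo off
rem run_persistent.bat - keeps the CMD window open after Python exits
python myscript.py
pause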
-
Freeing some disk space on my Amazon Linux 2
I was doing testing on my server and kept getting this error:
-bash: cannot create temp file for here-document: No space left on device
So I checked it with the command "df".
Filesystem     1K-blocks    Used Available Use% Mounted on
devtmpfs         1970540       0   1970540   0% /dev
tmpfs            1988952       0   1988952   0% /dev/shm
tmpfs            1988952   65980   1922972   4% /run
tmpfs            1988952       0   1988952   0% /sys/fs/cgroup
/dev/nvme0n1p1   8376300 8376280        20 100% /
tmpfs             397792      16    397776   1% /run/user/1000
I saw that the filesystem on /dev/nvme0n1p1 is using most of the space. Is there any way I can free that space up, or what else can I do to get some free space?
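A sketch of how to find and reclaim the space (the cleanup targets below are common ones on Amazon Linux; check what du actually reports before deleting anything):

# Largest directories on the root filesystem (-x stays on this one filesystem)
sudo du -xh / 2>/dev/null | sort -rh | head -20

# Common reclaimable space
sudo yum clean all                   # cached package metadata/downloads
sudo journalctl --vacuum-size=100M   # trim systemd journal logs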
-
EC2 Instance with 'This site can’t be reached' error
I always get a 'This site can’t be reached' error when trying to access the IP address of my EC2 instance.
This happens for all IP addresses (Public IPv4 address, Public IPv4 DNS, Private IPv4 addresses, Private IPv4 DNS) and for all EC2 instances I create. In particular, I am trying to run WordPress on my EC2 following this guide. The whole installation runs fine, but the IP is still unreachable.
Here are some proposed solutions I tried that didn't solve the issue:
- My inbound and outbound security rules are already allowing ssh (port 22), http (port 80) and https (port 443) from all origins (0.0.0.0/0, ::/0).
- I deactivated my Windows firewall. In any case, I can't access it from other computers or from my mobile either.
- The EC2 instance created is the basic Linux 2 t2.micro (exactly as in the guide), and I have tried reaching the IP of a brand-new EC2 instance without WordPress or anything on it, and the same happens. Am I expected to get anything from the IP of a brand-new EC2 instance at all?
- I can connect with ssh without issues.
I am a root user under the free tier; does that have any impact?
I would really appreciate it if someone could tell me where else to look, as most solutions on the internet point to the list above and none of them solved my case.
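Since SSH works, a sketch of one more place to look, from inside the instance: confirm a web server is actually running, bound to port 80, and answering locally before suspecting AWS networking. (For a brand-new instance with no web server installed, the browser error is expected.)

# Is anything listening on port 80?
sudo ss -tlnp | grep ':80'

# Is the web server service up? (httpd on Amazon Linux; apache2/nginx elsewhere)
sudo systemctl status httpd

# Does it answer from the instance itself?
curl -I http://localhost/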
-
Is chmod 777 safe for Shiny app hosted on AWS EC2?
I've hosted a web application (using Shiny, an R package for interactive web applications) on AWS EC2. I've also set up an authentication mechanism on the app with shinymanager (https://datastorm-open.github.io/shinymanager/). This authentication requires reading from and writing to a SQLite database (database.sqlite) that stores user information (username, password, etc.). In order to create new users in the authentication system, I need "write" permission, so I changed the permissions of the SQLite database with chmod 777.
[Screenshot of the user permissions of the sqlite database]
Question: The AWS server is only used by one other person, but I'm worried that since the web application is hosted on a public IP address, bad actors could get at the sensitive data. Is this permission setting safe? If not, how do I change the permission setting so that I can still read, write, and do everything that 777 allows, but more safely?
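A sketch of a tighter alternative, assuming the app runs under Shiny Server's default shiny account and lives in /srv/shiny-server/myapp (both are assumptions; adjust to your setup). The idea is to give the owning account, and nobody else, read/write access; note that SQLite also needs write access to the containing directory for its journal files.

# Hypothetical path and user: make the app's runtime account own the database
sudo chown shiny:shiny /srv/shiny-server/myapp/database.sqlite
sudo chmod 600 /srv/shiny-server/myapp/database.sqlite   # owner read/write only

# SQLite creates -journal/-wal files next to the db, so the directory
# must be writable by the same account
sudo chown shiny:shiny /srv/shiny-server/myapp
sudo chmod 700 /srv/shiny-server/myapp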
-
Cluster Autoscaler and Horizontal Pod Autoscaler working together
I have a cluster with Cluster Autoscaler activated and HPA for one of my deployments.
This is the HPA definition:
kind: HorizontalPodAutoscaler
metadata:
  name: hpa-resource-metrics-cpu
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: ReplicationController
    name: hello-hpa-cpu
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      targetAverageUtilization: 50
Now, in a situation where my cluster is being used very lightly, this deployment will only have 1 available replica.
And since the cluster is not under high usage, it could be the case that the node containing that replica is scheduled for deletion (downscaling).
In that case, it would make my deployment have a downtime (when the cluster node is deleted, the only replica for the deployment is deleted as well, so it needs to be rescheduled in a new pod). I don't want that to happen (the downtime).
From this issue: https://github.com/kubernetes/kubernetes/issues/48307, it seems that Pod Disruption Budgets are not applicable to deployments with only 1 replica.
So would the only solution to my problem be to have minReplicas set to 2? Or is there something else I could do to prevent this downtime and still leave minReplicas at 1?
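For what it's worth, one possible sketch that keeps minReplicas at 1: annotate the pod template so the Cluster Autoscaler refuses to evict the pod, which in turn keeps its node from being scaled down (the workload name is taken from the HPA above).

# Mark this workload's pods as not-safe-to-evict for the Cluster Autoscaler;
# the node hosting the single replica will then not be removed on scale-down
kubectl patch deployment hello-hpa-cpu -p \
  '{"spec":{"template":{"metadata":{"annotations":{"cluster-autoscaler.kubernetes.io/safe-to-evict":"false"}}}}}'

The trade-off is that the node can never be reclaimed while the pod is running, so this can cost more than simply letting a second replica provide the safety margin.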
-
Autoscaling Greenplum on GKE does not behave in the correct manner
Good evening, everyone.
I have installed Greenplum on a GKE cluster which is running fine (following the guide here)
The problem now is that autoscaling is not correctly adding pods: it creates the new pods, and after 5 seconds they are in a terminated state. I have already created the GKE cluster without the "alpha configuration", as was specified in another Stack Overflow question.
No errors are thrown, but I see these in the logs:
E0301 16:49:01.684087    2685 pod_workers.go:191] Error syncing pod 62b01d0a-0890-416b-b288-b3ec943d70ba ("segment-a-1_default(62b01d0a-0890-416b-b288-b3ec943d70ba)"), skipping: unmounted volumes=[greenplum-system-pod-token-78b9b ssh-key-volume config-volume my-greenplum-pgdata cgroups podinfo], unattached volumes=[greenplum-system-pod-token-78b9b ssh-key-volume config-volume my-greenplum-pgdata cgroups podinfo]: timed out waiting for the condition
E0301 16:49:01.731718    2774 kubelet.go:1686] Unable to attach or mount volumes for pod "segment-a-2_default(c82d8954-1b17-4be6-a262-13ddc7c991b3)": unmounted volumes=[config-volume my-greenplum-pgdata cgroups podinfo greenplum-system-pod-token-78b9b ssh-key-volume], unattached volumes=[config-volume my-greenplum-pgdata cgroups podinfo greenplum-system-pod-token-78b9b ssh-key-volume]: timed out waiting for the condition; skipping pod
Any suggestions on how to troubleshoot it?
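Both messages say the pods timed out waiting for volumes to mount, so, as a sketch, a first step is to ask Kubernetes where the mounts are stuck (pod and resource names taken from the log above):

# Events on the failing segment pod usually name the PVC or the attach error
kubectl describe pod segment-a-1

# Are the new segments' PersistentVolumeClaims Bound, or stuck Pending?
kubectl get pvc

# Recent cluster events, newest last
kubectl get events --sort-by=.lastTimestamp | tail -20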
-
load distribution between pods in hpa
I notice the CPU utilization of pods behind the same HPA varies from 31m to 1483m. Is this expected and normal? See below for the CPU utilization of the 8 pods belonging to the same HPA.
NAME            CPU(cores)
myapp-svc-pod1  31m
myapp-svc-pod2  87m
myapp-svc-pod3  1061m
myapp-svc-pod4  35m
myapp-svc-pod5  523m
myapp-svc-pod6  1483m
myapp-svc-pod7  122m
myapp-svc-pod8  562m
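Skew like this often comes from how traffic reaches the pods rather than from the HPA itself; ClientIP session affinity or a few long-lived client connections (HTTP keep-alive, gRPC) will pin load to a subset of pods. A sketch of two things worth checking (the service name is guessed from the pod names):

# Does the service pin each client to one pod?
kubectl get svc myapp-svc -o jsonpath='{.spec.sessionAffinity}{"\n"}'

# Are all 8 pods registered as endpoints of the service?
kubectl get endpoints myapp-svc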
-
Connect to staging vpn using openvpn instead of openvpn3
I was given the following files from work, in a compressed folder
ca.crt dh.pem myuser.crt myuser.key myuser.ovpn ta.key
The suggested way is to connect using openvpn3 from this site
Is it possible to use these files to connect using the command-line openvpn that comes with the Linux Mint distribution?
Thank you
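It generally is possible, since the .ovpn profile normally references the other files by name. A minimal sketch, assuming the files were extracted to a directory of your choosing:

cd ~/staging-vpn          # hypothetical directory holding the extracted files
sudo openvpn --config myuser.ovpn

Running it from the directory containing the files matters when the profile refers to ca.crt, ta.key, etc. by relative path.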
-
Cannot update pip in Linux Mint 18.2
I'm having great difficulty installing Jupyter Notebook, and it seems updating pip might help with the permissions error I keep getting.
First I attempted to install setuptools using pip install setuptools, as that was what the error returned when I tried to update pip told me to do. Then, as prompted, I attempted to update pip using pip install --upgrade pip, and the error I got suggests that the old version of pip is preventing me from updating pip(!). Where do I go from here?
-
Problem installing nodejs version >= 10 on Linux Mint 19.3, which stubbornly installs nodejs 8.10
I am trying to use https://github.com/nodesource/distributions/blob/master/README.md to install nodejs in a version higher than 10 (and then npm) on Linux Mint 19.3. It stubbornly installs the 8.10 version.
I tried fixing it with the tip from https://unix.stackexchange.com/questions/538536/newest-version-of-nodejs-is-not-intalling-in-linux-mint-tina, but 1) check_alt "Linux Mint" "tricia" "Ubuntu" "bionic" is already in the script, and 2) the result is the same.
I attempted to use sudo apt-get install as well as wget, which failed just like my last attempt using the downloaded installation script:
dag@Arokh:~/Desktop/tmp$ sudo apt-get remove nodejs
[sudo] password for dag:
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following packages were automatically installed and are no longer required:
  libc-ares2 libhttp-parser2.7.1 libuv1 nodejs-doc
Use 'sudo apt autoremove' to remove them.
The following packages will be REMOVED:
  nodejs
0 upgraded, 0 newly installed, 1 to remove and 0 not upgraded.
After this operation, 18,0 MB disk space will be freed.
Do you want to continue? [Y/n] y
(Reading database ... 331148 files and directories currently installed.)
Removing nodejs (8.10.0~dfsg-2ubuntu0.4) ...
Processing triggers for man-db (2.8.3-2ubuntu0.1) ...

dag@Arokh:~/Desktop/tmp$ sudo ./setup_10.x
## Installing the NodeSource Node.js 10.x repo...
## Populating apt-get cache...
+ apt-get update
Hit:1 http://archive.ubuntu.com/ubuntu bionic InRelease
Get:2 http://security.ubuntu.com/ubuntu bionic-security InRelease [88,7 kB]
Get:3 http://archive.ubuntu.com/ubuntu bionic-updates InRelease [88,7 kB]
Ign:4 http://packages.linuxmint.com tricia InRelease
Hit:5 https://deb.nodesource.com/node_10.x bionic InRelease
Hit:6 http://archive.canonical.com/ubuntu bionic InRelease
Get:7 http://archive.ubuntu.com/ubuntu bionic-backports InRelease [74,6 kB]
Hit:8 http://packages.linuxmint.com tricia Release
Fetched 252 kB in 2s (137 kB/s)
Reading package lists... Done
## You seem to be using Linux Mint version tricia.
## This maps to Ubuntu "bionic"... Adjusting for you...
## Confirming "bionic" is supported...
+ curl -sLf -o /dev/null 'https://deb.nodesource.com/node_10.x/dists/bionic/Release'
## Adding the NodeSource signing key to your keyring...
+ curl -s https://deb.nodesource.com/gpgkey/nodesource.gpg.key | apt-key add -
OK
## Creating apt sources list file for the NodeSource Node.js 10.x repo...
+ echo 'deb https://deb.nodesource.com/node_10.x bionic main' > /etc/apt/sources.list.d/nodesource.list
+ echo 'deb-src https://deb.nodesource.com/node_10.x bionic main' >> /etc/apt/sources.list.d/nodesource.list
## Running `apt-get update` for you...
+ apt-get update
Hit:1 http://archive.ubuntu.com/ubuntu bionic InRelease
Get:2 http://security.ubuntu.com/ubuntu bionic-security InRelease [88,7 kB]
Hit:3 http://archive.canonical.com/ubuntu bionic InRelease
Hit:4 https://deb.nodesource.com/node_10.x bionic InRelease
Get:5 http://archive.ubuntu.com/ubuntu bionic-updates InRelease [88,7 kB]
Ign:6 http://packages.linuxmint.com tricia InRelease
Get:7 http://archive.ubuntu.com/ubuntu bionic-backports InRelease [74,6 kB]
Hit:8 http://packages.linuxmint.com tricia Release
Fetched 252 kB in 2s (140 kB/s)
Reading package lists... Done
## Run `sudo apt-get install -y nodejs` to install Node.js 10.x and npm
## You may also need development tools to build native addons:
     sudo apt-get install gcc g++ make
## To install the Yarn package manager, run:
     curl -sL https://dl.yarnpkg.com/debian/pubkey.gpg | sudo apt-key add -
     echo "deb https://dl.yarnpkg.com/debian/ stable main" | sudo tee /etc/apt/sources.list.d/yarn.list
     sudo apt-get update && sudo apt-get install yarn

dag@Arokh:~/Desktop/tmp$ sudo apt-get update
Get:1 http://security.ubuntu.com/ubuntu bionic-security InRelease [88,7 kB]
Hit:2 http://archive.canonical.com/ubuntu bionic InRelease
Hit:3 https://deb.nodesource.com/node_10.x bionic InRelease
Hit:4 http://archive.ubuntu.com/ubuntu bionic InRelease
Get:5 http://archive.ubuntu.com/ubuntu bionic-updates InRelease [88,7 kB]
Ign:6 http://packages.linuxmint.com tricia InRelease
Hit:7 http://packages.linuxmint.com tricia Release
Get:8 http://archive.ubuntu.com/ubuntu bionic-backports InRelease [74,6 kB]
Fetched 252 kB in 2s (139 kB/s)
Reading package lists... Done

dag@Arokh:~/Desktop/tmp$ sudo apt-get install -y nodejs
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following NEW packages will be installed:
  nodejs
0 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
Need to get 0 B/4 845 kB of archives.
After this operation, 18,0 MB of additional disk space will be used.
Selecting previously unselected package nodejs.
(Reading database ... 331139 files and directories currently installed.)
Preparing to unpack .../nodejs_8.10.0~dfsg-2ubuntu0.4_i386.deb ...
Unpacking nodejs (8.10.0~dfsg-2ubuntu0.4) ...
Setting up nodejs (8.10.0~dfsg-2ubuntu0.4) ...
update-alternatives: using /usr/bin/nodejs to provide /usr/bin/js (js) in auto mode
Processing triggers for man-db (2.8.3-2ubuntu0.1) ...
dag@Arokh:~/Desktop/tmp$ node -v
v8.10.0
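One detail in the transcript stands out: apt unpacks nodejs_8.10.0~dfsg-2ubuntu0.4_i386.deb, i.e. a 32-bit package from Ubuntu's own archive. As far as I know, NodeSource's node_10.x repository publishes amd64 (and ARM) builds but no 32-bit x86 ones, so on an i386 install apt can only fall back to Ubuntu's nodejs 8.10. A sketch of how to confirm:

# Which repository and version is apt actually selecting for nodejs?
apt-cache policy nodejs

# Is this a 32-bit userland? "i386" here would explain the fallback
dpkg --print-architecture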
-
How to have highly available Moodle in Kubernetes?
I want to set up highly available Moodle in K8s (on-prem). I'm using Bitnami Moodle with Helm charts.
After a successful Moodle installation, it works. But when a K8s node goes down, the Moodle web page displays/reverts/redirects to the Moodle installation web page. It's like a loop.
Persistent storage is rook-ceph. The Moodle PVC is ReadWriteMany, whereas the MySQL one is ReadWriteOnce.
The following command was used to deploy Moodle.
helm install moodle \
  --set global.storageClass=rook-cephfs,replicaCount=3,persistence.accessMode=ReadWriteMany,allowEmptyPassword=false,moodlePassword=Moodle123,mariadb.architecture=replication \
  bitnami/moodle
Any help on this is appreciated.
Thanks.
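A troubleshooting sketch (the label assumes the chart's standard app.kubernetes.io labels for a release named moodle): a redirect back to the installer usually means a replica started with an empty moodledata/config, i.e. not every pod is mounting the same shared RWX volume.

# Is there one ReadWriteMany claim, and is it Bound?
kubectl get pvc

# Do all three Moodle replicas mount that same claim?
kubectl describe pods -l app.kubernetes.io/name=moodle | grep -i -A2 claimname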
-
High-Availability not working in Hadoop cluster
I am trying to move my non-HA namenode to HA. After setting up all the configuration for the JournalNodes by following the Apache Hadoop documentation, I was able to bring the namenodes up. However, the namenodes crash immediately, throwing the following error.
ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: Failed to start namenode.
java.io.IOException: There appears to be a gap in the edit log. We expected txid 43891997, but got txid 45321534.
I tried to recover the edit logs, initialize the shared edits, etc., but nothing works. I am not sure how to fix this problem without formatting the namenode, since I do not want to lose any data.
Any help is greatly appreciated. Thanks in advance.
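One sketch worth considering, under the assumption that one namenode is healthy and can run as active with the complete namespace: instead of repairing the gapped edit stream on the second namenode, have it copy the active's current fsimage and edits wholesale. This does not format anything and does not touch HDFS block data.

# Run on the NEW/standby namenode only, with the other namenode up as active
hdfs namenode -bootstrapStandby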
-
Apache Kafka Consume from Slave/ISR node
I understand the concept of master/slave (leader/follower) and data replication in Kafka, but I don't understand why consumers and producers are always routed to the master node of a partition when writing/reading, instead of being able to read from any ISR (in-sync replica)/slave.
The way I think about it, if all consumers are directed to one single master node, then more hardware is required to handle read/write operations from large consumer groups/producers.
Is it possible to read and write on slave nodes, or will consumers/producers always reach out to the master node of that partition?
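For context: writes always have to go to the partition leader, because the leader sequences the log that followers replicate. Reads, however, no longer have to: since Kafka 2.4 (KIP-392), consumers can fetch from a follower replica chosen by the broker's replica selector. A configuration sketch, with hypothetical rack names:

# broker server.properties: identify the broker's location and enable
# rack-aware replica selection for consumer fetches
broker.rack=us-east-1a
replica.selector.class=org.apache.kafka.common.replica.RackAwareReplicaSelector

# consumer configuration: prefer replicas in the same rack/zone
client.rack=us-east-1a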