How to configure streaming replication from the Ansible Tower embedded database?
Ansible Tower uses PostgreSQL for application-level data storage. The Tower setup process does not configure streaming replication or hot standby, which can be used for disaster recovery or high availability solutions. How can streaming replication be enabled and configured? How can the Tower database be replicated to a high-availability instance in the local datacenter and/or to a disaster recovery instance?
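For context, streaming replication is configured at the PostgreSQL layer rather than by the Tower installer, so the usual PostgreSQL steps apply to Tower's database as to any other instance. A minimal sketch, assuming a PostgreSQL 9.6/10-era server (the embedded version varies by Tower release), a hypothetical replication role named replicator, a hypothetical primary hostname tower-db-primary.example.com, and a standby on the 10.0.0.0/24 subnet:

# primary: postgresql.conf
wal_level = replica
max_wal_senders = 5
wal_keep_segments = 64
hot_standby = on

# primary: pg_hba.conf - allow the standby to connect for replication
host    replication    replicator    10.0.0.0/24    md5

-- primary: create the replication role
CREATE ROLE replicator WITH REPLICATION LOGIN PASSWORD 'changeme';

# standby: clone the primary and start streaming (run as the postgres user)
pg_basebackup -h tower-db-primary.example.com -U replicator -D /var/lib/pgsql/data -X stream -R

The -R flag writes the standby's recovery settings (recovery.conf on PostgreSQL 11 and earlier, standby.signal plus primary_conninfo on 12+), after which the standby follows the primary and can serve read-only queries as a hot standby for local HA or at a remote DR site.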
See also questions close to this topic
-
How can an Ansible loop export each database into its own corresponding SQL file?
How can Ansible loop over several databases and export each looped database into its own corresponding SQL file?
- name: Export "{{ mysql_container }}" data
  shell: docker exec $(docker ps | grep -w mariadb | awk '{print $1}') mysqldump -uroot -p'{{ mysql_root_password }}' {{ item|quote }} > /opt/pass/db/dsb-api.sql
  register: shell_output
  loop:
    - dsb-api
    - gateway_management
    - information_schema
    - performance_schema
    - uaa
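Every iteration above redirects to the same dsb-api.sql, so each dump overwrites the previous one. A minimal sketch of one way to write each database to its own file (the output directory /opt/pass/db is taken from the question; the per-item filename pattern is an assumption):

- name: Export "{{ mysql_container }}" data, one file per database
  shell: >
    docker exec $(docker ps | grep -w mariadb | awk '{print $1}')
    mysqldump -uroot -p'{{ mysql_root_password }}' {{ item|quote }}
    > /opt/pass/db/{{ item }}.sql
  register: shell_output
  loop:
    - dsb-api
    - gateway_management
    - information_schema
    - performance_schema
    - uaa

Because the redirect uses {{ item }}, each loop iteration lands in its own corresponding SQL file instead of overwriting a shared one.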
-
Fetch AWS SSM Parameter from another AWS account using ansible and using aws profile or arn
I am trying to fetch secrets from another AWS account using Ansible. Is there an option to use the IAM role ARN of the secrets AWS account to fetch parameters? Every instance that we create will be linked with this secrets account's role ARN.
IAMSECRETARN : arn:aws:iam::xxxxxxxx:role/service/xxx-xxxx-xxxxx-xxxxx-xxx-79392xx94110xxxx
---
- name: Testing shared Roles
  hosts: all
  tasks:
    - name: lookup a key which doesn't exist, failing to store it in a fact
      set_fact:
        temp_secret: "{{ lookup('aws_ssm', '/xxx/xxx/xxx/ansible-tower/admin_password', region='us-east-1', aws_profile='xxx-xxx-xxx') }}"
      ignore_errors: true

    - name: SSM PARAMS
      debug:
        msg: "{{ temp_secret }}"
Instead of "aws_profile" I need to use the role ARN, or any other method that does not require a static access key and secret key.
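One commonly used pattern is to assume the role in the other account with the sts_assume_role module first and feed the temporary credentials into the aws_ssm lookup, rather than relying on a local profile. A minimal sketch, assuming the placeholder ARN from the question is available as the variable IAMSECRETARN and that the host running the playbook is allowed to call sts:AssumeRole on it (e.g. via its instance profile):

- name: Assume the role in the secrets account
  sts_assume_role:
    role_arn: "{{ IAMSECRETARN }}"
    role_session_name: ansible-ssm-lookup
    region: us-east-1
  register: assumed

- name: Fetch the parameter with the temporary credentials
  set_fact:
    temp_secret: "{{ lookup('aws_ssm', '/xxx/xxx/xxx/ansible-tower/admin_password',
                     region='us-east-1',
                     aws_access_key=assumed.sts_creds.access_key,
                     aws_secret_key=assumed.sts_creds.secret_key,
                     aws_security_token=assumed.sts_creds.session_token) }}"

No long-lived access key or secret key is stored anywhere; the credentials returned by STS are temporary and scoped to the assumed role.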
-
ansible - send email task fails with an error
My ansible version is 2.4
My intention is to send an email listing all servers with an uptime of more than 100 days. The script below works fine only if all the servers in the inventory are accessible.
If any server in the inventory is not reachable, the send mail task fails with the error below:
TASK [send mail]
fatal: [localhost]: FAILED! => {"msg": "The task includes an option with an undefined variable. The error was: 'dict object' has no attribute 'stdout'\n\nThe error appears to have been in '/ansible/uptime.yml': line 26, column 10, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n tasks:\n - name: send mail \n ^ here\n\nexception type: <class 'ansible.errors.AnsibleUndefinedVariable'>\nexception: 'dict object' has no attribute 'stdout'"}
  tasks:
    - name: check uptime
      ignore_errors: yes
      shell: uptime |awk '{print $3}'
      register: up_time

- name: send mail
  hosts: localhost
  gather_facts: no
  become: no
  tasks:
    - name: send mail
      mail:
        host: relayserver
        port: 25
        from: user@company.com
        to: manager@company.com
        subject: uptime
        body: |-
          {% for server, servervars in hostvars.items() %}
          {% if servervars.up_time.stdout |int >= 100 %}
          {{ server }} - {{"has"}} {{ servervars.up_time.stdout }}
          {% endif %}
          {% endfor %}
      delegate_to: localhost
      run_once: true
I need the mail task to ignore the servers that are not reachable and send the mail only for the servers that are accessible; a sketch of one way to do this follows.
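Unreachable hosts never run the "check uptime" task, so hostvars for them has no up_time result, which is what raises the AnsibleUndefinedVariable error. One way (a sketch, keeping the rest of the mail task unchanged) is to guard the Jinja2 loop so those hosts are skipped:

        body: |-
          {% for server, servervars in hostvars.items() %}
          {% if servervars.up_time is defined and servervars.up_time.stdout is defined and servervars.up_time.stdout | int >= 100 %}
          {{ server }} - has {{ servervars.up_time.stdout }}
          {% endif %}
          {% endfor %}

With the "is defined" checks the template only reports hosts that actually returned an uptime, and the mail task no longer fails when some inventory hosts are down.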
-
Variable column name on function
I'm new to pgsql, but I have 8 years of experience with MSSQL. What I'm trying to achieve is: create a function that removes invalid data from names. It should remove all special characters, numbers and accents, keeping only spaces and a-Z characters. I want to use it on columns of different tables, but I can't really find what I'm doing wrong.
Here is my code:
CREATE OR REPLACE FUNCTION f_validaNome (VARCHAR(255))
RETURNS VARCHAR(255) AS
SELECT regexp_replace(unaccent($1), '[^[:alpha:]\s]', '', 'g')
COMMIT
If I run
SELECT regexp_replace(unaccent(column_name), '[^[:alpha:]\s]', '', 'g') from TableA
my code runs fine. I don't know exactly what is wrong with the function code.
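For comparison, a version of the function that parses and runs (assuming the unaccent extension is installed and keeping the original regex) needs a dollar-quoted body and a LANGUAGE clause instead of the trailing COMMIT, something like:

CREATE OR REPLACE FUNCTION f_validaNome(VARCHAR(255))
RETURNS VARCHAR(255) AS $$
    SELECT regexp_replace(unaccent($1), '[^[:alpha:]\s]', '', 'g');
$$ LANGUAGE sql;

-- usage on a column of any table, e.g.:
-- SELECT f_validaNome(column_name) FROM TableA;

COMMIT is not valid inside a function definition, and without the $$ ... $$ quoting and LANGUAGE sql (or plpgsql) clause the CREATE FUNCTION statement itself is a syntax error.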
-
PostgreSQL GROUP BY / ORDER BY query
I have this table:
kode | wilayah   | area
001  | Wilayah 1 | Area Padang
002  | Wilayah 2 | Area Bandung
006  | Wilayah 3 | Area Bandung
008  | Wilayah 4 | Area Bogor
004  | Wilayah 5 | Area Jakarta
In the table above, the same area 'Area Bandung' appears twice with different codes. How can I group by area so that the grouped result shows 'Area Bandung' with code 006?
I tried the query below, but PostgreSQL raises an error:
SELECT * FROM tbl_area GROUP BY area ORDER BY kode DESC
How can I get a result like this?
kode | wilayah   | area
001  | Wilayah 1 | Area Padang
006  | Wilayah 3 | Area Bandung
008  | Wilayah 4 | Area Bogor
004  | Wilayah 5 | Area Jakarta
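For reference, PostgreSQL rejects SELECT * ... GROUP BY area because kode and wilayah are neither grouped nor aggregated. One way to keep a single row per area and pick the highest kode within it is DISTINCT ON, a sketch assuming the table and column names from the question:

SELECT DISTINCT ON (area) kode, wilayah, area
FROM tbl_area
ORDER BY area, kode DESC;

DISTINCT ON keeps the first row of each area group according to the ORDER BY, so ordering by kode DESC within each area returns code 006 for 'Area Bandung'. If a different final ordering is needed (for example by wilayah, as in the desired output), wrap this query in a subquery and add an outer ORDER BY.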
-
How to remove duplicate rows in PostgreSQL with a condition?
I want to remove the duplicated rows whose updated date is less than the maximum updated date. However, I was not able to delete them.
My attempt:
SELECT *
FROM (
    SELECT *,
           ROW_NUMBER() OVER (PARTITION BY order_code, customer_id, id, sign_date
                              ORDER BY updated_at DESC) AS Row
    FROM fulfillment_bill_transactions
    WHERE active = 1
      AND transaction_type = 2
) dups
WHERE dups.Row > 1
The result of my query shows only the rows with the minimum updated date, not all the rows except the one with the maximum updated date.
Here is the example:
Table A:
ID | Sign Date  | Customer id | Order code | updated_at
A  | 2021/01/01 | 001         | AB         | 2020/01/02
A  | 2021/01/01 | 001         | AB         | 2020/01/03
A  | 2021/01/01 | 001         | AB         | 2020/01/12
B  | 2021/01/03 | 002         | LL         | 2020/02/02
B  | 2021/01/03 | 002         | LL         | 2020/02/03
B  | 2020/01/03 | 002         | LL         | 2020/02/04

Desired result:

ID | Sign Date  | Customer id | Order code | updated_at
A  | 2021/01/01 | 001         | AB         | 2020/01/12
B  | 2020/01/03 | 002         | LL         | 2020/02/04
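For the delete itself, one common pattern is to reuse the same row-numbering logic inside a CTE and remove everything but the newest row per group. A sketch, assuming there is no convenient single-column primary key and therefore matching rows by the system column ctid (the partition columns and filters are taken from the query above):

WITH dups AS (
    SELECT ctid,
           ROW_NUMBER() OVER (PARTITION BY order_code, customer_id, id, sign_date
                              ORDER BY updated_at DESC) AS rn
    FROM fulfillment_bill_transactions
    WHERE active = 1
      AND transaction_type = 2
)
DELETE FROM fulfillment_bill_transactions t
USING dups
WHERE t.ctid = dups.ctid
  AND dups.rn > 1;

Every row except the one with the latest updated_at in each (order_code, customer_id, id, sign_date) group is deleted, which matches the desired result above.

-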
Connect to HDFS HA (High Availability) from Scala
I have Scala code that can currently connect to HDFS through a single namenode (non-HA). The namenode, location, conf.location and Kerberos parameters are specified in a .conf file inside the Scala project. However, there is now a new cluster with HA (a primary and a standby namenode). How can the client be configured in Scala to support both environments, non-HA and HA (with automatic namenode failover)?
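In an HA cluster the HDFS client does not point at a single namenode but at a logical nameservice whose namenodes and failover proxy provider are declared in the configuration (normally via hdfs-site.xml, though they can also be set programmatically). A minimal sketch in Scala, assuming hadoop-client is on the classpath and Kerberos login is handled as before; the nameservice name "nameservice1" and the hostnames are placeholders:

import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.{FileSystem, Path}

val conf = new Configuration()
// Logical nameservice instead of a fixed namenode host
conf.set("fs.defaultFS", "hdfs://nameservice1")
conf.set("dfs.nameservices", "nameservice1")
conf.set("dfs.ha.namenodes.nameservice1", "nn1,nn2")
conf.set("dfs.namenode.rpc-address.nameservice1.nn1", "namenode1.example.com:8020")
conf.set("dfs.namenode.rpc-address.nameservice1.nn2", "namenode2.example.com:8020")
// Proxy provider that retries against the other namenode on failover
conf.set("dfs.client.failover.proxy.provider.nameservice1",
  "org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider")

val fs = FileSystem.get(conf)
println(fs.exists(new Path("/")))

If these keys are read from the project's .conf file instead of being hard-coded, the same code path can serve both clusters: the non-HA environment simply sets fs.defaultFS to hdfs://host:8020 and omits the dfs.nameservices/dfs.ha.* keys.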
-
RabbitMQ Fetch from Closest Replica
In a cluster scenario with mirrored queues, is there a way for consumers to consume/fetch data from a mirrored queue/Slave node instead of always reaching out to the master node?
If you think about scalability, having all consumers call the single node that is the master of a specific queue means all traffic goes to that one node.
Kafka allows consumers to fetch data from the closest node if that node holds a replica of the leader; is there something similar in RabbitMQ?
-
Kafka scalability if consuming from slave node
In a cluster scenario with a replication factor greater than 1, why must we always consume from the master/leader of a partition instead of being able to consume from a slave/follower node that holds a replica of it?
I understand that Kafka will always route the request to the leader node (for that particular partition/topic), but doesn't this hurt scalability, since all requests go to a single node? Wouldn't it be better if we could read from any node containing the replica and not necessarily the leader?
-
Need information on Azure Storage - synchronous data replication
I am creating a disaster recovery plan for an Azure-based application. The application uses Azure Storage (Blob, general purpose v2), and we insert data into the blob container through the REST API. We use GRS for redundancy. As per the Azure documentation, the data is first copied synchronously to three availability zones in the same region. My question is: when I upload a blob using the Azure SDK or a REST API call and receive a success (200 OK) response, has the synchronous copy to all three availability zones in the region completed, or has only the copy to the first zone completed with the remaining two queued?
-
Is there a disaster recovery strategy if someone deletes the whole Azure DevOps organization?
What disaster recovery options does Microsoft provide if we delete resources from Azure DevOps such as build/release pipelines, repos, or even the whole organization?
Could you also specify some best practices?
-
Azure storage account fail over
Following this link for the Azure Storage failover process: all this link describes is the manual way of initiating the failover.
Is there a way to perform this failover programmatically, without any manual intervention?
What is the signal or exception that should trigger the failover process?
Will the Azure Storage SDK raise any particular exception if the storage account is unavailable?
How can I replicate/simulate storage account unavailability for development and testing?