PHP Postgres connection string using a high-availability cluster with multiple nodes
I have set up a PostgreSQL HA cluster using pg_auto_failover. I need to connect the application only to the primary node. According to an article I read on the internet, this seems to be possible using the target_session_attrs=read-write parameter.
So I've tried the following:
$conn = pg_connect("host=172.18.75.99,172.18.75.100 port=5432 dbname=test user=postgres password= target_session_attrs=read-write") or die("Could not connect");
$status = pg_connection_status($conn);
if ($status === PGSQL_CONNECTION_OK) {
print "Connection status ok\n";
}
else {
print "Connection status bad\n";
}
However, the response is:
Warning: pg_connect(): Unable to connect to PostgreSQL server: could not translate host name "172.18.75.99,172.18.75.100" to address: Unknown host in C:\xampp\htdocs\test\index.php on line 38
Could not connect
If I try connecting with only one node, it works:
$conn = pg_connect("host=172.18.75.99 port=5432 dbname=test user=postgres password= target_session_attrs=read-write") or die("Could not connect");
$status = pg_connection_status($conn);
if ($status === PGSQL_CONNECTION_OK) {
print "Connection status ok\n";
}
else {
print "Connection status bad\n";
}
This returns:
Connection status ok
I'm not sure how to make it work using both nodes in the connection string.
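The "could not translate host name" warning usually means the libpq library that PHP's pgsql extension was built against predates PostgreSQL 10, the release that introduced multi-host connection strings and target_session_attrs; an older libpq treats the whole comma-separated list as one hostname. XAMPP's bundled libpq on Windows is a likely culprit. If upgrading it isn't practical, a workaround is to probe each node from PHP and keep the first one that is a writable primary. A minimal sketch under that assumption, reusing the hosts and credentials from the question and using pg_is_in_recovery() to detect the primary:

<?php
// Fallback for an older libpq without multi-host support:
// try each node in turn and keep the first writable primary.
$hosts = ['172.18.75.99', '172.18.75.100'];
$conn  = null;

foreach ($hosts as $host) {
    $c = @pg_connect("host=$host port=5432 dbname=test user=postgres password=");
    if ($c === false) {
        continue; // node unreachable, try the next one
    }
    // pg_is_in_recovery() is false on the primary, true on a standby
    $res = pg_query($c, 'SELECT pg_is_in_recovery()');
    if ($res !== false) {
        $row = pg_fetch_row($res);
        if ($row[0] === 'f') {
            $conn = $c; // found the primary
            break;
        }
    }
    pg_close($c); // standby (or query failed): keep looking
}

if ($conn === null) {
    die("Could not connect to a primary node\n");
}
print "Connection status ok\n";

With a client-side libpq of version 10 or newer, the original multi-host string with target_session_attrs=read-write should work unchanged.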
See also questions close to this topic
-
How to create and start an infinite background script in PHP? [best practices]
I want to create a PHP script that runs non-stop and executes some instructions every minute/hour (according to my needs). It should never die.
How should I approach this? How do I start the script itself?
What I've done so far is create an infinite for loop that checks the time and, when a new minute starts, calls my function. But when I open the link, my browser shows as busy.
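The browser hangs because the loop runs inside a web request; long-running workers are normally started from the command line instead. A minimal CLI sketch, assuming a hypothetical doWork() function standing in for the real instructions:

<?php
// worker.php: hypothetical long-running worker.
// Run it from the CLI (php worker.php), not through the web server.
set_time_limit(0); // lift PHP's max execution time

function doWork(): void
{
    // the real per-minute instructions go here
    echo date('c'), " tick\n";
}

while (true) {
    doWork();
    sleep(60); // wait a minute instead of busy-polling the clock
}

An alternative that avoids a never-dying process altogether is a cron job (or Windows Task Scheduler entry) that invokes a short-lived script every minute.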
-
How can I download an Excel file (in directory 1) residing in my local directory (any usual directory 2) using CodeIgniter?
I am new to CodeIgniter and am working on downloading a file, say one that resides in c:/downloads (an Excel file, CSV), to my usual download folder. I am using WAMP server and CodeIgniter 3, and nothing I have gone through so far makes sense to me. Is there a way to do that?
Thanks in advance for any contribution.
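If it helps: CodeIgniter 3 ships a download helper whose force_download() function streams a server-side file to the browser, which then saves it to the user's download folder. A minimal controller sketch, where the Files class name and the report.csv path are made up for illustration:

<?php
defined('BASEPATH') OR exit('No direct script access allowed');

class Files extends CI_Controller
{
    public function download()
    {
        // Load CodeIgniter 3's download helper
        $this->load->helper('download');

        // With NULL as the second argument, force_download() reads the
        // file from disk and sends it with download headers.
        force_download('C:/downloads/report.csv', NULL);
    }
}

Visiting index.php/files/download would then trigger the browser's save dialog for that file.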
-
How to insert multiple rows from one table into another in MySQL using checkboxes
I am trying to insert my data from table_request into table_list by clicking the approve/reject button under the table.
The user clicks the checkboxes to select all rows or specific rows.
The current problem is that the foreach and insert statements are wrong. After I click approve, only the table_request id is inserted into table_list.
Requesting a resolution.
Thanks
This is my table_request.php:
<form name="bulk_action_form" action="action.php" method="post" onSubmit="return approve_confirm();"/> <table class="bordered"> <tr> <th><input type="checkbox" name="select_all" id="select_all" value=""/></th> <th>Name</th> <th>Remark</th> </tr> <?php $query = "select * from `table_request`;"; if(count(fetchAll($query))>0){ foreach(fetchAll($query) as $row){ ?> <tr> <td><input type="checkbox" name="checked_id[]" class="checkbox" value="<?php echo $row['id']; ?>"/></td> <td><?php echo $row['Name'] ?></td> <td><?php echo $row['Remark'] ?></td> </tr> </table> <input type="submit" name="approve_btn" value="Approve"/> </form>
This is my action.php
<?php session_start(); include_once('connection.php'); if(isset($_POST['approve_btn'])) { $idArr = $_POST['checked_id']; $Name = $row['Name']; $Remark = $row['Remark']; foreach($idArr as $key => $value) { $save = "INSERT INTO table_list(id,Name,Remark) VALUES ('".$value."','".$name[$key]."','".$Remark[$key]."')"; $query = mysqli_query($conn,$save); } $query .= "DELETE FROM `table_request` WHERE `table_request`.`id` = '$id';"; header("Location:table_request.php"); } ?>
-
How to import data into the pg_statistic table?
I am trying to export the statistics of a table and want to load the same stats, without the data, into another table. But it throws the error "cannot accept a value of type anyarray". Is it possible to import stats via this or any other way in Postgres?
edb=# copy (select * from pg_statistic where starelid in (select oid from pg_class where relname in ('t'))) to stdout with delimiter ',';
16446,1,f,0,4,-1,2,3,0,0,0,97,97,0,0,0,0,0,0,0,0,\N,{0.994006},\N,\N,\N,{1\,10\,20\,30\,40\,50\,60\,70\,80\,90\,100\,110\,120\,130\,140\,150\,160\,170\,180\,190\,200\,210\,220\,230\,240\,250\,260\,270\,280\,290\,300\,310\,320\,330\,340\,350\,360\,370\,380\,390\,400\,410\,420\,430\,440\,450\,460\,470\,480\,490\,500\,510\,520\,530\,540\,550\,560\,570\,580\,590\,600\,610\,620\,630\,640\,650\,660\,670\,680\,690\,700\,710\,720\,730\,740\,750\,760\,770\,780\,790\,800\,810\,820\,830\,840\,850\,860\,870\,880\,890\,900\,910\,920\,930\,940\,950\,960\,970\,980\,990\,1000},\N,\N,\N,\N
16446,2,f,0,4,-0.999,1,2,3,0,0,96,97,97,0,0,0,0,0,0,0,{0.002},\N,{0.99402994},\N,\N,{3},{2\,12\,22\,32\,42\,52\,62\,72\,82\,92\,102\,112\,122\,132\,142\,152\,162\,172\,182\,192\,202\,212\,222\,232\,242\,252\,262\,272\,282\,292\,302\,312\,322\,332\,341\,351\,361\,371\,381\,391\,401\,411\,421\,431\,441\,451\,461\,471\,481\,491\,501\,511\,521\,531\,541\,551\,561\,571\,581\,591\,601\,611\,621\,631\,641\,651\,661\,670\,680\,690\,700\,710\,720\,730\,740\,750\,760\,770\,780\,790\,800\,810\,820\,830\,840\,850\,860\,870\,880\,890\,900\,910\,920\,930\,940\,950\,960\,970\,980\,990\,1000},\N,\N,\N

-- trying to create a copy of pg_statistics table
edb=# create table pg_statistic_test as select * from pg_statistic where 1=2;
ERROR:  column "stavalues1" has pseudo-type anyarray
edb=# copy pg_catalog.pg_statistic from '/tmp/pg_statistics.txt' with delimiter ',';
ERROR:  cannot accept a value of type anyarray
CONTEXT:  COPY pg_statistic, line 1, column stavalues1: "{1,10,20,30,40,50,60,70,80,90,100,110,120,130,140,150,160,170,180,190,200,210,220,230,240,250,260,27..."
-
I was making a new migration to create an association between two entities in ROR. I used the wrong entity in the migration and need to delete it
I was making a new migration to create an association between two entities in ROR. I used the wrong entity in the migration and need to delete it. How do I delete the pending migration?
I ran
bin/rails db:rollback
and now it's saying migrations are pending:
Migrations are pending. To resolve this issue, run:
bin/rails db:migrate RAILS_ENV=development
You have 2 pending migrations:
20210308064215_devise_create_admins.rb
20210309031327_add_user_to_listing.rb
-
Run Postgres and SQL requests from one file
Could you help me create a Windows start.bat file which runs psql.exe and does something like the following:
Delete db1 if exist
Create db1
Connect db1
Create table t1
Delete db2 if exist
Create db2
Connect db2
Create table t2
I can do each step from the psql console, but I don't understand how to do it from a batch *.bat file. For example, if I write a line like:
psql -U postgres db1
it connects to db1 and stops executing the other script commands; that's the problem.
-
Connection string for pyodbc with "trusted_connection=yes" instead of user ID & password
I'm using the code below to build cnnString. Is there a way to avoid giving a user ID & password and pass trusted_connection as true instead?
cnnString = 'Driver={SQL Server Native Client 11.0};Server=XXXXX;Database=XXXXX;UID=XXXX;PWD=XXXXX'
qry = 'select * from View_Dim_Supp'  # or open('myqueryfilepath', 'r').read()
cnn = pyodbc.connect(cnnString)
df = pd.read_sql(qry, cnn)
-
NLog blob storage extension with dynamic connection string
The BlobStorage target property is not changed as I expected.
I use this code (of course the connection string is the real Azure Blob Storage one):
LogManager.Configuration.Variables["simple-log-file-name"] = "simple-log.txt";
LogManager.Configuration.Variables["blob-container"] = "logs";
LogManager.Configuration.Variables["blobconst"] = "DefaultEndpointsProtocol=https;AccountName=....";
The target is set up in nlog.config:
<target xsi:type="AzureBlobStorage" name="simple-log-target" blobName="${var:simple-log-file-name}" container="${var:blob-container}" connectionString="${var:blobconst}"....
At this point NLog fails to set up with:
Error AzureBlobStorageTarget(Name=simple-log-target): Failed to create BlobClient with connectionString=. Exception: System.ArgumentNullException: Value cannot be null. (Parameter 'connectionString')
If I put the Azure connection string directly in the nlog.config target, it works: it writes to the storage defined by nlog.config, in the container and blob that are set at runtime.
<target xsi:type="AzureBlobStorage" name="simple-log-target" blobName="${var:simple-log-file-name}" container="${var:blob-container}" connectionString="DefaultEndpointsProtocol=https;AccountName=...." ...
Is it possible to define this connectionString at runtime at all?
-
Authentication: password passed as part of connection string parameters is not considered with PostgreSQL 13.1
Recently we upgraded our Postgres DB to version 13.1. After that we are facing strange behaviour with connection strings.
With PostgreSQL 12.3, the command below used to connect successfully to the DB without prompting for a password:
postgres=# \connect "dbname=dm_test_db4_db user=test_db4 host=localhost port=5432 password=password";
You are now connected to database "dm_test_db4_db" as user "test_db4".
With PostgreSQL 13.1, a password is prompted for, and if we provide the password the connection succeeds:
postgres=# \connect "dbname=dm_test_db4_db user=test_db4 host=localhost port=5432 password=password";
Password for user test_db4:
You are now connected to database "dm_test_db4_db" as user "test_db4".
Since this command is part of an SQL script, no input is given and the script fails with an authentication failure.
Here are my environment details.
PostgreSQL version details:
C:\Program Files\PostgreSQL\13\bin>postgres.exe -V
postgres (PostgreSQL) 13.1
OS: Windows
These are the settings in pg_hba.conf:
# IPv4 local connections:
host    all    all    127.0.0.1/32    md5
# IPv6 local connections:
host    all    all    ::1/128         md5
Any idea what is causing this behavior?
-
Connect to HDFS HA (High Availability) from Scala
I have Scala code that can currently connect to HDFS through a single namenode (non-HA). The namenode, location, conf.location and Kerberos parameters are specified in a .conf file inside the Scala project. However, there is now a new cluster with HA (involving standby and primary namenodes). Do you know how to configure the client in Scala to support both environments, non-HA and HA (with auto-switching of namenodes)?
-
RabbitMQ Fetch from Closest Replica
In a cluster scenario with mirrored queues, is there a way for consumers to consume/fetch data from a mirrored queue on a slave node instead of always reaching out to the master node?
Thinking about scalability, having all consumers call the single node that is the master of a specific queue means all traffic goes to that one node.
Kafka allows consumers to fetch data from the closest node if that node contains a replica of the leader. Is there something similar in RabbitMQ?
-
Kafka scalability if consuming from slave node
In a cluster scenario with data replication > 1, why must we always consume from the master/leader of a partition instead of being able to consume from a slave/follower node that holds a replica of it?
I understand that Kafka will always route the request to the master node (of that particular partition/topic), but doesn't this affect scalability, since all requests go to a single node? Wouldn't it be better if we could read from any node containing the replica and not necessarily the master?