Postgres DB hosted in a Compute Engine VM hangs after high CPU utilization
Normally the database works fine and everything is OK. But every 7-8 months the VM instance suddenly spikes to high CPU utilization and stops working. I then have to manually shut down the VM and start it again, assign the new IP addresses to my web app, and it works okay again.
Why does this occur? There is no abnormal usage of the web app at these times; it happens at random.
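Next time the CPU spikes, it may be worth looking inside Postgres before restarting the VM; a runaway query is a common culprit. A minimal diagnostic, using only the standard pg_stat_activity view (nothing here is specific to this setup):

-- Long-running queries, longest first
SELECT pid, now() - query_start AS runtime, state, left(query, 80) AS query
FROM pg_stat_activity
WHERE state <> 'idle'
ORDER BY runtime DESC NULLS LAST;

If one backend has been running for hours, SELECT pg_terminate_backend(pid); ends just that query, avoiding the reboot; reserving a static external IP would also remove the need to reassign addresses after a restart.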
See also questions close to this topic
-
ERROR: invalid byte sequence for encoding with psql
I've seen numerous issues in other posts with the copy command and:
ERROR: invalid byte sequence for encoding "UTF8": 0xfc
And the consensus in these posts appears to be to specify the encoding in the command you're doing the copy with. I have done so:
psql -h localhost -p 5432 -d BOBDB -U BOB -c "\COPY \"BOBTB01\" FROM 'C:\Temp\file.csv' with csv HEADER ENCODING 'WIN1252'";
Password for user BOB:
ERROR: character with byte sequence 0x81 in encoding "WIN1252" has no equivalent in encoding "UTF8"
CONTEXT: COPY BOBTB01, line 76589
That confused me, so I changed WIN1252 to UTF8, and having done so I get a slightly different error; the failure is on a different line and the text is slightly different.
psql -h localhost -p 5432 -d BOBDB -U BOB -c "\COPY \"BOBTB01\" FROM 'C:\Temp\file.csv' with csv HEADER ENCODING 'UTF8'";
Password for user BOB:
ERROR: invalid byte sequence for encoding "UTF8": 0xfc
CONTEXT: COPY BOBTB01, line 163
This is the encoding shown in the database:
show client_encoding;
 client_encoding
-----------------
 UTF8
(1 row)
The file is from a reliable source, and I happen to have R installed, which also does CSV import. The file was pulled into R without issue, which makes me think it's not the file but something else. Is there another switch or syntax that can bypass these issues?
I'm not sure what is wrong.
Can you help?
Thanks.
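One experiment that may help isolate the problem (an assumption about the data, not a confirmed fix): WIN1252 leaves a few bytes, including 0x81, undefined, while LATIN1 assigns a character to all 256 byte values, so a copy declared as LATIN1 cannot fail on decoding; mis-encoded characters load as wrong characters instead of erroring:

psql -h localhost -p 5432 -d BOBDB -U BOB -c "\COPY \"BOBTB01\" FROM 'C:\Temp\file.csv' with csv HEADER ENCODING 'LATIN1'"

If that loads, the file most likely mixes encodings: 0xfc is 'ü' in both LATIN1 and WIN1252, while 0x81 is valid in neither WIN1252 nor UTF-8. That would also explain why R, which is more forgiving about encodings, reads the file without complaint.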
-
What's missing in my Ruby 'inverse_of' relationship?
I know this topic has been addressed, but I have been at this for two days and I'm just stuck. I know inverse_of does not create a new query, so should I use another method?
Question: how do I set up an inverse_of with a has_one / belongs_to pair on the same class?
Explanation: a user 'has_one :spouse' and 'belongs_to :spouse_from'. They are inverses of each other. When a user signs up, they can invite their significant other. For example:
- user_a invites & creates user_b
- user_b.spouse_id is set to user_a.id
- In a separate method I want to be able to update, e.g., user_a.spouse_id = user_a.spouse.id
The only association that works at this point is user_b.spouse.
class User
  has_one :spouse, class_name: 'User', foreign_key: :spouse_id, dependent: :nullify, inverse_of: :spouse_from
  belongs_to :spouse_from, class_name: 'User', foreign_key: :spouse_id, inverse_of: :spouse, optional: true
end
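For comparison, a minimal sketch of a self-referential spouse pair, assuming a spouse_id column on users and Rails 5+. The key point is that has_one searches the other row's spouse_id while belongs_to reads this row's spouse_id, so both directions can legitimately share the one column. The usage names below are hypothetical:

class User < ApplicationRecord
  # belongs_to reads spouse_id on this row; has_one looks for another row
  # whose spouse_id points back at this record's id.
  belongs_to :spouse_from, class_name: 'User', foreign_key: :spouse_id,
             inverse_of: :spouse, optional: true
  has_one :spouse, class_name: 'User', foreign_key: :spouse_id,
          inverse_of: :spouse_from, dependent: :nullify
end

# Hypothetical usage: user_a invites and creates user_b.
user_b = user_a.create_spouse(name: 'B')  # sets user_b.spouse_id = user_a.id
user_a.spouse        # => user_b
user_b.spouse_from   # => user_a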
-
Normalizing data in PostgreSQL
This application will read an iTunes library in comma-separated values (CSV) and produce properly normalized tables as specified below. Once you have placed the proper data in the tables, press the button below to check your answer.
We will do some things differently in this assignment. We will not use a separate "raw" table; we will just use ALTER TABLE statements to remove columns once we don't need them (i.e., once we have converted them into foreign keys).
We will use the same CSV track data as in prior exercises - this time we will build a many-to-many relationship using a junction/through/join table between tracks and artists.
To grade this assignment, the program will run a query like this on your database and look for the data it expects to see:
SELECT track.title, album.title, artist.name
FROM track
JOIN album ON track.album_id = album.id
JOIN tracktoartist ON track.id = tracktoartist.track_id
JOIN artist ON tracktoartist.artist_id = artist.id
ORDER BY track.title LIMIT 3;
The expected result of this query on your database is:

title                    album                       artist
A Boy Named Sue (live)   The Legend Of Johnny Cash   Jo
DROP TABLE album CASCADE;
CREATE TABLE album (
  id SERIAL,
  title VARCHAR(128) UNIQUE,
  PRIMARY KEY(id)
);

DROP TABLE track CASCADE;
CREATE TABLE track (
  id SERIAL,
  title TEXT,
  artist TEXT,
  album TEXT,
  album_id INTEGER REFERENCES album(id) ON DELETE CASCADE,
  count INTEGER,
  rating INTEGER,
  len INTEGER,
  PRIMARY KEY(id)
);

DROP TABLE artist CASCADE;
CREATE TABLE artist (
  id SERIAL,
  name VARCHAR(128) UNIQUE,
  PRIMARY KEY(id)
);

DROP TABLE tracktoartist CASCADE;
CREATE TABLE tracktoartist (
  id SERIAL,
  track VARCHAR(128),
  track_id INTEGER REFERENCES track(id) ON DELETE CASCADE,
  artist VARCHAR(128),
  artist_id INTEGER REFERENCES artist(id) ON DELETE CASCADE,
  PRIMARY KEY(id)
);

\copy track(title,artist,album,count,rating,len) FROM 'library.csv' WITH DELIMITER ',' CSV;

INSERT INTO album (title) SELECT DISTINCT album FROM track;
UPDATE track SET album_id = (SELECT album.id FROM album WHERE album.title = track.album);

INSERT INTO tracktoartist (track, artist) SELECT DISTINCT ...
INSERT INTO artist (name) ...
UPDATE tracktoartist SET track_id = ...
UPDATE tracktoartist SET artist_id = ...

-- We are now done with these text fields
ALTER TABLE track DROP COLUMN album;
ALTER TABLE track ...
ALTER TABLE tracktoartist DROP COLUMN track;
ALTER TABLE tracktoartist ...

SELECT track.title, album.title, artist.name
FROM track
JOIN album ON track.album_id = album.id
JOIN tracktoartist ON track.id = tracktoartist.track_id
JOIN artist ON tracktoartist.artist_id = artist.id
LIMIT 3;
What am I doing wrong with the code?
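For what it's worth, here is one plausible way to fill in the elided statements, assuming the intent is to mirror the album pattern for artists; this is a sketch of the standard approach, not the graded answer:

-- Populate the junction table from the raw text columns, then resolve the keys.
INSERT INTO tracktoartist (track, artist) SELECT DISTINCT title, artist FROM track;
INSERT INTO artist (name) SELECT DISTINCT artist FROM track;
UPDATE tracktoartist SET track_id = (SELECT track.id FROM track WHERE track.title = tracktoartist.track);
UPDATE tracktoartist SET artist_id = (SELECT artist.id FROM artist WHERE artist.name = tracktoartist.artist);

-- Once the foreign keys are populated, the remaining raw text columns can go.
ALTER TABLE track DROP COLUMN artist;
ALTER TABLE tracktoartist DROP COLUMN artist;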
-
Google Cloud Compute Engine HTTP Connection Timeout
I have set up a Compute Engine VM with 2 vCPUs and 2 GB RAM. I have set up an nginx server and configured the firewall permissions as shown in the diagram. When I try to access the Angular files hosted on the server using the external IP, I get the error "The connection has timed out", and when I try curl from the terminal, it displays the error "curl: (28) Failed to connect to IP port 80 after 129163 ms: Connection timed out".
Both the HTTP and HTTPS firewall rules are enabled.
When I run these commands:
sudo systemctl status apache2
netstat -tulpn | grep LISTEN
Any ideas on what the issue might be would be really helpful.
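One thing stands out: the server is nginx, but the status check is for apache2, so that command says nothing about whether anything is listening on port 80. A quick sanity check with standard Linux commands (a sketch; adjust names to your setup):

# Is nginx actually running?
sudo systemctl status nginx

# Is something listening on 0.0.0.0:80 / *:80, rather than only 127.0.0.1?
sudo ss -tulpn | grep ':80'

# Does the server answer locally?
curl -I http://localhost

If the local curl succeeds but the external IP still times out, check that the VM carries the http-server and https-server network tags that GCP's default-allow-http/https firewall rules target.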
-
(Terraform) Error 400: Invalid request: instance name (pg_instance)., invalid
On GCP, I'm trying to create a Cloud SQL instance with this Terraform code below:
resource "google_sql_database_instance" "postgres" { name = "pg_instance" database_version = "POSTGRES_13" region = "asia-northeast1" deletion_protection = false settings { tier = "db-f1-micro" disk_size = 10 } } resource "google_sql_user" "users" { name = "postgres" instance = google_sql_database_instance.postgres.name password = "admin" }
But I got this error:
Error: Error, failed to create instance pg_instance: googleapi: Error 400: Invalid request: instance name (pg_instance)., invalid
Are there any mistakes in my Terraform code?
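Most likely yes, in the name: Cloud SQL instance names may contain only lowercase letters, numbers, and hyphens and must start with a letter, so the underscore in pg_instance is rejected. A sketch of the corrected resource:

resource "google_sql_database_instance" "postgres" {
  name                = "pg-instance"   # hyphen instead of underscore
  database_version    = "POSTGRES_13"
  region              = "asia-northeast1"
  deletion_protection = false

  settings {
    tier      = "db-f1-micro"
    disk_size = 10
  }
}

Note that the Terraform resource label ("postgres") may contain underscores; only the name sent to the API has to follow Cloud SQL's naming rules.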
-
Apache Beam FixedWindow doesn't do anything after GroupByKey transform
I built a pipeline which reads from Confluent Kafka, processes the records, and then uses side outputs to split them into rejected and approved PCollections; the approved PCollection gets written to BigQuery, but I also want to persist the approved records and write them to a file on GCS.
The code is:
windowing = (aproved
    | 'Create_window' >> beam.WindowInto(window.FixedWindows(60))
    | 'AddKey' >> beam.Map(lambda record: (None, record))
    | 'GBK' >> beam.GroupByKey()
    | 'remove_key' >> beam.FlatMap(ret_key)
    | 'AddTimeStamp' >> beam.Map(lambda record: beam.window.TimestampedValue(record, time.time()))
    | 'Write' >> WriteToFiles(path=MY_BUCKET, file_naming=destination_prefix_naming('.ppl'))
)
This works when I test it reading from a file with the DirectRunner, but when I run it on Dataflow in streaming mode it just doesn't do anything after the GroupByKey transform: the graph says 20 elements were added, but the next transform ('remove_key') never receives an element.
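A guess at the cause, since the full pipeline isn't shown: in streaming mode a GroupByKey only emits when the watermark passes the end of each 60-second window, and here the processing-time timestamp is attached after the GroupByKey, where it can no longer influence windowing. Moving the timestamp assignment before WindowInto and adding an explicit trigger is one way to make the windows fire; aproved, ret_key, and MY_BUCKET are names from the question:

import time
import apache_beam as beam
from apache_beam import window
from apache_beam.io.fileio import WriteToFiles, destination_prefix_naming
from apache_beam.transforms import trigger

windowing = (aproved
    # Attach timestamps BEFORE windowing so FixedWindows can use them.
    | 'AddTimeStamp' >> beam.Map(lambda record: window.TimestampedValue(record, time.time()))
    | 'Create_window' >> beam.WindowInto(
          window.FixedWindows(60),
          trigger=trigger.AfterWatermark(),
          accumulation_mode=trigger.AccumulationMode.DISCARDING)
    | 'AddKey' >> beam.Map(lambda record: (None, record))
    | 'GBK' >> beam.GroupByKey()
    | 'remove_key' >> beam.FlatMap(ret_key)
    | 'Write' >> WriteToFiles(path=MY_BUCKET,
                              file_naming=destination_prefix_naming('.ppl'))
)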
-
How to switch my VM from shielded mode to normal mode
After resetting the VM's integrity baseline, it is now running in shielded mode.
I cannot connect to this VM over SSH without going through the serial port.
The VM is in shielded mode and nothing works. How can I switch it back to normal mode so I can connect over the VM's usual ports?
I used this to update an integrity baseline:
https://cloud.google.com/compute/shielded-vm/docs/integrity-monitoring#updating-baseline
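In case it helps, Shielded VM options can be turned off per instance with gcloud; the instance has to be stopped first, and INSTANCE_NAME / ZONE below are placeholders (this is a sketch of the standard commands, not a confirmed fix for the integrity failure):

# Stop the instance; Shielded VM options can only be changed while it is stopped.
gcloud compute instances stop INSTANCE_NAME --zone=ZONE

# Disable the Shielded VM features, then start the instance again.
gcloud compute instances update INSTANCE_NAME --zone=ZONE \
    --no-shielded-secure-boot --no-shielded-vtpm --no-shielded-integrity-monitoring
gcloud compute instances start INSTANCE_NAME --zone=ZONE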
-
Not able to access virtual machine
We are not able to access our virtual machine at the IP address https://35.188.89.148/. We have tried many times, but it keeps showing the same error: it cannot connect to the VM. We are also not able to change the Windows password of the virtual machine. Can you please have a look? It has shown the same problem for the last 15 days.
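On the password point, the documented way to set or reset a Windows password on a Compute Engine instance is gcloud's reset-windows-password; INSTANCE_NAME, ZONE, and USERNAME below are placeholders:

# Generates a new password for the given local Windows account on the VM.
gcloud compute reset-windows-password INSTANCE_NAME --zone=ZONE --user=USERNAME

If the machine still cannot be reached afterwards, check the firewall rules for the ports you are using (3389 for RDP, 443 for HTTPS) and the serial console output for boot errors.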