Not able to install libatk-bridge-2.0.so.0 on Amazon Linux AMI
Not able to install libatk-bridge-2.0.so.0 on Amazon Linux AMI. It gives the following response:

> sudo yum install libatk-bridge-2.0.so.0

Response:

Loaded plugins: priorities, update-motd, upgrade-helper
No package libatk-bridge-2.0.so.0 available.
Error: Nothing to do

OS: Amazon Linux AMI release 2018.03
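Worth noting: libatk-bridge-2.0.so.0 is a shared-library file name, not a yum package name, which is why yum reports "No package ... available". A sketch of the usual first steps (hedged: at-spi2-atk is the owning package on RHEL/Fedora-family distros, but availability in the Amazon Linux 1 repos is not guaranteed):

```shell
# Ask yum which package provides the shared library; the leading */
# matches any install path:
sudo yum provides '*/libatk-bridge-2.0.so.0'

# On distros that package it, the library comes from at-spi2-atk:
sudo yum install -y at-spi2-atk
```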
See also questions close to this topic
-
Configure Promtail 2.0 to read .log files
Since I updated to Promtail 2.0, I'm unable to read the content of a log file in Loki.
config-promtail.yml
server:
  http_listen_port: 9080
  grpc_listen_port: 0

positions:
  filename: /tmp/positions.yaml

clients:
  - url: http://192.168.1.103:3100/loki/api/v1/push

scrape_configs:
  - job_name: manuallog
    static_configs:
      - targets:
          - 192.168.1.103
        labels:
          job: tomcat
          host: 192.168.1.103
          path: /opt/error.log
I've also tried to use a different configuration in the scrape config, but with no luck:
- job_name: varlog
  journal:
    max_age: 12h
    labels:
      filename: /opt/error.log
      path: /opt/error.log
The error.log is not empty:
# cat /opt/error.log
Disconnected from localhost
The Promtail version is 2.0:

./promtail-linux-amd64 --version
promtail, version 2.0.0 (branch: HEAD, revision: 6978ee5d)
  build user:       root@2645337e4e98
  build date:       2020-10-26T15:54:56Z
  go version:       go1.14.2
  platform:         linux/amd64
Any clue? Am I doing anything wrong?
Many thanks,
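For reference: Promtail discovers log files through the reserved __path__ label, not a plain path label, which is the usual fix for static_configs like the one above. A sketch with the IPs and paths taken from the question:

```yaml
scrape_configs:
  - job_name: manuallog
    static_configs:
      - targets:
          - 192.168.1.103
        labels:
          job: tomcat
          host: 192.168.1.103
          __path__: /opt/error.log   # __path__ tells Promtail which files to tail
```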
-
Can't create message queue in Linux subsystem on Windows
I'm trying to create a message queue in the Linux subsystem (WSL) on Windows 10. When I try to create the queue using this code:
queueId = msgget(*key, IPC_CREAT | IPC_EXCL | 0660);
if (queueId == -1) {
    if (errno == EEXIST) {
        queueId = msgget(*key, IPC_CREAT | 0660);
        printf("Message queue already exists, access acquired\n");
        return;
    } else {
        printf("Couldn't create message queue. Process ended with error: %d\n", errno);
        exit(EXIT_FAILURE);
    }
} else {
    printf("Message queue has been created with id: %d\n", queueId);
}
I receive error number 38, which on Linux is ENOSYS ("Function not implemented"), not ENAMETOOLONG (that is 36). What can I do in this case?
-
In C, is a global pointer initialized by malloc() in the data segment or the BSS?
From what I understand, the data segment is for initialized global/static variables, and the BSS segment is for uninitialized ones. So for example:
int a = 10;  // data segment
int b;       // BSS

int main() {
    int c = 10;                    // stack
    int *d = malloc(sizeof(int)); // d itself on the stack; the block on the heap
    return 0;
}
However, in a PDF I found for one of my classes, it says that a global pointer initialized to the address returned by malloc() is in the BSS. Shouldn't it be in the data segment, since the pointer is actually initialized to something?
-
How to go multithreaded with Puppeteer using worker_threads for web automation
Hello, I'm doing some web automation and I want to run Puppeteer multithreaded, meaning open the same page tens of times. From what I've read, worker threads are the best solution, I guess? But I didn't understand how to use them properly. Here is a sample of what I did:
const { Worker, isMainThread } = require('worker_threads');
const puppeteer = require('puppeteer');

let scrapt = async () => {
  /* ---------------------- Launching puppeteer ---------------------- */
  try {
    const browser = await puppeteer.launch({ headless: true });
    const page = await browser.newPage();
    await page.setUserAgent(
      `Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/73.0.3683.103 Safari/537.36`
    );
    let Browser_b = new Date();
    await page.goto('https://www.supremenewyork.com/');
    let browser_e = new Date();
    console.log(browser_e - Browser_b);
  } catch (e) {
    console.log(e);
  }
};

let ex = [1, 2, 3, 4];
if (isMainThread) {
  // This re-loads the current file inside a Worker instance.
  new Worker(__filename);
} else {
  for (let val of ex) {
    scrapt();
  }
}
This script opens 4 browsers, but if I open more, the PC lags a lot; I think it's only using one thread instead of all of them? Thank you in advance.
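Worth noting: worker threads buy little here, because every puppeteer.launch() already starts a separate Chromium process with its own threads; the lag usually comes from running too many browsers at once, not from Node being single-threaded. A plain concurrency limiter is often enough (a sketch; runWithLimit and the task list are illustrative names, not part of any library):

```javascript
// Run the given async task factories with at most `limit` running at once.
async function runWithLimit(tasks, limit) {
  const results = [];
  let next = 0;
  // Each "lane" pulls the next unstarted task until none remain.
  async function lane() {
    while (next < tasks.length) {
      const i = next++;
      results[i] = await tasks[i]();
    }
  }
  await Promise.all(
    Array.from({ length: Math.min(limit, tasks.length) }, lane)
  );
  return results;
}

// Usage sketch: each task would launch one browser and scrape one page,
// e.g. () => scrapt(); plain values stand in for scraping results here.
runWithLimit([1, 2, 3, 4].map(n => async () => n * 10), 2)
  .then(r => console.log(r)); // [10, 20, 30, 40]
```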
-
Join arrays of evaluated values in Playwright/Node/JS
I am trying to scrape a page with multiple search results with Playwright in Node. Scraping the individual elements is easy. But how can I join them in an array so the individual elements are joined in one node?
This is what I have:
var myArray = [];
const titles = await page.$$eval('//h3', elements => elements.map(el => el.textContent.trim().split('\n')[0]));
const prices = await page.$$eval('.price', elements => elements.map(el => el.textContent.trim().split('\n')[0]));
myArray.push(String(titles) + String(prices));

console.log(titles) gets me this: ['ProductTitleOne', 'ProductTitleTwo']
console.log(prices) gets me this: ['456', '123']
console.log(myArray) results in this: ['ProductTitleOne', 'ProductTitleTwo', '456', '123']
I want something like: [{'ProductTitleOne','456'}, {'ProductTitleTwo','123'}]
(or this: {{'ProductTitleOne','456'}, {'ProductTitleTwo','123'}})

The following attempts have also failed:
const combined1 = [].concat(titles, prices);
let get_all = [...titles, ...prices];
Have investigated mapping, arrays, indexing, and have looked at many Stackoverflow questions, but still have no clue on how to solve this. If anyone has some pointers on how to solve this, much obliged.
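Since the two $$eval calls return the arrays in the same document order, pairing them is a map over indices. (Note that JavaScript has no {'a','b'} pair literal; objects or two-element arrays are the idiomatic forms.) A sketch with the values from the question:

```javascript
const titles = ['ProductTitleOne', 'ProductTitleTwo'];
const prices = ['456', '123'];

// Pair element i of titles with element i of prices, as objects:
const asObjects = titles.map((title, i) => ({ title, price: prices[i] }));
// [{ title: 'ProductTitleOne', price: '456' },
//  { title: 'ProductTitleTwo', price: '123' }]

// ...or as two-element arrays:
const asPairs = titles.map((title, i) => [title, prices[i]]);
// [['ProductTitleOne', '456'], ['ProductTitleTwo', '123']]

console.log(asObjects, asPairs);
```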
-
Puppeteer removing commas "," while reading the HTML file
I am using Puppeteer for changing my HTML file and then saving it. I was having an issue where it replaced double quotes with &quot;; I somehow solved that. But now Puppeteer is replacing all my commas "," with empty spaces; in other words, it removes all the commas in the file.
I am using this default configuration for browser launching.
const browser = await puppeteer.launch({
  headless: true,
  defaultViewport: null,
  args: [
    '--start-maximized',
    '--no-sandbox',
    '--disable-setuid-sandbox',
    '--disable-infobars',
    '--window-position=0,0',
    '--ignore-certificate-errors',
    '--ignore-certificate-errors-spki-list',
    '--disable-web-security',
    '--disable-gpu',
    '--disable-dev-shm-usage',
    '--no-first-run',
    '--no-zygote',
    '--single-process',
    "--proxy-server='direct://'",
    '--proxy-bypass-list=*'
  ],
});
const page = await browser.newPage();
await page.setDefaultNavigationTimeout(0);
await page.setJavaScriptEnabled(false);
await page.setOfflineMode(true);
await page.setContent(html);
I didn't find any answer to this. Any help will be appreciated. Thanks.
-
Puppeteer throws an error refusing to start the browser (Chromium)
Running on Ubuntu Server 18. This happened after a server restart; maybe the Chrome service is not auto-starting. How can I check that?
This is my code
function makeAndSendPdf(info, cb) {
  const url = 'http://localhost:7021/pdf';
  let _browser;
  let _page;
  puppeteer
    .launch()
    .then(browser => (_browser = browser))
    .then(browser => (_page = browser.newPage()))
    .then(page => page.goto(`${url}${getparams(info)}`))
    .then(() => _page)
    .then(page => page.pdf({ path: path.join(serverLoc, `/pdf/${info.orderNum}.pdf`) }))
    .then(() => {
      _browser.close();
      sendMailToCompany(info.orderNum, function (message) {
        if (message) {
          sendmail(info.email, info.orderNum, cb);
        }
      });
    });
}
And this is the error
Error: Failed to launch the browser process!
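A note on the diagnosis: Chromium has no system service to restart; "Failed to launch the browser process!" on a headless Ubuntu server usually means shared-library dependencies are missing (the full error text normally names the missing .so). A hedged sketch of the usual checks on Ubuntu 18.04 (the Chromium path varies by Puppeteer version, and package names may differ):

```shell
# See which shared libraries the bundled Chromium can't resolve
# (older Puppeteer keeps it under node_modules/puppeteer/.local-chromium):
ldd node_modules/puppeteer/.local-chromium/*/chrome-linux/chrome | grep 'not found'

# Install the commonly missing dependencies:
sudo apt-get update
sudo apt-get install -y libatk-bridge2.0-0 libatk1.0-0 libnss3 \
  libxss1 libasound2 libgbm1 libgtk-3-0
```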
-
Why are constants called "constant variable" in Google Chrome console?
I've noticed that when you try to change a const value, the Chrome console returns the following error:
Uncaught TypeError: Assignment to constant variable.
Are there any special reasons for calling a constant a "constant variable"? It seems paradoxical to me.
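In spec terms this isn't paradoxical: const declares a binding (a named variable) whose value cannot be reassigned, so the engine is complaining about assignment to a constant binding. The behavior is easy to reproduce (the exact message wording is V8-specific):

```javascript
const x = 1;
let message = '';
try {
  // Reassigning a const binding throws at run time; eval is used here
  // only so the script itself still parses.
  eval('x = 2;');
} catch (e) {
  message = e.message;
}
console.log(message); // "Assignment to constant variable." in V8
```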
-
Build Chromium for macOS on an M1 Mac mini
Can I build macOS Chromium for both the old Intel core and the new M1 (Apple Silicon) core on an M1 box, or do I have to get two devices? I tried to search for an answer on Google but could not find one.
-
Centos -- Yum Update Error – “HTTP Error 403 - Forbidden”
I’ve recently spun up a Docker container running CentOS Linux version 7. In my office, we have a proxy server, so once the container was up, I consoled in and set the proxy manually:
[me@8adfa83bb9e2 /home/me]# export http_proxy="http://10.10.10.101:8888"
[me@8adfa83bb9e2 /home/me]#
On a separate SO post, I learned about setting the proxy in the /etc/yum.conf file. So I added the following line to my /etc/yum.conf file:

proxy=http://10.10.10.101:8888

And then I did a "yum clean metadata":

[me@8adfa83bb9e2 /home/me]# yum clean metadata
Loaded plugins: fastestmirror, ovl
Cleaning repos: base extras updates
0 metadata files removed
0 sqlite files removed
0 metadata files removed
[me@8adfa83bb9e2 /home/me]#
At this point, I figured I was home free. I did a "yum update":

[me@8adfa83bb9e2 /home/me]# yum update
Loaded plugins: fastestmirror, ovl
Loading mirror speeds from cached hostfile
Could not retrieve mirrorlist http://mirrorlist.centos.org/?release=7&arch=x86_64&repo=os&infra=container error was
14: HTTP Error 403 - Forbidden
...and then a lot more stuff here...
Hmm. "HTTP Error 403". That's a new one for me; I'm used to running "yum update" and it just automagically works.

This isn't a DNS problem; the Docker container can resolve and ping mirrorlist.centos.org. I tried to use wget to pull down that URL, but the container doesn't have wget installed. When I try the same thing from the host machine:

me@hostmachine:/home/me$ sudo wget http://mirrorlist.centos.org/?release=7&arch=x86_64&repo=os&infra=container
[1] 7039
[2] 7040
[3] 7041
[2]  Done    arch=x86_64
me@hostmachine:/home/me$ Redirecting output to 'wget-log'.
[1]- Exit 8  sudo wget http://mirrorlist.centos.org/?release=7
[3]+ Done    repo=os
me@hostmachine:/home/me$ ls -l
total 4
-rw-r--r-- 1 root root 382 Jan 21 19:55 wget-log
me@hostmachine:/home/me$ more wget-log
--2021-01-21 19:55:31--  http://mirrorlist.centos.org/?release=7
Resolving mirrorlist.centos.org (mirrorlist.centos.org)... 147.75.69.225, 18.225.36.18, 67.219.148.138, ...
Connecting to mirrorlist.centos.org (mirrorlist.centos.org)|147.75.69.225|:80... connected.
HTTP request sent, awaiting response... 503 Service Unavailable
2021-01-21 19:55:31 ERROR 503: Service Unavailable.
me@hostmachine:/home/me$
(Yes, the host machine has the correct proxy settings. It is not a Centos machine.)
Soooooooo... It looks like the yum mirrorlist service is "unavailable" from my host system. But I've run "yum update" on many, many other CentOS machines in my environment. No idea what might be different here. Has anyone seen this before? Thank you.
-
High memory usage of yum on Amazon EC2
Recently my Java 11 web application was killed during the night due to not enough memory on an EC2 instance. I checked /var/log/messages and found the logs below:

What puzzles me is why yum takes up so much memory... As I know, the rss column shows the used RAM in 4 KB pages, which means that yum takes up 237 MB of RAM. Does anyone know why this happens?
-
/usr/bin/python: bad interpreter: No such file or directory (Removed python rpms now python doesn't work and yum doesn't work)
I was uninstalling OpenSSH with the following command:
for i in $(rpm -qa | grep openssh);do sudo rpm -e $i --nodeps;done
Then for some reason, I don't know why I thought this was a good idea, I ran this command to remove python:
for i in $(rpm -qa | grep python);do sudo rpm -e $i --nodeps;done
Now when I run sudo yum update I get the following:
bash: /bin/yum: /usr/bin/python: bad interpreter: No such file or directory
First line of /bin/yum reads:
#!/usr/bin/python
I then checked the /usr/bin directory for python
ls -lha /usr/bin | grep python
and got back nothing.
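For the record, the standard recovery here (a sketch, assuming access to a working machine on the same CentOS/Amazon Linux release): reinstall the python packages with rpm directly, since yum itself needs python to run. The package list below is illustrative, not exhaustive:

```shell
# On a working machine with the same OS release, download the removed RPMs
# (yumdownloader comes from the yum-utils package):
yumdownloader python python-libs python-urlgrabber yum

# Copy the .rpm files to the broken machine, then reinstall without yum:
sudo rpm -Uvh --replacepkgs python-*.rpm yum-*.rpm
```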
-
PuttyGen - generate ppk from pem - for putty to access AWS AMI
Local system: Fedora
Remote system: AWS AMI Linux
SSH client: PuTTY

Problem: PuTTY using the pem key to access the AWS AMI results in an error.
Error message: OpenSSH SSH-2 private key (old PEM format)
Solution: sudo puttygen pemKey.pem -o ppkKey.ppk -O private
Reference: https://www.puttygen.com/convert-pem-to-ppk

Another problem: PuTTY cannot open the generated ppk.
Error message: unable to open file
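One possibility worth ruling out (an assumption, since the error text gives little away): running puttygen under sudo leaves the generated .ppk owned by root, so PuTTY running as the regular user cannot read it:

```shell
ls -l ppkKey.ppk                       # owned by root after "sudo puttygen ..."?
sudo chown "$USER":"$USER" ppkKey.ppk  # hand the key back to the login user
chmod 600 ppkKey.ppk                   # private keys should not be world-readable
```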
Solution?
-
How to Install OS level dependencies in lambda function
I want my Lambda function to have the libssl-dev and libffi-dev dependencies.
There are ways to install language-specific dependencies (pip and npm) in the ZIP file.
But is including OS-level dependencies possible, e.g. by running a shell script to include them in a Lambda layer?
Looking for solutions preferably without involving Docker or modifying existing AMIs with these dependencies.
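One common approach (a sketch, not the only way): Lambda layers are unpacked under /opt, and /opt/lib is on the function's LD_LIBRARY_PATH, so the needed shared objects can be copied into a lib/ folder and zipped as a layer. The library names and paths below are examples and assume an Amazon Linux build host matching the Lambda runtime:

```shell
mkdir -p layer/lib
# Copy the shared libraries the function needs (illustrative names):
cp /usr/lib64/libssl.so.10 /usr/lib64/libffi.so.6 layer/lib/
(cd layer && zip -r ../os-deps-layer.zip lib)
# Publish the layer:
#   aws lambda publish-layer-version --layer-name os-deps \
#     --zip-file fileb://os-deps-layer.zip
```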
-
How can someone do a describe_images to find the latest Ubuntu image?
I'm looking for a way in Boto3 to get the latest Ubuntu image from Canonical. The regular describe_images() doesn't have a parameter for Canonical.
TIA
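For reference, the common pattern (a sketch; it requires AWS credentials, and the name filter and region are assumptions, not the only valid values): filter DescribeImages by Canonical's owner account ID (099720109477) plus a name pattern, then take the newest CreationDate:

```python
def latest_image(images):
    """Return the image dict with the newest CreationDate.

    CreationDate is ISO-8601, so lexicographic max() is chronological max().
    """
    return max(images, key=lambda img: img["CreationDate"])


def latest_ubuntu_ami(region="us-east-1",
                      name="ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-*"):
    import boto3  # local import: only needed when actually calling AWS

    ec2 = boto3.client("ec2", region_name=region)
    resp = ec2.describe_images(
        Owners=["099720109477"],  # Canonical's AWS account ID
        Filters=[
            {"Name": "name", "Values": [name]},
            {"Name": "state", "Values": ["available"]},
        ],
    )
    return latest_image(resp["Images"])["ImageId"]

# Usage (requires AWS credentials):
#   print(latest_ubuntu_ami())
```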