Python script ending with sys.exit(0) gives keepalived the error: exited with status 2
I am setting up a keepalived configuration with a check script. The script I use ends directly with sys.exit(0), yet keepalived logs this error:
/usr/bin/script.py exited with status 2
What can I do to correct that?
My script's code:
import sys
sys.exit(0)
My Keepalived conf file:
vrrp_script chk_myscript {
    script "/usr/bin/script.py"
    interval 2
    fall 2
    rise 2
}
vrrp_instance VI_1 {
    state MASTER
    interface eth1
    virtual_router_id 20
    priority 101
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.10.121
    }
    track_script {
        myscript
    }
}
1 answer
-
answered 2020-09-24 05:58
Marcus Boden
You're missing the shebang, so the script currently doesn't get interpreted as a Python script. Add
#!/usr/bin/env python
to the top of the script to fix that.
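A quick way to confirm this is to execute the script the same way keepalived does: directly, not via the python interpreter. The sketch below writes a throwaway copy of the script to a temporary file (the temp-file plumbing is only for demonstration; it is not part of the keepalived setup), and it uses python3 in the shebang since many modern systems no longer ship a bare python binary:

```python
import os
import stat
import subprocess
import tempfile

# The check script from the question, with the shebang added.
SCRIPT = "#!/usr/bin/env python3\nimport sys\nsys.exit(0)\n"

# Write it to a temporary file standing in for /usr/bin/script.py.
with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
    f.write(SCRIPT)
    path = f.name

# keepalived also needs the script file to be executable.
os.chmod(path, os.stat(path).st_mode | stat.S_IXUSR)

# Run it directly, as keepalived does. Without the shebang line the
# kernel cannot pick an interpreter and the exit status is non-zero.
result = subprocess.run([path])
os.remove(path)
print(result.returncode)
```

Note that the executable bit matters too: besides the shebang, the script must be marked executable (chmod +x /usr/bin/script.py) for keepalived to run it this way.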
See also questions close to this topic
-
How to add a border to a python console-menu
I'm a bit new to object-oriented stuff and having some issues using the console-menu module for Python. I made my menu with the constructor and added a few items to it, but I'm having a hard time figuring out how to add formatting to it. There is a MenuStyle class in the documentation that I think I need to use:
class consolemenu.format.MenuStyle(margins=None, padding=None, border_style=None, border_style_type=None, border_style_factory=None)
The full documentation is available here: https://console-menu.readthedocs.io/en/latest/consolemenu.html It's pretty short and to the point. I just don't understand what to do. Do I need to construct the border object and then use it in the ConsoleMenu() constructor? Or add it in later?
-
Getting specific items from binary files with an index in python
I'm using PyTorch but having trouble reading large datasets. Considering the size of the data, it is necessary to save it as files.
Rather than preprocessing the data into batches and saving each batch as its own file, I'm looking for a method that can read specific parts of only one or a few binary files, with an index file (if necessary) indicating the corresponding file and the positions to seek to.
For example, suppose a dataset file contains a lot of images (possibly of different sizes), and an index file indicates the start position and the end position of each image. A dict-like or list-like container data is initialized with the index file. When I want to read the third image, I can simply use data[2], and it jumps to the specific position quickly and returns what I want.
Is there an existing method that handles this well? As far as I know, SQL or pandas might be a possible solution for fixed-field-length data, but I don't know whether they can handle variable field lengths and binary data.
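A minimal sketch of the index-file idea using only the standard library (the file names and record layout here are made up for illustration): a data file holds variable-length blobs back to back, and an index file stores (offset, length) pairs so that any record can be fetched with a single seek.

```python
import struct

# Hypothetical file names for illustration.
DATA_FILE = "blobs.bin"
INDEX_FILE = "blobs.idx"

def write_dataset(blobs):
    """Write variable-length blobs plus an (offset, length) index."""
    with open(DATA_FILE, "wb") as data, open(INDEX_FILE, "wb") as idx:
        for blob in blobs:
            offset = data.tell()
            data.write(blob)
            # Each index entry is two little-endian unsigned 64-bit ints.
            idx.write(struct.pack("<QQ", offset, len(blob)))

class IndexedDataset:
    """List-like access: data[i] seeks straight to the i-th blob."""
    ENTRY = struct.calcsize("<QQ")

    def __init__(self):
        with open(INDEX_FILE, "rb") as idx:
            raw = idx.read()
        # Decode all (offset, length) pairs up front; the index is small.
        self.entries = [struct.unpack_from("<QQ", raw, i)
                        for i in range(0, len(raw), self.ENTRY)]
        self.data = open(DATA_FILE, "rb")

    def __len__(self):
        return len(self.entries)

    def __getitem__(self, i):
        offset, length = self.entries[i]
        self.data.seek(offset)       # one seek, then one bounded read
        return self.data.read(length)

write_dataset([b"first", b"second image bytes", b"third"])
data = IndexedDataset()
print(data[2])  # b'third'
```

The same pattern plugs into a PyTorch Dataset by implementing __len__ and __getitem__ exactly as above, with the blob decoded to a tensor before returning.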
-
Django: a canonical method for a package to offer new template tags?
I'm building a package of Django extensions and one thing I'd like to include in it now are template tags (and filters). The question is, how does a package provide template tags to a Django project?
I imagine adding a line or two in the templatetags directory of a Django app to access them. Something like:
from mypackage.template import tags
say. Or, if I need to actually run something to register them, possibly as built-in tags a la:
https://djangosnippets.org/snippets/342/
from mypackage.template import register_tags
register_tags()
Before I experiment with this to death (given I can't find any docs or examples yet), I'm curious whether there's a canonical approach, since there are many Django extension packages. To be honest, if I could reliably find one that provides template tags, it could serve as an example, of course.
Any tips in this regard are appreciated.
-
Bash script throws a "missing operand" error when run
I am new to bash and am attempting to take in 3 arguments: argument 1 is the name of the new directory where the copied file will go, argument 2 is the file to be copied, and argument 3 is the name of the new file. However, I keep getting the mkdir missing operand error message when running it. Thank you for any help!
#!/bin/bash
dir=$1
oldFile=$2
newFile=$3
mkdir $dir
cp $2 $dir
cd $dir
mv $2 $3
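For comparison, here is a sketch of the same operation as a defensive function (the variable names are kept from the question; the argument check, quoting, and the single-step cp are additions, and the example files at the bottom are throwaway fixtures for demonstration):

```shell
#!/bin/bash
set -e

# Copy a source file into a (possibly new) directory under a new name.
copy_renamed() {
    if [ "$#" -ne 3 ]; then
        echo "usage: copy_renamed <new-dir> <source-file> <new-name>" >&2
        return 1
    fi
    local dir=$1 oldFile=$2 newFile=$3
    mkdir -p "$dir"                # -p: no error if it already exists
    cp "$oldFile" "$dir/$newFile"  # copy and rename in one step
}

# Example run with throwaway files:
echo hello > /tmp/demo.txt
copy_renamed /tmp/demo-dir /tmp/demo.txt renamed.txt
```

The original "missing operand" error is what mkdir prints when $dir is empty, i.e. when the script is run without arguments, which is exactly what the argument check above guards against.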
-
Why can't my xterm terminal opened via a bash script find the prettytable module?
I have a Python file which uses prettytable. I also have a bash script that opens and runs it using xQuartz. When I open xQuartz and run the file from there, it works as expected; however, when I try to run it using the script, it is unable to find the prettytable module. What might be going on?
bash script line:
xterm -geometry 55x24+000+000 -hold -e /bin/bash -l -c "python3 server.py 1"
running
python3 server.py 1
on the xQuartz terminal is fine. It also works if I run xterm from the Mac terminal and do the same.
-
How to fix variables/echo output for batch renaming script
I created a quick solution to batch rename files using a reference .txt doc (match_names.txt) that contains 2 columns (first column has the original prefix, the second column has the replacement prefix).
I checked my output with echo and it failed.
The Goal
- FileA.sra --> NewName.sra
- FileA.sra_1.fastq --> NewName.sra_1.fastq
- FileA.sra_2.fastq --> NewName.sra_2.fastq
Here is my bash script:
#!/bin/bash
#Program renames .sra files using a .txt file with 2 tab columns
#there are three files for each sample (.sra, .sra_1.fastq, and .sra_2.fastq)
#look only at files with .sra extension
for file in *.sra; do
    OLD=${file%%.*}
    NEW=$(awk -v "OLD=$OLD" '$1==OLD {print $2}' match_names.txt)
    EXTENSION=${file#*.}
    #use echo to quickly see if variables worked out
    echo "$OLD" #works as expected
    echo "$NEW" #works as expected
    echo "$ENDING" #works as expected
    echo "$OLD.$ENDING" #works as expected
    echo "$NEW.$ENDING" #does not work
    echo "$ENDING.$NEW" #but this works??
    #after checking variables, my goal is to run the following in the loop
    # mv $OLD.$ENDING $NEW.ENDING
done
Any help would be appreciated!
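For reference, a sketch of the loop with the variable names used consistently (EXTENSION everywhere, rather than the undefined ENDING, and a glob that also catches the _1/_2 fastq files); the sample files and the tab-separated match_names.txt below are fabricated for the demonstration:

```shell
#!/bin/bash
set -e

# Fabricated test fixtures for the demonstration.
cd "$(mktemp -d)"
printf 'FileA\tNewName\n' > match_names.txt
touch FileA.sra FileA.sra_1.fastq FileA.sra_2.fastq

for file in FileA*; do
    OLD=${file%%.*}       # prefix before the first dot, e.g. FileA
    EXTENSION=${file#*.}  # everything after it, e.g. sra_1.fastq
    # Look up the replacement prefix in column 2 of match_names.txt.
    NEW=$(awk -v "OLD=$OLD" '$1==OLD {print $2}' match_names.txt)
    mv "$OLD.$EXTENSION" "$NEW.$EXTENSION"
done
```

If the variables echo correctly on their own but "$NEW.$ENDING" prints garbage, it is also worth checking match_names.txt for Windows line endings, since a trailing carriage return in $NEW can make the terminal overwrite the start of the line.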
-
How to check whether sshfs is mounted or not?
I am using sshfs to mount a drive from one Ubuntu machine on another, remote Ubuntu machine.
The question is how to check, using a terminal command, that machine X's directory is mounted on machine Y's directory.
I am looking for a terminal command that confirms the mount is active and points to the remote location.
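One way to check from the terminal is to look at /proc/mounts, where active sshfs mounts appear with the filesystem type fuse.sshfs (the mount point /mnt/remote below is an assumption for the example):

```shell
#!/bin/bash

# Succeed if the given path is an active sshfs mount point,
# judging by the filesystem-type column of /proc/mounts.
is_sshfs_mounted() {
    awk -v mp="$1" \
        '$2 == mp && $3 == "fuse.sshfs" {found=1} END {exit !found}' \
        /proc/mounts
}

if is_sshfs_mounted /mnt/remote; then
    echo "sshfs mount active at /mnt/remote"
else
    echo "no sshfs mount at /mnt/remote"
fi
```

The first column of the matching /proc/mounts line shows the remote location (user@host:/path), which answers the "pointed to remote location" part; a quick interactive alternative is simply mount | grep sshfs.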
-
Ubuntu Server not displaying all my storage
I have just installed Ubuntu Server, and when connecting through ssh it says usage of /: 3.1% of 195.86 GB.
This is a 1 TB hard drive, and for some reason I can't seem to get it recognized. I tried using fdisk; it recognizes the extra space, but I can't figure out how to add that space.
This probably has an easy answer, but I'm not very used to Linux yet.
-
node script.js not giving any output on the Ubuntu terminal
File code:
console.log("node js test")
Command line: node file.js gives no output.
-
HBase regions do not fail over correctly, stuck in "OPENING" RIT
I am using hbase-2.2.3 to set up a small cluster with 3 nodes (both Hadoop and HBase are in HA mode):
node1: NN, JN, ZKFC, ZK, HMaster, HRegionServer
node2: NN, JN, ZKFC, DN, ZK, Backup HMaster, HRegionServer
node3: DN, JN, ZK, HRegionServer
When I reboot node3, it causes regions-in-transition (some regions are stuck in OPENING). In the master log, I can see:
master.HMaster: Not running balancer because 5 region(s) in transition
Does anyone know how to fix this issue? Many thanks.
-
Is there a redis pub/sub replacement option with high availability and redundancy, or possibly p2p messaging?
I have an app with hundreds of horizontally scaled servers which uses redis pub/sub, and it works just fine.
The redis server is a central point of failure. Whenever redis fails (and it does happen sometimes), our application falls into an inconsistent state and has to follow a recovery process, which takes time. During this time the entire app is hardly usable.
Is there any messaging system/framework option, similar to redis pub/sub, but with redundancy and high availability, so that if one instance fails, others will continue to deliver the messages exchanged between application hosts?
Or, better, is there any distributed messaging system in which app instances exchange the messages in a peer-to-peer manner, so that there is no single point of failure?
-
Apache Flink high availability not working as expected
I tried to test high availability by bringing down the TaskManager along with the JobManager and the YARN NodeManager at the same time. I thought YARN would automatically assign the application to another node, but that's not happening. How can this be achieved?
-
Why Does UFW Log [UFW BLOCK] for a Specific IP in a Keepalived Connection?
I have read questions and answers about [UFW BLOCK] entries, and I figured out that I can address them with the command below:
iptables -I INPUT -p 112 -d Ip_Address -j ACCEPT
But the question is about my own computer. I am connecting to remote database servers whose connections are managed by Keepalived, because there is replication between these servers. When I looked at /var/log/ufw.log, I saw that the [UFW BLOCK] entry has occurred many times for my computer's IP and for some other applications' IPs. What should I do, or what is the best practice to solve this issue? Should I execute the command above for every IP address that will connect to the database servers, or what?
All operating systems are Ubuntu 20.04. There is a 3-node Galera cluster replication with MariaDB. The database servers' connections are managed by Keepalived.
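For context, VRRP is IP protocol 112 (which is what -p 112 matches in the rule above), and Keepalived advertisements are normally sent to the multicast group 224.0.0.18. A commonly suggested alternative to adding one rule per peer IP is to allow the protocol itself in UFW's own rules file; the placement below is a sketch, not something verified against this particular setup:

```
# In /etc/ufw/before.rules, add before the final COMMIT line
# (VRRP is IP protocol 112; 224.0.0.18 is its standard multicast group):
#   -A ufw-before-input -p 112 -d 224.0.0.18 -j ACCEPT
# then reload the firewall:
#   sudo ufw reload
```

This covers the VRRP advertisements between the Keepalived nodes; ordinary database client traffic is still governed by the normal UFW port rules.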
-
Flood of cryptic keepalived log entries
I have a simple keepalived setup which, as far as I can see, seems to function as I expect. The problem is that I get an endless stream of obscure log entries:
Feb 10 21:43:51 serverA Keepalived_vrrp[2599401]: (haproxy_lb) invalid TTL/HL. Received 254 and expect 255
Feb 10 21:43:53 serverA Keepalived_vrrp[2599401]: (haproxy_lb) invalid TTL/HL. Received 254 and expect 255
Feb 10 21:43:55 serverA Keepalived_vrrp[2599401]: (haproxy_lb) invalid TTL/HL. Received 254 and expect 255
Feb 10 21:43:57 serverA Keepalived_vrrp[2599401]: (haproxy_lb) invalid TTL/HL. Received 254 and expect 255
Feb 10 21:43:59 serverA Keepalived_vrrp[2599401]: (haproxy_lb) invalid TTL/HL. Received 254 and expect 255
Feb 10 21:44:01 serverA Keepalived_vrrp[2599401]: (haproxy_lb) invalid TTL/HL. Received 254 and expect 255
Feb 10 21:44:03 serverA Keepalived_vrrp[2599401]: (haproxy_lb) invalid TTL/HL. Received 254 and expect 255
Feb 10 21:44:06 serverA Keepalived_vrrp[2599401]: (haproxy_lb) invalid TTL/HL. Received 254 and expect 255
Feb 10 21:44:08 serverA Keepalived_vrrp[2599401]: (haproxy_lb) invalid TTL/HL. Received 254 and expect 255
Feb 10 21:44:10 serverA Keepalived_vrrp[2599401]: (haproxy_lb) invalid TTL/HL. Received 254 and expect 255
Any ideas on what this might be/how I can avoid getting flooded?
My config files for serverA and serverB are identical, and look like:
vrrp_track_file track_graceful_failover {
    file /etc/keepalived/graceful
}
vrrp_script chk_haproxy {
    script "/bin/sh -c '/bin/ps -e | /bin/grep haproxy'"
    interval 1
    timeout 3
    rise 2
    fall 2
}
global_defs {
    enable_script_security
    notification_email {
        root@mydomain.com
    }
    notification_email_from serverA@mydomain.com
    smtp_server smtphost.mydomain.com
    smtp_connect_timeout 60
}
vrrp_instance haproxy_lb {
    state MASTER
    interface eth0
    virtual_router_id 91
    priority 200
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass 1215
    }
    virtual_ipaddress {
        10.1.9.3
    }
    track_file {
        track_graceful_failover weight 1
    }
    track_script {
        chk_haproxy
    }
}
Thanks
-
Keepalived floating IP for a docker service
We have a docker-compose based application running on two servers, which can be accessed via server-ip:port.
To integrate a Keepalived floating IP for this service, I tried adding the interface the docker service runs on to keepalived.conf. The interface was found as described here:
docker exec -it my-container cat /sys/class/net/eth0/iflink
ip ad | grep 123
But the keepalived service would not restart:
(keepalived1): Cannot find an IP address to use for interface vethfb0585d
How can I add this docker service to a keepalived IP address?