Why does my Python script not work on AWS EC2?
When my Python code runs on localhost it works, but it fails on an AWS EC2 instance. The code is simple: it sends an HTTPS request using the POST method. It works on localhost, but causes a problem on EC2.
If I send a different HTTPS request it works, so the network is fine.
When I send the request from EC2, I receive the response below:
```
Content-Type: text/html
Content-Length: 60416
Connection: close
Date: Sat, 10 Aug 2019 13:02:31 GMT
Server: nginx
Last-Modified: Fri, 09 Aug 2019 14:05:06 GMT
ETag: "5d4d7d92-ec00"
Accept-Ranges: bytes
Vary: Accept-Encoding
X-Cache: Miss from cloudfront
Via: 1.1 46dd9ae2d97161deaefbdceeae5f57ac.cloudfront.net (CloudFront)
X-Amz-Cf-Pop: SIN2-C1
X-Amz-Cf-Id: XNkaD2emKes3BpaY3ZVSGb1bxlnsHD1KZeHCZPXnOcspTaYXXjVzKA==
```

```html
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en">
<head>
<meta http-equiv="Content-Type" content="text/html;charset=UTF-8">
<title>sorry，the channel has expired!</title>
<link rel="stylesheet" type="text/css" href="/eportal/uiFramework/css/tip.css">
</head>
<body>
<div class="easysite-error">
<img src="/eportal/uiFramework/images/picNews/tip.png" />
<font class="easysite-404"></font>
<font>sorry，the channel has expired!</font>
</div>
</body>
</html>
```
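The CloudFront headers in the reply (`X-Cache: Miss from cloudfront`, `X-Amz-Cf-Pop: SIN2-C1`) show the answer comes from the target site's CDN, so the server is treating the EC2 request differently (for example by source-IP region), rather than the request failing to go out. One way to narrow this down is to print the exact request each host sends and diff the two. A minimal sketch with the standard library; the URL, payload, and header values are placeholders, not the real endpoint:

```python
import urllib.parse
import urllib.request

def build_request(url, payload, headers=None):
    """Build (but do not send) a POST request, so the final method,
    headers and body can be printed and diffed between two hosts."""
    data = urllib.parse.urlencode(payload).encode()
    return urllib.request.Request(url, data=data,
                                  headers=headers or {}, method="POST")

# Hypothetical endpoint and payload -- substitute the real ones.
req = build_request(
    "https://example.com/api",
    {"key": "value"},
    headers={"User-Agent": "my-script/1.0"},
)
print(req.get_method(), req.full_url)
print(req.headers)  # run this on localhost and on EC2, then compare
print(req.data)
# To actually send it: urllib.request.urlopen(req)
```

If the two prints are identical, the difference is on the server's side (IP-based blocking or geo rules), not in the script.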
See also questions close to this topic
Module with globals or Class with attributes?
Currently I'm working with a lot of modules where the original developers used global variables to control state and to exchange important information between functions, like so:
```python
STATE_VAR = 0

def do_something(arg1):
    global STATE_VAR
    if arg1:
        STATE_VAR = 1

def say_hello():
    if STATE_VAR:
        print("Hello!")
```
I have to create new libraries that communicate with these modules and, once I use pylint to check my code, I get a lot of complaints about using the `global` statement.
In my head, the structure should be something like this:
```python
class MyClass:
    STATE_VAR = 0

    @classmethod
    def do_something(cls, arg1):
        if arg1:
            cls.STATE_VAR = 1

    @classmethod
    def say_hello(cls):
        if cls.STATE_VAR:
            print("Hello!")
```
This structure keeps pylint happy because it avoids the `global` statement, but it rubs me the wrong way: I now need imports such as `from mymodule import MyClass`, or have to contend with the ugly `mymodule.MyClass.do_something()` style of call.
I want to develop code that is both Pythonic and consistent with what is already in place (I might be overthinking this as well).
I've also stumbled upon this other related question, which got no definitive answer.
So my question is: what is the best practice in this situation? Do I keep writing modules that use global variables for state (consistent with the existing code, but leaving pylint mad), or should I follow the road of classes and OOP (and effectively go against the code already in place)?
Breaking subprocess loop from parent process
What is missing here to break the loop in tok2.py from tok1.py?
I try to send a string containing 'exit', read the sent value into my_input, and break the loop in tok2.py.
Right now tok2 runs forever.
Using Debian 10 Buster with Python 3.7.
```python
# tok1.py
import sys
import time
import subprocess

command = [sys.executable, 'tok2.py']
proc = subprocess.Popen(command, stdin=subprocess.PIPE,
                        stdout=subprocess.PIPE, stderr=subprocess.PIPE)
i = 0
while proc.poll() is None:
    if i > 5:  # Send 'exit' after 5th iteration
        proc.stdin.write(b'exit')
    print('tok1: ' + str(i))
    time.sleep(0.5)
    i = i + 1
```
```python
# tok2.py
import sys
import time

ii = 0
my_input = ''
while True:
    my_input = sys.stdin.read()
    if my_input == b'exit':
        print('tok2: exiting')
        sys.stdout.flush()
        break
    print('tok2: ' + str(ii))
    sys.stdout.flush()
    ii = ii + 1
    time.sleep(0.5)
```
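Three things in the snippets above would keep the loop from breaking: `sys.stdin.read()` blocks until EOF (not until one message arrives), the parent never flushes what it writes to the pipe, and the child compares a `str` against the bytes literal `b'exit'`, which is never equal in Python 3. A line-based sketch of the same parent/child handshake, self-contained by writing the child to a temporary file:

```python
import os
import subprocess
import sys
import tempfile
import textwrap

# Child: iterate over stdin line by line instead of calling read(),
# and compare str to str, not to a bytes literal.
child_src = textwrap.dedent("""\
    import sys
    for line in sys.stdin:
        if line.strip() == 'exit':
            print('tok2: exiting')
            break
        print('tok2: got ' + line.strip())
""")

with tempfile.NamedTemporaryFile('w', suffix='.py', delete=False) as f:
    f.write(child_src)
    child_path = f.name

proc = subprocess.Popen([sys.executable, child_path],
                        stdin=subprocess.PIPE, stdout=subprocess.PIPE,
                        text=True)
proc.stdin.write('hello\n')  # the newline terminates the line ...
proc.stdin.write('exit\n')
proc.stdin.flush()           # ... and the flush actually delivers it
out, _ = proc.communicate()
os.unlink(child_path)
print(out)  # child echoes 'hello', then exits on 'exit'
```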
TypeError: '>=' not supported between instances of 'builtin_function_or_method' and 'int'
When I run the code below (prompt strings translated from Chinese):
```python
input("Please enter a number between 1 and 100: ")
n = input
if n >= 1 and n <= 100:
    print("Your sister is so pretty!")
else:
    print("Your uncle is so ugly")
print("Game over! I'm not playing with you any more")
```
I get the following error:
```
TypeError: '>=' not supported between instances of 'builtin_function_or_method' and 'int'
```
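The error comes from `n = input`: that binds the built-in `input` function itself to `n` instead of calling it, and the return value of the first `input(...)` call is discarded. `input()` also returns a `str`, so the value must be converted before comparing against an `int`. A minimal corrected sketch, with the check pulled into a function:

```python
def check_number(text):
    """Parse the user's reply and report whether it is in 1..100."""
    n = int(text)           # input() returns a str; convert before comparing
    return 1 <= n <= 100    # chained comparison replaces `n >= 1 and n <= 100`

# In the real script:
#   in_range = check_number(input("Please enter a number between 1 and 100: "))
print(check_number("42"))   # → True
print(check_number("150"))  # → False
```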
How to troubleshoot containers on ECS Fargate?
I created a task that uses a Docker image from an ECR repository, and a service in ECS. The launch type is Fargate, so there is no EC2 instance running. After I configured all the resources, the status of the service is ACTIVE, but the status of the task is STOPPED with this status reason:

```
CannotPullContainerError: Error response from daemon: Get https://773592622512.dkr.ecr.ap-southeast-1.amazonaws.com/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
```
The docker image I put on the task definition is
I see an empty log stream when I click View logs in CloudWatch. How can I debug this issue? Do I have to deploy the container to EC2 in order to see more detailed error messages?
I have added the AmazonEC2ContainerRegistryFullAccess policy to the task role. Why can't it pull the image?
How to sync local MySQL database to Amazon RDS database?
I am setting up a connection between MySQL Workbench and an Amazon AWS RDS MySQL database. How can I sync the two databases?
At the moment every operation (insert, delete, update) happens directly against the cloud (AWS RDS) database instead of being stored first in the local database and then synced to the cloud, and writing directly to the cloud is taking a long time.
I expect all operations to happen first in a local database, which then syncs to the cloud database.
How to delete AWS Cloudwatch log streams recursively?
I have been using an AWS Lambda function that writes logs into a CloudWatch log group.
For debugging in a non-prod environment, I find it easier to delete all the log streams and then run tests on the function to generate fresh log streams.
I do not wish to delete the log group itself, because that would require me to set the expiry again.
I found there is a CLI command to delete a single log stream, but I wanted to know if I could delete them all recursively, without deleting the log group.
```
aws logs delete-log-stream --log-group-name <value> --log-stream-name <value>
```
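There is no single CLI call for this, but it can be scripted: list the streams in the group, then delete each one, leaving the group (and its retention setting) intact. A sketch using boto3's CloudWatch Logs client; the client is passed in as a parameter so the function is easy to exercise without AWS access:

```python
def delete_all_streams(logs_client, group_name):
    """Delete every log stream in `group_name` without touching the group."""
    paginator = logs_client.get_paginator("describe_log_streams")
    deleted = []
    for page in paginator.paginate(logGroupName=group_name):
        for stream in page["logStreams"]:
            name = stream["logStreamName"]
            logs_client.delete_log_stream(logGroupName=group_name,
                                          logStreamName=name)
            deleted.append(name)
    return deleted

# Real usage (log group name is a placeholder):
#   import boto3
#   delete_all_streams(boto3.client("logs"), "/aws/lambda/my-function")
```

The paginator matters: `describe_log_streams` returns at most 50 streams per call, so a single unpaginated call would silently miss the rest.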
Scheduling scraping/downloading of a file from a site via RStudio Server on EC2 and RSelenium (?)
I would like to schedule daily downloads of a file (which is updated daily) from a website (the download button is at the very end of the page). So far I have figured out that one possible avenue is to set up an EC2 instance on Amazon AWS which runs RStudio Server (see e.g. here). There the script could be triggered on a schedule, e.g. by cron.
I managed to download the file on my local machine with the RSelenium package (code below).
```r
library(RSelenium)

# open server
rd <- rsDriver(browser = "firefox", port = 4444L)
# open browser
ffd <- rd$client
# navigate to target url
url <- 'https://www.facebook.com/ads/library/report/'
ffd$navigate(url)

css_selector <- "._7vio"
download_btn <- ffd$findElement(using = "css selector", css_selector)
download_btn$clickElement()
```
However, running RSelenium on an AWS instance seems (at least to me) a rather intimidating task which I haven't yet figured out (installing a Docker image with RSelenium on the EC2 instance, e.g. like here or here).
Hence I was wondering whether there isn't an easier way that would let me run a simple script in RStudio Server without needing RSelenium. The challenge I am facing is how to trigger a click on the download button without RSelenium (and Docker, etc.). I gather that there might be some way to get the file via a POST request: the developer tools show that clicking the download button triggers a POST/GET request, but here I am only poking around and not really sure how to proceed.
PS: I know that there is also a Facebook API, but the information it provides is not identical to the info in the daily reports.
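The usual browserless route is exactly what the developer tools hint at: copy the request the button fires (URL, method, form fields, cookies) from the Network tab and replay it from a script. A sketch of that replay shape in Python; every URL, field name, and cookie value below is a placeholder copied-by-hand from devtools, not Facebook's real API:

```python
import urllib.parse
import urllib.request

def build_download_request(url, form_fields, cookie):
    """Reproduce the request the download button fires.
    `url`, `form_fields` and `cookie` all come straight from the
    browser's developer tools (Network tab -> request behind the click)."""
    data = urllib.parse.urlencode(form_fields).encode()
    return urllib.request.Request(
        url, data=data, method="POST",
        headers={"Cookie": cookie, "User-Agent": "Mozilla/5.0"},
    )

# All placeholder values -- substitute what devtools actually shows.
req = build_download_request(
    "https://example.com/download",
    {"report_day": "2019-08-18", "country": "US"},
    cookie="session=abc123",
)
print(req.full_url, req.data)
# urllib.request.urlopen(req) would then fetch the file, e.g. nightly via cron.
```

The catch is that sites like this often tie the request to short-lived session cookies or tokens, which is why the headless-browser route (RSelenium) exists at all; whether the plain replay works has to be tested against the real request.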
Executing a python function on an EC2 as a target to an AWS ALB
I have an ALB configured with an EC2 instance. On the EC2 instance I have a Python function that should take HTTP requests in a streaming fashion, execute, and return a very low-level 200 OK response to the request. This is what I have tried so far:
1) The AWS ALB is configured.
2) A path-based route /myPythonFunc is configured.
3) An EC2 instance is the target for the path.
4) The Python function with the business logic is on the EC2 instance.
5) An Apache web server runs on the EC2 instance.
How do I make HTTP requests from the ALB execute the Python function? My function is very light, since I will be receiving 10,000 requests/sec; auto scaling is the next step. However, I am not sure how to execute the Python function for every HTTP request that comes through the ALB.
I checked uWSGI, but it talks about serving a web page through the function, which is not my requirement.
Any help is appreciated.
Please note: I could easily achieve this through AWS Lambda, but the scale of messages is too high for Lambda. I have got this confirmed from AWS, so Lambda is out of consideration.
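An ALB can only forward HTTP to whatever listens on the target port, so something on the instance has to speak HTTP; the thinnest standard glue is a WSGI callable that runs the function and answers a bare 200 OK (uWSGI, gunicorn, or Apache's mod_wsgi then merely host it, and no page is served). A minimal sketch, where `my_python_func` stands in for the real business logic:

```python
def my_python_func(body):
    """Placeholder for the real business logic."""
    pass

def app(environ, start_response):
    """Minimal WSGI callable: read the forwarded request body,
    run the function, and answer 200 OK with no page content."""
    length = int(environ.get("CONTENT_LENGTH") or 0)
    body = environ["wsgi.input"].read(length) if length else b""
    my_python_func(body)
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"OK"]

# To host it for real throughput, run `app` under a WSGI server, e.g.:
#   gunicorn -w 4 -b 0.0.0.0:8080 myapp:app
# and point the ALB target group at port 8080.
```

At 10,000 requests/sec the choice of WSGI server and worker count matters far more than the callable itself, which is one reason to keep the handler this thin.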
Strongswan not establishing connection
I'm creating a VPN using strongSwan; it's my first time using this tool. I followed a tutorial to set it up, but I've hit a blocker whereby the peer connection times out. The status is:
0 up, 1 connecting
I have tried on different servers, and the same issue happens. Here are my ipsec.conf connection and ipsec.secrets entries:
```
conn conec-example
    authby=secret
    left=%defaultroute
    leftid=<public_IP_1>
    leftsubnet=<private_ip_1>/20
    right=<public_IP_2>
    rightsubnet=<private_ip_2>/20
    ike=aes256-sha2_256-modp1024!
    esp=aes256-sha2_256!
    keyingtries=0
    ikelifetime=1h
    lifetime=8h
    dpddelay=30
    dpdtimeout=120
    dpdaction=restart
    auto=start
```
```
public_IP_1 public_IP_2 : PSK "randomprivatesharedkey"
```
Here is part of the logs:
```
Aug 18 17:29:01 ip-x charon: 10[IKE] retransmit 2 of request with message ID 0
Aug 18 17:29:01 ip-x charon: 10[NET] sending packet: from x.x to x.x.x.x (334 bytes)
Aug 18 17:30:19 ip-x charon: 13[IKE] retransmit 5 of request with message ID 0
Aug 18 17:30:19 ip-x charon: 13[NET] sending packet: from x.x to x.x.x.129 (334 bytes)
Aug 18 17:31:35 charon: 16[IKE] giving up after 5 retransmits
Aug 18 17:31:35 charon: 16[IKE] peer not responding, trying again (2/0)
```
I expected a successful connection after this setup, but had no success. How can I resolve this? Any ideas?