Can't connect to AWS EC2 instances, although routing seems correct
I used AWS in the past (2014-2015) with no issues at all. Then I switched to DigitalOcean. Now the new t3 instances look very interesting, so I launched some, but I can't connect.
I went through the official troubleshooting guide (https://aws.amazon.com/de/premiumsupport/knowledge-center/ec2-linux-ssh-troubleshooting/) and still no connection is possible.
Inbound SSH traffic from my IP addresses is enabled in the Security Group, as is all outbound traffic.
The VPC has a Route Table that is connected to an Internet Gateway.
The Network ACL allows incoming and outgoing traffic from all IP addresses.
The guide says "If you have completed these steps and you are still unable to connect to your EC2 instance, make sure the SSH daemon is running on the EC2 instance, and that it is configured to listen on the default port (TCP 22).", but I tried several fresh instances, RHEL and Ubuntu, and they should listen on 22 by default.
... what else could be wrong?
Would appreciate any help a lot, thanks in advance!
Are you behind a proxy? Is the SSH daemon running on the EC2 instance?
Try connecting after temporarily disabling your local firewall.
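Beyond the security-group and route-table checks already listed, a few client-side checks can narrow down where the connection dies. A sketch, with placeholder IPs, key names, and instance IDs:

```shell
# Test raw TCP reachability of port 22 (replace 203.0.113.10 with your
# instance's public IP). "Connection timed out" points at routing or
# security groups; "Connection refused" means the packet arrived but
# nothing is listening on the port.
nc -zv -w 5 203.0.113.10 22

# Verbose SSH output shows exactly where the handshake stalls.
ssh -vvv -i my-key.pem ec2-user@203.0.113.10

# Also confirm the instance actually has a public IP attached; launching
# into a subnet without auto-assign public IP yields an instance with no
# public address at all.
aws ec2 describe-instances --instance-ids i-0123456789abcdef0 \
  --query 'Reservations[].Instances[].PublicIpAddress'
```

If `nc` times out despite the rules described above, it is worth double-checking that the instance's subnet is actually associated with the route table that has the Internet Gateway route.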
See also questions close to this topic
AWS lambda C++ runtime in SAM
Does anyone have a step-by-step tutorial on using SAM to package an AWS Lambda function using the C++ runtime so I can run it locally? C++ is not one of the languages supported by sam init --runtime, and I cannot work out the steps needed to package the Hello World function from https://aws.amazon.com/blogs/compute/introducing-the-c-lambda-runtime/
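The C++ runtime is not packaged through sam init at all; the blog post's flow builds a deployment zip with CMake, which a SAM template can then reference as a custom ("provided") runtime. A rough sketch of the build steps from the blog, with illustrative paths and project names:

```shell
# Build and install the C++ runtime library from the awslabs repo.
git clone https://github.com/awslabs/aws-lambda-cpp.git
cd aws-lambda-cpp && mkdir build && cd build
cmake .. -DCMAKE_BUILD_TYPE=Release -DCMAKE_INSTALL_PREFIX=~/lambda-install
make && make install

# In the hello-world project from the blog post, the runtime's CMake helper
# adds a packaging target that produces a self-contained hello.zip.
cd ~/hello-cpp && mkdir build && cd build
cmake .. -DCMAKE_BUILD_TYPE=Release -DCMAKE_PREFIX_PATH=~/lambda-install
make aws-lambda-package-hello
```

From there, a SAM template can point CodeUri at the resulting zip with Runtime: provided, and sam local invoke should run it in the provided-runtime container; the exact template wiring is not covered by the blog post, so treat this as a starting point rather than a recipe.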
Nodemailer Connection timeout at SMTPConnection._formatError - need ideas for debugging
I am running out of ideas on how to debug this issue. When I go through the send-email flow (create content => attach base64-encoded file => send), the send sometimes hangs, but not every time.
- Our code works in prod but not in sandbox; even then, sandbox sometimes works and sometimes doesn't.
- Using lambda for api, s3 for storing pdf file, data from dynamoDB, vue.js for frontend code.
- We have several servers for our email domain, and when I ran telnet domain 25 from my terminal against all of them, each connected successfully.
- node_modules deleted and re-installed
- re-deployed to lambda
Can anyone think of any other options I should look into?
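Since the hang is intermittent, one more thing worth scripting is an SMTP reachability probe with a hard timeout, run from the same network context as the failing code (the hostname below is a placeholder). A dropped connection then shows up as a clean timeout instead of a hang, and differences between ports can point at selective filtering:

```shell
# Probe each common SMTP port with an explicit timeout. /dev/tcp is a
# bash built-in pseudo-device; opening it attempts a TCP connection.
for port in 25 465 587; do
  if timeout 10 bash -c "exec 3<>/dev/tcp/smtp.example.com/$port" 2>/dev/null; then
    echo "port $port: reachable"
  else
    echo "port $port: blocked or timed out"
  fi
done
```

If this probe is flaky from inside the sandbox environment but stable from prod, the problem is network-side (security groups, NAT, or provider throttling) rather than in Nodemailer itself.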
How to create SNS topics in multiple regions with terraform?
Aim : Use terraform to create an SNS topic in multiple regions.
Right now it only lets me create a single SNS topic, either in the default region or in one specific region I input.
- providers.tf doesn't support iterating over regions via a list
- SNS topic creation in Terraform doesn't accept a region argument either.
The only workaround I could find is to do something like what is described here -
But this involves a lot of code duplication and is neither efficient nor scalable.
Is there an efficient, iterative approach to solving this problem?
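For context, Terraform's usual answer here is provider aliases rather than a region argument on the resource: one aliased provider block per region, with each topic pinned to an alias. Providers cannot be generated with loops, so some per-region duplication is unavoidable, but it shrinks to one block per region. A minimal sketch with illustrative names (older Terraform 0.11 syntax would quote the reference as provider = "aws.frankfurt"):

```hcl
provider "aws" {
  region = "eu-west-1"
}

provider "aws" {
  alias  = "frankfurt"
  region = "eu-central-1"
}

resource "aws_sns_topic" "alerts_ireland" {
  name = "alerts"
}

resource "aws_sns_topic" "alerts_frankfurt" {
  provider = aws.frankfurt
  name     = "alerts"
}
```

Wrapping the topic in a module and instantiating the module once per aliased provider keeps the duplicated part down to the module call itself.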
Is there a way I can use a Google Drive folder as a server for PDF files?
I need to create a search function for regularly updated test results on our website, which is set up on a LAMP stack using EC2 and WordPress. The test results are currently being posted to a folder on Google Drive, and I don't want to have to manually transfer them to a folder on our server. Would it be possible for me to use the Google Drive folder itself as the 'server' that the user could run the search on?
Would it be a good design to deploy a Spring Batch application using the AWS Batch service?
I am doing a POC for a batch application which is expected to be deployed in the AWS cloud.
This batch application is expected to perform heavy computation over 2M-10M records monthly, so the number of EC2 instances has to grow dynamically based on load.
Actually I was thinking of creating a Spring Batch application that would be deployed on the AWS ECS service.
However, I see the new AWS Batch service, which also helps run such jobs.
My question is: can I deploy my Spring Batch application using AWS Batch, assuming the application is containerized with Docker? Is that a good approach, or should I go with deploying on ECS? Please suggest.
In AWS, how do I set up access from EC2 to RDS?
I have a Linux-based EC2 instance running Tomcat in a VPC. I've also got an RDS instance running PostgreSQL in the same VPC. In the security groups section for both EC2 and RDS, I've selected the same group (launch-wizard-8). Both EC2 and RDS are set to public access. I can get to the default Tomcat page from my laptop because there's an inbound rule in launch-wizard-8 that has my IP address. I can also connect to the RDS instance (the PostgreSQL db) because my IP is in the inbound rule. The problem is that my web app running in Tomcat (EC2 instance) can't connect to the database (PostgreSQL RDS instance). I've confirmed that the app's config file has the correct URL (the same one I use to connect from my laptop), and the app's log file shows that it can't connect to that URL. DB name, user, and password are all the same as when connecting from my laptop.
I was under the impression that if EC2 and RDS were in the same VPC, they'd be able to communicate. I did try adding another inbound rule to the launch-wizard-8 security group that includes the EC2 public IP, and it still can't connect. Does anyone know what the issue is here?
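A rule that allows your laptop's IP does not cover traffic originating from the EC2 instance, and inside a VPC the instance typically reaches RDS through its private address rather than the public one. The usual fix is a rule that lets the security group reference itself, so any member of launch-wizard-8 can reach PostgreSQL on any other member. A sketch using the AWS CLI (the group ID is a placeholder):

```shell
# Allow members of launch-wizard-8 to reach PostgreSQL (5432) on other
# members of the same group: the --source-group is the group itself.
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp \
  --port 5432 \
  --source-group sg-0123456789abcdef0
```

The same rule can be added in the console by choosing the security group's own ID as the source of an inbound PostgreSQL rule.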
How can SSH.NET on Windows 10 start a process on our remote Linux box that stays running?
This code blocks until the Linux process ends itself:
cSSH.Connect()
cSSH.RunCommand("<our path>linux_process")   <<< BLOCKS UNTIL linux_process ENDS ITSELF
cSSH.Disconnect()
cSSH.Dispose()
So we then put "&" at the end of the RunCommand string; RunCommand no longer blocks, but the Linux process is ended prematurely when the app returns from the function that called .RunCommand to start it:
cSSH.Connect()
cSSH.RunCommand("<our path>linux_process &")
cSSH.Disconnect()
cSSH.Dispose()
return   <<<<<< CAUSES linux_process TO IMMEDIATELY END
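A plain "&" leaves the background process attached to the SSH session, so it dies when the channel closes. Detaching it with nohup and redirecting all three standard streams usually lets it outlive the session. A sketch of the remote command string to pass to RunCommand (the path is a placeholder for "<our path>"):

```shell
# Detach the process from the SSH channel: nohup ignores the hangup
# signal, and redirecting stdin/stdout/stderr means no stream keeps the
# channel (and therefore the process) tied to the session.
nohup /our/path/linux_process > /dev/null 2>&1 &
```

If the remote shell is not bash, `setsid` or appending `& disown` are common variations on the same idea.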
Include in remote script fails when running it via SSH
I run a script locally, which when finished makes an exec call to a script on a remote server.
exec("ssh email@example.com \"php /full/path/to/script.php\"", $output, $return);
It presents me with this error:
PHP Warning: require(../resources/vendor/autoload.php): failed to open stream: No such file or directory in /full/path/to/script.php on line 3
I have tried changing the required script to its full path with no success. Any ideas?
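The relative require path (../resources/vendor/autoload.php) resolves against the SSH session's working directory, which is the remote user's home directory, not the script's own directory. Changing into the script's directory inside the remote command is a minimal fix (paths and address are the ones from the question):

```shell
# Run the remote script from its own directory so relative requires
# resolve the same way they do when the script is run locally.
ssh email@example.com "cd /full/path/to && php script.php"
```

Alternatively, making the require path absolute inside the script itself, e.g. require __DIR__ . '/../resources/vendor/autoload.php';, removes the dependency on the caller's working directory entirely.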
How to fix while loop in bash only executing once
I'm creating a bash script to ssh onto multiple servers. I'm using a while loop with a .txt file containing a list of servers to do so.
This is so we can see which servers are available to be connected to via ssh on a specific admin server. If the server times out, it cannot be accessed via ssh. If it connects, it can be accessed via ssh.
#!/bin/bash
file=$1
while IFS= read -r line; do
  echo "$line"
  timeout 10s ssh sysadmin@$line "echo $line"
  if [ "$?" = 0 ]; then
    echo "worked"
  fi
  if [ $? -eq 124 ]; then
    echo "Timeout"
    echo "failed"
    echo "$line"
  fi
  echo "$line"
done <"$file"
The expected result is that if it fails to connect to a server, it will print 'failed' and move onto the next server. If it works, it will print 'worked' and move onto the next server.
The actual result is that it will loop once and exit. How do I fix this so it will loop until the file has no more lines to be read?
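The classic cause of a read loop running only once is that ssh reads the rest of the server list from the loop's stdin during the first iteration; ssh's -n flag (or redirecting its stdin from /dev/null) leaves the remaining lines for read. Note also that the script's second $? tests the first [ command, not timeout. A sketch of a corrected loop, keeping the question's variable names:

```shell
#!/bin/bash
file=$1
while IFS= read -r line; do
  # -n stops ssh from swallowing the remaining lines of "$file" via stdin.
  # Testing the command directly avoids the stale-$? problem.
  if timeout 10s ssh -n sysadmin@"$line" "echo $line"; then
    echo "worked: $line"
  else
    echo "failed (timeout or unreachable): $line"
  fi
done < "$file"
```

With BatchMode=yes added to the ssh options, a server prompting for a password also fails fast instead of hanging until the timeout.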
How to enable clean url? (VPS - Debian 9)
I have a VPS Server (Debian 9) and I want to have clean url.
If I enter, for example, "example.com/example", an "Internal Server Error" pops up instead of the page.
What do I need to do to make this error disappear and show the page?
Raspberry pi proxy server with multiple 3g dongles
I have a program that I use which needs to route web traffic through different IP addresses. So what I have done is I connected a USB hub with 10 3G dongles to a windows server. I then use a proxy server software called CC proxy to route the traffic to the different 3G dongles. This scenario works fine for what I need but I want to try and remove the dependency on needing a physical server. The goal is to use a VPS and remotely access the 3G dongles connected to a raspberry pi. Is this even possible?
I have tried to use 3proxy to route traffic incoming on eth0 (RPi ethernet) to eth1 (3G dongle). I then use the public IP address of the RPi in the proxy settings of the Chrome browser on the VPS. I am then prompted for login details, which are specified in the 3proxy.cfg file. The login does succeed, but there is no internet connection. Port forwarding on the router is set up so that any traffic on port 3128 is forwarded to eth0 on the RPi.
If I set 3proxy to route traffic from eth0 to eth0 (i.e. the VPS uses the RPi's ethernet), this works fine. I just don't understand why I cannot redirect traffic from eth0 to eth1. I know that eth1 works: if I disconnect eth0, I can browse the web just fine using eth1, and when googling "My IP" I can see that my connection is on 3 Ireland's mobile network.
Any help would be much appreciated.
192.168.0.66 (RPi ethernet eth0), 192.168.8.210 (3G dongle eth1)
#!/usr/local/bin/3proxy
daemon
pidfile /etc/3proxy/3proxy.pid
nserver 126.96.36.199
nserver 188.8.131.52
nscache 65536
timeouts 1 5 30 60 180 1800 15 60
users root:CL:passwd
log /var/log/3proxy.log D
logformat "- +_L%t.%. %N.%p %E %U %C:%c %R:%r %O %I %h %T"
archiver rar rar a -df -inul %A %F
rotate 30
auth strong
allow root
proxy -p3128 -a -i192.168.0.10 -e192.168.8.21
Anyone know how to restore emails to webmail on cpanel?
Using a GoDaddy Windows VPS. The Sent items suddenly disappeared in webmail and in Outlook via IMAP. The sent emails are still on the server in the .Sent folder, but they suddenly stopped showing up in webmail.
How to dispatch a job from one AWS region to another?
I am receiving emails with Amazon SES, which is only available in a few regions, but most of my code/servers are in a different region.
I am trying to receive emails in eu-west-1 (Ireland) and process them in eu-central-1 (Frankfurt).
So far, I am able to successfully receive emails, store them in s3 (eu-west), and manually retrieve them from my servers in eu-central-1 (so reading a bucket across regions isn't a problem).
But I am trying to automate this process, and in order to do this I need some kind of trigger to notify my workers in eu-central-1 whenever a new email/file becomes available for processing in my s3 bucket in Ireland. The natural thing I can think of is a message queue. We've recently switched from SQS to a Redis-based queue, so I thought I could make it so that, after receiving an email and successfully encrypting/writing it to the s3 bucket, an SNS + Lambda function (in eu-west-1) could enqueue a job on my Redis instances in eu-central-1.
I'd like the system to be secure and isolated; ideally, I don't want my Redis server to be publicly available on the internet. So I was thinking of attaching my Lambda function to a VPC in eu-west-1 and connecting it to my VPC in eu-central-1 that holds my Redis servers. But if this is too tedious, I can probably make a compromise...
How can I configure a Lambda function in one AWS region so it can push a job to a Redis instance in another region privately?
Example of adding secondary CIDR block for a subnet in Cloudformation Template
I'm trying to create a new public/private subnet pair in my VPC stack using Cloudformation but I don't have any IP space left, so I want to add a Secondary CIDR block to my VPC and create the new subnet from that secondary block. Are there any YAML code examples of a template that does so? I can't find one anywhere, and I'm having trouble figuring out what to add where.
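A minimal sketch, assuming the VPC's primary block is 10.0.0.0/16 and picking 10.1.0.0/16 as the secondary (all names and ranges illustrative): the secondary block is its own AWS::EC2::VPCCidrBlock resource, and any subnet carved from it needs a DependsOn so CloudFormation doesn't try to create the subnet before the block is associated.

```yaml
SecondaryCidr:
  Type: AWS::EC2::VPCCidrBlock
  Properties:
    VpcId: !Ref VPC
    CidrBlock: 10.1.0.0/16

NewPrivateSubnet:
  Type: AWS::EC2::Subnet
  DependsOn: SecondaryCidr
  Properties:
    VpcId: !Ref VPC
    CidrBlock: 10.1.0.0/24
    AvailabilityZone: !Select [0, !GetAZs '']
```

The new public subnet works the same way with a different /24 from the secondary range, plus the usual route-table association to the Internet Gateway.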
Creating a VPC Interface Endpoint for SQS in Cloud Formation
I was wondering if it is possible to create a resource in my CloudFormation file that creates a VPC Endpoint for SQS. I was able to do this for S3 and DynamoDB, but I believe that is because they are Gateway endpoints.
For now I have defined my SQS resource as:
SQSEndpoint:
  Type: AWS::EC2::VPCEndpoint
  Properties:
    PolicyDocument:
      Version: 2012-10-17
      Statement:
        - Effect: Allow
          Principal: '*'
          Action:
            - 'sqs:*'
          Resource:
            - '*'
    ServiceName: !Join
      - ''
      - - com.amazonaws.
        - !Ref 'AWS::Region'
        - .sqs
    SubnetIds:
      - !Ref PrivateSubnet
      - !Ref PublicSubnet
    VpcId: !Ref 'VPC'
    VpcEndpointType: Interface
However, when I try to create the stack I get the error:
Reading this blog post from AWS, it seems like it should be possible, though I can't find any examples or documentation. Any ideas?
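The error text is missing above, but one common stumbling block at the time was that Interface-type endpoints did not accept a PolicyDocument the way Gateway endpoints do. A sketch of the resource without the policy, using the security-group and private-DNS settings that Interface endpoints take instead (the EndpointSecurityGroup reference is illustrative; !Sub replaces the !Join purely for brevity):

```yaml
SQSEndpoint:
  Type: AWS::EC2::VPCEndpoint
  Properties:
    ServiceName: !Sub com.amazonaws.${AWS::Region}.sqs
    VpcId: !Ref VPC
    VpcEndpointType: Interface
    PrivateDnsEnabled: true
    SubnetIds:
      - !Ref PrivateSubnet
    SecurityGroupIds:
      - !Ref EndpointSecurityGroup
```

If the stack error instead complains about the service name or subnets, comparing against the exact error message would be the next step; this sketch only covers the PolicyDocument case.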