How to use AWS to run a deep learning model on images sent from a Raspberry Pi
I want to send an image captured by the Raspberry Pi camera to the cloud, where it will be processed by an already-trained deep learning model, and then have the commands (the model output) sent back to the Raspberry Pi. I have been searching a lot on the internet but could not find anything. Please help me. Thank you.
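A common pattern is: the Pi POSTs each captured frame to an HTTP inference endpoint in the cloud (for example a small Flask app on an EC2 instance, or API Gateway + Lambda in front of a SageMaker endpoint) and reads the model's output back from the JSON response. A minimal sketch of the Pi side, where the endpoint URL and the {"image": ...}/{"command": ...} JSON shapes are assumptions for illustration, not a real AWS API:

```python
# Sketch of the Raspberry Pi side: capture -> encode -> POST -> act on reply.
# ENDPOINT and the JSON field names are hypothetical placeholders.
import base64
import json
import urllib.request

ENDPOINT = "https://example.com/predict"  # hypothetical inference endpoint


def encode_image(raw_bytes: bytes) -> str:
    """Base64-encode raw JPEG bytes so they can travel inside JSON."""
    return base64.b64encode(raw_bytes).decode("ascii")


def build_request(raw_bytes: bytes) -> bytes:
    """Build the JSON body the (assumed) server expects."""
    return json.dumps({"image": encode_image(raw_bytes)}).encode("utf-8")


def send_frame(raw_bytes: bytes) -> str:
    """POST one frame and return the command string produced by the model."""
    req = urllib.request.Request(
        ENDPOINT,
        data=build_request(raw_bytes),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        return json.loads(resp.read())["command"]
```

On the server side you would decode the base64 image, run your trained model on it, and return {"command": "..."}. If frames are infrequent, an alternative design is uploading each image to S3 and letting an S3 event trigger a Lambda, with the command published back to the Pi over AWS IoT (MQTT).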
See also questions close to this topic
How do I launch an AWS EC2 instance using an AWS launch template with Terraform?
I am trying to build an AWS EC2 Red Hat instance using an AWS launch template with Terraform.
I can create a launch template with a call to Terraform's aws_launch_template resource. My question is: how do I use Terraform to build an EC2 server from the created launch template?
What Terraform aws provider resource do I call?
Many thanks for your help!
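One way to do this, sketched below with illustrative "example" names and a placeholder AMI ID, is to reference the launch template from an aws_instance resource via its launch_template block:

```hcl
# Sketch: create a launch template, then launch an instance from it.
resource "aws_launch_template" "example" {
  name_prefix   = "example-"
  image_id      = "ami-0123456789abcdef0" # your RHEL AMI here
  instance_type = "t3.micro"
}

resource "aws_instance" "example" {
  launch_template {
    id      = aws_launch_template.example.id
    version = "$Latest"
  }
}
```

If you want a fleet rather than a single server, the aws_autoscaling_group resource can also take a launch_template block instead of a launch configuration.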
AWS A4B - What is the room skill parameter?
I'm working with Alexa for Business and I was wondering: what the heck is the room skill parameter? I haven't had a use case for it yet, but I was curious what it is used for.
Thanks for your time!
Cloud-init runcmd syntax for a command that appends a line to a file
I'm trying to run this simple command, but it keeps failing:
echo "secret_backend_command: /home/ec2-user/dd-get-secrets.py" >> /etc/datadog-agent/datadog.yaml
I tried all the following:
- [ sh, -c, echo "secret_backend_command: /home/ec2-user/dd-get-secrets.py" >> /etc/datadog-agent/datadog.yaml ]
- 'sh -c "echo \"secret_backend_command: /home/ec2-user/dd-get-secrets.py\" >> /etc/datadog-agent/datadog.yaml"'
- sh -c "echo 'secret_backend_command: /home/ec2-user/dd-get-secrets.py' >> /etc/datadog-agent/datadog.yaml
Any idea how to fix these quotes?
Note: I tested with another file rather than datadog.yaml, because I'm getting "Permission denied" for that one.
The examples I found here didn't work.
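Two variants that should parse, sketched below. The likely failure in the first attempt is that YAML sees the unquoted colon in secret_backend_command: and tries to parse the element as a mapping, so the whole element has to be quoted; note also that runcmd runs as root at boot, so the "Permission denied" on datadog.yaml should not occur there:

```yaml
runcmd:
  # List form: each element is one argv token; quote the whole shell snippet
  # so YAML does not misread the inner colon as a mapping.
  - [sh, -c, 'echo "secret_backend_command: /home/ec2-user/dd-get-secrets.py" >> /etc/datadog-agent/datadog.yaml']
  # Plain-string form: single-quote the whole command for YAML,
  # double-quote (escaped) inside for the shell.
  - 'sh -c "echo \"secret_backend_command: /home/ec2-user/dd-get-secrets.py\" >> /etc/datadog-agent/datadog.yaml"'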
Why does Raspbian show "no wireless interfaces found"?
When I go to the wireless settings, it shows the message "no wireless interfaces found". How can I solve this problem? I even reinstalled Raspbian, but it did not work.
Raspberry GPIO Control over http
I am trying to find a way to control the GPIOs of my Raspberry Pi via a website.
A possibility would be to install a web server on the RPi to host the website and use AJAX requests to execute my C programs that control the GPIOs; this would also be my preferred way. But I am struggling with some parts. Some motors are also connected to the RPi (through a driver, obviously), and I am worried that the user could press the button multiple times before the motor has even finished. That would call the same CGI program multiple times, and none of the instances would know about the others. Instead of turning the motor once, for example, the programs would make it turn more often. A context switch would also be a problem then.
How could I solve these problems? Can I create a single service running in the background that takes care of the GPIOs, while I send every AJAX request to this service?
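The single-background-service idea is sound: the web layer only submits commands, and one worker thread (or daemon process) executes them one at a time, rejecting presses that arrive while the motor is still busy. A minimal Python sketch of that serialization; the MotorService name is illustrative, and in practice run_command would invoke your existing C programs via subprocess:

```python
# Sketch: serialize motor/GPIO commands behind a single worker so
# concurrent HTTP requests can never overlap motor runs.
import queue
import threading


class MotorService:
    """Accepts commands; a single worker executes them one at a time."""

    def __init__(self, run_command):
        self._queue = queue.Queue(maxsize=1)  # at most one pending command
        self._run = run_command               # e.g. subprocess call to the C program
        self._worker = threading.Thread(target=self._loop, daemon=True)
        self._worker.start()

    def _loop(self):
        while True:
            cmd = self._queue.get()
            self._run(cmd)  # runs to completion before the next command starts
            self._queue.task_done()

    def submit(self, cmd) -> bool:
        """Return True if the command was accepted, False if we are busy."""
        try:
            self._queue.put_nowait(cmd)
            return True
        except queue.Full:
            return False
```

An HTTP handler (CGI script, Flask route, etc.) then just calls submit() and returns "busy" to the browser when it gets False, so a double-click cannot make the motor turn twice.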
Cannot access SSH on Raspbian from a different network
I am successfully able to connect to my Raspberry Pi running Raspbian Stretch Lite via SSH on my home network.
When I move my Raspberry Pi to another home network, I keep receiving "connection refused".
Ports on the router are forwarded, the SSH service is running, and everything else is working as expected.
The only thing I cannot understand is why I can connect from my home LAN and not from my parents' home LAN. And there is nothing on the internet (alias Google) about anything related. Any suggestions?
P.S. I think this may be related to authentication keys (something I don't understand very well).
Generate a .net model using a CNN (machine learning)
Can anyone please help me generate (train) a .net model of a CNN using image data, and make it usable for prediction, the way we generate a .h5 model? I am struggling to find a resource that can help me; if anyone has a useful reference or tutorial with guidance on this, please let me know.
No overfitting when increasing the number of epochs
I use a feedforward neural network with one hidden layer for my thesis. I have 600 training samples and 104 input and output values. I now want to show the properties of the neural network, and in particular demonstrate overfitting when I increase the number of epochs. To do so, I first wanted to find the optimum learning rate and number of hidden nodes, where I got the following results:
Based on that, I decided to choose a learning rate of 0.0125 and 250 hidden nodes. But with this set of parameters I still see no overfitting when I increase the number of epochs, as can be seen here:
In this plot I show my old set of parameters in blue; in theory I wanted to show how the result improves when I use the best set of parameters, but it just varies a bit. I also tested up to epoch 1000, but the accuracy there was still 0.830.
Does someone have an idea why this happens?
Thanks a lot for your help!
Serving multiple deep learning models from cluster
But I am not able to find any article that addresses the need to serve several models in a distributed manner. Q.1. Does TensorFlow Serving only serve models from a single machine? Is there any way to set up a cluster of machines running TensorFlow Serving, so that multiple machines serve the same model (working somewhat as master and slave, or load-balancing between them) while serving different models?
Q.2. Does similar functionality exist for other deep learning frameworks, say Keras, MXNet, etc. (not just restricting to TensorFlow, but serving models from different frameworks)?
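On Q.1: a single TensorFlow Serving process runs on one machine, but it can serve several models at once through a model config file, and you scale out by running identical instances on several machines behind an ordinary load balancer; there is no built-in master/slave coordination. A minimal models.config sketch, with illustrative model names and paths:

```
model_config_list {
  config {
    name: "model_a"
    base_path: "/models/model_a"
    model_platform: "tensorflow"
  }
  config {
    name: "model_b"
    base_path: "/models/model_b"
    model_platform: "tensorflow"
  }
}
```

This file is passed via tensorflow_model_server --model_config_file=/path/models.config. On Q.2: Keras models exported as SavedModels can be served the same way, and MXNet has its own model server (MXNet Model Server, later renamed Multi Model Server) that can host multiple models behind REST endpoints.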