How to pull an existing state file from S3 and push it to a different path in the same bucket?
I have AWS resources configured, including a Cognito resource, and the state file has been pushed to an S3 bucket. Now I would like to pull the existing state file from S3 and push only the Cognito-related information to a different path in the same S3 bucket. How can I achieve this?
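One way to do this with the Terraform and AWS CLIs, sketched below. The resource address aws_cognito_user_pool.main, the bucket name my-bucket, and the key cognito/terraform.tfstate are placeholders for your own values:

    # 1. Pull the current state from the configured S3 backend into a local file.
    terraform state pull > full.tfstate

    # 2. Move only the Cognito resources into a fresh local state file.
    #    List your real addresses first with: terraform state list
    #    Note: this also removes the moved resources from full.tfstate.
    terraform state mv \
      -state=full.tfstate -state-out=cognito.tfstate \
      aws_cognito_user_pool.main aws_cognito_user_pool.main

    # Repeat the mv for every Cognito-related address (clients, domains, ...).

    # 3. Upload the new state file to a different path in the same bucket.
    aws s3 cp cognito.tfstate s3://my-bucket/cognito/terraform.tfstate

A configuration that should own the Cognito resources can then point its S3 backend at that new key. The mv only edits the local full.tfstate copy; if you also want the Cognito entries removed from the original remote state, push it back with terraform state push full.tfstate.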
See also questions close to this topic
- I want to store media files in an S3 bucket, reference them from a MySQL table, and retrieve them using a Spring Boot application. How do I do that?
- AWS S3 redirect: edited host URL is not reflected in the redirect
Problem
I want to forward calls to my Route53 URL, myurl.com, to another website, say google.com, and followed the steps AWS describes. For the S3 part ("Redirecting requests for a bucket's website endpoint to another host") I set it up as described: S3 -> Properties -> Static website hosting -> Redirect requests for an object -> Host = bing.com. The URL http://myurl.com.s3-website-eu-west-1.amazonaws.com got created and I was forwarded to bing.com. But after I changed the host I still get forwarded to bing.com and not to google.com, which is what I configured.
What I tried
If I delete my bucket "myurl.com" and create a new one with the new host google.com, I still get forwarded to bing.com.
Question
- Does it take some time for this change to propagate, and is that documented somewhere?
- Is it a bug?
- Am I missing something?
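Changes to an S3 website configuration normally take effect quickly; as far as I know, no propagation delay is documented for this setting, so a stale redirect is more often the browser caching the earlier 301, or a CDN/DNS layer in front of the bucket. One way to check what S3 actually has configured, and to set the redirect host explicitly, is the AWS CLI (bucket name is a placeholder):

    # Show the website configuration S3 currently has for the bucket.
    aws s3api get-bucket-website --bucket myurl.com

    # Overwrite it so every request is redirected to the new host.
    aws s3api put-bucket-website --bucket myurl.com \
      --website-configuration '{"RedirectAllRequestsTo":{"HostName":"google.com","Protocol":"https"}}'

If get-bucket-website already shows google.com but the browser still lands on bing.com, retest in a private window or with curl -I to rule out a cached 301.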
- How to upload a data file to a Heroku-deployed Dash web app
I have been researching how external users can upload data to my Dash app, and it seems the only way is the dcc.Upload component (a drag-and-drop component on the UI side - https://dash.plotly.com/dash-core-components/upload) … To clarify, it is this uploaded file that will be read into pandas and fed into the callbacks for analysis and visualisation.
I also read about the Heroku simple-file-upload config (https://devcenter.heroku.com/articles/simple-file-upload) and an AWS S3 bucket (https://devcenter.heroku.com/articles/s3) as the necessary way to store static data uploaded to the app. Nowhere in the Dash dcc.Upload docs is the storage of the uploaded file mentioned, i.e. the web server part and the UI are not linked together in any documentation I could find.
Can anyone explain to a total web dev newbie: once deployed to Heroku, does dcc.Upload require setting up the Heroku simple-file-upload config or an S3 storage bucket? If not, how does it handle storage of the file? Is there any other way for a user to upload data to be used in the web app?
PS: I am not even sure whether the data file the user uploads counts as a static file or a dynamic one, as it will obviously be processed within the code for the analysis to happen (i.e. grouped, sorted, filtered, etc.).
- Terraform: apply a value only to the first resource
I have 3 SQS endpoints, and I'm trying to apply a value only to the first created resource; however, the value is being applied to the other 2 resources as well. My resource is aws_vpc_endpoint, created with a for_each expression. One of its attributes is private_dns_enabled, which is a bool, and it must be set only on the first created resource; the other created resources must be left alone. How can I set the value only on the first created resource? Otherwise it conflicts with this error:
Error creating VPC Endpoint: InvalidParameter: private-dns-enabled cannot be set because there is already a conflicting DNS domain for sqs.eu-west-1.amazonaws.com in the VPC
Maybe there is a function or pattern for my specific case.
sqs.tf
... truncated ... private_dns_enabled = lookup(var.parameters[0].sqs[0], "private_dns", false) ... truncated ...
P.S. If it is set to false there is no problem; right now only 1 SQS endpoint is created and the other 2 fail.
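A minimal sketch of one way to do this, assuming the endpoints are driven by a map variable (the variable name and shape here are hypothetical): derive the bool from each.key instead of a fixed lookup, so only the first key in a stable sort order gets private DNS.

    # Hypothetical input: one map entry per SQS endpoint to create.
    variable "sqs_endpoints" {
      type = map(object({ vpc_id = string, subnet_ids = list(string) }))
    }

    resource "aws_vpc_endpoint" "sqs" {
      for_each          = var.sqs_endpoints
      vpc_id            = each.value.vpc_id
      subnet_ids        = each.value.subnet_ids
      service_name      = "com.amazonaws.eu-west-1.sqs"
      vpc_endpoint_type = "Interface"

      # true only for the first key in sorted order, false for the others,
      # which avoids the conflicting private DNS domain in the VPC.
      private_dns_enabled = each.key == sort(keys(var.sqs_endpoints))[0]
    }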
- Resolving broken deleted state in Terraform
When Terraform deploys something and then times out while it is in a state like pending or deleting, the resource in AWS will eventually move on to successful or deleted, but this never gets reflected in the Terraform state, so when I try to run something again it errors because the states don't match.
Error: error waiting for EC2 Transit Gateway VPC Attachment (tgw-attach-xxxxxxxxx) deletion: unexpected state 'failed', wanted target 'deleted'. last error: %!s(<nil>)
What is the correct way to handle this? Can I do something within Terraform to get it to recognise the latest state in AWS? Is this a bug on Terraform's part?
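The usual way out is to reconcile the state by hand with the state subcommands; a sketch, where aws_ec2_transit_gateway_vpc_attachment.example is a placeholder for the real resource address:

    # Let Terraform re-read the real-world status into the state.
    terraform apply -refresh-only    # or: terraform refresh on older versions

    # If AWS has finished deleting the attachment but the state still tracks it,
    # drop the stale entry:
    terraform state rm aws_ec2_transit_gateway_vpc_attachment.example

    # If the resource actually exists but the state is wrong, re-import it:
    terraform import aws_ec2_transit_gateway_vpc_attachment.example tgw-attach-xxxxxxxxx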
- How to generate two keys with the same value in Terraform?
I'm attempting to generate 2 azurerm_key_vault_key resources: https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/key_vault_key. I need both of them to have the same value (to be exactly the same key, but in different key vaults). Is it possible to achieve that? I can't find any way to explicitly define the key's value so that I could generate it beforehand. Is it possible to have 2 keys like that?
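As far as I can tell, azurerm_key_vault_key always generates the key material inside Key Vault and does not accept a supplied value, so two identical keys can't be created with that resource alone. A possible workaround, if storing the material as a Key Vault secret rather than a key object is acceptable, is to generate it once in Terraform and write it to both vaults; a sketch, where the vault references are placeholders:

    # Generate the key material once (note: it is saved in the Terraform
    # state, so treat the state file as sensitive).
    resource "tls_private_key" "shared" {
      algorithm = "RSA"
      rsa_bits  = 2048
    }

    # Write the same PEM into both vaults as a secret.
    resource "azurerm_key_vault_secret" "shared_a" {
      name         = "shared-key"
      value        = tls_private_key.shared.private_key_pem
      key_vault_id = azurerm_key_vault.first.id
    }

    resource "azurerm_key_vault_secret" "shared_b" {
      name         = "shared-key"
      value        = tls_private_key.shared.private_key_pem
      key_vault_id = azurerm_key_vault.second.id
    }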