How do I set the duration of the output of a job in Elastic Transcoder using Django?
I want to automate using Amazon's Elastic Transcoder to trim a video file in Django. My code so far is:
import boto.elastictranscoder

def trim_video(key, new_key, duration):
    pipeline_id = 'XXXXXXXXXXXX-XXXXXX'
    region = 'XXXXXXXX'
    transcoder_client = boto.elastictranscoder.connect_to_region(region)
    create_job_result = transcoder_client.create_job(**{
        'pipeline_id': pipeline_id,
        'input_name': {'Key': key},
        'output': {
            'Key': new_key,
            'PresetId': 'XXXXXXXXXXXXX-XXXXXX'
        }
    })
    print('Job has been created. The output key will be ' + new_key)
This code will cause the file to be transcoded but will not trim it. What do I add in order to trim the video?
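For reference, the Elastic Transcoder CreateJob API supports clipping through a Composition/TimeSpan element on the output, where StartTime and Duration are strings in HH:mm:ss.SSS (or seconds.SSS) format. A minimal sketch, assuming the boto version in use forwards these fields to the API unchanged:

create_job_result = transcoder_client.create_job(**{
    'pipeline_id': pipeline_id,
    'input_name': {'Key': key},
    'output': {
        'Key': new_key,
        'PresetId': 'XXXXXXXXXXXXX-XXXXXX',
        # Clip from the start of the input for `duration` seconds.
        'Composition': [{
            'TimeSpan': {
                'StartTime': '00:00:00.000',
                'Duration': '{:.3f}'.format(duration),
            }
        }]
    }
})

Note that Composition has since been deprecated in favor of specifying TimeSpan on the input, so depending on the API version the same TimeSpan dict may belong under 'input_name' instead.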
See also questions close to this topic
-
Django multiple ManyToManyField with same model
When I have only one ManyToManyField the model works, but when I add a second one the serializer doesn't save.
View:
@api_view(['POST'])
def update(request, pk):
    poll = Poll.objects.get(id=pk)
    serializer = PollSerializer(instance=poll, data=request.data)
    if serializer.is_valid():
        serializer.save()
    return Response(serializer.data)
Model:
class Poll(models.Model):
    title = models.CharField(max_length=200)
    option1 = models.CharField(max_length=200)
    option2 = models.CharField(max_length=200)
    option1total = models.IntegerField(default=0)
    option2total = models.IntegerField(default=0)
    owner = models.CharField(max_length=150, null=True)
    option1votes = models.ManyToManyField(User, related_name="option1votes")
    option2votes = models.ManyToManyField(User, related_name="option2votes")
Example request:
{'id': 17, 'title': 'What is your favorite programming language?', 'option1': 'Javascript', 'option2': 'Python', 'option1total': 2, 'option2total': 25, 'owner': None, 'option1votes': [], 'option2votes': [14]}
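For context, PollSerializer isn't shown above, so the following is only a sketch of one way it might declare both many-to-many fields explicitly so that each is independently writable; the import path for Poll and the queryset arguments are assumptions:

from django.contrib.auth.models import User
from rest_framework import serializers

from .models import Poll  # assumed location of the Poll model

class PollSerializer(serializers.ModelSerializer):
    # DRF represents writable M2M fields as lists of User primary keys;
    # declaring both explicitly makes the expected payload unambiguous.
    option1votes = serializers.PrimaryKeyRelatedField(
        many=True, queryset=User.objects.all(), required=False)
    option2votes = serializers.PrimaryKeyRelatedField(
        many=True, queryset=User.objects.all(), required=False)

    class Meta:
        model = Poll
        fields = '__all__'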
-
HTML if div content is long show button
I have a Django blog app I made. The posts are quite long, so when a post is long I want to show only part of the div's text, with a way to reveal more. Here is my HTML:
<a style="text-decoration: none;color: #000;" href="{% url 'post-detail' post.id %}">
    <div id="content-blog"><p class="article-content">{{ post.content|safe }}</p></div>
</a>
I want something like this:
<script>
    content = document.getElementById("content-blog");
    max_length = 1000; // characters
    if (content > max_length) {
        // Do something
    }
</script>
So how would I get this to actually work? To summarize: I want to check whether the div's text is longer than 1000 characters and, if it is, run the if-block above. Thanks.
-
Reload DataTable content with jQuery ajax call
I have a DataTable and I want to filter its content depending on what the user selects in a form. Here is a sample of the code I use:
$(document).on('click', '#filter_btn', filterList)

function filterList(event) {
    event.preventDefault()
    var form_data = $('.filter-form').serialize()
    var url = window.location.origin + '/my-amazing-url/'
    $('#dataTable-x').DataTable({
        ajax: {
            url: url,
            type: 'get',
            dataType: 'json',
            data: form_data
        }
    })
    $('#dataTable-x').DataTable().ajax.reload()
}
On server side Django returns following:
...
data = self.get_queryset().values()
return JsonResponse(data)
...
Yet nothing is changed. How should I modify the code? Thanks.
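For what it's worth, two server-side details often bite here: JsonResponse only serializes dicts unless safe=False is passed, and DataTables' default ajax handler expects the rows under a "data" key. A minimal sketch honoring both, with MyModel standing in as a hypothetical model for self.get_queryset():

from django.http import JsonResponse

def filtered_list(request):
    # Materialize the queryset; a .values() QuerySet alone is not
    # JSON-serializable.
    rows = list(MyModel.objects.all().values())
    # Wrapping in {'data': ...} matches DataTables' default dataSrc and
    # keeps the payload a dict, so safe=False is not needed.
    return JsonResponse({'data': rows})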
-
S3 - Access object inside "folder" using boto3
Dealing with S3, I know there is no such thing as a folder, but I'll use the term to illustrate my problem. The bucket name is "Exemple". Inside the bucket we have "/folder1", inside folder1 we have "/folder2", and inside folder2 we have "/goalFile.csv". I am sure the problem is not in the access key or the secret access key. Here is what I have been trying:
import sys

if sys.version_info[0] < 3:
    from StringIO import StringIO  # Python 2.X
else:
    from io import StringIO  # Python 3.X

import boto3

client = boto3.client(
    's3',
    aws_access_key_id='myKeyID',
    aws_secret_access_key='mySecretKeyID')
bucket_name = 'exemple'
object_key = "/folder1/folder2/goalFile.csv"
csv_obj = client.get_object(Bucket=bucket_name, Key=object_key)
body = csv_obj['Body']
csv_string = body.read().decode('utf-8')
I got the following error message: "ClientError: An error occurred (SignatureDoesNotMatch) when calling the GetObject operation: The request signature we calculated does not match the signature you provided. Check your key and signing method."
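As an aside, two things worth checking, stated as assumptions since the real bucket isn't visible here: SignatureDoesNotMatch usually points at the credentials themselves (for example a secret key containing special characters or stray whitespace), and S3 keys normally have no leading slash, so "/folder1/folder2/goalFile.csv" names a different object than "folder1/folder2/goalFile.csv". A minimal sketch with the slash removed:

import boto3

client = boto3.client('s3')  # or pass explicit credentials as above
# Hypothetical key, identical to the one in the question minus the
# leading slash.
csv_obj = client.get_object(Bucket='exemple', Key='folder1/folder2/goalFile.csv')
csv_string = csv_obj['Body'].read().decode('utf-8')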
-
Read .pptx file from s3
I'm trying to open a .pptx file from Amazon S3 and read it with the python-pptx library. This is the code:
from pptx import Presentation
import boto3

s3 = boto3.resource('s3')
obj = s3.Object('bucket', 'key')
body = obj.get()['Body']
prs = Presentation(body)
It gives "AttributeError: 'StreamingBody' object has no attribute 'seek'". Shouldn't this work? How can I fix it? I also tried calling read() on body first. Is there a solution that avoids actually downloading the file?
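One likely fix, sketched here: python-pptx needs a seekable file-like object, and botocore's StreamingBody is not seekable. Buffering the bytes in an in-memory io.BytesIO satisfies that without writing anything to disk:

import io

import boto3
from pptx import Presentation

s3 = boto3.resource('s3')
obj = s3.Object('bucket', 'key')
# Read the whole object into memory and hand python-pptx a seekable buffer.
prs = Presentation(io.BytesIO(obj.get()['Body'].read()))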
-
AWS Cost Explorer API not grabbing all data - NextPageToken possible issue?
I have a script that successfully grabs cost data from AWS using boto3 and the Cost Explorer API. When I request data from January to December, I only get data for January through April. Requesting April–July, August–October, and November–December separately works and gives me the data. I am trying to get the full January–December data in one run of the script, and I am not getting an error message. I believe the problem is with NextPageToken and the while loop not working correctly. How do I fix the NextPageToken handling so it gives me all results?
(Note: boto3's paginators do not support this particular API call.)
My code is very similar to this:
https://github.com/hjacobs/aws-cost-and-usage-report/blob/master/aws-cost-and-usage-report.py
results = []
token = None
while True:
    if token:
        kwargs = {'NextPageToken': token}
    else:
        kwargs = {}
    data = cd.get_cost_and_usage(
        TimePeriod={'Start': '2020-01-01', 'End': '2020-12-10'},
        Granularity='DAILY',
        Metrics=['AmortizedCost'],
        GroupBy=[{'Type': 'DIMENSION', 'Key': 'LINKED_ACCOUNT'}],
        Filter={'Dimensions': {'Key': 'LINKED_ACCOUNT',
                               'Values': ['123948500267568']}},
        **kwargs)
    for info in data['ResultsByTime']:
        for group in info['Groups']:
            print(group['Keys'][0],
                  info['TimePeriod']['Start'],
                  group['Metrics']['AmortizedCost']['Amount'])
    token = data.get('NextPageToken')
    if not token:
        break
-
Amazon MWS "Access Denied" issue
I'm new to accessing Amazon's MWS API. I was trying to connect using boto and received an "Access Denied" error. On researching, I found that the host value in boto's connection.py needs to be changed to the marketplace I'm trying to access, which is Canada in my case. I changed it from 'mws.amazonservices.com' to 'mws.amazonservices.ca', but I still receive the same error. Any help would be greatly appreciated.
My code is:
from boto.mws.connection import MWSConnection

accessKey = "XXXXXXXXXXXXXXXXXXXXXXXXX"
merchantID = "XXXXXXXXX"
marketplaceID = "A2EUQ1WTGCTBG2"  # Amazon.ca
secretKey = "XXXXXXXXXXXXXXXXXXXXXXXX"

mws_test = MWSConnection(accessKey, secretKey)
mws_test.Merchant = merchantID
mws_test.SellerId = merchantID
mws_test.MarketplaceId = marketplaceID
response = mws_test.list_orders(
    CreatedAfter='2020-05-09T00:00:00Z',
    MarketplaceId=[marketplaceID],
    OrderStatus=['Shipped', 'Unshipped'])
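As a side note, a sketch rather than a verified fix: boto's MWSConnection accepts a host keyword, so the Canadian endpoint can be set per connection instead of editing connection.py inside the installed package:

from boto.mws.connection import MWSConnection

# Point this connection at the Canadian MWS endpoint (hostname taken
# from the question) without patching boto itself.
mws_test = MWSConnection(accessKey, secretKey, host='mws.amazonservices.ca')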
-
boto3 SQS: incorrect URL when endpoint_url is not specified
Do I always need to specify endpoint_url when creating a boto3 client? Why can't I just pass QueueUrl as a method argument?

# boto3==1.16.51
import boto3

client = boto3.client('sqs')
messages = client.receive_message(
    QueueUrl='https://sqs.eu-central-1.amazonaws.com/325672072888/event-queue-test',
    WaitTimeSeconds=2,
    MaxNumberOfMessages=1,
    AttributeNames=["All"],
)
Exception:
botocore.exceptions.ClientError: An error occurred (InvalidAddress) when calling the ReceiveMessage operation: The address https://eu-central-1.queue.amazonaws.com/ is not valid for this endpoint.
It seems to fall back to a default SQS endpoint. But why doesn't it take the value from QueueUrl?
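For what it's worth, boto3 derives the HTTP endpoint from the client's configured region, not from the QueueUrl request parameter. A sketch of one workaround, assuming the queue really lives in eu-central-1 as its URL suggests:

import boto3

# Creating the client in the queue's region lets boto3 resolve a valid
# endpoint without hardcoding endpoint_url.
client = boto3.client('sqs', region_name='eu-central-1')
messages = client.receive_message(
    QueueUrl='https://sqs.eu-central-1.amazonaws.com/325672072888/event-queue-test',
    WaitTimeSeconds=2,
    MaxNumberOfMessages=1,
    AttributeNames=["All"],
)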
-
HLS FLAC Audio Stream
I have a FLAC audio file (24bit/192Kbps) from which I want to create an HLS-packaged adaptive bitrate stream, with the highest-quality rendition being the input format, FLAC (24bit/192Kbps), and the lower renditions being AAC at different bitrates.
I can produce the AAC outputs with AWS MediaConvert or AWS Elastic Transcoder, but as far as I can see neither supports creating the FLAC output.
Is there a reason I shouldn't be trying to do this? Assuming it is a perfectly valid objective, is there another tool/service that can do the job, or do I perhaps need to code something up myself around ffmpeg?
-
How to get the metadata (MaxWidth, MaxHeight, etc.) of a preset in AWS Elastic Transcoder
We are using AWS Elastic Transcoder in our ongoing project. I would like to get the metadata of an Elastic Transcoder preset by its preset ID, such as MaxWidth, MaxHeight, etc.
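A minimal sketch using boto3 (assuming boto3 is an option, and that the preset defines MaxWidth/MaxHeight rather than the older Resolution field); the region and preset ID below are placeholders:

import boto3

client = boto3.client('elastictranscoder', region_name='us-east-1')
# ReadPreset returns the full preset definition, including video settings.
preset = client.read_preset(Id='1351620000001-000010')['Preset']
print(preset['Video'].get('MaxWidth'), preset['Video'].get('MaxHeight'))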
-
Streaming video to browser via Amazon Elastic Transcoder
I'm going to stream videos from S3. I know it is simple: I did it by putting a presigned URL in the video src attribute, and everything is displayed with no problem. But if I visit my website through the IDM+ app's browser on Android, my videos can be downloaded, and I don't want that. So I want to transcode the videos (adding a watermark to the video, and making each video portion's extension .ts)*. After that, I think, the videos cannot be downloaded so easily via IDM+ (because of the multiple video portions). How can I display a transcoded video file in the browser? I only found how to check job status, create jobs, etc. via Amazon Elastic Transcoder, but nothing about displaying a transcoded video in the browser in Laravel/PHP.
*As far as I know, if the video is transcoded, the video extension would be .ts or something else. I'm new to the video streaming sphere, take that into account :) Thank you!
Technology: Laravel 8, AWS S3, Amazon Elastic Transcoder.
Thank you in advance.