How to get a non-decoded H.264 stream from a webcam using ffmpeg?
I want to get a file in non-decoded H.264 format to use in another client application. I know how to stream to disk using the command below, from the docs.
Example to encode video from /dev/video0:
ffmpeg -f v4l2 -framerate 25 -video_size 640x480 -i /dev/video0 output.mp4
High-level diagram
This is a typical producer-consumer problem:
Webcam =============> ffmpeg to video stream into file.  (producer)
                                   ^
                                   |
                                   |
Client ____________________________|  (consumer)
// reads only the non-decoded h264 format from the file
ffmpeg -f v4l2 -framerate 25 -video_size 640x480 -i /dev/video0 output.mp4 -c copy out.h264
out.h264 is the received H.264 bitstream, saved as a file.
I found this as a solution:
ffmpeg -pix_fmt yuv420p -y -f v4l2 -vcodec h264 -i /dev/video0 out.h264
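If the webcam can emit H.264 natively (many UVC cameras can), the bitstream can be stream-copied straight to a file with no decoding at all. A minimal sketch, assuming a v4l2 device at /dev/video0 that advertises an H.264 format; the device path and capture parameters are illustrative:

```shell
CAM=/dev/video0
OUT=out.h264
if [ -e "$CAM" ]; then
    # Ask the driver which formats the camera offers; look for an H.264 entry.
    ffmpeg -f v4l2 -list_formats all -i "$CAM"
    # If H.264 is listed, copy the bitstream as-is (no decode/encode cycle):
    ffmpeg -f v4l2 -input_format h264 -framerate 25 -video_size 640x480 \
           -i "$CAM" -c:v copy "$OUT"
fi
```

If the camera only delivers raw frames, `-c:v copy` cannot produce H.264; you would have to encode once (e.g. with `-c:v libx264`), and writing to a `.h264` extension still yields a raw Annex B bitstream rather than an MP4 container.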
See also questions close to this topic
Unable to import FFmpeg based MediaPlayer
I have installed ffpyplayer on a Raspberry Pi 3 (Raspbian Stretch). I get an import error when I try to do this:
from ffpyplayer.player import MediaPlayer
This is what I get:
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/pi/.virtualenvs/cv/local/lib/python3.5/site-packages/ffpyplayer/player/__init__.py", line 10, in <module>
    from ffpyplayer.player.player import MediaPlayer
ImportError: /home/pi/.virtualenvs/cv/local/lib/python3.5/site-packages/ffpyplayer/player/player.cpython-35m-arm-linux-gnueabihf.so: undefined symbol: x264_levels
How do I fix this?
Flask send_file not sending file
I'm using Flask with send_file() to have people download a file off the server.
My current code is as follows:
@app.route('/', methods=["GET", "POST"])
def index():
    if request.method == "POST":
        link = request.form.get('Link')
        with youtube_dl.YoutubeDL(ydl_opts) as ydl:
            info_dict = ydl.extract_info(link, download=False)
            video_url = info_dict.get("url", None)
            video_id = info_dict.get("id", None)
            video_title = info_dict.get('title', None)
            ydl.download([link])
        print("sending file...")
        send_file("dl/"+video_title+".f137.mp4", as_attachment=True)
        print("file sent, deleting...")
        os.remove("dl/"+video_title+".f137.mp4")
        print("done.")
        return render_template("index.html", message="Success!")
    else:
        return render_template("index.html", message=message)
The only reason I have .f137.mp4 added is because I am using AWS C9 as my online IDE and I can't install FFmpeg to combine the audio and video on Amazon Linux. However, that is not the issue. The issue is that the download is never sent to the browser.
Here is the console output:
127.0.0.1 - - [12/Dec/2018 16:17:41] "POST / HTTP/1.1" 200 -
[youtube] 2AYgi2wsdkE: Downloading webpage
[youtube] 2AYgi2wsdkE: Downloading video info webpage
[youtube] 2AYgi2wsdkE: Downloading webpage
[youtube] 2AYgi2wsdkE: Downloading video info webpage
WARNING: You have requested multiple formats but ffmpeg or avconv are not installed. The formats won't be merged.
[download] Destination: dl/Meme Awards v244.f137.mp4
[download] 100% of 73.82MiB in 00:02
[download] Destination: dl/Meme Awards v244.f140.m4a
[download] 100% of 11.63MiB in 00:00
sending file...
file sent, deleting...
done.
127.0.0.1 - - [12/Dec/2018 16:18:03] "POST / HTTP/1.1" 200 -
Any and all help is appreciated. Thanks!
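Worth noting: send_file() builds a Response object, but it only reaches the client if the view returns it; as written, the view returns the render_template() response instead. A minimal sketch of the likely fix, with illustrative route and file names (not the original app's), deleting the file only after the response is produced:

```python
# Sketch: return the send_file() response, and defer the delete.
import os
import tempfile

from flask import Flask, after_this_request, send_file

app = Flask(__name__)

@app.route("/download/<name>")
def download(name):
    path = os.path.join(tempfile.gettempdir(), name)

    @after_this_request
    def cleanup(response):
        # Delete only after the response has been generated; removing the
        # file before send_file() runs would break the transfer.
        try:
            os.remove(path)
        except OSError:
            pass
        return response

    return send_file(path, as_attachment=True)  # returning this is the crucial part
```

In the original view, replacing the bare send_file(...) call with return send_file(...) (and moving the os.remove() into a hook like the one above) should make the browser actually receive the attachment.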
Libsourcey: Segmentation fault
I am running the WebRTC native video recorder demo application, but I am getting a segmentation fault (core dumped). Here is the cmake command:
cmake .. -DCMAKE_BUILD_TYPE=DEBUG -DBUILD_SHARED_LIBS=OFF -DBUILD_MODULES=OFF -DBUILD_APPLICATIONS=OFF \
  -DBUILD_SAMPLES=ON -DBUILD_TESTS=OFF -DWITH_WEBRTC=ON -DWITH_FFMPEG=ON -DBUILD_MODULE_base=ON \
  -DBUILD_MODULE_crypto=ON -DBUILD_MODULE_http=ON -DBUILD_MODULE_json=ON -DBUILD_MODULE_av=ON \
  -DBUILD_MODULE_net=ON -DBUILD_MODULE_socketio=ON -DBUILD_MODULE_symple=ON -DBUILD_MODULE_stun=ON \
  -DBUILD_MODULE_turn=ON -DBUILD_MODULE_util=ON -DBUILD_MODULE_uv=ON -DBUILD_MODULE_webrtc=ON \
  -DBUILD_SAMPLES_webrtc=ON -DWEBRTC_INCLUDE_DIR=/home/ubuntu/temp/webrtc-22215-ab42706-linux-x64/include \
  -DWEBRTC_LIBRARIES=/home/ubuntu/temp/webrtc-22215-ab42706-linux-x64/lib/ \
  -DWEBRTC_ROOT_DIR=/home/ubuntu/temp/webrtc-22215-ab42706-linux-x64 \
  -DBUILD_MODULE_openssl=ON -DOPENSSL_ROOT_DIR=/usr/local/ssl -DOPENSSL_LIBRARIES=/usr/local/ssl/lib/ \
  -DOPENSSL_INCLUDE_DIR=/usr/local/ssl/include/openssl/
The resolution is 640x480, the bit rate 128000, and the profile level 3.2.
[libx264 @ 0x7f63dc001600] using cpu capabilities: MMX2 SSE2Fast SSSE3 SSE4.2 LZCNT BMI1
[libx264 @ 0x7f63dc001600] profile High, level 3.2
Segmentation fault (core dumped)
I have tried to track it down with gdb and a backtrace, but couldn't find the cause:
[libx264 @ 0x7fffa4001600] profile High, level 3.2
Thread 19 "IncomingVideoSt" received signal SIGSEGV, Segmentation fault.
[Switching to Thread 0x7fffd76fd700 (LWP 21381)]
0x00000001000001e0 in ?? ()
bt
#0  0x00000001000001e0 in ?? ()
#1  0x00007ffff5380c6c in x264_stack_align () from /usr/lib/x86_64-linux-gnu/libx264.so.155
#2  0x00007fffd76f710c in ?? ()
#3  0x00007fffd76f7110 in ?? ()
#4  0x0000000000000000 in ?? ()
How to resolve this error?
Sending video with streaming support doesn't work with the telethon package
Sending a file with telethon is done with this code:
from telethon.tl.types import DocumentAttributeVideo
client.send_file('username', 'path')
I want to send a video file with streaming support. According to the telethon documentation, I have to use these parameters:
from telethon.tl.types import DocumentAttributeVideo
client.send_file('username', 'path', allow_cache=False,
                 supports_streaming=True,
                 attributes=(DocumentAttributeVideo(1727, 1280, 720),))
duration = 1727
width = 1280
height = 720
But when I upload the file, it will not be streamed.
Secure online video streaming
I am working on a website in ASP.NET which streams online videos as a subscription-based service. I am on the lookout for a secure video player which won't allow any user to download the video to his PC/laptop in any manner (using browser plugins etc.). Is there any online video player secure enough that it does not allow any kind of browser plugin to download the video to one's computer/laptop?
Any help in this regard would be appreciated.
Server side video stream recording
I need to set up a short 5-second video stream from a webcam and record it on the server. The option of recording the video on the client and sending it to the server with a POST request does not suit me, because it's not safe: someone could spoof the video and send a fake. I need to record the video directly on the server using WebRTC, because that is safe, but I have not found anything that helps me. I tried to use Kurento Media Server, but it compresses with the VP8 or H264 video codecs, which spoils the quality of the video, and my neural network cannot process it. I need to save the video in its original quality on the server. How can I do it? Or is there another way to transfer video to the server in its original quality, without the risk of getting a fake?
Apple MacBook Pro 2011 "No camera connected" issue
I am not able to connect to my webcam on a MacBook Pro 2011. I have tried all kinds of restarts and also the commands below in Terminal, but no luck:
- sudo killall VDACAssistant
- sudo killall AppleCameraAssistant
Please suggest how to troubleshoot this issue.
Thanks in advance.
All image pixel values become 205 after setting CAP_PROP_FRAME_WIDTH and CAP_PROP_FRAME_HEIGHT in OpenCV using webcam
I use a webcam with a default resolution of 640x480; the environment is OpenCV 4.0, Visual Studio 2017, and Windows 10. If I read directly from the webcam everything is fine, but after I set CAP_PROP_FRAME_WIDTH and CAP_PROP_FRAME_HEIGHT, I can still get an image from the webcam, but all pixel values become (205, 205, 205). The code is simple:
VideoCapture cap(0);
cap.set(CAP_PROP_FRAME_WIDTH, 320);
cap.set(CAP_PROP_FRAME_HEIGHT, 240);
Mat img;
cap.read(img);
What might cause the problem? Thank you!
How to capture an image with a web camera
I want to capture an image of a person with a web camera. How can I detect that a real person is present, rather than a photo of that person held up to the camera? Thanks
Where is pixel format stored in H.264 MP4 file?
I'm working on a transmuxer that will convert an H.264/AAC RTMP stream to a valid MP4 file. I'm mostly done. I'm parsing the AMF tag, reading the AVCDecoderConfigurationRecord and AACSpecificConfig, I'm generating a valid moov atom, etc.
After discovering and fixing a few bugs in my code, I've got a mostly valid MP4 file. However, when I attempt to read the video in ffprobe I get the following error:
[mov,mp4,m4a,3gp,3g2,mj2 @ 0x7f9fb4000b80] Failed to open codec in avformat_find_stream_info
    Last message repeated 1 times
[mov,mp4,m4a,3gp,3g2,mj2 @ 0x7f9fb4000b80] Could not find codec parameters for stream 1 (Video: h264 (avc1 / 0x31637661), none, 640x360): unspecified pixel format
Consider increasing the value for the 'analyzeduration' and 'probesize' options
It's unable to find the pixel format. Skimming through my AVCDecoderConfigurationRecord parsing logic (which is used to generate the avcC atom as part of the avc1 atom), I have the following:
// Parsed per: https://github.com/LiminWang/simple-rtmp-server/blob/master/trunk/doc/H.264-AVC-ISO_IEC_14496-15.pdf
var info = parseAVCConfig(packet);

// Fortunately my video sample has one of each of these
// I may need to concatenate multiple in the future
var sps = info.sps;
var pps = info.pps;

var avcc = box(
  types.avcC,
  new Uint8Array(
    [
      // Version
      0x01,
      // Profile
      info.profile,
      // Profile Compat
      info.compat,
      // Level
      info.level,
      // LengthSizeMinusOne, hard-coded to 4 bytes (copied HLS.js)
      0xfc | 3,
      // 3bit reserved (111) + numOfSequenceParameterSets
      0xE0 | sps.byteLength
    ]
      .concat(Array.from(sps))
      .concat([
        // NumOfPictureParametersets
        pps.byteLength
      ])
      .concat(Array.from(pps))
  )
);
As you can see, the avcC atom contains the profile, compat, and level -- but after that I just copy over the SPS and PPS directly from the AVCDecoderConfigurationRecord. Nowhere in the atom do I define a pixel format, so I assumed it was part of the SPS or PPS.
Looking at the spec for the AVCDecoderConfigurationRecord, there's nothing specifically called "pixel format", but there is a "chroma_format", "bit_depth_luma_minus8", and "bit_depth_chroma_minus_8" -- however these only exist if the profile is 100, 110, 122, or 244. My profile is 66 (and these bytes don't exist for me)
At the moment this proof of concept only has to support a single video, so worst-case scenario I can hard-code the pixel format to yuv420. But I don't even know where to put this information in the output MP4. Does it go in the avcC atom? Or the avc1 atom? Or the …
I've uploaded the two files to temporary storage. These links will expire in 24 hours (by then I'll get the files to a more permanent location)
- buffer.mp4: This is the file I'm creating, which does not work. ffprobe says it cannot find the pixel format. http://cs-download.limelight.com/llnw/staff/sbarnett/buffer.mp4?acct_id=24&e=1544737337&h=d09e8bbad0633e0630a3e3a2327fb5bb
- test.mp4: This is a segment of the same video converted to MP4 by ffmpeg, for comparison. http://cs-download.limelight.com/llnw/staff/sbarnett/test.mp4?acct_id=24&e=1544737438&h=52653b1e57046828575231eff97ef5fa
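For H.264 profile 66 (Baseline), the SPS carries no chroma_format_idc and decoders assume 4:2:0, so yuv420p should be implied by a well-formed SPS. One thing worth double-checking is the avcC layout itself: ISO/IEC 14496-15 requires a 16-bit length before each SPS/PPS entry and a count byte for each list, and a decoder that cannot recover the SPS from a malformed avcC can report exactly an unspecified pixel format. A hedged sketch of that layout (function and variable names are illustrative, in Python rather than the JavaScript above):

```python
# Sketch of the avcC body per ISO/IEC 14496-15: each parameter-set list has a
# count byte, and every entry is preceded by a 16-bit big-endian length.
import struct

def build_avcc_body(profile, compat, level, sps, pps):
    body = bytes([
        0x01,               # configurationVersion
        profile, compat, level,
        0xFC | 3,           # reserved (111111) + lengthSizeMinusOne (4-byte NAL lengths)
        0xE0 | 1,           # reserved (111) + numOfSequenceParameterSets (one SPS)
    ])
    body += struct.pack(">H", len(sps)) + sps   # 16-bit SPS length, then SPS bytes
    body += bytes([1])                          # numOfPictureParameterSets
    body += struct.pack(">H", len(pps)) + pps   # 16-bit PPS length, then PPS bytes
    return body
```

Compared with the snippet above, note that the byte after 0xFC | 3 is a count (0xE1 for one SPS), not 0xE0 | sps.byteLength, and that the two 16-bit length fields are mandatory.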
ffmpeg does not produce smooth videos from mkv h265
It's kind of subjective, but I'm not able to produce 100% smooth videos with ffmpeg. As input I use https://www.libde265.org/hevc-bitstreams/tos-1720x720-cfg01.mkv as an example. This is an H.265 (HEVC) MKV video which plays really badly in my VLC player on my Win7 laptop. Converting it to an H.264 video makes it play much better, but it still appears not to be 100% smooth. Especially in Vegas 9 it even hangs about once a second.
Other H.264 videos, even at 1080p or bigger, run perfectly in VLC and much better in Vegas, so it is not my laptop.
It also seems that there can be a lot of differences between one H.264 file and another. What could I try to make them smoother?
I'm using the following command to convert the video:
ffmpeg.exe -i INPUT_FILE -ac 2 -vf scale=trunc\\(oh*a/2\\)*2:480 -c:v libx264 -sn -dn -map_metadata -1 -map_chapters -1 -profile:v high -level:v 4.0 -pix_fmt yuv420p OUTPUT_FILE
How to extract SPS and PPS from RTMP stream (avc1 encoded)?
I'm working on an extension to Node Media Server to save incoming streams to disk as MP4. For this conversion to MP4 I'm leaning heavily on the Apple QuickTime Movie Specification, The ISO/IEC 14496-14 Specification (which I discovered in the Rust-MP4 GitHub repository for free), and The HLS.js Source Code
I'm testing with a single video at the moment. Once this works I'll start experimenting with other videos. For my use case I only need to support H.264 video and AAC audio.
Currently when an RTMP connection is established, the first 3 packets I receive are consistently:
1. an AMF metadata packet (RTMP cid = 6) containing information like video width, video height, bitrate, audio sample rate, etc.
2. an audio packet (RTMP cid = 4) containing 7 bytes of data. I assume this is the AAC config packet.
3. a video packet (RTMP cid = 5) containing 46 bytes of data. I assume this is the AVC config packet.
When writing the MP4 moov atom, there are two places where I need to utilize additional information not located in the AMF metadata (and presumably located in these two config packets):
- In the esds atom, the HLS.js source appends "config" data. I assume I just append the entire 7-byte payload from the audio config packet here.
- In the avcC atom, the HLS.js source appends the "sps" and "pps" data. This is the root of my issue.
Regarding the parsing of these 46 bytes, I found code in Node Media Server and HLS.js that seems to parse the same data. The difference between these two pieces of code is that Node Media Server expects an additional 13 bytes of data at the start of the packet. The packet I receive seems to contain these additional 13 bytes, so I simply follow their lead in extracting level information. The 46 bytes in particular are:
[0x17, 0x00, 0x00, 0x00, 0x00, 0x01, 0x42, 0xc0, 0x1f, 0xff, 0xe1, 0x00, 0x19, 0x67, 0x42, 0xc0, 0x1f, 0xa6, 0x11, 0x02, 0x80, 0xbf, 0xe5, 0x84, 0x00, 0x00, 0x03, 0x00, 0x04, 0x00, 0x00, 0x03, 0x00, 0xc2, 0x3c, 0x60, 0xc8, 0x46, 0x01, 0x00, 0x05, 0x68, 0xc8, 0x42, 0x32, 0xc8]
Breaking this down for the bytes I can easily parse (prior to the use of Exponential Golomb encoding):
[
  0x17,       // "frame type", specifies H.264 or HEVC
  0x00, 0x00, 0x00, 0x00, 0x01, // ignored. Reserved?
  0x42,       // profile
  0xc0,       // compat
  0x1f,       // level
  0xff,       // "info.nalu" (per Node Media Server source)
  0xe1,       // "info.nb_sps" (per Node Media Server source)
  0x00, 0x19, // "nal size"
  // Above here are the bits exclusively seen by Node Media Server (specific to RTMP?)
  // Below here are the bits passed to HLS.js as "unit.data" (common to all AVC1 streams?):
  0x67,       // "nal type"
  0x42,       // profile (again?)
  0xc0,       // compat (again?)
  0x1f,       // level (again?)
  // Below here, data is not necessarily byte-aligned as Exponential Golomb encoding is used
  // ...
]
Now the problem I'm running into: during the creation of the moov atom (and the avcC atom, specifically) I need to know both the sps and pps bytes. From the HLS.js source it looks like the sps may just be this video config packet minus the first 13 bytes. But how do I find the pps? Is it actually the last few bytes of this packet, and I should split it somewhere? Will it be delivered in another packet? If two video packets are to be expected, is there some way I should differentiate them so I know which one is sps and which one is pps?
If I can figure out this last little bit, then I should be completely done writing the moov atom (after which point I just need to figure out the proper format for the mdat atom and I should have working code).
Update: For the record, I just checked the fourth packet being delivered to see if it might contain pps data. After reconnecting to the stream ~20 times, the fourth packet was consistently a video packet (RTMP cid = 5), but the size of the packet ranged from 16000 bytes to 21000 bytes. I suspect this is legitimate video data.
Second Update: I just checked what the offset was in the video config packet when I finished parsing the SPS, and I was on byte 23 (0x84). It's therefore likely that the PPS is in fact at the end of this byte array, but I'm not sure how many bytes are delimiters/headers (NAL type, NAL length, etc.) and how many bytes are the actual PPS.
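The byte layout above can be checked mechanically: per ISO/IEC 14496-15, after the 5-byte FLV video-tag header (frame/codec byte, AVC packet type, 3-byte composition time), the AVCDecoderConfigurationRecord carries a counted list of SPS entries followed by a counted list of PPS entries, each entry preceded by a 16-bit big-endian length. A sketch of such a parser applied to the 46 bytes quoted earlier (the framing interpretation is my assumption, not something the RTMP spec text above states):

```python
# Hedged sketch: split an FLV "AVC sequence header" payload into SPS/PPS lists.
def parse_avc_config(payload: bytes):
    rec = payload[5:]                       # skip FLV header -> AVCDecoderConfigurationRecord
    profile, compat, level = rec[1], rec[2], rec[3]
    off = 6
    sps_list, pps_list = [], []
    for _ in range(rec[5] & 0x1F):          # numOfSequenceParameterSets (low 5 bits)
        n = int.from_bytes(rec[off:off + 2], "big"); off += 2
        sps_list.append(rec[off:off + n]);  off += n
    num_pps = rec[off]; off += 1            # numOfPictureParameterSets
    for _ in range(num_pps):
        n = int.from_bytes(rec[off:off + 2], "big"); off += 2
        pps_list.append(rec[off:off + n]);  off += n
    return profile, compat, level, sps_list, pps_list

# The 46-byte config packet quoted in the question:
config = bytes([
    0x17, 0x00, 0x00, 0x00, 0x00, 0x01, 0x42, 0xc0, 0x1f, 0xff, 0xe1,
    0x00, 0x19, 0x67, 0x42, 0xc0, 0x1f, 0xa6, 0x11, 0x02, 0x80, 0xbf,
    0xe5, 0x84, 0x00, 0x00, 0x03, 0x00, 0x04, 0x00, 0x00, 0x03, 0x00,
    0xc2, 0x3c, 0x60, 0xc8, 0x46, 0x01, 0x00, 0x05, 0x68, 0xc8, 0x42,
    0x32, 0xc8,
])
_, _, _, sps, pps = parse_avc_config(config)
print(sps[0].hex(), pps[0].hex())
```

On this packet the parser yields one 25-byte SPS beginning 0x67 and a single 5-byte PPS (68 c8 42 32 c8), which matches the guess in the second update: the PPS is the tail of the same packet, introduced by a one-byte count (0x01) and a two-byte length (0x00 0x05), rather than delivered in a separate packet.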