Capturing video from a video capture card using Qt
I am looking for an API/library in Qt that can capture video or streaming video from a connected video capture card such as a Black Magic Pro. I may also need to capture individual video frames at a later point.
1. Does Qt's QMediaPlayer class provide this?
2. Or does OpenCV have this support?
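For reference, this is the kind of OpenCV usage I have in mind; a minimal sketch assuming the card enumerates as an ordinary system video device (device index 0 is a guess, and Blackmagic hardware may instead require its own DeckLink SDK):

    #include <opencv2/opencv.hpp>
    #include <iostream>

    int main() {
        cv::VideoCapture cap(0);              // device index 0 is an assumption
        if (!cap.isOpened()) {
            std::cerr << "Could not open capture device\n";
            return 1;
        }
        cv::Mat frame;
        while (cap.read(frame)) {             // each read() grabs one individual frame
            cv::imshow("capture", frame);     // or hand the cv::Mat over to a Qt widget
            if (cv::waitKey(1) == 27) break;  // Esc quits
        }
        return 0;
    }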
See also questions close to this topic
-
How to change the circular progress bar in QML Dial?
I have a problem changing the progress bar color of the QML Dial component. I tried to use Canvas, but got nowhere. Any suggestions or examples?
    Dial {
        value: 0.5
        anchors.horizontalCenter: parent.horizontalCenter
    }
[Screenshot: black progress bar]
-
How to fix build errors from Qt header files (qchar.h and qbasicatomic.h) when building on macOS?
I'm trying to run a Qt project on a Mac and produce an iOS build; however, it doesn't build due to "errors" in Qt header files such as qchar.h and qbasicatomic.h.
Mac specs:
- Qt Creator 4.5.0 (based on Qt 5.10.0, Clang 7.0, 64-bit)
- macOS High Sierra (10.13.6)
- Xcode 9.4.1
I've tried adding CONFIG += c++11, but that doesn't help.
Here is the compile issue:
    ** BUILD FAILED **

    The following build commands failed:
        ProcessPCH SharedPrecompiledHeaders/precompile-egfdquqtyvgpwccxaewehzkwxdsd/precompile.h.pch ../trunk/precompile.h normal x86_64 c com.apple.compilers.llvm.clang.1_0.compiler
    (1 failure)
    make: *** [xcodebuild-debug-simulator] Error 65
    10:18:05: The process "/usr/bin/make" exited with code 2.
    Error while building/deploying project SerialSuite (kit: Qt 5.10.0 for iOS Simulator)
    When executing step "Make"
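One thing I plan to try next (just a guess from the log above, since the failing step is the ProcessPCH precompiled-header compile) is disabling precompiled headers in the .pro file:

    # qmake .pro file: skip the precompiled-header step that is failing
    CONFIG -= precompile_header
    CONFIG += c++11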
-
Program crashes at runtime when creating new class instances in a vector
Good afternoon. I have a problem with my program: it crashes when running. I used this code to create new class instances in a vector.
main.cpp:
    #include <QCoreApplication>
    #include <iostream>
    #include <vector>   // header file where std::vector is defined
    #include <myclass.h>

    int main(int argc, char *argv[]) {
        QCoreApplication a(argc, argv);
        for (unsigned long long i = 0; i <= 10; i++) {
            std::vector<MyClass> arr(i, MyClass(10, 20 + i));
            std::cout << arr[i].ggg;
        }
        return a.exec();
    }
myclass.cpp:
#include "myclass.h" MyClass::MyClass(){} MyClass::MyClass(unsigned long long g, unsigned long long gg){ ggg = gg; }
myclass.h:
    #ifndef MYCLASS_H
    #define MYCLASS_H

    class MyClass {
    public:
        MyClass();
        MyClass(unsigned long long g, unsigned long long gg);
        unsigned long long ggg;
    };

    #endif // MYCLASS_H
I get this error in the console: exited with code -1073741819. I think the problem comes from
    std::vector<MyClass> arr(i, MyClass(10, 20 + i));
but I don't know how to fix it. I do not know where the issue is; I am a beginner in C++. Thank you in advance for your help.
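For what it's worth, a vector constructed with i elements only has valid indices 0 through i-1, so arr[i] reads one past the end on every iteration, and the very first pass (i == 0) indexes an empty vector. A minimal corrected sketch of the loop (assuming the intent is to print the last element):

    #include <iostream>
    #include <vector>
    #include "myclass.h"

    int main() {
        for (unsigned long long i = 1; i <= 10; i++) {
            std::vector<MyClass> arr(i, MyClass(10, 20 + i));
            std::cout << arr[i - 1].ggg << '\n';  // last valid element, not arr[i]
        }
        return 0;
    }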
-
Problem when the player stays open for a long time or loses follow-ups
I'm having this problem when the video stays open for a long time:
Failed to set the 'duration' property on 'MediaSource': The 'updating' attribute is true on one or more of this MediaSource's SourceBuffers.
I tried using @videojs/http-streaming and videojs-contrib-hls, but neither resolved it.
    this.props = {
        controls: true,
        preload: 'auto',
        autoplay: true,
        sources: [{ src: '', type: '' }],
        hls: {
            overrideNative: true,
            withCredentials: true,
            nativeTextTracks: true
        },
        fluid: true,
        playbackRates: []
    }

    <video
        preload="auto"
        id="preview-player"
        className="video-js"
        ref={node => this.videoNode = node}
    >
    </video>

    componentDidMount() {
        this.player = videojs(this.videoNode, this.props);
    }
I need this error not to happen, because the player breaks when it does, and the only way to recover is to reload the page.
-
Unity Android streaming from a Raspberry Pi camera
I'm doing a school project about a robot with a Raspberry Pi (with a camera) and an Arduino. The robot will be controlled with an Android app made in Unity 2D 2018.3.4, which has two joysticks (one for the motors and one for the camera). I want to stream the Raspberry Pi camera in the background of my app, but I don't know how to do it. Can someone help me? (I don't know much about Unity and C#.)
I set up streaming of the Raspberry Pi camera with UV4L, and I have a URL like http://192.168.1.10:8000/stream/video.mjpeg. I tried to use the Unity Video Player, but it doesn't work. I searched the web for code that might help me, like this:
    using UnityEngine;
    using System.Collections;
    using System;
    using System.Net;
    using System.IO;

    public class WebStream : MonoBehaviour {
        public MeshRenderer frame;  // mesh for displaying video
        private string sourceURL = "http://192.168.1.19:8080/stream/video.mjpeg";
        private Texture2D texture;
        private Stream stream;

        public void GetVideo() {
            texture = new Texture2D(2, 2);
            // create HTTP request
            HttpWebRequest req = (HttpWebRequest)WebRequest.Create(sourceURL);
            // get response
            WebResponse resp = req.GetResponse();
            // get response stream
            stream = resp.GetResponseStream();
            StartCoroutine(GetFrame());
        }

        IEnumerator GetFrame() {
            Byte[] JpegData = new Byte[65536];
            while (true) {
                int bytesToRead = FindLength(stream);
                print(bytesToRead);
                if (bytesToRead == -1) {
                    print("End of stream");
                    yield break;
                }
                int leftToRead = bytesToRead;
                while (leftToRead > 0) {
                    leftToRead -= stream.Read(JpegData, bytesToRead - leftToRead, leftToRead);
                    yield return null;
                }
                MemoryStream ms = new MemoryStream(JpegData, 0, bytesToRead, false, true);
                texture.LoadImage(ms.GetBuffer());
                frame.material.mainTexture = texture;
                stream.ReadByte();  // CR after bytes
                stream.ReadByte();  // LF after bytes
            }
        }

        int FindLength(Stream stream) {
            int b;
            string line = "";
            int result = -1;
            bool atEOL = false;
            while ((b = stream.ReadByte()) != -1) {
                if (b == 10) continue;  // ignore LF char
                if (b == 13) {          // CR
                    if (atEOL) {        // two blank lines means end of header
                        stream.ReadByte();  // eat last LF
                        return result;
                    }
                    if (line.StartsWith("Content-Length:")) {
                        result = Convert.ToInt32(line.Substring("Content-Length:".Length).Trim());
                    } else {
                        line = "";
                    }
                    atEOL = true;
                } else {
                    atEOL = false;
                    line += (char)b;
                }
            }
            return -1;
        }
    }
I expected this code to solve my problem, but in the background I only get a grey screen.
-
What are the advantages of video streaming over direct play?
Assume that I have a computer with some videos in a folder, and that I've also set up a streaming service on this computer to support VOD (video on demand).
I understand that streaming a video means repackaging the video and sending it to other devices; the other devices decode the stream and play it. The advantage here is obvious: for videos that are not compatible with your playback devices, repackaging the file eliminates the compatibility issue.
But assume that my videos are all compatible with my playback devices, and that I share the video folder and play those videos directly from the shared folder. What's the advantage of streaming compared to that? It's also kind of 'streaming', right? The playback device also needs to decode and play the file.
I'm also very fuzzy about server decoding versus client decoding. Typically, from my understanding, playing directly is client decoding, which consumes more power/battery on your playback device (like a mobile phone). But does streaming mean server decoding?

I'm a complete newbie at video processing; any explanation would be appreciated. Thanks!
-
BeagleBone Black Based Device V4L2 WebCam Versus EasyCAP utv007 Input
I have a device that is based on the BeagleBone Black platform.
I am trying to get an EasyCAP video capture device to provide input and show it on the framebuffer.
I have installed v4l2, mplayer and gstreamer. We are running Debian Jessie with the latest available kernel.
When I plug in my webcam and run the following line, it shows video on the screen successfully:
gst-launch-1.0 -v v4l2src device=/dev/video0 ! video/x-raw,width=320,height=240 ! videoconvert ! fbdevsink
When I plug in the EasyCAP (utv007-based) capture device with a valid video signal from an RCA source, the exact same command gets nothing.
I also have a Linux desktop computer running Debian Jessie with the same things installed (though x86-based, of course).
This desktop boots into a VESA-based graphical command line, not into any GUI.
When I run the same command on this desktop with the EasyCAP device plugged in, I see the video successfully.
I have captured the logs from our device when running the webcam and the EasyCAP, and both look exactly the same up until the point where the webcam starts streaming data. You can see that we are dropping some data, but it still works.
At that point in the logs for the EasyCAP there is just nothing. It is as if the EasyCAP isn't sending any data, even though it works fine on the desktop.
Can anyone tell me where to look to figure out why our platform won't show the EasyCAP video?
PS: A little extra information: I also tried mplayer, and it shows the webcam output just fine on the framebuffer, but for the EasyCAP device I just get a green screen.
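PPS: One thing I still plan to try (an assumption on my part, since analog capture devices won't stream until a TV standard has been negotiated) is forcing the norm and the EasyCAP's native capture size explicitly:

    # Set the TV standard first (ntsc here is a guess; use pal for PAL sources)
    v4l2-ctl -d /dev/video0 --set-standard=ntsc

    gst-launch-1.0 -v v4l2src device=/dev/video0 norm=NTSC \
        ! video/x-raw,format=YUY2,width=720,height=480 \
        ! videoconvert ! fbdevsink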
-
cv2 "Frame offset points outside movi section" when read 'MJPG' video
- OpenCV => 3.3.1
- Operating System / Platform => Windows 10 64-bit
- Compiler => Visual Studio 2015
I recorded a video with the 'MJPG' codec. When reading it, the system printed "Frame offset points outside movi section" four times, like this:
Frame offset points outside movi section.
Frame offset points outside movi section.
Frame offset points outside movi section.
Frame offset points outside movi section.
The total frame count is also not correct.
Steps to reproduce:

    import cv2
    cap1 = cv2.VideoCapture(pth_vid1)  # pth_vid1: a video file about 1 hour long, encoded with 'MJPG'
    print('IR has frame numbers', cap1.get(cv2.CAP_PROP_FRAME_COUNT))
result:
IR has frame numbers 6312.0
This file definitely has more frames than that.
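My current workaround idea is to ignore CAP_PROP_FRAME_COUNT, which is typically derived from the container's index/metadata (the very thing the warnings say is damaged), and instead count frames by decoding through the whole file. A sketch using OpenCV's C++ API; the same loop works from cv2 in Python, and "video.avi" stands in for pth_vid1:

    #include <opencv2/opencv.hpp>
    #include <iostream>

    int main() {
        cv::VideoCapture cap("video.avi");  // placeholder path standing in for pth_vid1
        if (!cap.isOpened()) return 1;

        long long frames = 0;
        cv::Mat frame;
        while (cap.read(frame))             // decode every frame instead of trusting the index
            ++frames;

        std::cout << "decoded frame count: " << frames << "\n";
        return 0;
    }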
-
For the PiCameraCircularIO class's copy_to() function, what would be the 'first_frame' type for a stream outputting MJPEG?
I am trying to copy the contents of a PiCameraCircularIO stream whose output is in MJPEG format. I used the copy_to() function, which takes a 'first_frame' frame type.
I thought it would be '.jpeg', but nothing was copied over.
    def clip_buffer():
        global ELAPSED_TIME
        global THREAD_IS_RUN
        global INTERV
        i = 0
        while THREAD_IS_RUN:
            try:
                print('Thread is run')
                print('Making name')
                clipname = 'clip' + str(i) + '.mjpeg'
                print(clipname)
                print('waiting')
                camera.wait_recording(35)
                print('camera waited')
                output.buffer.copy_to(clipname)
                i += 1
                print(clipname + ' clipped')
                ELAPSED_TIME += INTERV
            except:
                print(i)
'output' is the PiCameraCircularIO stream.
I want the resulting clips to be of type mjpeg, but what I get are empty files.