This is a bit of a long post, but bear with me 🙂
Last week, I found out that OpenSolaris has recently added the Video4Linux2 APIs, and now provides a v4l2 driver for UVC compatible video cameras. That’s slightly funny, of course, because it has the word ‘Linux’ right there in the name. I think it’s really cool to see that the OpenSolaris guys aren’t suffering from NIH syndrome about that though.
As it happens, I'd had my eye on the Logitech Quickcam Pro 9000 for a little while, and it happens to be a UVC camera. This seemed like a good opportunity, so I ordered one. It's a nice webcam – I'm really pleased with it.
After that arrived, I was playing with V4L2 on one of the machines at work, and put a couple of patches from Brian Cameron into GStreamer's gst-plugins-good module to make our v4l2src plugin compile and work. The biggest difference from the Linux V4L2 implementation is that the header is found in /usr/include/sys/videodev2.h instead of /usr/include/linux/videodev2.h. Apart from that, all it took was a small fix to gracefully handle ioctls that the OpenSolaris usbvc driver doesn't implement, and I was up and away.
Coincidentally, Tim Foster was looking for talks for the Irish OpenSolaris User Group meeting last Thursday night. I thought people might enjoy a quick demo, so I volunteered.
I started off by showing that the camera shows up nicely in Ekiga, but I don't have a SIP account, so I wasn't able to do much more there. Besides, I really wanted to show the camera in GStreamer, since I'm a GStreamer guy. A note for people who want to follow along at home: I was using the version of GStreamer's V4L2 plugin from CVS. It's in the gst-plugins-good module, which is due for release on the 18th of February.
I tried a simple pipeline to show the camera output in a window:
gst-launch v4l2src ! ffmpegcolorspace ! xvimagesink
This captures the video using v4l2 (v4l2src is the name of the element responsible), and feeds it via the 'ffmpegcolorspace' element to the 'xvimagesink' output. xvimagesink feeds video to the XVideo extension, and ffmpegcolorspace provides any format conversion needed between the frame buffer formats the camera supports and what Xv can handle. Actually, this pipeline didn't work by default on Tim's laptop. For some reason, it tried to capture at 1600×1208 pixels, which the camera doesn't support. It might work by default for you, though.
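By the way, if you want to see exactly which formats get negotiated, gst-launch's -v flag prints the caps as the pipeline starts up – handy for diagnosing this sort of thing:
gst-launch -v v4l2src ! ffmpegcolorspace ! xvimagesink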
Anyway, the obvious fix was to explicitly choose a particular resolution to capture at:
gst-launch v4l2src ! video/x-raw-yuv,width=320,height=240 ! ffmpegcolorspace ! xvimagesink
This is the same as the first pipeline, with the addition of the 'video/x-raw-yuv,width=320,height=240' bit, which in GStreamer jargon is called a 'caps filter' – it filters down the set of formats that are allowed for data transfer between the two elements. By default, the pipeline will ask v4l2src and ffmpegcolorspace what formats they have in common, and pick one. By filtering it down, I'm forcing it to choose 320×240. Doing that made a little window pop up with the video in it. It looked a little like this, although this one was actually from later:
Next, I thought I’d show how to save the incoming video to a file. In this case, as an AVI with MJPEG in it:
gst-launch v4l2src ! video/x-raw-yuv,width=320,height=240 ! ffmpegcolorspace ! jpegenc ! avimux ! filesink location=osug-1.avi
The difference here is that instead of feeding the video to an Xv window, it goes through a JPEG encoder (jpegenc), gets put into an AVI container (avimux) and is then written to a file (filesink location=$blah). I let it run for 5 or 6 seconds, and then stopped it with ctrl-c. The result looked like this:
Apologies for the blurriness – I was waving the camera around and it didn’t really get a chance to focus.
Alternatively, I could have used something like this to record to Ogg/Theora:
gst-launch v4l2src ! video/x-raw-yuv,width=320,height=240 ! ffmpegcolorspace ! theoraenc ! oggmux ! filesink location=osug-1.ogg
I can play the recorded video back with a pipeline like:
gst-launch filesrc location=osug-1.avi ! decodebin ! ffmpegcolorspace ! xvimagesink
This uses filesink's cousin 'filesrc' to read from an existing file, feeds it to the nice 'decodebin' element – which encapsulates the ability to decode any audio or video file GStreamer has plugins installed for – and then feeds the result (a sequence of raw YUV video buffers) to 'ffmpegcolorspace ! xvimagesink' for colorspace conversion and display in a window.
Anyone who watched the clip might be wondering why there is a guy in the front holding up a bright orange folder. For the next trick, I wanted to show the nice 'alpha' plugin. By default, alpha simply adds an alpha channel with a given opacity to the video as it passes through. However, it also has a green-screen mode. Or, in this case, orange-screening.
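As an aside, the default opacity mode looks something like this – a sketch I haven't run, using the element's alpha property for a uniform 50% opacity and videomixer to blend the result over its default background:
gst-launch filesrc location=osug-1.avi ! decodebin ! alpha alpha=0.5 ! videomixer ! ffmpegcolorspace ! xvimagesink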
First, I played the video I captured in Totem, and paused it at a suitable frame. Then I used the 'Take Screenshot' item in the Edit menu to save it out as a PNG – which actually became the first photo above. Next, I opened the PNG in the GIMP and used the eyedropper to isolate the RGB triple for the orange colour: somewhere around Red=244, Green=161, Blue=11.
At this point, I used live video for the rest of the demo, but I didn't think to capture any of it. Instead, I'll use the short clip I captured earlier as a canned re-enactment. So, I ran the recorded video through a pipeline like this (at the demo itself, it was the same pipeline with v4l2src in place of filesrc ! decodebin):
gst-launch filesrc location=osug-1.avi ! decodebin ! alpha method=custom target-r=245 target-g=161 target-b=11 angle=10 ! videomixer ! ffmpegcolorspace ! xvimagesink
This pipeline uses the alpha plugin in 'custom' (custom colour) mode, to add an alpha channel based on the bright orange colour-key, and then uses 'videomixer' to blend a default transparent-looking background in behind it. Here:
The colour-key effect breaks up a little in places, because the skin tones and the wood of the desk get a little too close to the orange of the folder. A better choice of colour and better filming conditions are really needed to do it well 🙂
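For reference, following the substitution mentioned above, the live-camera version of that pipeline would be something like this (untested as written):
gst-launch v4l2src ! video/x-raw-yuv,width=320,height=240 ! alpha method=custom target-r=245 target-g=161 target-b=11 angle=10 ! videomixer ! ffmpegcolorspace ! xvimagesink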
And now for the really tricky bit:
gst-launch filesrc location=osug-1.avi ! decodebin ! alpha method=custom target-r=245 target-g=161 target-b=11 angle=10 ! videomixer name=mix ! ffmpegcolorspace ! xvimagesink \
filesrc location=bg.avi ! decodebin ! ffmpegcolorspace ! mix.
This pipeline adds two things to the previous one:
1) It adds a ‘name=mix’ property assignment to the instance of the videomixer element. This makes it easier to refer to the instance by name later in the pipeline.
2) It adds a second filesrc, reading in a file named bg.avi, decoding it and then feeding it into the element named ‘mix’ – which is the instance of the videomixer element. Adding a period to the name lets GStreamer know that the connection should be made to an existing instance of something with that name, rather than creating a new instance of some element as all the other links do.
To save the output, I replaced the ‘xvimagesink’ with ‘jpegenc ! avimux ! filesink location=osug-3.avi’ to produce this:
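Spelled out in full, that recording pipeline would look something like this (it's just the previous pipeline with the sink swapped out, so treat it as a sketch):
gst-launch filesrc location=osug-1.avi ! decodebin ! alpha method=custom target-r=245 target-g=161 target-b=11 angle=10 ! videomixer name=mix ! ffmpegcolorspace ! jpegenc ! avimux ! filesink location=osug-3.avi \
filesrc location=bg.avi ! decodebin ! ffmpegcolorspace ! mix.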
There you go – a quick demo of doing a couple of fun things with GStreamer that don’t require writing any code. Have fun!
Cool!
Very awesome.
Goes to show how many weird and wacky GStreamer elements there are that I haven’t discovered. 😉
The guy on the right looks dead impressed…
@Calum: I caught him at the wrong moment – he wasn’t checking his mail in the later vids (the ones I didn’t think to keep), I swear 🙂
Very useful blog post! I have a nearly identical Logitech cam, and these instructions helped me get gstreamer working with it.
The only piece I have not yet figured out is audio – capturing both audio and video simultaneously from the USB webcam.
Do you have any examples of that?
-jason
The trick is to add a second input chain that captures from the audio device and feeds it to an encoder and then to the muxer.
Something like (untested, so hopefully I haven’t made a typing mistake):
gst-launch v4l2src ! video/x-raw-yuv,width=320,height=240 ! ffmpegcolorspace ! theoraenc ! oggmux name=mux ! filesink location=output.ogg alsasrc ! audioconvert ! vorbisenc ! mux.
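If you wanted the AVI/MJPEG variant from earlier instead, the same trick should apply – something along these lines (also untested; 'lame' is the mp3 encoder, whose output avimux should accept):
gst-launch v4l2src ! video/x-raw-yuv,width=320,height=240 ! ffmpegcolorspace ! jpegenc ! avimux name=mux ! filesink location=output.avi alsasrc ! audioconvert ! lame ! mux.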
Good information. Do you have any command-line example for saving audio/video RTP streams sent over the network?
TIA,
-prb
When we give the command below, a window pops up to play the video. How can I set the name of that popped-up window?
gst-launch filesrc location=PES ! 'video/mpeg, width=(int)704, height=(int)480, framerate=(fraction)1/1, mpegversion=(int)2, systemstream=(boolean)false' ! ffdec_mpeg2video ! xvimagesink
@Ramana: You can’t, using a gst-launch line. It’s only a testing tool, and gives you very little control over the window that xvimagesink creates.
To do what you want, you’ll need to write code that embeds the xvimagesink window using the XOverlay interface. Take a look at the play.py example in gst-python for a starting point:
http://webcvs.freedesktop.org/gstreamer/gst-python/examples/play.py?revision=1.9&view=markup
thaytan –
It's really some cool stuff you've done… greatly helpful for me.
I was wondering if you could throw some light on a problem I've been having packing an MPEG4 file into an AVI container. The command I'm using is:
gst-launch-0.10 filesrc location=test.mpeg4 ! video/mpeg, width=720, height=480, framerate=\(fraction\)30000/1001, mpegversion=4, systemstream=false ! avimux ! filesink location=mytest.avi
This does generate the AVI file alright, but when I play it in players such as MPlayer or VLC, the display is badly scrambled. MPlayer also reports errors on every frame, such as "illegal dc vlc", "header damaged", etc.
Any help appreciated.
Hello.
I have a simple question: is it possible to create a video file from just an image (like a JPEG or PNG) with GStreamer and save it, for example, in an MPG file?
The result would be a playable static image.
Thanks.
Hello,
I have a question:
I typed these commands:
gst-launch filesrc location=111.jpg ! jpegdec ! filesink location=111.yuv
gst-launch filesrc location=111.yuv ! jpegenc ! filesink location=111bak.yuv
but an error occurs: when I convert the 111.yuv back to a JPEG, it segfaults.
How can I turn a YUV bitstream into a JPEG file?
Thank you .
This was an awesome post, and helped me out a lot regarding some scripting I’ve been working on. Thanks so much!
Nice post and a good way to demonstrate the available elements in G-Streamer and how they can be used…
How do I limit the size of the file that filesink writes from GStreamer?
I don't know if this will be answered or not, but nevertheless, my question is:
I am relatively new to GStreamer and want to adjust the gamma correction for the image my GStreamer pipeline produces, because the image it generates is a bit dark. Moreover, I would like to know if there is a GUI that would allow me to set camera configuration parameters?
I want to display a picture slideshow using GStreamer.
Any help?
The original article is four years old, but it helped me.
I was calling another program in my script to do the capturing. I knew v4l had the tools, but I hadn't stumbled across the right information to get it working. I was looking up something else, stumbled across this page, and you answered a question – but more importantly, it helped me find the correct answers to other questions I had. I can eliminate the call to the other program, streamline my script and clean up the code.
Thanks
Hi everyone,
I want to display video using v4l2sink in GStreamer.
Can anyone help me?
Thank you
Hello,
Thanks for the post, it's really great!
I have tried it on Ubuntu, and it works.
Now I want to use GStreamer on OpenWrt. I have already installed all the GStreamer ipks (from this link http://downloads.openwrt.org/snapshots/trunk/imx6/generic/packages/packages/)
But when I try to use your "gst-launch…" command, it does not work (the console says gst-launch: not found).
Can you help me get gst-launch working on OpenWrt?
Thanks a lot
The binary is probably called gst-launch-1.0 in those packages.
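Note that some element names changed in 1.0 as well – ffmpegcolorspace became videoconvert, and the raw video caps are now video/x-raw – so a quick 1.0 sanity check would be something like (untested on OpenWrt):
gst-launch-1.0 v4l2src ! videoconvert ! autovideosink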
Nice find! I have been looking around for a chroma-key filter in ffmpeg but came up with nothing, so I googled chroma key with gstreamer and found this. I had an idea to use a Raspberry Pi and a webcam to do live chroma keying and output the video fullscreen to the HDMI output on the Raspberry Pi. I'll copy this down and get some thinking heads together.