We’re having a fairly impromptu GStreamer summit this Sunday in Istanbul.
If you’re interested in discussing where GStreamer is at, and where it might go next, come along 🙂
I just finished uploading the releases of 3 of the big GStreamer modules – the Good, Bad & Ugly plugins modules. These releases are really big, because all 3 haven’t been released since June last year! 8 months of sweet hacking has produced some nice stuff. Among my favourite features of these releases:
Tomorrow evening, Jaime and I’ll be landing in Belgium for FOSDEM, which is going to be awesome. I’m a big fan of our community gatherings and getting to meet up with y’all. I was sort-of looking forward to giving a talk with some other GStreamer guys on Sunday in the GNOME/CrossDesktop devroom, but confusion over whether I said we would or not means the slot has been filled. We may still end up doing something on Saturday instead, will have to see.
We’re planning on checking out Brussels during the day on Friday, and then meeting up with everyone at the pub on Friday night. If you’re coming too, make sure to say hi!
This is a bit of a long post, but bear with me 🙂
Last week, I found out that OpenSolaris has recently added the Video4Linux2 APIs, and now provides a v4l2 driver for UVC compatible video cameras. That’s slightly funny, of course, because it has the word ‘Linux’ right there in the name. I think it’s really cool to see that the OpenSolaris guys aren’t suffering from NIH syndrome about that though.
To continue, I’ve had my eye on the Logitech Quickcam Pro 9000 for a little while, and it happens to be a UVC camera. This seemed like a good opportunity, so I ordered one. Nice webcam – I’m really pleased with it.
After that arrived, I was playing with V4L2 on one of the machines at work, and put a couple of patches from Brian Cameron into GStreamer‘s gst-plugins-good module to make our v4l2src plugin compile and work. The biggest difference from the Linux V4L2 implementation is that the header is found in /usr/include/sys/videodev2.h instead of /usr/include/linux/videodev2.h. That, and a small fix to gracefully handle ioctls that the OpenSolaris usbvc driver doesn’t implement, and I was up and away.
Coincidentally, Tim Foster was looking for talks for the Irish OpenSolaris User Group meeting last Thursday night. I thought people might enjoy a quick demo, so I volunteered.
I started off by showing that the camera shows up nicely in Ekiga, but I don’t have a SIP account so I wasn’t able to do much more there. Also, I really wanted to show the camera in GStreamer, since I’m a GStreamer guy. Note, for people who want to follow along at home I was using the version of GStreamer’s V4L2 plugin from CVS. It’s in the gst-plugins-good module, which is due for release on the 18th February.
I tried a simple pipeline to show the camera output in a window:
gst-launch v4l2src ! ffmpegcolorspace ! xvimagesink
This captures the video using v4l2 (v4l2src is the name of the element responsible), and feeds it via the ‘ffmpegcolorspace’ element to the ‘xvimagesink’ output. xvimagesink feeds video to the XVideo extension, and ffmpegcolorspace provides for any format conversion between the frame buffer formats the camera supports, and what Xv can handle. Actually, this pipeline didn’t work by default on Tim’s laptop. For some reason, it tried to capture at 1600×1208 pixels, which the camera doesn’t support. It might work for you, not sure.
Anyway, the obvious fix was to explicitly choose a particular resolution to capture at:
gst-launch v4l2src ! video/x-raw-yuv,width=320,height=240 ! ffmpegcolorspace ! xvimagesink
This is the same as the first pipeline, with the addition of the ‘video/x-raw-yuv,width=320,height=240’ bit, which in GStreamer jargon is called a ‘caps filter’ – it filters down the set of formats that are allowed for data transfer between the 2 elements. By default, the pipeline will ask v4l2src and ffmpegcolorspace what formats they have in common, and pick one. By filtering it down, I’m forcing it to choose 320×240. Doing that made a little window pop up with the video in it. It looked a little like this, although this one was actually from later:
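If negotiation does pick a mode you don’t want (like the 1600×1208 surprise above), it helps to see what caps actually got agreed on. Running gst-launch with the -v flag prints the negotiated caps on each link as the pipeline starts up:

```shell
# -v makes gst-launch print the caps negotiated on every link,
# which is handy for spotting an unexpected resolution or pixel
# format coming out of v4l2src.
gst-launch -v v4l2src ! video/x-raw-yuv,width=320,height=240 ! ffmpegcolorspace ! xvimagesink
```
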
Next, I thought I’d show how to save the incoming video to a file. In this case, as an AVI with MJPEG in it:
gst-launch v4l2src ! video/x-raw-yuv,width=320,height=240 ! ffmpegcolorspace ! jpegenc ! avimux ! filesink location=osug-1.avi
The difference here is, instead of feeding the video to an Xv window, it goes through a JPEG encoder (jpegenc), gets put into an AVI container (avimux) and then written to a file (filesink location=$blah). I let it run for 5 or 6 seconds, and then stopped it with ctrl-c. The result looked like this:
Apologies for the blurriness – I was waving the camera around and it didn’t really get a chance to focus.
Alternatively, I could have used something like this to record to Ogg/Theora:
gst-launch v4l2src ! video/x-raw-yuv,width=320,height=240 ! ffmpegcolorspace ! theoraenc ! oggmux ! filesink location=osug-1.ogg
I can play the recorded video back with a pipeline like:
gst-launch filesrc location=osug-1.avi ! decodebin ! ffmpegcolorspace ! xvimagesink
This uses filesink’s cousin ‘filesrc’ to read from an existing file, feeds it to the nice ‘decodebin’ element – which encapsulates the ability to decode any audio or video file GStreamer has installed plugins for, and then feeds the result (a sequence of raw YUV video buffers) to ‘ffmpegcolorspace ! xvimagesink’ for colorspace conversion and display in a window.
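For quick playback you can also skip wiring the pipeline up by hand and let the high-level ‘playbin’ element build the decode and display chain for you (the file path here is just a placeholder for wherever your recording ended up):

```shell
# playbin constructs the source + decodebin + conversion + sink
# chain internally from a URI.
gst-launch playbin uri=file:///path/to/osug-1.avi
```
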
Anyone who watched the clip might be wondering why there is a guy in the front holding up a bright orange folder. For the next trick, I wanted to show the nice ‘alpha’ plugin. By default, alpha simply adds an alpha channel with a given opacity to the video as it passes through. However, it also has a green-screen mode. Or, in this case, orange-screening.
First, I played the video I captured in totem, and paused it at a suitable frame. Then I used the ‘Take Screenshot’ item in the Edit menu to save it out as a png – which actually became the first photo above. Next, I opened the png in the Gimp and used the eyedropper to isolate the RGB triple for the orange colour. Somewhere around Red=244, Green=161, Blue=11.
At this point, I used live video for the rest of the demo, but I didn’t think to capture any of it. Instead, I’ll use the short clip I captured earlier as a canned re-enactment. So, I ran the video through a pipeline like this (using v4l2src etc instead of filesrc ! decodebin):
gst-launch filesrc location=osug-1.avi ! decodebin ! alpha method=custom target-r=245 target-g=161 target-b=11 angle=10 ! videomixer ! ffmpegcolorspace ! xvimagesink
This pipeline uses the alpha plugin in ‘custom’ (custom colour) mode, to add an alpha channel based on the bright orange colour-key, and then uses ‘videomixer’ to blend a default transparent looking background in behind it. Here:
The colour-key effect breaks up a little in places, because the skin tones and the wood of the desk get a little too close to the orange of the folder. A better choice of colour and filming conditions are really needed to do it well 🙂
And now for the really tricky bit:
gst-launch filesrc location=osug-1.avi ! decodebin ! alpha method=custom target-r=245 target-g=161 target-b=11 angle=10 ! videomixer name=mix ! ffmpegcolorspace ! xvimagesink \
filesrc location=bg.avi ! decodebin ! ffmpegcolorspace ! mix.
This pipeline adds two things to the previous one:
1) It adds a ‘name=mix’ property assignment to the instance of the videomixer element. This makes it easier to refer to the instance by name later in the pipeline.
2) It adds a second filesrc, reading in a file named bg.avi, decoding it and then feeding it into the element named ‘mix’ – which is the instance of the videomixer element. Adding a period to the name lets GStreamer know that the connection should be made to an existing instance of something with that name, rather than creating a new instance of some element as all the other links do.
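If you don’t have a second clip handy to play the part of bg.avi, a videotestsrc makes a fine stand-in while experimenting – this is just a sketch of the same keying pipeline with a test pattern as the background layer:

```shell
# Same orange-key pipeline as above, but blending over a test pattern.
# The caps filter forces the background to the same 320x240 size as
# the foreground so the two layers line up in the mixer.
gst-launch filesrc location=osug-1.avi ! decodebin ! \
    alpha method=custom target-r=245 target-g=161 target-b=11 angle=10 ! \
    videomixer name=mix ! ffmpegcolorspace ! xvimagesink \
    videotestsrc ! video/x-raw-yuv,width=320,height=240 ! mix.
```
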
To save the output, I replaced the ‘xvimagesink’ with ‘jpegenc ! avimux ! filesink location=osug-3.avi’ to produce this:
There you go – a quick demo of doing a couple of fun things with GStreamer that don’t require writing any code. Have fun!
I can’t remember ever having to make a brown paper bag release before, so we’ll call this the first one. On Monday, I released 0.10.16 of the GStreamer Core and Base modules, and earlier today I released 0.10.17 of the same after we started seeing some reports of crashes.
It turns out that 6 months ago (0.10.14) we broke our ABI by deprecating some parts of the GstMixer class structures that we shouldn’t have, because they were in public headers – resulting in a couple of base classes (GstMixerTrack and GstMixerOptions) shrinking when compiled with GST_DISABLE_DEPRECATED. No one noticed at the time, because those bits of the class structure weren’t being used by anyone, and no one noticed if sub-classes allocated more space for their class structures than they now technically needed to.
Since then though, all the modules that use that piece of ABI have been re-compiled, and now rely on the smaller parent class size, so when we inadvertently put the structure entries back by disabling the GST_DISABLE_DEPRECATED define in release builds, the OSS, SunAudio and PulseAudio mixer sub-classes all started crashing. We would have caught that in the release candidate tarballs, except that none of us thought about the fact that our pre-release tarballs were still building with GST_DISABLE_DEPRECATED, where the final release was not. Our fix is to bless the ‘new smaller ABI’, since no one has cared in 6 months that it changed, and the newer Base release reflects that. Core got another release too, with a small compilation fix for Cygwin, mostly to keep the version numbers in sync.
I say “it’s nice to be loved”, because the broken release was out less than 2 days, and we’re already nearing 60 dupes on the bug 🙂
I know this release is karma punishing me for being so bold as to try and write down GStreamer’s first time-based release schedule. It’s too carefully calculated to show up only in the final release tarball to be anything else.
Last night, I re-flashed my N800 with OS2008, and today my N810 turned up – I’m glad I elected to work from home today so I could play with it as soon as possible 🙂 I was a little worried because I have heard at least 2 people that said their order was cancelled due to problems, one even after the item was reported as shipped! That, plus the fact that the UPS tracking status only reported ‘billing information received’ the entire time.
I’m impressed with the N810 hardware. For the most part it’s very nice. I like the compactness, and have no problem with the slight weight increase over the N800. The hardware keyboard is cool, although I’m tempted to continue using my external Nokia keyboard because it allows a more normal typing style. There are a couple of hardware differences I’m not a fan of, but I understand why they did things this way. I liked having a standard SD slot rather than the mini-SD that the N810 has. I have several SD cards, but no mini-SD’s. Ditto for the micro-USB connector that’s replaced the more common mini-USB on the N800. The back cover feels a little thin when I remove it, but I don’t expect to be doing that often anyway. Everything else, I love.
I can see lots of nice changes in the OS2008 release too, although the firmware that came pre-installed had some pretty rough edges. I had to reflash the N810 immediately with the latest firmware, which is not something I’d generally expect from a piece of consumer hardware. With the shipped firmware, I couldn’t install skype from the ‘Install Skype’ option in the main menu (Unable to install Skype because libhildonfm2 >= 1:1.9.49 & libhildonmime0 >= 1.10.1 are missing). In the application manager, it had an update listed for the ‘map’ application, but would not actually allow me to make the update (Unable to install. Software contains updates to packages installed from a different source and is likely to harm the system).
Both problems disappeared after reflashing, but still, it marred the first impression.
Here are a few observations:
Overall, except for needing to reflash my new device immediately after getting it, these are pretty minor blemishes on an awesome product – well done Nokia dudes, and thanks for the developer discount – I’m looking forward to doing some hacking!
My N810 is on its way. Thank you Nokia! I’m really looking forward to trying out the new hotness 🙂
In other news, we had a really fun holiday season, visiting Sweden and Norway. We spent Christmas in Vaxjo, Sweden, visiting our Australian friends that moved there in July, really enjoyed Stockholm, and then had a blast in Oslo hanging out with Christian Schaller and his family.
It’s our last night in Barcelona tonight. 11am tomorrow, we’ll be boarding a flight to Ireland, and Monday morning I’ll be presenting myself at the Sun Microsystems Dublin office to get settled in.
Accordingly, it’s been a week of saying goodbyes to people, which is always a little strange since I’ll be talking to almost all of them regularly still, on IRC or some other form of IM.
I’m leaving behind a (hopefully) good body of work at Fluendo. I’ve had a hand in a bunch of multimedia codecs, almost the entirety of the DVD player stack, every GStreamer module release for the last 2 years, and other things that escape me right now. It would have been nice to push the DVD player all the way through to a final product, but it’s not quite there yet. Someone else will have to finish it off. Whoever that turns out to be, I hope my notes are coherent enough for you 🙂
See you all in Dublin. My plan to become the new Face of Sun is well in hand. Next stop, Irish Accent
We’ve arrived safely in Lower Limpley Stoke. We didn’t see any of the trouble on the M5 that we feared, and the Avon river is no more than picturesquely swollen 2 streets from here.
We’re staying in a lovely hotel built out of a giant old house.
I was amused to notice that the railway bridge is mislabelled as belonging to “Limpley Stroke”, hence the title.
At Guadec, I enjoyed the LugRadio beers night, especially the part where several people actively tried to create a new EBay account in the pub for the sole purpose of trying to put Jono’s bag on auction and raise some cash to buy more beer.
I didn’t enjoy the part where 25 euro mysteriously disappeared from within a pile of papers on the desk in our hotel room. Nothing else was taken, including the n800 that was also sitting in the room, thankfully.
Tomorrow we’re exploring Bath, going to see Glastonbury Tor and Abbey, and hopefully getting to glimpse Stonehenge.
I closed another bug today, this one with prejudice. For the record, GStreamer developers do not support running the gst-ffmpeg wrapper against the system installed FFmpeg – you can do it, but if it breaks because FFmpeg changed something (as they do), you get to keep both pieces.
Update: Fixed the bug number.
It adds the required GStreamer parts to close Bug 370937 – Exessive CPU Utilisation and fix 10 wakeups per second in the volume control applet – at least when an ALSA mixer device is chosen. Other mixers still require polling, so they’ll wake up. Hopefully that will stop too as we implement poll-based notifications in the other mixer elements in GStreamer, where we can.
Does anyone know if OSS provides a select/poll based way to know when someone changes the mixer settings? Google searches haven’t been helpful. aumix at least seems to be using SIGALRM, which isn’t promising.
The Sun Audio mixerctl mentions being able to get a SIGPOLL signal sent when someone changes the mixer settings. That’s not a good interface for GStreamer (or anyone, really) to use though – anyone know if we can achieve the same thing using poll()?
Our original plan was to drive down to Bath tomorrow and stay there for 2 nights exploring the area. With all the flooding, we’re probably not going to have much fun with that. Uraeus told me on the phone that they had a rough time on the M5, but made it to Bristol today.
At this stage, I’m inclined to try the drive anyway, but I might change my mind in the morning.
Or, A Tale Of Why Not To Fiddle.
I mentioned in a recent entry that I’d reinstalled my laptop, and also mentioned a few of the things I changed after the reinstall.
One thing I didn’t mention was changing the hdparm settings. By default, the system had DMA enabled, but not ‘unmask irq’, ’32-bit IO support’ or ‘multiple sector I/O’. I edited /etc/hdparm.conf to turn these on and set MultiSector IO to 16. I’ve been using these settings for a while – before reinstalling this laptop, and I’d copied them in turn from an older machine where I’d done measurements on each option to find the best setup.
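For reference, the settings in question map onto hdparm flags roughly like this (run as root, and substitute your actual disk device for /dev/hda – the device name here is just illustrative):

```shell
# -d1: enable DMA
# -c1: enable 32-bit I/O support
# -u1: unmask other IRQs during disk interrupt handling
# -m16: multiple sector I/O, 16 sectors per interrupt
hdparm -d1 -c1 -u1 -m16 /dev/hda
```
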
I also mentioned tweaking some of the xorg.conf settings. In particular, AGPMode and AGPSize. Eric Anholt commented that probably neither of these is a good thing to do – that AGPMode might provide some speed improvement in 3D rendering, but usually not much, and can also destabilise some systems when turned too high. I had thought that AGPSize sets the maximum amount of System RAM the X server can allocate for AGP Memory to talk to the graphics card, and that this was dynamically allocated when needed. Eric tells me, however, that the entire AGPSize RAM is reserved at server startup.
About 3MB is used for the buffers to communicate with the card, and the rest is kept as texture memory. Since the card in this laptop already has 64MB onboard, it’s unlikely (with the 3D apps that I run) that it will overflow, which means that providing an extra 56MB of RAM for this purpose is pointless, and a waste of good RAM. I tried a bunch of 3D apps that I regularly use (Togra, q3a, Tron) and couldn’t discern any performance difference (naked eye only – no fps tests). In the end, I took both settings back out and let X.org use its defaults again.
The hdparm changes and the xorg.conf settings were the only system things I’d changed from the default install, and I knew that hibernation had worked once before changing them, but didn’t work later. I figured maybe one of the settings might be the culprit, so I also put the hdparm settings back to the defaults – and hibernation worked again!
After some test cycles, I found that setting MultiSector IO on breaks the hibernate for some reason – turning it back off instantly makes hibernate work flawlessly. Interestingly, some googling for this problem seems to indicate that at some point in the past at least one ThinkPad model needed multi-sector IO turned ON in order to hibernate.
While I was at it, I figured I should actually run some tests and see if the other 2 changes (-c1 and -u1) actually made some difference. I tested using bonnie, and found that -c1 seemed to provide a modest increase in disk throughput. -u1 seems to decrease io throughput slightly, but (going by the hdparm manpage) allows better system responsiveness during heavy IO by allowing other interrupts to be processed during a disk interrupt. Enabling -u on some systems can be really bad (read: filesystem corruption), which is why it is off by default. I decided to leave both these settings turned on.
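bonnie gives more thorough numbers, but for a quick sanity check after flipping a flag, hdparm has a built-in read benchmark of its own (again, adjust the device name to suit):

```shell
# -t: timed uncached device reads, -T: timed cached reads.
# Worth averaging over a few runs on an otherwise idle system.
hdparm -tT /dev/hda
```
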
The moral of the story of course is not to blindly copy settings or change them without actually measuring and having a good reason to – the same rules we use when performing any optimisations. Alternatively, the moral is to be an X/kernel hacker and know what’s best already 🙂