Thursday, December 2, 2010

Camerabin2

As some of you might know, there have been plans for a new camerabin design on the wiki for some time.

The current camerabin uses a single pad to output data for the viewfinder, video recording and image capture. There are two problems with this:

* It requires a mix of input/output-selectors and tricky switch-handling code to keep buffers on their correct paths.
* Managing different caps for each output type (images/videos/viewfinder) isn't simple.

Those were the main reasons motivating us to rewrite this in a simpler way, and so our adventure with camerabin2 began. Long story short, we already have a prototype on gitorious with the minimum basic features: image capture, video recording and a viewfinder.

Here's a summary of the important changes; for more details, refer to the wiki.

The key change
The big change is to have a source element (from now on called the camera source) with 3 source pads, one for each task: viewfinder, image capture and video recording.
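To make this concrete, a pipeline using such a source could look roughly like the line below (the element and pad names are only illustrative, nothing is final yet):

Command: gst-launch camerasrc name=src src.vfsrc ! queue ! ffmpegcolorspace ! autovideosink src.imgsrc ! queue ! jpegenc ! multifilesink location=img_%d.jpg src.vidsrc ! queue ! theoraenc ! oggmux ! filesink location=recording.ogg

Each branch can then negotiate its own caps independently, which is exactly what the single-pad design made hard.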

Why write camerabin2 instead of refactoring camerabin?
The short answer is that this is a major design change; writing from the ground up is probably safer and faster, and it won't disturb people using the current camerabin. Also, since the new design requires a new source element, refactoring in place would cause major incompatibility with current sources.

Modules
We are aiming for a more modular approach this time, so we have a viewfinderbin, a videorecordingbin and an imagecapturebin. These are public elements that can be used outside of camerabin2.
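For instance, viewfinderbin should be usable on its own in a trivial display pipeline (just a sketch, the bins' exact pads and properties may still change):

Command: gst-launch v4l2src ! ffmpegcolorspace ! viewfinderbin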

The new 3-pad source
Thanks to Rob Clark's work from some time ago (he refactored camerabin into the new design as a proof of concept), we already have a working 3-pad source for testing our prototype. Truth be told, I haven't dug very deep into the source's internals yet, but our goal is to provide a basecamerasrc class which will make it easy (or at least easier) to develop source elements with 3 pads.
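To illustrate the idea, a source element built on it could look something like the skeleton below (the base class API is still being worked on, so all names here are hypothetical):

/* Hypothetical subclass of the planned basecamerasrc; the type and
 * vmethod names below are illustrative only. */
typedef struct _GstMyCameraSrc
{
  GstBaseCameraSrc parent;      /* would provide the 3 source pads */
} GstMyCameraSrc;

typedef struct _GstMyCameraSrcClass
{
  GstBaseCameraSrcClass parent_class;
} GstMyCameraSrcClass;

static gboolean
gst_my_camera_src_start_capture (GstBaseCameraSrc * bsrc)
{
  /* hardware-specific code to start an image capture or a video
   * recording goes here; the base class would take care of routing
   * the captured buffers to the image or video pad */
  return TRUE;
}

static void
gst_my_camera_src_class_init (GstMyCameraSrcClass * klass)
{
  GstBaseCameraSrcClass *base_class = GST_BASE_CAMERA_SRC_CLASS (klass);

  base_class->start_capture = gst_my_camera_src_start_capture;
}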


Those are the main things I'd like to post here. I'm trying to schedule a meeting with developers interested in using camerabin2 (or who use camerabin) to discuss features, problems, requests and any camerabin2-related topic. Sometime in the next few days would be great. Nokia, Empathy and Cheese developers have already shown interest in this. If you are interested too, ping me on IRC (thiagoss in #gstreamer on freenode).

[Edited] Forgot to mention that the camerabin2 branch on gitorious already contains an example application under tests/examples/camerabin2.

Tuesday, May 11, 2010

gst-opencv design choices

While continuing to wrap new OpenCV functions into GstElements yesterday, I faced an interesting design choice in mapping OpenCV function parameters to GstElement properties.

Take a look at the cvSmooth docs. You can see that it has a type parameter, followed by param1, param2, param3 and param4, whose semantics change depending on which type is used. The question is: how do we expose those in the 'cvsmooth' GstElement?

I could think of 3 different choices here:

1) Go straightforward and use the same API as OpenCV
As a result, we would have an element with properties named after the OpenCV parameters:
"cvsmooth type=blur param1=5 param2=3 param3=0.0 param4=0.0"

This results in a rather unintuitive API, but we keep it aligned with OpenCV's, making it easy for people who already know one API to use the other. The element docs would mostly point to OpenCV's docs. The resulting code is simple and easy to maintain.

2) Have multiple elements: cvsmoothblur, cvsmoothgaussian, cvsmoothmedian...
We could put each smooth algorithm (type) into a separate element and have its properties reflect the semantics of that type. For example, we would have cvsmoothblur, cvsmoothmedian and so on, one element per type. The properties of each would be named according to their semantics, instead of some paramX.

This provides a nicer API, but it might multiply the number of elements for every function that has this kind of type parameter, and I don't know how common that is; it might be a good solution if there are only a few of them. A downside is that switching the smoothing type requires hot-swapping elements, but I don't think that's a common use case.
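For illustration, with made-up property names for the gaussian and median types, that would give lines like:
"cvsmoothgaussian width=5 height=3 standard-deviation=1.5" or
"cvsmoothmedian aperture=5"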

3) Expose a property for each semantic and use it only if its type is selected.
We still keep a single element, but we add one property for each semantic a parameter can assume. Each property is only used if its corresponding type is selected.

For example: param3 might be the "gaussian standard deviation" or the "color sigma" if type is gaussian or bilateral, respectively. We add those 2 properties (standard-deviation and color-sigma), which are only used when their respective types are selected.

This makes those lines possible:
"cvsmooth type=gaussian standard-deviation=5.0" or
"cvsmooth type=bilateral color-sigma=1.0"

The code is a little messier than in the options above.


Given those options, I really don't like option 3, so I'm considering 1 or 2. From a quick look at some pages of OpenCV's transformations API, I could see that this situation is not very common, and when it happens, only one parameter has a 'variable semantic'; it looks like I picked the trickiest function as my example.

So, which option would you choose?

Thursday, May 6, 2010

Hacking in gst-opencv

It has been years since I last used OpenCV. We (some friends and I, working in a lab at the university) used it to process images in batches or to process frames live from a webcam. Things would have been much easier if I had known GStreamer back then. That said, I decided to take a look at gst-opencv to see what we can already do with it.

There are a few features wrapped as elements at the moment and they work quite well, but the plugin could have a much larger feature set, and it seems no one has been working on it recently. Given that, and having a little spare time these days, I decided to start hacking on gst-opencv and to try to put it together with the other modules. I'd prefer to have a separate gst-opencv module, but adding it as a new plugin in gst-plugins-bad is also an option. What do you think?


Current features

[Edited: It seems the videos can only be seen directly on the post at blogspot]

Some nice stuff can already be done with the current elements. Let me show some.

I recorded this video outside a few minutes ago:


We can use edgedetect on it and see its edges:
Command: gst-launch uridecodebin uri=youruri ! queue ! ffmpegcolorspace ! edgedetect ! ffmpegcolorspace ! theoraenc ! oggmux ! filesink location=result.ogg



Or we can segment it with pyramidsegment and get a nice effect (maybe some people would enjoy this in PiTiVi?) or use it for machine vision:
Command: gst-launch uridecodebin uri=youruri ! queue ! ffmpegcolorspace ! pyramidsegment ! ffmpegcolorspace ! theoraenc ! oggmux ! filesink location=result.ogg




OpenCV already ships some face detection profiles (on Ubuntu, they go into /usr/share/opencv/haarcascades/), so you can use them with the facedetect element, or train your own classifiers to use with it. I stuck with the default and tried it on some pictures; here are 2 of them:




I think it works pretty well :)
You can also disable the circles and just get messages with the faces' positions, and do whatever you want with them.
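For example, something like this should print the face positions to the console (I'm assuming the property to turn the drawing off is called display; check the element docs to be sure):

Command: gst-launch -m uridecodebin uri=youruri ! ffmpegcolorspace ! facedetect display=false ! fakesink

The -m flag makes gst-launch print the bus messages, including the element messages facedetect posts with the detected faces.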

Other than those, there are also 'textwrite', 'templatematch' and 'faceblur' elements.


Current work

I've been working on a simple base class that will make it easier to map simple 1-to-1 OpenCV functions into elements, providing some common properties (like ROI and COI) and the GstBuffer-to-IplImage conversion. This will help cover more functions and should be enough to get me reacquainted with the API; after that I can go for the fancier stuff.

For example, take the cvSmooth function: we should only have to write the code that maps its parameters into properties and a simplified chain function that already works on IplImages instead of GstBuffers.
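A rough sketch of what element code on top of that base class could look like (the base class isn't written yet, so the type and function names below are hypothetical):

/* Hypothetical transform function for a cvsmooth element built on the
 * planned base class; GstOpencvBaseTransform and the vmethod name are
 * illustrative only. */
static GstFlowReturn
gst_cv_smooth_transform_ip (GstOpencvBaseTransform * base, GstBuffer * buf,
    IplImage * img)
{
  GstCvSmooth *self = GST_CV_SMOOTH (base);

  /* the base class has already wrapped the GstBuffer's data into an
   * IplImage header, so all that is left is calling into OpenCV */
  cvSmooth (img, img, self->type, self->param1, self->param2,
      self->param3, self->param4);

  return GST_FLOW_OK;
}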


Repositories

gst-opencv's main repository is at github, and I have my personal branches here. From time to time I ping Elleo to update the github repository, but I hope we can get this upstream in the next few weeks.