Installing Flumotion on Debian Jessie.

Being on the edge sometimes hurts.

I grabbed https://github.com/inaes-tic/flumotion and https://github.com/inaes-tic/flumotion-ugly

just to be greeted with

AttributeError: 'EPollReactor' object has no attribute 'listenWith'

Instead of force-installing an older (<= 11) python-twisted package, I fetched http://twistedmatrix.com/Releases/Twisted/11.1/Twisted-11.1.0.tar.bz2 and http://twistedmatrix.com/Releases/Web/11.1/TwistedWeb-11.1.0.tar.bz2.

Uncompressed them and ran python setup.py build && python setup.py install in each.

And the thing worked.

 

Walk.

Today I woke up almost as tired as when I went to bed yesterday. Most people are not working because of a multi-day holiday. Or something like that.

Like yesterday, I (unsuccessfully) tried to figure out why WebVfx refuses to play nice with gstshm, so I went for a walk to clear my mind.

One of the nicest things about living in Berisso is that I have almost-virgin fields and beaches, an island, “normal” city stuff and industrial/maritime landscapes all really, really close. Today I went to Ensenada; there are many places that look like a still from movies such as Tank Girl or Mad Max. I toyed around the docks and abandoned ships, and also met a woman who kinda looked like Lori Petty does these days. Scary.



Google says it was a 12.5 km trip. It took me a bit longer, but I tried really hard to slow down and enjoy it instead of just walking.

Back at home I’m out of ideas and this is still broken. I guess it’s time to panic.

 

Using WebKitGTK as the UI for GStreamer applications.

Lately I’ve been thinking a lot about how I can make nice and easily customizable interfaces for video applications. My idea of ‘nice’ is kind of orthogonal to what most of my expected user base will want, and by ‘easily customizable’ I don’t mean ‘go edit this glade file / json stage / etc’.

Clutter and MX are great for making good-looking interfaces and, like Gtk, have something that resembles CSS to style things and can load a UI from an XML or JSON file. However, sooner or later they will need someone who is part developer, part designer. And unless you do something up front, the interface is tied to the backend process that does the heavy video work.

So, seeing all the good stuff we are doing with Caspa, the VideoEditor, WebVfx and our new magical synchronization framework, I wondered:

Why can’t I, instead of using Gtk, make my UI with HTML and all the fancy things that are already out there?

And while we are at it, I want process isolation, so if the UI crashes (or I want to launch more than one to see different UI styles side by side) the video processing does not stop. Of course, should I want tighter coupling, I can embed WebKit in my application and make a JavaScript bridge to avoid having to use something like WebSockets to interact.

One can always dream…

Then my muse appeared and commanded me to type. Thankfully, mine is not like the one the poor soul in “Blank Page” had.

So I type, and I type, and I type.

‘Till I made this: two GStreamer pipelines, outputting to auto audio and video sinks and also to a WebKit process. Buffers travel through shared memory; they are still copied more than I’d like, but that makes things a bit easier and helps decouple the processes, so if one stalls the others don’t care (and anyway, for most of the things I want to do I’ll need to make a few copies). Lucky me, I can throw beefier hardware at it and play with more interesting things.
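
For the curious, here is a minimal sketch of the buffer-sharing part in Python. The test source, caps and socket path are made up for the example (the real thing is quite a bit more involved); the idea is just to tee the video to a local preview and to a shmsink so another process, the WebKit side in my case, can pick it up with a shmsrc.

#!/usr/bin/python
# Minimal sketch: tee a test source to a local preview and to a shared memory
# socket. The caps and the socket path are made up for the example.
import gi
import sys
gi.require_version('Gst', '1.0')
from gi.repository import GObject, Gst

GObject.threads_init()
Gst.init(sys.argv)

caps = "video/x-raw,format=I420,width=640,height=480,framerate=30/1"

producer = Gst.parse_launch("""
videotestsrc is-live=true ! %s ! tee name=t
t. ! queue ! videoconvert ! autovideosink
t. ! queue ! shmsink socket-path=/tmp/video.sock wait-for-connection=false shm-size=10000000
""" % caps)

# the consumer reads the same socket with something like:
#   shmsrc socket-path=/tmp/video.sock ! <same caps> ! videoconvert ! ...

producer.set_state(Gst.State.PLAYING)
GObject.MainLoop().run()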

I expect to release this in a couple of weeks when it’s more stable and usable; as of today it tends to crash if you stare at it a bit too hard.

“It’s an act of faith, baby”
Using WebKit to display video from a GStreamer application.
Something free to whoever knows who the singer is without using image search.

 

 

That thrill.

Lately I’ve been working with a lot of technologies that are a bit outside my comfort zone of hardware and low-level stuff: JavaScript, HTML-y things and node.js. At first it was a tad difficult to wrap my head around all that asynchrony and things like hoisting and what the value of ‘this’ is here. And inheritance.

Then, all of a sudden, I had an epiphany and wrote a truly marvellous piece of software. Now I can use Backbone.io in the browser and on the server, with the same models and codebase on both without a single change. Models are automatically synchronized. On top of that there’s a redis transport, so I can sync models between different node instances in real time without hitting the storage (mongo in this case). And the icing on the cake is that a Python compatibility module is about to come.

Modifying microphone directivity.

So, we have some Logitech C920 cameras. They are really good for their price and sport a couple of microphones with echo cancellation and an omnidirectional pattern. Which is quite great for its intended use but a major pain if what you want is to perform voice activity detection. Basically, all the cameras trigger when someone speaks. It can be worked around, but things are a lot easier when the sound from one camera doesn’t leak that much into the others.

Not wanting to replace or modify the internal microphone array if there was another way, I decided to test whether the response could be shaped into something more useful with some absorbent foam.

Utilísima has nothing on me.

I cut a couple of rectangular prisms with cavities that more or less match the shape of the cameras. My supply of plushy fabric was rather limited, and so I planned a bit more carefully how to divide it and make the crevices. After that I just cut it into four equal pieces and held everything together with hot-melt glue and some stitches.

Results.

I don’t have proper facilities like an anechoic chamber. Testing was done using a 1 kHz tone and recording the sound from the back, at 45 and 90 degrees CCW (the side shouldn’t matter) and facing the front of the camera. While there’s an improvement over the original pattern, the directivity achieved is not enough, so we’ll pursue an alternate way of capturing sound (either a multichannel soundcard or modifying the internal mics).

Debugging USB3.0 issues when dealing with USB2.0 devices

Some time ago we needed to connect as many USB cameras as possible to a single computer and capture full HD video and audio. Most of our systems, despite having a lot of connectors, really have just one host controller and a hub inside.

While the available bandwidth may be more than enough when using a compressed format, the number of isochronous transfers is rather limited. Our minimal use case called for three C920 cameras. On a normal system (one host controller behind a hub) the best we could achieve was two at 1280×720@30fps with audio and a third without audio, and only one at 1920×1080@30fps with audio.

So, we needed to add more controllers. USB 2.0 add-on cards are a thing of the past, but luckily they were replaced with the faster USB 3.0. Most USB 3.0 controllers also feature a USB 2.0 controller and hub for older devices, but some (very rare) ones have a dedicated USB 2.0 controller for each port.

Given this, I went ahead and bought two cards, each from a different brand and with a different chipset.

One of them had a NEC PD720200. It worked like a charm but sadly has only one USB 2.0 controller.

The other sported a VIA VL800. This one has one USB 2.0 controller per port (this can be seen with lsusb -t). That lovely discovery didn’t last long, as the controller crashed all the time; at best it would stop responding, but sometimes it locked my system up hard. The guys at VIA have a very interesting definition of meeting the specs. I spent a whole weekend patching kernels trying to make it behave. Now I have a quite expensive and sophisticated paperweight.

Testing procedures:

I ssh’d into the target machine and ran the following in several consoles:

– watch -n1 'dmesg | tail -n 16' to have a log should the system crash hard.

– watch -n1 'grep Alloc /sys/kernel/debug/usb/devices' to monitor bus usage.

– 3x gst-launch-1.0 v4l2src device=[camera] ! queue ! video_caps ! fakesink sync=true alsasrc device=[camera soundcard] ! queue ! fakesink sync=true to capture from each device. video_caps is something like “image/jpeg,width=1920,height=1080,framerate=30/1” but I tried a couple more.

It is really wonderful how much computing power we have nowadays. The first time I compiled a kernel it took a good four hours. On my current machine (not quite new…) it takes about forty minutes from a clean tree and around ten from an already compiled one.


Those little things…

The other day I was happy.

I was happy because they didn’t charge me the toll. Because they were fixing the highway. And because it took me almost an hour to cover a stretch that doesn’t take more than twenty minutes on a normal day.

On Sunday I needed some fresh air to think in peace, so I walked to Los Talas… How nice it is to have the countryside nearby, to smell the burnt eucalyptus, to have green everywhere you look.

On GStreamer performance with multiple sources.

I’ve made a couple of experiments with Tetra. Right now the code that manages disconnection of live sources (say, someone pulls the cable and walks away with one of our cameras) kind of works; it certainly does on my system, but with different sets of libraries the main gst pipeline sometimes just hangs there, and it really bothers me that I’m unable to get it right.

So I decided to really split it into a core that does the mixing (either manually or automatically) and different pipelines that feed it. Previously I had success using the inter elements (with interaudiosrc hacked so its latency is acceptable) to have another pipeline mix video from a file with live content.

Using the inter elements and a dedicated pipeline for each camera worked fine: the camera pipeline could die or disappear and the mixing pipeline churned along happily. The only downside is that it puts some requirements on the audio and video formats.
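
To give an idea of the split, here is a minimal sketch. The channel name, caps and the test source are made up for the example; the real Tetra pipelines are quite a bit more involved.

#!/usr/bin/python
# Sketch: a per-camera pipeline pushes into an intervideosink, and the mixing
# pipeline pulls from an intervideosrc on the same channel, so the camera side
# can die or restart without taking the mixer down.
import gi
import sys
gi.require_version('Gst', '1.0')
from gi.repository import GObject, Gst

GObject.threads_init()
Gst.init(sys.argv)

# camera side: can be killed and restarted without touching the mixer
camera = Gst.parse_launch(
    "videotestsrc is-live=true ! videoconvert ! "
    "video/x-raw,format=I420,width=640,height=480,framerate=30/1 ! "
    "intervideosink channel=cam1")

# mixing side: keeps churning along even if the camera pipeline goes away
mixer = Gst.parse_launch(
    "intervideosrc channel=cam1 ! videoconvert ! autovideosink")

camera.set_state(Gst.State.PLAYING)
mixer.set_state(Gst.State.PLAYING)
GObject.MainLoop().run()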

Something I wasn’t expecting was that CPU utilization dropped: before, I had two threads using 100% and 30% of CPU time (and many others below 10%) and both cores at 80% load on average. With separate pipelines linked by the inter elements I had one thread at 55% and a couple of others near 10%, and both cores a tad below 70%.

Using shmsrc / shmsink yielded similar performance results, but as a downside it behaved just like the original setup when sources were disconnected, so for now I’m not considering them for ingesting video. On the other hand, latency was imperceptible, as expected.

Using the Gstreamer Controller subsystem from Python.

This is more or less a direct translation of the examples found at gstreamer/tests/examples/controller/*.c to their equivalents using the gi bindings for Gstreamer under Python. The documentation can be found here. Reading the source also helps a lot.

The basic premise is that you can attach a controller to almost any property of an object, set an interpolation function and give it pairs of (time, value) so the property changes smoothly between them. I’m using a pad as the target instead of an element just because it fits my immediate needs, but it really can be any Element.

First you need to import Gstreamer and initialize it:

#!/usr/bin/python
import gi
import sys
from gi.repository import GObject
gi.require_version('Gst', '1.0')
from gi.repository import Gst
from gi.repository import GstController
from gi.repository import Gtk
from gi.repository import GLib

GObject.threads_init()
Gst.init(sys.argv)

Then create your elements. This is by no means the best way but lets me cut a bit on all the boilerplate.


p = Gst.parse_launch ("""videomixer name=mix ! videoconvert ! xvimagesink
videotestsrc pattern="snow" ! videoconvert ! mix.sink_0
videotestsrc ! videoconvert ! mix.sink_1
""")

m = p.get_by_name ("mix")
s0 = [pad for pad in m.pads if pad.name == 'sink_0'][0]
s0.set_property ("xpos", 100)

Here I created two test sources, one with bars and another with static that also has a horizontal offset. If we were to start the pipeline right now (p.set_state(Gst.State.PLAYING)) we would see something like this:

[screenshot: captura_testinterpolation]

So far it works. Now I’d like to animate the alpha property of s0 (the sink pads of a videomixer have interesting properties like alpha, zorder, xpos and ypos). First we create a control source and set the interpolation mode:

cs = GstController.InterpolationControlSource()
cs.set_property('mode', GstController.InterpolationMode.LINEAR)

Then we create a control binding for the property we want to animate and add it to our element:

cb = GstController.DirectControlBinding.new(s0, 'alpha', cs)
s0.add_control_binding(cb)

It is worth noting that the same control source can be used with more than one control binding.

Now we just need to add a couple of points and play:

cs.set(0*Gst.SECOND, 1)
cs.set(4*Gst.SECOND, 0.5)
p.set_state (Gst.State.PLAYING)

If you are not running this from the interpreter, remember to add GObject.MainLoop().run(), otherwise the script will end instead of keeping the pipeline playing. Here I’ve used absolute times; to animate in the middle of a playing state you need to get the current time and set the points accordingly. Something like this will do for most cases:


start = p.get_clock().get_time() # XXX: you better check for errors
end = start + endtime*Gst.SECOND
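
Continuing that fragment, the keyframes themselves look something like this (the values are made up for the example; alpha goes from whatever it is now down to zero):

# (endtime above is just how many seconds from now the fade should take,
#  e.g. endtime = 4; the values below are made up for the example)
cs.set(start, s0.get_property('alpha'))  # pin the current alpha at 'now'
cs.set(end, 0.0)                         # and fade out completely by 'end'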

Avoiding too much bookkeeping

You can get the control binding and control source of an element with:

control_binding = element.get_control_binding('property')
if control_binding:
    control_source = control_binding.get_property('control_source')
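
And when you no longer want the property animated, something like this should do it (a sketch, I haven’t exercised these calls much):

# drop the binding: the property stays at whatever value it had last
element.remove_control_binding(control_binding)

# or keep the binding and just wipe the keyframes from the control source
control_source.unset_all()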

Installing a NextWindow Fermi touchscreen under Ubuntu 13.04 (Raring)

So, last week we bought an HP AIO 520-1188 to use with Tetra. It is a really nice machine, wonderful sound and display quality, very easy to disassemble. It came with an integrated tv tuner, infrared control and wireless keyboard and mouse. Strangely, it used only the necessary amount of packaging.

To actually use the touchscreen one needs to install the nwfermi packages found at https://launchpad.net/~djpnewton.

The kernel driver is managed with dkms; for it to build, I replaced the occurrences of err with pr_err and commented out the call to dbg(). The sources are installed by default at /usr/src/nwfermi-0.6.5.0. After those changes do a

dkms build -m nwfermi -v 0.6.5.0
dkms install -m nwfermi -v 0.6.5.0

The Xorg input driver needs to be recompiled, as the latest version in the PPA is built for a different Xorg ABI version. I grabbed the sources from https://launchpad.net/~djpnewton/+archive/xf86-input-nextwindow/+packages.

The prerequisites to build it are installed with:

apt-get install build-essential autoconf2.13 xorg-dev xserver-xorg-dev xutils-dev

(The guide says to install xorg-x11-util-macros; its contents are now in xutils-dev.)

After that do
chmod +x autogen.sh ; ./autogen.sh
make
make install

The old (and nonworking) driver is still present, so we remove it:
rm /usr/lib/xorg/modules/input/nextwindow_drv.so

Reboot the system and you are set to go.

The provided debs worked fine with a stock Debian Wheezy.

I had no luck making the userspace daemon work on a 64-bit distro (so for now I’m limited to a tad less than 4 GB of RAM), but I think it’s a matter of time.

Gstreaming…

For a little more than a month I was working with GStreamer on a cool project. Almost everybody told me that GStreamer is really nice if all you want to build is a player, but that things tend to get difficult really soon for other uses.

For the first week I struggled to do even the simplest stuff, but after that it became quite manageable and I barely had to think. Except when dealing with dynamically removing and adding elements. And renegotiation errors. Fuck. I remove a source. I add another one, exactly like the former, and bam! “streaming task paused, reason not-negotiated (-4)”. Bummer. I resorted to going PLAYING – READY – PLAYING, but it feels plainly wrong.
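
For the record, the crude workaround looks something like this (a sketch; ‘pipeline’ is assumed to be an already-built Gst.Pipeline):

from gi.repository import Gst

# bounce the whole pipeline through READY after swapping the source so caps
# get renegotiated from scratch; ugly, but it gets rid of the not-negotiated error
def bounce(pipeline):
    pipeline.set_state(Gst.State.READY)
    pipeline.set_state(Gst.State.PLAYING)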

Also, I don’t know the difference between sinc, sync and sink anymore.