Chasing mysterious 500 errors within PHP.

For the last couple of days I’ve been investing time in learning to do some (serious) things with Drupal, and I quite like it, given that for my previous gig involving PHP I had to manually compile and patch PHP 5.2 in order to work with a monstrosity made with Textpattern and CakePHP (and a sprinkle of hand-crafted database code).

Yesterday morning I was almost ecstatic reading about Features and went on to make a new one just to try it out.

I selected a few components, hit “Download feature” and after a while, nothing. The same happened with “Generate feature”.

On the error.log I see:
2014-10-29 07:49:31: (mod_fastcgi.c.2543) unexpected end-of-file (perhaps the fastcgi process died): pid: 11992 socket: unix:/tmp/php.socket-3
2014-10-29 07:49:31: (mod_fastcgi.c.3329) response not received, request sent: 1106 on socket: unix:/tmp/php.socket-3 for /some_site/index.php?q=admin/structure/features/create, closing connection

That was a bit odd, since the memory limit was set to an ample 256M and it died long before the time limit.

Just to be sure I tried using Apache instead of Lighttpd but no dice.

On the system log I see:

php-cgi[13015]: segfault at bf7c6fcc ip b738201a sp bf7c6fd0 error 6 in libpcre.so.3.13.1[b736d000+3f000]

With that clue I edited php.ini and shaved a couple of zeros off pcre.recursion_limit from its default of 100000. After restarting the server, everything worked fine.
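
For reference, the change is just one directive in php.ini (the exact value is a judgment call; anything far below the 100000 default makes a runaway regex hit PHP's own limit instead of blowing the C stack):

; php.ini
pcre.recursion_limit = 1000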

I shudder thinking of something that really needs a call stack a hundred thousand levels deep. But then again, I cut my teeth on a micro with 68 bytes of RAM.

Adventures in SMPS carnage I.

A while ago, while cleaning the trash pile, I thought it’d be nice to mod one of the many computer power supplies to have a variable output. So I picked the least crappy one, replaced the transformer with one with a better turns ratio to achieve a higher output voltage, and put a pot on the feedback loop.

At first it kind of worked, but with a lot of unstable operating points and weird modes. Then I realized that I was feeding the feedback from about 50K when the nominal impedance was near 10K (and there is also considerable input current at that node). A simple emitter follower took care of that; now only plain oscillations remain.

The operating point moves a lot, considering that I want the output to be adjustable between 5V and 50V and without a fixed load. The original compensation scheme was a plain integrator plus a zero; I can make things a little better by slowing it down a lot, but what’s the fun in that?

So, instead of blindly doing things, I set out to measure the loop response using Middlebrook’s method. I cobbled up a quick Python program with Gtk and GStreamer to generate the test signals with a computer sound card. Initially I expected to just sweep the frequency and measure some points manually on the scope, but there is a lot of induced 50Hz interference that, together with switching residue, makes that task impossible; I really need to perform synchronous detection in order to get a meaningful result. That means I’ll have to make room for some more quality coding time to get the scope samples in an automated fashion. The USB protocol is documented here ( http://elinux.org/Das_Oszi_Protocol#0x02_Read_sample_data ).
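
The synchronous detection part is the classic lock-in trick: multiply the captured waveform by a sine and a cosine at the injection frequency and average, which rejects the 50Hz pickup and the switching residue. A minimal sketch of what I have in mind (assuming the scope samples already sit in a NumPy array; the names and the two-channel Middlebrook ratio at the end are just illustrative):

import numpy as np

def lock_in(samples, fs, f_inj):
    # Return (amplitude, phase) of the component of `samples` at f_inj.
    t = np.arange(len(samples)) / fs
    ref_i = np.sin(2 * np.pi * f_inj * t)   # in-phase reference
    ref_q = np.cos(2 * np.pi * f_inj * t)   # quadrature reference
    i = np.mean(samples * ref_i)            # averaging acts as a narrow low-pass
    q = np.mean(samples * ref_q)
    return 2 * np.hypot(i, q), np.arctan2(q, i)

# Loop response at one injection frequency: the ratio of the signals
# measured on either side of the injection point.
# a_mag, a_ph = lock_in(channel_a, fs, f_inj)
# b_mag, b_ph = lock_in(channel_b, fs, f_inj)
# gain_db   = 20 * np.log10(a_mag / b_mag)
# phase_deg = np.degrees(a_ph - b_ph)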

The setup is a far cry from the ones depicted in the famous AN70 by Jim Williams. I used an H-field probe to rule out magnetics as an interference source. I expected the output filters and the transformer to be troublesome, but their effect at the injection point is negligible. On the other hand, long wires on the feedback path (even twisted) and the snap recovery diodes aren’t a good match.

 

The root of all evil.

I just love it when I forget to add ‘volatile’ and the compiler happily optimizes away a chunk of code.

After staring at the screen for a while trying to figure out why it didn’t work as expected, I went for a quick nap. When I got back I noticed several warnings about it that had been invisible to my eyes before.

Installing Flumotion on Debian Jessie.

Being on the edge sometimes hurts.

I grabbed https://github.com/inaes-tic/flumotion and https://github.com/inaes-tic/flumotion-ugly

just to be greeted with

AttributeError: 'EPollReactor' object has no attribute 'listenWith'

Instead of force-installing an older (<=11) python-twisted I fetched http://twistedmatrix.com/Releases/Twisted/11.1/Twisted-11.1.0.tar.bz2 and http://twistedmatrix.com/Releases/Web/11.1/TwistedWeb-11.1.0.tar.bz2.

Uncompressed them, then did python setup.py build && python setup.py install for each.

And the thing worked.

 

Walk.

Today I woke up almost as tired as when I went to bed yesterday. Most people are not working because of a multi-day holiday. Or something like that.

Like yesterday I (unsuccessfully) tried to figure out why WebVfx refuses to play nice with gstshm. So I went for a walk to clear my mind.

One of the nicest things about living in Berisso is that I have, really really close, almost virgin fields and beaches, an island, “normal” city stuff, and industrial/maritime landscapes. Today I went to Ensenada; there are many places that look like a still from movies such as Tank Girl or Mad Max. I toyed around the docks and abandoned ships, and also met a woman that kinda looked like Lori Petty does these days. Scary.



Google says it was a 12.5km trip. It took me a bit longer than that estimate, but I tried really hard to slow down and enjoy it instead of just walking.

Back at home I’m out of ideas and this is still broken. I guess it’s time to panic.

 

Using WebKitGTK as the UI for GStreamer applications.

Lately I’ve been thinking a lot about how I can make nice and easily customizable interfaces for video applications. My idea of ‘nice’ is kind of orthogonal to what most of my expected user base will want, and by ‘easily customizable’ I don’t mean ‘go edit this glade file / json stage / etc’.

Clutter and MX are great for making good-looking interfaces; like Gtk, they have something that resembles CSS to style stuff and can load a UI from an XML or JSON file. However, sooner or later they will need a mix of developer and designer. And unless you do something up front, the interface is tied to the backend process that does the heavy video work.

So, seeing all the good stuff we are doing with Caspa, the VideoEditor, WebVfx and our new magical synchronization framework, I wondered:

Why, instead of using Gtk, can’t I make my UI with HTML and all the fancy things that are already out there?

And while we are at it, I want process isolation, so if the UI crashes (or I want to launch more than one to see different UI styles side by side) the video processing does not stop. Of course, should I want tighter coupling, I can embed WebKit in my application and make a JavaScript bridge to avoid having to use something like WebSockets to interact.

One can always dream…

Then my muse appeared and commanded me to type. Thankfully, mine is not like the one the poor soul in “Blank Page” had.

So I type, and I type, and I type.

‘Till I made this: two GStreamer pipelines, outputting to auto audio and video sinks and also to a WebKit process. Buffers travel through shared memory; they are still copied more than I’d like, but that makes things a bit easier and helps decouple the processes, so if one stalls the others don’t care (and anyway, for most of the things I want to do I’ll need to make a few copies). Lucky me, I can throw beefier hardware at it and play with more interesting things.
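
To give an idea of the shape of it (just a sketch of the producer side, not the actual code; the socket path and caps are made up), each pipeline tees the video to a local preview sink and to a shmsink, and the WebKit process picks the frames up from the shared-memory socket with a shmsrc:

#!/usr/bin/env python
import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst, GLib

Gst.init(None)

# videotestsrc stands in for the real source; /tmp/ui-video is a made-up socket path.
producer = Gst.parse_launch(
    'videotestsrc is-live=true '
    '! video/x-raw,format=RGB,width=640,height=360,framerate=30/1 '
    '! tee name=t '
    't. ! queue ! videoconvert ! autovideosink '
    't. ! queue ! shmsink socket-path=/tmp/ui-video wait-for-connection=false')

# The consumer living in the WebKit process is roughly the mirror image:
#   shmsrc socket-path=/tmp/ui-video
#     ! video/x-raw,format=RGB,width=640,height=360,framerate=30/1 ! ...
producer.set_state(Gst.State.PLAYING)
GLib.MainLoop().run()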

I expect to release this in a couple of weeks when it’s more stable and usable; as of today it tends to crash if you stare at it a bit too hard.

“It’s an act of faith, baby”
Using WebKit to display video from a GStreamer application.

Something free to whoever knows who the singer is without using image search.

 

 

That thrill.

Lately I’ve been working with a lot of technologies that are a bit outside my comfort zone of hardware and low-level stuff: JavaScript, HTML-y things and Node.js. At first it was a tad difficult to wrap my head around all that asynchrony and things like hoisting and what the value of ‘this’ is here. And inheritance.

Then, all of a sudden, I had an epiphany and wrote a truly marvellous piece of software. Now I can use Backbone.io in the browser and on the server, with the same models and codebase on both without a single change. Models are automatically synchronized. On top of that there’s a Redis transport, so I can sync models between different Node instances in real time without hitting the storage (Mongo in this case). And the icing on the cake is that a Python compatibility module is about to come.

The bragging tax.

This is nothing new, but I don’t get people. I really don’t.

When a potential client approaches me for a quote, I normally give two estimates: one if I’m allowed to write something about it, and another one (substantially higher) if they refuse.

I never said a word about open sourcing it, naming names or anything like that.

Most of the time I explain, as politely as I can, that nobody is going to ‘steal’ their wonderful idea. And also that it is just a very simple variation on stuff found in textbooks, and the only original thing they did was to put a company logo on it.

It is such a shame that I honour my word in these cases.

Debugging USB 3.0 issues when dealing with USB 2.0 devices.

Some time ago we needed to connect as many USB cameras as possible to a single computer and capture full HD video and audio. Most of our systems, despite having a lot of connectors, really have just one host controller and a hub inside.

While the available bandwidth may be more than enough when using a compressed format, the number of isochronous transfers is rather limited. Our minimal use case called for three C920 cameras. On a normal system (everything behind a hub on one host controller) the best we could achieve was two at 1280×720@30fps with audio and a third without audio, and only one at 1920×1080@30fps with audio.

So, we needed to add more controllers. USB 2.0 add-on cards are a thing of the past, but luckily they were replaced by faster USB 3.0 ones. Most USB 3.0 controllers also feature a USB 2.0 controller and hub for older devices, but some (very rare) have a dedicated USB 2.0 controller for each port.

Given this, I went ahead and bought two cards, each of a different brand and with a different chipset.

One of them had a NEC PD720200. It worked like a charm, but sadly it only has one USB 2.0 controller.

The other sported a VIA VL800. This one has one USB 2.0 controller per port (this can be seen with lsusb -t). That lovely discovery didn’t last for long, as the controller crashed all the time; at best it would stop responding, but sometimes it locked my system up hard. The guys at VIA have a very interesting definition of meeting the specs. I spent a whole weekend patching kernels trying to make it behave. Now I have a quite expensive and sophisticated paperweight.

Testing procedures:

I ssh’d to the target machine and ran in several consoles:

– watch -n1 'dmesg | tail -n 16' to have a log should the system crash hard.

– watch -n1 'grep Alloc /sys/kernel/debug/usb/devices' to monitor bus usage.

– 3x gst-launch-1.0 v4l2src device=[camera] ! queue ! $video_caps ! fakesink sync=true alsasrc device=[camera soundcard] ! queue ! fakesink sync=true to capture from each device. $video_caps is something like "image/jpeg,width=1920,height=1080,framerate=30/1" but I tried a couple more.

It is really wonderful how much computing power we have nowadays. The first time I compiled a kernel it took a good four hours. On my current machine (not quite new…) it takes about forty minutes from a clean tree and around ten from an already compiled one.


Today I’m coding because I’m depressed.

Today I’m coding: a module for the MLT Framework that lets you link several melt instances using the same protocol as GStreamer’s shmsink / shmsrc. A plugin to feed material from melt into a gst pipeline.

I’ve discovered that I no longer have the part of the brain needed to understand multithreading. I’ve discovered that every video framework / library harbours the darkest and most horrible secrets inside (although some don’t try very hard to hide them). Luckily hardware is cheap; I don’t know why I bother making something zero-copy if you are going to copy it needlessly four or five times anyway.

On GStreamer performance with multiple sources.

I’ve made a couple of experiments with Tetra. Right now the code that manages disconnection of live sources (say, someone pulls the cable and walks away with one of our cameras) kind of works; it certainly does on my system, but with different sets of libraries sometimes the main gst pipeline just hangs there, and it really bothers me that I’m unable to get it right.

So I decided to really split it into a core that does the mixing (either manual or automatic) and separate pipelines that feed it. Previously I had success using the inter elements (with interaudiosrc hacked so its latency is acceptable) to mix video from a file, running in another pipeline, with live content.

Using the inter elements and a dedicated pipeline for each camera worked fine: the camera pipeline could die or disappear and the mixing pipeline churned along happily. The only downside is that it puts some requirements on the audio and video formats.
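
Roughly, the arrangement looks like this (a sketch only; the channel names, caps and videomixer usage are mine for illustration, not the actual Tetra code): each camera lives in its own pipeline ending in an intervideosink, and the mixing core only pulls from intervideosrc elements, so a feeder can die or disappear without taking the mixer with it.

import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst, GLib

Gst.init(None)

# One feeder pipeline per camera. If it dies, the matching intervideosrc in the
# mixing pipeline keeps producing frames on its own, so the mixer never stalls.
feeder = Gst.parse_launch(
    'v4l2src device=/dev/video0 '
    '! image/jpeg,width=1280,height=720,framerate=30/1 ! jpegdec ! videoconvert '
    '! intervideosink channel=cam0')

# The mixing core only knows about the inter channels, never about the cameras.
# cam1 has no feeder here, so it simply shows up as a blank input.
mixer = Gst.parse_launch(
    'videomixer name=mix sink_1::xpos=1280 ! videoconvert ! autovideosink '
    'intervideosrc channel=cam0 ! video/x-raw,width=1280,height=720,framerate=30/1 ! queue ! mix. '
    'intervideosrc channel=cam1 ! video/x-raw,width=1280,height=720,framerate=30/1 ! queue ! mix.')

for p in (feeder, mixer):
    p.set_state(Gst.State.PLAYING)

GLib.MainLoop().run()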

Something that I wasn’t expecting was that CPU utilization dropped: before, I had two threads using 100% and 30% of CPU time (and many others below 10%) and both cores at 80% load on average. With separate pipelines linked with inter elements I had two threads, one at 55% and a couple of others near 10%, and both cores a tad below 70%.

Using shmsrc / shmsink yielded similar performance results, but as a downside it behaved just like the original code when sources were disconnected, so for now I’m not considering them for ingesting video. On the other hand, latency was imperceptible, as expected.