While I was generally noodling about this, I came across the Philips Ambilight system, which projects coloured light onto the wall behind a television to match the on-screen content. This definitely does not interfere with the viewer's experience but, equally, it has no interactive component and conveys no useful information.
It occurred to me that a valid second-screen device would not be 'shouty': no high-resolution, information-packed display that requires the viewer to look away from the first screen. Equally, it should carry more information than just 'mood lighting'.
Persistence of vision is a well-known phenomenon exploited by a variety of display technologies (including the cathode ray tube) to make the eye perceive an image which is actually generated by, say, a row of LEDs that change periodically while being moved along a rail or in a circle. These displays are a staple of the hacker community and are available commercially (SpokePOV, for example). They do suffer from one major drawback: they do not scale very well. To maintain the illusion of a flicker-free image, the POV device needs to redraw at least 10-20 times a second. Hmm, this could get very dangerous if you want a large display. For instance, the Stupidly Huge POV Display is 2m across and has bars that move at 140mph. Not really the thing for a quiet evening in front of EastEnders.
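A quick back-of-envelope check shows where that scary number comes from (my arithmetic, not figures from the original build):

```ruby
# Tip speed of a rotating POV bar: the whole circumference must be
# swept once per redraw to keep the image flicker-free.

MPH_PER_M_PER_S = 2.23694  # metres/second to miles/hour

def tip_speed_mph(diameter_m, redraws_per_sec)
  circumference = Math::PI * diameter_m
  circumference * redraws_per_sec * MPH_PER_M_PER_S
end

# A 2m rotating display at a modest 10 redraws a second:
puts tip_speed_mph(2.0, 10).round  # => 141, i.e. ~140mph at the bar tips
```

So even the bottom end of the flicker-free range puts a 2m bar's tips at motorway speeds.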
It was then that it occurred to me that if we could make the image persist for longer, the speed of movement could be much slower, and thus the device could be scaled up nicely. And the mechanism for longer persistence? Luminous (glow in the dark) paint. Thus was Belshazzar born.
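The scaling argument is easy to put numbers on. These persistence figures are illustrative assumptions, not measurements:

```ruby
# If the image persists for seconds (glowing paint) rather than the
# ~0.1s afterimage of ordinary persistence of vision, the scan head
# only needs to repaint the display once per persistence interval.

def required_speed_m_per_s(display_width_m, persistence_s)
  display_width_m / persistence_s
end

pov_speed  = required_speed_m_per_s(2.0, 0.1)   # classic POV
glow_speed = required_speed_m_per_s(2.0, 10.0)  # luminous paint
puts "POV: #{pov_speed} m/s, glow: #{glow_speed} m/s"
# => POV: 20.0 m/s, glow: 0.2 m/s
```

A hundred-fold longer persistence means a hundred-fold slower carriage, which is what makes a large, quiet, living-room-safe display plausible.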
Belshazzar is a character from the Old Testament, a Babylonian king who rather imprudently disses the Almighty by drinking from vessels taken from the Temple. Oh dear. During his feast a disembodied hand appears and writes in words of fire upon the wall, predicting his downfall.
OK, so we have a concept, and a neat biblical reference to name it by, how does it work?
I wanted Belshazzar to scale up to very large display sizes, so I needed a mechanical layout that wasn't going to be particularly limited in any direction.
The first component of the system was a rail which would carry the actual POV array.
I settled on using 15mm thick-walled aluminium tubing for this, as it was relatively inexpensive and could be reused as scaffolding for a number of other projects. On the tubing/rail rides a carriage, which can roll backwards and forwards, driven by a stepper motor. The wheels for the carriage were machined on the lathe from 2mm acetal, and I put V grooves in them so that they would centre on the round tubing section.
In this way, the display length was only limited by the length of a single rail and the luminous background.
Not all luminous paint is created equal. The most common paint uses zinc sulphide, usually doped with a small amount of copper to create a green glow. Lately, much brighter and longer-lasting paints have been created based on strontium aluminate (no, not the radioactive stuff). So far, in tests within the operating regime of the Belshazzar display, I've not seen a huge difference in the glow from either technology. I assume this is because the short-term glow (say, 10-20 seconds) from both paints is approximately the same, and it is only over longer periods (tens of minutes) that the strontium compounds start to, if you will pardon the expression, shine.
Again, the intent was to make the system as scalable as possible, so to drive the LEDs I chose the Texas Instruments TLC5940, a 16-channel PWM chip which can be daisy-chained to drive several hundred LEDs. So, although the current version only drives 16 LEDs, scaling up is a matter of wiring. For driving the TLC, I turned, unsurprisingly, to an Arduino. The stepper motor was reclaimed from an old printer and driven by a SparkFun EasyDriver.
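The reason daisy-chaining scales so cleanly is that the TLC5940 just takes one long serial bit stream: 12 bits of grayscale per channel, and more chips simply mean a longer stream. A sketch of packing channel values into that stream (the channel ordering here is illustrative; check the datasheet for which channel the real chip clocks in first):

```ruby
# Pack an array of 12-bit grayscale values (0..4095) into the byte
# stream shifted out to a chain of TLC5940s: two channels per 3 bytes.

def pack_grayscale(values)
  values.each_slice(2).flat_map do |a, b|
    b ||= 0
    [a >> 4,                        # high 8 bits of first value
     ((a & 0x0F) << 4) | (b >> 8),  # low nibble of first + high nibble of second
     b & 0xFF]                      # low 8 bits of second value
  end
end

frame = pack_grayscale(Array.new(16) { 4095 })  # all 16 channels full on
puts frame.length  # => 24 bytes (16 channels x 12 bits / 8)
```

Doubling the LED count just doubles the length of `frame`; nothing else in the sender changes.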
The first problem in designing the software chain for Belshazzar was how to actually draw images with the rig. Once again, I didn't want to be limited in image size by the available memory on the Arduino, so I designed a simple protocol to buffer data to the rig and have it run under the control of a program on the Macintosh.
For this, as for most things, I turned to Ruby. After a few different attempts, I ended up using RMagick, the Ruby gem for ImageMagick, to ingest arbitrary JPEGs, then posterize and scale them down to 16 rows high. Then it is just a matter of sending the data to the Arduino column-wise.
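To show the shape of the data without dragging in RMagick, here is a pure-Ruby sketch of the descale-and-columnise step. The bit assignment and function names are mine, not the original protocol; `pixels` is a greyscale image as rows of 0-255 values:

```ruby
ROWS = 16  # the display is 16 LEDs high

# Nearest-neighbour downscale to ROWS rows, then threshold each pixel
# to on/off (a crude stand-in for RMagick's posterize-and-resize).
def descale(pixels, threshold = 128)
  src_rows = pixels.length
  (0...ROWS).map do |r|
    src = pixels[r * src_rows / ROWS]
    src.map { |p| p >= threshold ? 1 : 0 }
  end
end

# The Arduino is fed column-wise: one 16-bit word per column,
# bit 0 = top LED (an assumption for illustration).
def columns(bitmap)
  width = bitmap.first.length
  (0...width).map do |c|
    bitmap.each_with_index.sum { |row, r| row[c] << r }
  end
end
```

The host then just walks the array from `columns` and sends one word per step of the carriage.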
The software for the Arduino was pretty straightforward. It implements a small set of commands that can be issued by the host program. The only trick found to be necessary was a simple buffering mechanism: commands are acknowledged immediately upon receipt, so that the next command is always available in the serial buffer when the current one completes. Without this buffering, the latency across the serial link was enough to create a tiny hiccup in the mechanical running of the system, which caused unwanted levels of vibration.
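A toy model of that ack-on-receipt trick (class and command names here are illustrative, not from the real firmware):

```ruby
# The head acknowledges each command the moment it arrives, so the host
# can put the next command on the wire while the current one executes --
# the serial buffer is never empty when a command completes.

class DisplayHead
  attr_reader :log

  def initialize
    @buffer = []
    @log = []
  end

  def receive(cmd)   # "serial interrupt": buffer and ack immediately
    @buffer << cmd
    @log << [:ack, cmd]
  end

  def run_one        # main loop: execute the oldest buffered command
    @log << [:exec, @buffer.shift] unless @buffer.empty?
  end
end

head = DisplayHead.new
head.receive("COL 0xFFFF")  # host sends the next command as soon as
head.receive("COL 0x8001")  # the previous ack arrives...
head.run_one                # ...so execution never waits on the link
head.run_one
```

The log ends up as ack, ack, exec, exec: the second command was already buffered before the first finished, which is exactly what smooths out the mechanical hiccup.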
The commands that can be issued to the display head are as follows: