How to make an Amazon Echo Dot work with your old HiFi system (with a little help from a Raspberry Pi)

So I imagine many people connect their Echo to a modern Bluetooth speaker which achieves the convenience of always being on and the eco-friendliness of not using much energy when idle. Well, I haven’t got one of those. I have an old JVC stereo which you have to power on manually. Luckily I also have a Raspberry Pi which, as you will know, is able to solve pretty much any real world problem.

1. Convenience

I do not want to have to press the power button on the HiFi to hear the Echo.

So you’ve plugged the Echo into your hi-fi (and not even bothered to remove the AirPlay device you never wanted but bought because you couldn’t afford Sonos). Now what?

You need to use the awesome power of the Raspberry Pi to… control an infrared LED. I admit that controlling a single LED seems a waste of the Pi’s abilities, but at least it’s cheaper than forking out for a Logitech Harmony hub.

Don’t have an infrared LED? No problem. Just desolder one from a remote control.

Then follow some instructions on how to get your Raspberry Pi transmitting IR signals.

If you can’t find the Lirc codes for your remote then you can use Lirc to record the signals from your remote. Of course you’ll need an IR receiver to do that. Don’t have one? Just desolder one from a set-top box.
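Once Lirc knows about your remote, firing a code is a single irsend call, which is easy to wrap from Java. A minimal sketch, assuming a remote named jvc_hifi in lircd.conf and standard key names (both are my assumptions, not something Lirc dictates):

```java
import java.io.IOException;
import java.util.Arrays;
import java.util.List;

// Minimal wrapper around LIRC's irsend command-line tool.
public class IrBlaster {

    private final String remoteName;

    public IrBlaster(String remoteName) {
        this.remoteName = remoteName;
    }

    // Builds the irsend command, e.g. ["irsend", "SEND_ONCE", "jvc_hifi", "KEY_POWER"].
    public List<String> command(String key) {
        return Arrays.asList("irsend", "SEND_ONCE", remoteName, key);
    }

    // Fires the IR code and waits for irsend to finish.
    public void send(String key) throws IOException, InterruptedException {
        new ProcessBuilder(command(key)).inheritIO().start().waitFor();
    }

    public static void main(String[] args) throws Exception {
        new IrBlaster("jvc_hifi").send("KEY_POWER");
    }
}
```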

Eventually, you’ll end up with something that looks like this:

I should stress that electronics is not my forté.

I followed Amazon’s tutorial on how to create a Smart Home Skill. If you want to do it in Java and need some inspiration then look in my GitHub repository. I’ve even started creating the domain model which doesn’t seem to exist in the Alexa skill jar. ProTip™: When using Java in an AWS Lambda start an instance with a lot of memory so it loads faster.
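For a flavour of what the skill boils down to: look at the directive name, trigger the IR command, return a confirmation. A stripped-down sketch using plain Maps instead of the real Alexa SDK types (the directive and confirmation names follow the older Smart Home API, and the structure here is illustrative rather than the official one):

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of a Smart Home skill's decision logic, independent of the AWS SDK.
public class HifiSkill {

    // Maps an incoming directive name to the IR key the Pi should send.
    public static String irCommandFor(String directiveName) {
        switch (directiveName) {
            case "TurnOn":
            case "TurnOff":
                // A single POWER press toggles the JVC hi-fi either way.
                return "KEY_POWER";
            default:
                throw new IllegalArgumentException("Unsupported directive: " + directiveName);
        }
    }

    // Builds a minimal confirmation response once the Pi has been called.
    public static Map<String, Object> response(String directiveName) {
        Map<String, Object> header = new HashMap<>();
        header.put("name", directiveName.equals("TurnOn") ? "TurnOnConfirmation" : "TurnOffConfirmation");
        Map<String, Object> result = new HashMap<>();
        result.put("header", header);
        result.put("payload", new HashMap<>());
        return result;
    }
}
```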

I did have a problem enabling the skill from the Alexa app on my phone. In the end, I got it to work by using Firefox and logging into my Amazon account first (it took quite a few attempts though).

The last piece of the puzzle is to “web-enable” the IR functions of your Raspberry Pi so that the AWS Lambda function can call them. To get going quickly I used a Python project from this guy. You’ll need to create a hole in your home network by mapping a port to the Pi. I will deal with this security issue another day (I don’t want China turning my home electronics on and off).
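If you’d rather stay in Java than run the Python project, the JDK’s built-in HttpServer is enough to put an HTTP face on irsend. A sketch, with a URL scheme and remote name of my own invention:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.IOException;
import java.io.OutputStream;
import java.net.InetSocketAddress;

// Tiny HTTP facade over irsend so that the AWS Lambda can reach the Pi.
public class IrWebService {

    // e.g. "/ir/KEY_POWER" -> "KEY_POWER"
    static String keyFromPath(String path) {
        return path.substring("/ir/".length());
    }

    public static void main(String[] args) throws IOException {
        HttpServer server = HttpServer.create(new InetSocketAddress(8000), 0);
        // GET /ir/KEY_POWER fires the POWER code on the hi-fi remote.
        server.createContext("/ir/", exchange -> {
            String key = keyFromPath(exchange.getRequestURI().getPath());
            new ProcessBuilder("irsend", "SEND_ONCE", "jvc_hifi", key).start();
            byte[] body = ("sent " + key).getBytes();
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream out = exchange.getResponseBody()) {
                out.write(body);
            }
        });
        server.start();
    }
}
```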

At this point, I can now say “Alexa, turn on the kitchen hifi”. Convenience achieved.

There’s a bit of a delay switching on the hi-fi, as the AWS Lambda takes time to start.

I have also considered attaching a PIR sensor to the Pi so that it turns on the hi-fi when you walk in the room.

2. Eco-friendly

I often forget to turn off electronics when I go to bed – I don’t want to leave the HiFi on all night.

Now that all the hard work has been done this part is pretty simple. The simplest approach I could think of was to use systemd timers to schedule a curl call late at night. One minor issue: I wouldn’t know whether the HiFi was currently on or off. Luckily, sending the ‘AUX INPUT’ command over infrared switches on the HiFi. So I can guarantee the HiFi will be off by first sending this command, followed by the ‘POWER’ command a few seconds later. This means the HiFi will briefly be turned on even if it was already off, but that’s acceptable.
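The timer/service pair might look something like this (unit names, times, host and URL are assumptions based on my setup, not a drop-in config):

```ini
# /etc/systemd/system/hifi-off.service
[Unit]
Description=Make sure the hi-fi is off

[Service]
Type=oneshot
# AUX guarantees the hi-fi is on, then POWER turns it off.
ExecStart=/bin/sh -c 'curl -s http://raspberrypi:8000/ir/KEY_AUX; sleep 5; curl -s http://raspberrypi:8000/ir/KEY_POWER'

# /etc/systemd/system/hifi-off.timer
[Unit]
Description=Turn the hi-fi off at bedtime

[Timer]
OnCalendar=*-*-* 23:30:00

[Install]
WantedBy=timers.target
```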

Another way might be to set up an Alexa “scene” so that I can turn off lots of electronics when I go to bed.

 

For my next project, I need to buy another Echo and another Raspberry Pi so I can control my TV. If I can control my Roku then hopefully I can make Alexa play a movie from any of my services: Netflix, Amazon, Now TV or Plex.

“No SQL” with Liquibase and jOOQ

Not NoSQL, but accessing a conventional database with “no SQL”. I don’t hate SQL but I do avoid writing it if possible. Java and SQL don’t go well together, and large volumes of SQL can become a maintenance nightmare if you’re changing your domain model around.

Yes, I know. The model should hardly change as it should have been perfectly architected from the very beginning of the project… don’t get me started.

I have always used Hibernate/iBatis in conjunction with Liquibase but wanted to try something else. Even the infallible Hibernate can become a little tedious sometimes. My friend Google led me to jOOQ. It’s a fully-featured database framework with all the bells and whistles you can think of. Any database architect will appreciate its SQL-centric approach.

I gave its code generation feature a whirl and was impressed. With Liquibase versioning the database and jOOQ reverse engineering it to generate Java objects, I felt assured I had a solid, fault-free build (as far as mapping Java objects to database tables goes). One minor niggle: if you’re using Maven, the jooq-codegen-maven plugin runs at the generate-sources phase whereas the liquibase-maven-plugin normally runs at process-resources. The problem is that we want to manipulate the database before reverse engineering it.

This is easily fixed by changing the liquibase-maven-plugin phase to generate-sources and then adding a concise [cough] bit of Maven to copy the required Liquibase files into the target directory (earlier than it would normally):
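Sketched out, the relevant pom.xml additions might look like this (plugin versions, paths and IDs are placeholders; within generate-sources, Maven runs plugins in declaration order, so the Liquibase plugin must appear before jooq-codegen):

```xml
<!-- Copy the Liquibase changelog into target/ earlier than the usual
     process-resources phase, so it exists when Liquibase runs. -->
<plugin>
  <artifactId>maven-resources-plugin</artifactId>
  <executions>
    <execution>
      <id>copy-liquibase-files</id>
      <phase>initialize</phase>
      <goals><goal>copy-resources</goal></goals>
      <configuration>
        <outputDirectory>${project.build.directory}/liquibase</outputDirectory>
        <resources>
          <resource><directory>src/main/resources/liquibase</directory></resource>
        </resources>
      </configuration>
    </execution>
  </executions>
</plugin>

<!-- Run Liquibase at generate-sources so the schema is in place
     before jooq-codegen reverse engineers it. -->
<plugin>
  <groupId>org.liquibase</groupId>
  <artifactId>liquibase-maven-plugin</artifactId>
  <executions>
    <execution>
      <phase>generate-sources</phase>
      <goals><goal>update</goal></goals>
    </execution>
  </executions>
</plugin>
```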

 

How to create your own (gas) smart meter

Smart meters are all the rage these days so I thought I would have some fun turning my gas meter into one. As my gas bill is three times as much as my electricity bill this might even have some use (haven’t found much use for my electricity smart meter yet).

Firstly, set up a cheap webcam to look at the meter. Every expense was spared with a £30 Tenvis camera:

The quality isn’t that high but it should be good enough for some basic text recognition. Note that I had to carefully position the camera to reduce glare from the IR LEDs reflecting off the meter’s glass. I also increased the contrast on the camera as this would make feature extraction more reliable.

Now all I had to do was run the image through some OCR software and read the characters. Simple, right? Well… sort of. Although this is a controlled environment, I don’t know of any software that will reliably extract text from this image. There were a number of OCR APIs at my disposal (Tesseract, Java OCR) but I first had to simplify this complex scene. I did have the advantage that the meter is under the stairs, so the image would not be affected by any lighting changes.

How to extract just the digits from the image? I could have just manually defined a boundary area on the image around the digits. But this didn’t seem particularly robust (if the camera moved position, say) and I was keen to use some computer vision cleverness.

I thought it was most probably a good idea to deskew the image before running any feature extraction algorithm. An ImageDeskew class (found in Tess4J) provided some Hough transformation goodness. This does rely on your image having some distinct horizontal/vertical lines in it.

deskew
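The rotation half of deskewing needs nothing beyond Java2D; the angle itself would come from ImageDeskew’s skew measurement. A sketch (the Tess4J call is left out to keep this self-contained):

```java
import java.awt.Graphics2D;
import java.awt.geom.AffineTransform;
import java.awt.image.BufferedImage;

// Rotates an image about its centre. The skew angle in degrees would come
// from Tess4J's ImageDeskew (not shown here to keep this self-contained).
public class Deskew {

    public static BufferedImage rotate(BufferedImage src, double degrees) {
        // Fall back to ARGB for custom image types the constructor rejects.
        int type = src.getType() == BufferedImage.TYPE_CUSTOM
                ? BufferedImage.TYPE_INT_ARGB : src.getType();
        BufferedImage dst = new BufferedImage(src.getWidth(), src.getHeight(), type);
        Graphics2D g = dst.createGraphics();
        g.setTransform(AffineTransform.getRotateInstance(
                Math.toRadians(degrees), src.getWidth() / 2.0, src.getHeight() / 2.0));
        g.drawImage(src, 0, 0, null);
        g.dispose();
        return dst;
    }
}
```

To straighten an image, rotate by the negative of the measured skew angle.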

Moving on to feature extraction, I came across the BoofCV library, which had some good examples and got me thinking about the best way to extract the characters from the image. I chose to use its binary segmentation capability to try and find areas of interest in the image – http://boofcv.org/index.php?title=Applet_Binary_Segmentation

I ran the binary image extractor using code taken from this example.

contours

The white line represents a bounding region and the red lines internal bounds inside the outer one. From this image you can see that 2 of the characters are part of the outer boundary and 2 are part of the inner (plus some other erroneous regions). But these were not the only contours found in the image so how did I classify this image to be the one of interest? Luckily, the other detected contours were completely implausible and had far too few or far too many internal boundaries.

So taking the maximum area of the outer boundary we can extract a pretty good image of just the digits.

bounding

Not perfect but hopefully good enough for Java OCR.
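Stripped of BoofCV’s cleverness, the essence of that step is just thresholding and taking a bounding box of the dark pixels. A crude plain-Java sketch (the threshold value is an arbitrary choice of mine):

```java
import java.awt.Rectangle;
import java.awt.image.BufferedImage;

// Finds the bounding box of "dark" pixels - a crude stand-in for
// BoofCV's contour detection, good enough to illustrate the idea.
public class DigitRegion {

    public static Rectangle darkBounds(BufferedImage img, int threshold) {
        int minX = img.getWidth(), minY = img.getHeight(), maxX = -1, maxY = -1;
        for (int y = 0; y < img.getHeight(); y++) {
            for (int x = 0; x < img.getWidth(); x++) {
                int rgb = img.getRGB(x, y);
                // Average the channels to get a rough brightness value.
                int brightness = (((rgb >> 16) & 0xFF) + ((rgb >> 8) & 0xFF) + (rgb & 0xFF)) / 3;
                if (brightness < threshold) {
                    minX = Math.min(minX, x);
                    minY = Math.min(minY, y);
                    maxX = Math.max(maxX, x);
                    maxY = Math.max(maxY, y);
                }
            }
        }
        if (maxX < 0) return null; // no dark pixels found
        return new Rectangle(minX, minY, maxX - minX + 1, maxY - minY + 1);
    }
}
```

The crop itself is then just `img.getSubimage(r.x, r.y, r.width, r.height)`.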

I assumed an OCR extractor would want black text on a white background so I created a negative of the image:

negative
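The negative itself is a one-liner per pixel – flip the RGB bits:

```java
import java.awt.image.BufferedImage;

// Produces a negative by flipping the RGB bits of every pixel.
public class Negative {

    public static BufferedImage invert(BufferedImage src) {
        BufferedImage dst = new BufferedImage(src.getWidth(), src.getHeight(), BufferedImage.TYPE_INT_RGB);
        for (int y = 0; y < src.getHeight(); y++) {
            for (int x = 0; x < src.getWidth(); x++) {
                // XOR with 0xFFFFFF flips R, G and B; black becomes white and vice versa.
                dst.setRGB(x, y, src.getRGB(x, y) ^ 0x00FFFFFF);
            }
        }
        return dst;
    }
}
```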

Then it was just a matter of running the OCRScanner’s scan method (with the necessary training images added of course). Java OCR does have a character extractor but I created my training images manually with IrfanView:

0a
1b
2b
3a
4a
5a
6a
7e
8b
9b

These steps might have come across as being pretty straightforward but it did, in truth, require a bit of code tweaking to make it work for my particular setup (especially to Java OCR). In hindsight, manually specifying the region of the image where the digits are would most probably be more reliable/straightforward.

And what is the current reading? I created a RESTful service to get the webcam image, run the OCR and then provide a JSON result. So assuming my service is able to connect to the web camera at my house and that OCR recognition is working correctly, you should get an image and corresponding reading below:


It’s a kinda magic!

Now all I need to do is run the stored values through a graph drawing framework (D3!) and see if I can find anything interesting. Comparing it to inside/outside air temperature might be a good start.

Interactive psychological testing with Google Web Toolkit

I recently helped a friend set up an online psychology test. Not being at all familiar with psychological experiments I was directed to an existing software suite for inspiration – PEBL Test Battery.

Any of these tests could be converted into an online version with some nifty JavaScript or Flash. But thanks to Google Web Toolkit and HTML5’s canvas feature it’s even easier to create interactive tests.

Although GWT makes it so easy to write web applications it doesn’t help you follow any design patterns (e.g. MVC or MVP). I used GWTP to implement a Model-View-Presenter front-end (following an MVP pattern with this framework is much easier and it’s well documented). Although GWT now has an API to handle HTML5 canvas it doesn’t seem to be documented anywhere (well I couldn’t find anything other than Javadoc). So I used Vaadin’s GWTGraphics library which is well documented. I think it’s also more cross-browser friendly (or, to be more precise, it fights all the different versions of Internet Explorer – Microsoft like to keep themselves and everyone else busy).

Trail-making task demo here »

JVM Future Memory

Get the heap space to within an inch of its life and the JVM goes into “Quantum” mode.


As I understand it, the JVM will detect a PermGen dump before it even happens! This will be available in Java v5000.