Solved: Control Raspberry Pi media center (Kodi) with Roku TV’s remote over HDMI CEC

TL;DR if you already figured out the documented stuff: Enable 2-way CEC (keypresses from TV remote to attached device) in the Roku secret menu: Quickly (within ~5 seconds) press Home 5 times, rewind, down, fast forward, down, rewind, then select “Enable CEC Remote Control”.

The options for viewing local content on my new Roku TV are somewhat lacking due to the limited set of file formats internally supported1. But long story short, Santa just brought me a Raspberry Pi 5, and setting it up as a lightweight home media server scratches several itches. One thing I’d like to avoid, though, is the neverending proliferation of device-specific remote controls and the training that goes with them, especially for the wife/kids/parents/guests. The TV already has an easy-to-use remote supporting the basic “10 foot interface” (directional arrows to scroll through media titles, OK/enter and playback controls)… wouldn’t it be great if you could just flip to the RPI media center’s input and then use that same remote, already in your hand, on its interface?

Long story short, yes, on many modern TVs you can totally hook up a Raspberry Pi running Kodi or a similar home theater interface over HDMI and, through the magic of HDMI-CEC, have the TV forward the remote keypresses over the HDMI cable. However, some (most? all?) Roku devices don’t enable this out-of-the-box and you need to enter a secret menu to fully enable it.

Setup steps:

Roku TV side:

  • First off, make sure HDMI-CEC is enabled in the first place. On my TV this was under Settings -> System -> Control other devices (CEC). The options in this menu are pretty basic, but it will show a list of detected devices by HDMI port, so you can at least rule out basic issues (bad/unsupported cable, not enabled Pi side, etc.) and see if the device is being detected here.
  • Confirm your Pi / playback device is plugged into an HDMI port that supports CEC; on some devices, particularly older Rokus, there may be multiple HDMI inputs but only some of them support CEC (check for special markings on the port labeling). Note that this is unrelated to ARC/eARC; you do not need to be plugged into the ARC port for it to work.
  • The secret ingredient! By default, many Roku devices only enable very basic, mainly 1-way (attached device to TV) communication, enough to e.g. switch to the correct input or control volume from the attached device. To enable 2-way communication of remote-control button presses, enter the Roku “TV Secret Screen”: Navigate to the home screen, then quickly (within ~5 seconds) press on the remote: Home 5 times, rewind, down, fast forward, down, rewind. This presents a secret menu with some decidedly developer-y stuff and, mixed in among it, the option to “Enable CEC Remote Control”. Select this option. When enabled, the text will change to “Disable CEC Remote Control”.
“Welcome to warp zone”

Raspberry Pi side:

  • On Raspberry Pi 5, there are two (micro-)HDMI ports, but only the leftmost one (HDMI0, nearest the USB-C power input) supports CEC. Be sure to plug into this one. You may need to reboot the Pi when changing outputs to ensure output is enabled to the correct port.
  • CEC is not natively supported on many desktop video cards or SBCs, likely including at least some older Raspberry Pi flavors. For these devices, there are workarounds such as the Pulse Eight USB adapter (preferred solution for Kodi/LibreELEC) or homeassistant-addon-pi-cec for HomeAssistant (software solution, sends the CEC equivalent data via TCP/IP instead).
  • Make sure libCEC is installed and any related settings are enabled in your preferred media software. I’ve only tested this with the specific combination of Kodi on the LibreELEC distro, but for the RPI5 distribution as of 12/27/2023 all the CEC stuff is basically set up and working out-of-the-box. For me, once the basic CEC link was functional (enabled in the TV’s non-secret menu), I was getting some notification sounds and a brief popup on the top-right corner of the screen referencing “Pulse Eight”, even when using the RPI5’s native CEC support.
    • For my particular Roku TV/remote combo, all the (supported) buttons were recognized and correctly mapped already, but YMMV for more obscure TVs/remotes. To this day, the official guidance for dealing with remotes with unrecognized button mappings is to enable a debug log, press all the keys in a specific order, and manually chew through 50-80KB of diagnostic log to find out what event codes were received, then manually poke these into an XML file somewhere. Reading this logfile natively from the Kodi interface is a… less-than-fun experience.
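For reference, the remapping itself ends up in a keymap XML file dropped into Kodi’s userdata/keymaps/ folder. A minimal sketch of what that looks like (the button names and actions below are illustrative placeholders, not ones from my setup; check Kodi’s keymap documentation for the exact names your debug log reveals):

```xml
<!-- userdata/keymaps/remote.xml: illustrative example only -->
<keymap>
  <global>
    <remote>
      <!-- map the remote's back button to Kodi's Back action -->
      <back>Back</back>
      <!-- example: make a spare button jump to the home screen -->
      <stop>ActivateWindow(Home)</stop>
    </remote>
  </global>
</keymap>
```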

With this, the Roku TV’s remote smoothly navigates the Kodi UI. Not all remote buttons are passed through, but the most important ones at least are covered.

What works:

  • Directional (up/down/left/right) and “OK” buttons
  • Back button
  • Playback control buttons (play/pause/rewind/fastforward)

The remaining buttons, including reserved (“*”, Home, Power), volume and shortcut buttons are still handled by the TV itself and not passed through over CEC (at least, I could not see any unmapped codes by eyeballing Kodi’s debug log). It probably goes without saying that e.g. the Netflix button still just takes you to Netflix and can’t be repurposed for a custom Kodi function. I’d love to be proven wrong about this though.

There are many distinct devices referred to as “Roku TVs”; this was tested on the Best Buy exclusive “Class Select Series 4k” (50R4A5R), but the same secret menu is reported to be the missing ingredient for several others, including TCL and HiSense branded units.

Huge hat tip to this reddit thread for the hidden menu, which references an even older thread with the solution, neither of which came up readily with the usual search terms. Posting here as a note to my future self and in the vain hope it rises above the AI-generated noise in future searches.

  1. See . In particular, older formats like MPEG2, let alone the usual mix of proprietary AVI/MOV/etc., are not supported. The developer API consists of a scripting language that mostly deals with skinning and finding URLs, so 3rd-party player apps (“channels”) are likewise limited to these formats – you can’t just port VLC to it and call it a day. ↩︎

MAPFRE Data Breach, or, “What’s a MAPFRE and why do they have my information?”

So, I had this brilliant idea for a legitimate passive income opportunity: Start a company with an online presence and terrible information security. Buy personal information in bulk, store copies of it on our servers, then sit back as eeevil hackers steal it. Repeatedly. Each time it happens, offer the affected customers 12 months of free “cancel anytime” credit monitoring and identity theft protection. This new company is totally legit and sells NFTs or personalized coffee mugs or something, and its business model totally does not include selling “qualified leads” for free trial offers at credit monitoring and identity theft protection companies…

* * *

What? Sorry, wrong post. Actually, today we were going to talk about this letter my wife and I each got the other day from an insurance company I’ve barely heard of, offering 12 months of free credit monitoring and identity theft remediation services1. The letter states that an “unknown party” used the company’s online quoting platform and existing information to filch the recipient’s driver’s license number, and “may” have obtained the VIN and other details about vehicles you own. The letter goes out of its way to specify that the information used to access the additional information was “already in the unknown party’s possession”.

Especially now that driver’s licenses in the US are national IDs for things like air travel, handing out driver’s license numbers to criminals – who already have easy access to multiple sources of stolen and just-plain-sold-on-the-open-market personal information to combine on millions of citizens – is kind of a big deal. This information makes for an easy pivot from basic public information to full-blown identity theft, with US driver’s licenses valued at roughly $20–$500 a pop on the criminal market (2022–2023), and easy to parlay from a mediocre fake into a really good one.

Needless to say, we both received an identical letter, and neither of us are MAPFRE customers, nor have we engaged with this company at all via requesting quotes or similar. So what gives?

As far as I can tell from public sources, here’s what happened.

In the US, the price you pay for auto insurance depends on a number of personal factors, including your age, gender, marital status, creditworthiness(!) (in most states), and of course, driving history. MAPFRE, a multinational insurance company, does a brisk business in the US state of Massachusetts (it acquired MA-based Commerce Insurance Group in 2007). At some point it added a “low-friction” online quoting tool to its website: simply enter basic publicly-available information such as someone’s name and street address, and it would automatically pull in a bunch of more-personal information from a data broker, including at a minimum their driver’s license number and details about the vehicles they own. The existence of such data brokers and the whole nonconsensual monetization thing – with the sheer volume of personal data they can sell with neither a dime to its owner nor a gnat’s fart from our regulatory bodies – are a rant for another day, but what’s taken this from sketchy business-as-usual to class-action lawyers collectively high-fiving each other is a small implementation detail: auto-populate.

As I write this, at least two lawsuits have been filed seeking class-action status:

According to the CONWAY complaint, the online quote tool didn’t merely fetch the private data for transitory use server-side, but actually pushed it to the client, where it was displayed in auto-filled form fields in the web browser. Apparently, one or more crooks soon discovered that this quote tool was basically an unmetered pipeline to an unspecified data broker, which would ingest cheap “junkmail list” name & address databases at one end and spit lucrative identity-theft data out the other. Over one wild weekend (July 1–2, 2023), the “unnamed party” used this auto-populate pipeline to make off with information from an estimated 266,142 random individuals2, regardless of whether they were MAPFRE customers or not. Particularly if the insurer’s “lone actor” narrative is to be believed, this rather strongly suggests that the data exfiltration was automated and that basic safeguards such as meaningful rate-limiting were not in place.
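For the curious, “meaningful rate-limiting” can be as simple as the sliding-window sketch below, which caps how many quote requests a single client can make per minute. This is entirely my own illustration of the concept, not anything described in the filings:

```python
import time
from collections import defaultdict, deque

class RateLimiter:
    """Sliding-window limiter: at most max_requests per window_s per client key."""

    def __init__(self, max_requests=10, window_s=60.0):
        self.max_requests = max_requests
        self.window_s = window_s
        self.hits = defaultdict(deque)  # client key -> timestamps of recent requests

    def allow(self, client_key, now=None):
        """Return True if this client may proceed, False if it is over the cap."""
        now = time.monotonic() if now is None else now
        q = self.hits[client_key]
        # Drop requests that have aged out of the window
        while q and now - q[0] > self.window_s:
            q.popleft()
        if len(q) >= self.max_requests:
            return False  # this client is hammering the quote tool
        q.append(now)
        return True
```

Even a crude cap like this turns “266,142 records in a weekend” into an obviously anomalous workload instead of business as usual.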

Eliding the more speculative and purple-prose bits of the complaints, CONWAY alleges that the insurer added “a feature to its existing online sales platform whereby an individual’s driver’s license number would auto-populate for anyone that would enter a bare minimum of publicly available information about that individual”, but “did not impose any security protocols to ensure that website visitors entered and accessed PI only about themselves. MAPFRE did not impose effective security protocols to prevent automated bots from accessing consumers’ PI.” It goes on to cite several resources indicating that cybercriminals targeting auto insurance companies specifically for drivers’ license information, including through similarly flawed online quote tools, is widespread and generally known in the industry. Likewise, Ma cites mainstream news sources to back up a claim that “[d]rivers’ license numbers have been taken from auto-insurance providers by hackers in other circumstances, including Geico, Noblr, American Family, USAA, and Midvale all in 2021, indicating both that this specific form of PI is in high demand and also that Defendants knew or had reason to know that their security practices were of particular importance to safeguard consumer data.”

Together, the filings paint the picture that auto insurance companies really should know better – which is to say, have specific knowledge that cybercriminals have been actively, and successfully, targeting auto insurance companies to collect driver’s license information specifically. In addition, both complaints cite the long delay between MAPFRE becoming aware of the breach and notifying customers: the breach occurred over July 1–2, 2023. According to the complaints, two customer plaintiffs (Ma) and the non-customer plaintiff (Conway) received notifications between August 22–29. For myself and my better half, also non-customers, the letter is dated October 19, 2023 and was received sometime later.

A further wrinkle: The DPPA

The United States does not have a general consumer privacy law. Instead, we have a bizarrely sparse patchwork of laws that narrowly target specific industries. So, there is one that covers sharing of medical information by the medical industry, a separate one covering children (but only online), etc. One such oddly-specific privacy law is the federal Driver’s Privacy Protection Act (DPPA), which, like many such laws, came about as a reaction to a specific pattern of abuse. And this one is written in blood. According to the Electronic Privacy Information Center, the law’s origins can be traced to a series of stalking cases enabled by easy access to personal records through state motor vehicle departments, which would typically hand this information out without restriction for a small fee. It was most likely catalyzed by the 1989 death of actress Rebecca Schaeffer, who was stalked and killed by an “obsessed fan” who, through a private investigator, obtained her address from her California motor vehicle record.

While the DPPA is aimed at state motor vehicle departments and their employees, it also covers resale or redisclosure by “authorized recipients”, which are likely to include car insurance companies and intermediate sources. Meanwhile, subsequent cases have established that DPPA plaintiffs are not required to prove actual damages to recover liquidated damages for a violation of the DPPA and could choose to accept actual or statutory damages. Whether it applies in this particular case, and whether a company successfully skirts it if the identical information was received from a source other than a state DMV, are questions for the courts to decide.

So, TL;DR: a personal data breach that will keep unemployment fraud artists and lawyers busy for a while can most likely be traced back to a simple case of not thinking it through, and the same kind of diffusion of responsibility that lets car thieves unlock a vehicle and drive it away by popping out a side mirror or headlight. I can just imagine a naive web app developer fresh out of school, with more cleverness than caution and a manager harping behind them for a conversions boost, patting themselves on the back over this sweet automation mere days before everything hit the fan. I guess my brilliant legitimate business plan is safe for another day.

  1. It is strangely telling that the letters were actually sent out by Experian (a huge data broker), not MAPFRE, and that the year of identity-theft protection offered is an Experian product. Whether it’s more telling to see data brokers double-dipping on both selling off (and leaking!) private information and separately selling services to clean up after the mess, or that this is a commonplace enough occurrence to have an integrated workflow for mass-mailing breach notices to a customer list pre-filled with unique identity-theft-protection-service activation codes, now that is beyond my pay grade. ↩︎
  2. p75 ↩︎

Spookifying Haunted Mirror build using Stable Diffusion

For this Halloween, I built this haunted mirror display for the porch that turns any trick-or-treaters extra spooky. Using the voodoo power of AI, those who gaze into the mirror will be treated to a visage of their best Halloween self. See below for code and build tips if you’re interested in making your own!

Build details in a nutshell:

The main active bits are a webcam, OpenCV, a computer running the Stable Diffusion AI image generator, and a monitor hidden behind 2-way (partially mirrored) mirror glass. A Python script running on the computer driving the screen lies in wait, grabbing frames from the webcam and looking for faces. If a face is detected (someone is looking at the mirror), the frame is kicked off to Stable Diffusion using the ‘img2img’ mode and a suitably spooky text prompt, weighted to make eerie modifications to the image while preserving the subject’s pose, costume and overall scene. When it finishes, the viewer’s real reflection is replaced with the haunted version.

To contribute to the effect, a simple Arduino-based lighting effect provides the scènes à faire electrical disturbance that no horror film is complete without. This actually serves a couple of practical purposes. Mainly, it helps sell the ‘mirror’ effect on the 2-way glass (strong front-side lighting for the ‘mirror’ phase, which is abruptly cut when it’s time for the hidden screen to show through). But since the image generation takes a few seconds, it also shows something is happening, ideally keeping the hauntee’s gaze until it’s ready.

The below details what I did to build the one shown here, but there’s plenty of room for improvisation.

Software/Electronics guts:

First things first, for the actual code, see this GitHub repo. Beware it’s pretty rough-n-ready, improvements (pull requests) welcome. There are two parts, the Python script for the PC and an Arduino sketch to run the optional lighting effect.

This build is based around the AUTOMATIC1111 Stable Diffusion Web UI, which must be downloaded separately. This is probably the neediest part of the build, requiring a crapton (some GBytes) of disk space and a moderately powerful video card (NVIDIA, AMD, or Intel; ~4GB+ VRAM) capable of running it at a reasonable speed. I’d recommend getting that working before proceeding with the rest. From here on, I’ll just refer to this bit as Stable Diffusion or SD for short.

Required parts for the core effect:

  • LCD Screen (TV/computer monitor)
  • Webcam
  • Computer with a moderately powerful GPU
  • 2-way mirror (see below)
  • Optional: 2nd computer (low-power SBC or old laptop is fine) to run the display, if you don’t want to lug your beefy desktop outside

Assuming you have at least some familiarity with Python, grab the code from the GitHub links above (if this is new to you, try the Code -> Download ZIP option), and see the README for each for setup instructions (dependencies, etc). Again, I strongly recommend getting the Stable Diffusion Web UI working first and generating a few test images. Make sure you have a recent Python 3 installed (the WebUI setup may handle this for you if you don’t). While this is normally used via a graphical Web browser interface, we will be using a built-in API to control it headless from another script. Be sure to add the --api and --listen parameters to its command-line configuration (again, see README).

The script does the actual image display and can, but does not have to, run on the same computer as SD. Unlike Stable Diffusion, this end of things is fairly lightweight and can likely be run on your favorite compound-fruit-themed SBC or other old hardware you have kicking around, and talk to the SD backend over a local wifi network. Either way, all image processing happens locally by default; no pictures of other peoples’ kids are sent off to random 3rd-party servers1. OpenCV is used for the face detection and image display, using a Haar cascade classifier for the actual detection. This is not exactly state-of-the-art, but pretty lightweight and gets the job done. It doesn’t have to be perfect; the face detection is only used to trigger frame capture and isn’t used in the replacement image generation.

Before running the script, adjust a few settings near the beginning (again see README), including IP address and port of the Stable Diffusion instance (if different), COM port to communicate with the optional lighting effect, some image sizing parameters, and of course the SD parameters themselves. The out-of-the-box defaults are a good starting point, but one thing you will definitely want to tweak is the ‘steps’ parameter, which basically trades between image quality and processing time, and is heavily dependent on your GPU hardware. Higher is better/slower. In my testing, a value of 8 is about the lower limit to produce good image results, and brought the image generation time down to ~3-5 seconds on my admittedly dated NVIDIA GeForce GTX 1080 video card. Embellishments like the lighting effect or reading material stuck to the mirror (“Found this walled up in my Salem attic. Totally not haunted! – Grandma”) may help paper over the delay, but hungry trick-or-treaters tend to be goal-oriented and won’t wait around very long for something to happen.

One thing I didn’t do so far (patches welcome) is add any support for scaling, cropping, etc. the webcam image so that the displayed image perfectly matches up with the physical reflection. This depends on camera placement, its output (physical zoom) and the display size. In my case, the large monitor worked out about right for scale, and vertical offset (from the camera being at the top of the monitor instead of the center) was mostly mitigated by just angling the camera slightly downward. Since the processed image is based on a frame captured several seconds ago, I figured getting it to perfectly line up with the subject was a lost cause for me, but folks with extremely fancy-fast GPUs may want to give it a go. Likewise, I didn’t put much effort into tweaking the resulting image aspect ratio to perfectly fill the screen. It came pretty close out-of-the box, and I couldn’t find a way to change OpenCV’s gray window background in the time I was willing to spend, so I just covered the gray bars on the edges of the screen with black paper.

OK, onto the actual diffusion parameters. Knowing a bit about how SD works under the hood is probably helpful, but not required to get decent results. This effect uses the ‘img2img’ mode, where instead of starting with a pure random noise image to clean up using the trained model and text prompt, a controlled amount of noise is added to a chosen source image instead. The amount of starting noise can be tuned using the denoising_strength parameter, ranging from 0.0 (unmodified source image) to 1.0 (pure random noise). In my experimenting, a value around 0.45 was a fairly narrow sweet spot where spooky elements were added well but the overall source image was preserved2. For me at least, the victims seeing what is obviously themselves in the mirror rather than a random jump-scare image was a big part of the appeal; striking the desired balance in your own setup may take some experimentation. Likewise, the weighting of the text prompt is adjustable (cfg_scale) and can range from 0 (ignored) to 30 (hyperfocus). For the text prompt, values in the range 5-15 seem to work well, and high values tend to produce a kind of burned-out cartoonish look. As for the actual text prompt, extremely generic prompts like “ghost”, “spooky”, “creepy”, etc. seem to work well across input images and mostly affect faces, while more specific prompts had more mixed results. Sadly, prompts like “face covered in spiders” didn’t work as well, because that would have been awesome.
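To make the parameter soup concrete, here’s roughly what a call to the WebUI’s img2img API looks like. The endpoint (/sdapi/v1/img2img) and the field names (denoising_strength, steps, cfg_scale) are the real AUTOMATIC1111 API knobs discussed above; the helper functions and default values are just my sketch:

```python
import base64
import json
from urllib import request

SD_URL = "http://127.0.0.1:7860"  # the --api --listen WebUI instance

def build_img2img_payload(image_b64, prompt="spooky, creepy, ghost",
                          denoising_strength=0.45, steps=8, cfg_scale=7):
    """Assemble the JSON body for an img2img call with the knobs from the text."""
    return {
        "init_images": [image_b64],                # source webcam frame, base64
        "prompt": prompt,
        "denoising_strength": denoising_strength,  # 0.0 = untouched .. 1.0 = pure noise
        "steps": steps,                            # quality vs. speed tradeoff
        "cfg_scale": cfg_scale,                    # prompt weighting, 0-30
    }

def haunt(image_bytes):
    """POST a captured frame to /sdapi/v1/img2img and return the result image bytes."""
    payload = build_img2img_payload(base64.b64encode(image_bytes).decode())
    req = request.Request(
        SD_URL + "/sdapi/v1/img2img",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        result = json.load(resp)
    # the API returns a list of base64-encoded result images
    return base64.b64decode(result["images"][0])
```

Everything the paragraph above describes tuning (noise amount, step count, prompt weight) ends up as one of these payload fields.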

Model is selectable here as well, but loading a new model on the fly takes a rather long time and will cause the first run of the script to fail reliably until it finishes. Model selection can also be done via the browser UI. The SD setup will automatically reload the last-used model on startup, avoiding the issue going forward. Just beware, if you were already using SD and dabbling with any, ehem, ‘NSFW’ models, you’ll probably want to double- or triple-check that a safe model is being loaded before any hapless trick-or-treaters get an eyeful.

For my setup, I tried the dreamlikeDiffusion and Dreamshaper models and both produced pretty solid results. The latter tended to alter the background more for the same denoising_strength. Oddly, both of these seemed to work better than a model specifically suited to the theme (creepy-diffusion), which tended to produce a small set of pretty samey monster faces.

Sample output from Dreamshaper

Parts for the lighting effect (optional but recommended):

  • Arduino-compatible (or whatever) microcontroller board with a serial port
  • Breadboard / Veroboard / etc. (optional)
  • 5V white LED modules – I used “star” types
  • Aluminum L bracket (heat spreader, for higher power LEDs)
  • Ping-pong balls (light diffusers), if you don’t like the look of the stock LEDs
  • Medium-power NPN transistor (enough to drive your LEDs)

Again, the light effect is not just for flavor – it hints to the viewer that something interesting is happening while the SD backend chugs, helps modulate the 2-way mirror effect, and gives you more flexibility on ambient lighting. The max and min brightness are configurable, giving another knob to turn to ensure the camera is happy, the mirror is properly reflective when the screen is dark, and control bleed-through for monitors with poor contrast or the brightness cranked up to compete with bright ambient lighting.

For an easy bolt-on lighting assembly that doubles as a heatsink, I spraypainted a piece of aluminum L-bracket black and glued the LED stars to it with some JB-Weld. The bracket was reclaimed from a past project and already had some mounting holes drilled, as well as a suitable notch in the middle for the webcam to poke through.

Don’t ask me why, but I really wanted that canonical ‘vanity mirror’ lighting look with the little round bulbs, and didn’t find any low-voltage LEDs like that available off-the-shelf, so I ended up getting a handful of 3W ‘star’-style modules (high-power LED mounted to a circular aluminum PCB), cutting the ends off of some Ping-Pong balls and gluing one over each LED.

For mine, I used a spare Arduino-compatible board I had left over from a previous project (Sparkfun RedBoard Turbo), with a 5V wall-wart power supply. I used ‘warm white’ 3-watt LEDs specifically sold as “5V”, which included an appropriate current-limiting resistor for 5V operation right on the module. The driver circuit I used is basically one NPN power transistor with a non-critical base resistor (300 ~ 1k ohms). The schematic is shown below, with the gray boxes representing the LED modules with internal components. These LEDs came from the usual overseas sources with some reviews hinting at ‘optimistic’ power ratings. But between the transistor (which drops ~0.7 volts) and the chunky aluminum bracket I glued them to, I didn’t run into any excessive heating or other issues, and they were still plenty bright.

The Arduino script is dirt-simple and just receives one of three ASCII characters over the serial port to set the lighting mode (Lit, Flicker, or Dark) and modulate the LED brightness via a PWM pin. In the Flicker mode, the brightness is linearly faded between randomly-chosen levels. This gives it the feel of the smooth ‘flickering lights’ trope of old horror movies, where someone was probably off-camera literally flicking switches, but the hefty movie-set incandescent bulbs of the era took some time to respond.
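On the PC side, commanding the lighting effect is then just a one-byte serial write per mode change. A minimal sketch of that (the actual command characters are whatever your Arduino sketch expects; the ones below are placeholders I made up):

```python
# Hypothetical single-character mode commands; match these to your Arduino sketch
MODE_COMMANDS = {
    "lit": b"L",      # steady front lighting: sell the mirror illusion
    "flicker": b"F",  # spooky flicker while Stable Diffusion is working
    "dark": b"D",     # lights out: let the hidden screen show through
}

def set_light_mode(port, mode):
    """Send the one-byte command for 'lit', 'flicker' or 'dark'."""
    port.write(MODE_COMMANDS[mode])

def open_light_port(device="/dev/ttyUSB0", baud=9600):
    """Open the Arduino's serial port (requires pyserial)."""
    import serial  # imported lazily so the rest works without pyserial installed
    return serial.Serial(device, baud, timeout=1)
```

The main script would switch to "flicker" when a face triggers a capture, then "dark" once the haunted image is ready to display.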

Mirror and Frame:

  • Frame – buy or build
  • 2-Way Mirror (or acrylic sheet and mirroring film)
  • Screws or other mounting hardware to attach the mirror

I wanted a frame that looked suitably dated and worn out. The nearby big-box arts & craft store had some faux antique gold/bronze frames that looked the part, but were pretty expensive for a one-off project (>$80) and would have needed modding to fit the desired monitor size, so I ended up just buying some moulding from the hardware store with an antique-looking pattern embossed into it along with a couple cans of spraypaint, watching some ‘antique frame look’ how-to videos and crossing my fingers. All told, this probably didn’t actually save any money, but was more fun.

Have I done this before? Hell no! But that’s the beauty of Halloween decorations, they’re temporary, it’ll be dark, and they’re supposed to look kind of decrepit, so if you really #@$% it up, you can just say it’s intentional.

The frame shown was made from “6702 11/16 in. x 3-1/2 in. x 8 ft. PVC Composite White Casing Molding” by Royal Mouldings, and starts off white. An up-front caution, especially with the fairly wide stuff shown here: it takes more linear material than you’d guess by eyeball – remember you need to account for the longest (outside) edges of 4 mitered sections, so math it out ahead of time if there’s any doubt (for the monitor shown, a single 8-foot piece was not enough and I had to go back for seconds).
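The math above amounts to a quick calculation: each mitered piece’s outside edge is longer than the opening by twice the moulding width. A sketch (the 24 x 16 inch opening below is a hypothetical example, not my monitor’s actual dimensions):

```python
def moulding_needed(opening_w, opening_h, moulding_width, waste=0.0):
    """Linear inches of moulding for a mitered frame around an opening.

    Two pieces have outside length opening_w + 2*width, two have
    opening_h + 2*width, so the total is 2*(W + H) + 8*width, plus an
    allowance for saw kerf and mistakes.
    """
    return 2 * (opening_w + opening_h) + 8 * moulding_width + waste

# e.g. a 24 x 16 inch opening with 3-1/2 inch wide moulding:
# 2*(24 + 16) + 8*3.5 = 108 inches, i.e. more than one 8-foot (96") stick
```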

The frame was shaped using a hand saw and a cheapo miter box, and it probably shows. Needless to say, the edges don’t quite mesh but, eh, it’ll be dark. To give it something approximating an ancient bronze look, I first gave the white vinyl moulding a coat of black spraypaint, then immediately started rubbing it off with a paper towel soaked in rubbing alcohol. The results were extremely inconsistent, exactly as desired. The paint dried very quickly and this method soon became ineffective, so I lightly sanded the surface with fine (320-grit) sandpaper to lighten up some of the edges and high spots in the pattern. Honestly, this looks pretty dilapidated already and I could have probably stopped there.

Next, I sprayed over this with what I hoped was a light coat of a dark metallic bronze. In retrospect I think I overdid it a bit. The idea was for the lightened-up spots to show through and give an inconsistently tarnished look, with the highlights on the raised edges giving that natural “someone’s made a half-assed effort to polish this for the last century” look of bright & shiny on the high spots and more grungy everywhere else. Since that didn’t really show through as well as I liked, I ended up lightly dry-brushing some random raised parts of the pattern with some Testor’s gold model paint. Dry-brushing is exactly what it sounds like, taking a very small amount on a paintbrush and spreading it around until the brush is bone-dry and it will spread no more (more or less). Kind of a subtle effect, but made it look a bit brighter and less uniform.

For the mirror part itself, I used a plain sheet of hardware-store acrylic and some reflective “one way mirror” privacy film intended for windows. The instructions for the film called for liberally spraying the ‘glass’ surface with soapy water, peeling off a protective film to expose the adhesive side, shimmying it into place on the wet glass, then chasing out any liquid & bubbles with a squeegee. It might be due to the acrylic being slightly bendy or me doing a half-assed job, but after chasing out what I thought was all the liquid, what remained coalesced into watery bubbles throughout the surface overnight. This warped the reflection and kind of amplified that “ancient and decrepit” vibe, so I left it that way.

To assemble the frame and affix the ‘glass’, I used some flat right-angle brackets from the hardware store between the mating sections of moulding, securing the sections together and pinching the glass to the frame. Since the frame was large and getting kind of heavy, I added a set of L-shaped corner braces around the outside edges for good measure. Finally, to attach to the monitor I added a broad L-shaped scrapwood bracket that just hangs over the top edge of the screen. The trash-picked monitor I’m using is pretty nice, but has some weird Dell mounting system rather than standard VESA mounts.

  1. A bit academic given they have probably passed by a dozen Ring video doorbells on their way to your porch, but you don’t have to uncomfortably answer any questions about likeness rights, GDPR, commercial use or whatever. Cloud-hosted GPUs and 3rd-party SD hosts are a thing, but setting this up is left as an exercise to the reader. ↩︎
  2. This was the simplest, but there are more knobs to turn here. The prompt editing feature in particular may be useful. ↩︎

Solved: Use Roku TV (Class Select Series 4K Smart TV) without internet access

We begin today with a rant on the cancer on society that is hardware-as-a-service and forced obsolescence, in a time when terms like “climate crisis” are casually bandied about on the evening news… possibly as a diversion to avoid talking about actual cancer, but that’s neither here nor there :-(

Anyway, what you came for: Yes, there is a way to bypass the mandatory(!) online registration on the RokuTV (“Class Select Series 4k”, model number 50R4A5R, Best Buy exclusive1), allowing you to set it up as a pleasantly dumb TV for your remote cabin-in-the-woods without Internet access, but the option is a little hidden. If you’re not interested in fudging up a temporary hotspot and signing up for a Roku Account just to watch your already-paid-for local content on your already-paid-for new TV over a physical HDMI cable (on practical or ideological grounds; I won’t judge), read on.

Obligatory disclaimer: This is tested and works for the above model number (50R4A5R) as of OS versions 10.5.0 (as-shipped) ~ 12.0.0 (latest as of 8/2023); your mileage may vary for other/newer revisions.

TL;DR: Enter dumb mode / bypass registration:

On the verrrrry first setup screen after poweron, you have the option of setting up for “home use” or “store use”2. Select “store use”, and Bob’s almost your uncle. To finish dumb TV setup:

  • In the System menu, find the Store Setting menu, and untick the “Show store marketing messages” option. This disables promotional messages about the TV (for store showroom display, duh) that appear along the sides of the screen.
  • Optional but recommended: Go into the “Picture Options” menu and reset the brightness and color settings for home use. Needless to say, the “store mode” defaults are optimized for dazzling wandering shoppers with outlandishly vibrant (if not terribly accurate) colors and the brightness cranked up to 11, at the expense of (probably) the backlight’s natural lifespan and (definitely) your electric bill.

With this done, the “Class Select Series 4K” Roku TV actually presents a fairly pleasant dumb-TV experience, with no further nags about internet access (you can manually proceed through the internet/registration flow via a menu option), and an uncluttered UI that shows only the physical inputs (4x HDMI, 1x composite, 1x antenna designated “Live TV”), without a dozen tiles for streaming services you can’t access. The one annoying misfeature of the “store mode” is a video+sound promotional-imagery screensaver of sorts that automatically kicks in if you linger at the home screen for a few minutes without choosing an input.

Beware, in this mode, there is still an option to set up network stuff in the menus. If you even touch that, you’ll get sucked into the mandatory-internet registration flow, which requires a factory reset (or completing the online registration) to get out of.

“Back, back, back, back….”

If you already selected “Home Use”…

When I tried setting up this TV in “home” / smart mode, upon getting a valid wifi password it immediately ran an update, then rebooted into the mandatory Internet registration flow, with no way to get out of it (e.g. back out to the home/store setup menu). I’m not sure if this is an artifact of the update3 process or just the intended behavior, but there IS a way out. You can factory reset to back out of the Internet setup and get back to the initial unboxing experience by using a pen or other small tool to press and hold the RESET button on the back (a small, but marked, tactile switch just above the HDMI1 input) for approximately 10 seconds.

Factory reset notes:

  • On this model, the RESET button has up to 3 distinct functions. A short press simply resets the TV’s internal computer. A mid-length press (~ 10 seconds) performs a factory reset, erasing the Flash containing your settings (presumably including Internet access credentials) and restarting the original out-of-the-box setup experience. A very long press (> 20 seconds or so) enters Recovery Mode, which allows you to reflash the TV’s firmware from a file on a USB thumbdrive, but is distinct from factory reset.
  • A factory reset clears configuration settings only; it does not restore the OS version it was shipped from the factory with. You will retain the most recently installed update, with any features or warts that brings. By design, downgrading to an earlier version (even if you have the file for it) is blocked, so if you really need the Dumb TV experience, it’s probably best not to let it touch the internet, even temporarily, in case this option is removed in a later update.
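The press-duration behavior above can be summarized as a tiny lookup, handy as a cheat sheet; note the thresholds here are my rough observations for this model, not official Roku numbers:

```python
def reset_button_action(hold_seconds: float) -> str:
    """Map RESET button hold time to its function on this Roku TV model.
    Thresholds (~10s, ~20s) are approximate, observed values."""
    if hold_seconds >= 20:
        return "recovery mode (USB reflash)"
    if hold_seconds >= 10:
        return "factory reset"
    return "simple restart"
```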

Bonus: “Secret” Menus

Mainly notes to my future self.

Enter Developer mode (mainly to sideload custom apps): Press Home 5 times, then Left-Right-Left-Right-Left.

“Secret Screen 2” (mainly fiddle with homescreen ads): Press Home 5 times, then Up-Right-Down-Left-Up.

There are several more such screens that display diagnostic information occasionally useful to mortals, e.g. wifi connection strength/details, HDMI versions of currently attached devices, etc. Search the web for “Roku Secret Menus”, but take the results with a grain of salt: despite what any clickbait-monger will tell you, these are officially documented developer features.

Mini-Rant Time: Unnecessary Internet Dependencies

“But Tiiiiiiim! You’re just going to plug an Internet-connected game console and a Chromecast into it anyway, what’s your big beef about mandatory online registration on the screen? I know you don’t actually own a cabin in the woods either (I Googled your address)!”

Eh yes, I did (with a few elbows from the better half) enter wifi credentials, go through the Roku signup pound-o-flesh and will probably end up using the “smarts” (built-in Netflix and Youtube apps, etc.)4 for as long as they last. And that’s kind of the point. The TV this replaces, an early Insignia (store brand) cheapie, is being e-wasted after 15 years due to a hardware failure, not because some yahoo decided to turn off a server somewhere. We know keeping the “smart” bits alive through the ever-changing modern Internet infrastructure, security patches and uneasy business relationships with competing platforms is a Red Queen’s race, and that implies a very finite support lifetime for the “smarts” – but that’s what HDMI ports are for, and they should still work when the last “streaming partner” breaks the pre-installed app with an API tweak and you pull the (internet) plug.

Unfortunately, it’s becoming increasingly trendy for hardware vendors to build bizarre and unnecessary Internet dependencies into purely local interactions, either to directly turn it into a paid subscription (“Hardware-As-A-Service”), have the option to do so later, skirt the First Sale doctrine and nerf the secondhand market, etc. There is a huge distinction between using the internet connection to provide an ongoing service (say, streaming content), and simply sending a cryptographic allowed-to-operate code (or “don’t die yet” code) to allow continued use of the hardware you already paid for. Yes, this is actually a thing now: defining “not remotely disabling the hardware you already bought” as the service being provided. While it’s not quite heated seats as a service or subscription airbags, a mandatory internet dependency to unlock a TV (for local content!) for the first time is a firm slide in that direction. At best, this “one-time” step means you’re a flipped Flash bit or factory reset (like, to clear your wifi password prior to resale) away from bricking the unit, if the manufacturer so much as decided to turn off the activation service or change the URL. At worst, it’s an update away from straight-up refusing to work (HDMI ports and all) without a persistent connection.

Mini-Rant 2: Why Showroom?

One only needs to search for “showrooming” and “reverse showrooming” for a long history of competing claims between brick-and-mortar and online retailers about who is stealing whose business. One only needs to visit an actual showroom to see how full of crap both are. For my first TV-buying foray in well over a decade, going in circles trying to compare products one-by-each from …questionably curated… retailer websites (that copy-pasted verbiage between similar models with different specs and had conflicting information on basic things like types of inputs) led me to go to a couple physical stores to lay eyeballs on the actual products for ground truth. It’s easy enough to peek at the back panel and confirm the inputs and outputs, right? Ah, no, the showroom TVs are mounted edge-to-edge, flush against a high wall so you can’t get a look at the back panel to see the inputs or even confirm the model number you’re looking at. There’s not a remote to be had, so you can’t test out the competing UIs (some of which are trash), or see what sorts of settings are available. And you certainly can’t run through the setup flow for any surprises like “won’t turn on without internet” or paid 3rd-party advertising on the home screen (yes, “less paid advertising in the UI” is a bullet-point feature now). So, there I am literally standing in the store in an aisle of powered-on TVs, searching up unboxing videos from Youtubers with entirely too much personality, since this is providing more information about the showroom products than the actual showroom.

  1. According to a Linus Tech Tips video on this unit, the backlight PCB in this model is identical to the one found in some Onn branded TVs, a Wal-Mart exclusive brand. As in the white-goods world of rampant rebadging, the story of who the actual manufacturer of a given model is, and how many ostensibly competing brands share identical guts, is complicated. ↩︎
  2. Beware, there is such a thing as TV screens and monitors made specifically for the hospitality market – think hotel room TVs – and for commercial use as billboards and promotional signage, so units specifically sold as being for “store” or “business” use can sometimes be a different animal than a store/demo mode. Many “hospitality” TVs expect to be connected to a custom, or at least industry-specific, server backend for such glorious tasks as automatically switching back to the hotel’s news/promotional channel whenever you turn it on, pay-per-view billing by room number, etc., and may not function as a normal display without it. ↩︎
  3. Immediately after the initial connection and update, there were absolutely no options from the “Connect to the internet” dialog other than connecting the internet (i.e. no options to “undo”, erase stored credentials, back out to reset the home/store setting, factory reset…). However, after factory resetting and selecting the “network setup” option, a ‘back’ arrow appeared that wasn’t there before, and allowed undoing this selection and backing all the way out to the home/store selection. ↩︎
  4. The privacy and monetization ship has kind of sailed once Google services enter the picture, and whether your viewing selections are logged from a smartphone app or the display end is kind of academic. Not that I’m exactly thrilled about that either, but this is specifically a rant about cloud-tied hardware. ↩︎

Fixing Dell Ultrasharp (U3014/U3014T) Monitor Not Working (Now with Firmware!)

So, I found this lovely behemoth in my work’s e-waste pile and yanked it out, figuring on harvesting a nice big diffuser and backlight for a different project. But a quick search on the partnumber showed this thing is pretty impressively specced, and maybe I’d rather have a go at getting it running again instead. Plus, after many moons of mostly shuffling paper and “FTEs” (the exciting new term for humans who actually do work), it would be an excuse to actually get my hands dirty with actual electronics again!

It turns out all it needed was a little love. That and a new motherboard. Yes, in the end I took the easy way out (I know you were expecting a valiant tale of BGA reballing or fabricating my own rectifier diodes from corroded razor blades), but maybe some of the information below might be helpful to someone in a similar boat. Even if it’s as simple as lifting the back off and swapping boards, a handful of quick tests might help you determine which board needs replacing. As of this writing, replacement boards run ~$60-$100USD, so not a bad deal. Plus, you can generate a bit less landfill fodder!

A couple TL;DR troubleshooting tips:

  • Only the power supply and interface board (mainboard) (and obviously, front-panel button/LED board) are needed to show basic “signs of life” in response to the power button. Everything else, including the LCD panel (T-Con board), backlight board and even that silly card-reader daughterboard, can be detached without affecting basic operation.
    • This means if the unit doesn’t power on, it’s pretty much either the power supply or interface board.
  • Likewise, the backlight board and card reader board can be failed/disconnected and you should still get video if the mainboard and power supply are working. Obviously, the LCD panel (T-Con board) must be connected to check video output, and without a working backlight you’ll need to shine a good flashlight into the front of the LCD at point-blank range to see if it’s doing anything. Without a video input, a “No input source” type dialog will bounce slowly around the screen for a couple minutes.
  • Basic life/bootup signs from the interface board (lights, serial output, detection by the PC) are not a guarantee that the interface board is in good working order.

Assessing The Patient

Having never seen this monitor actually running before, I wasn’t sure what to expect. Is there a power LED? Should anything light up automatically when plugged in? Should something happen if you press the power switch, even if there is no video source? Should the little unmarked whitish nubbins (menu buttons) on the side near the power switch visibly do anything when pressed?

What’s supposed to happen is when plugged in, the little white nubbins light up white with a brief animation pattern (or at minimum, do so if you press the power button, which also lights up). Anyway, this unit did none of these things. Despite various fiddling with the nubbins, power button, plugging in video sources, etc., there were no visible signs of life. So let’s get inside!

Actually opening the unit is a bit of a trick; there are almost no visible screws and the whole thing just snaps together. The actual, official way to open this product is to wedge some plastic spudgers (and probably a couple kitchen knives after you run out of spudgers) into the seam between the front bezel and back panel, and start prying the two apart until it feels like you’re going to break something. Alternately, as shown in this iFixit guide, try pulling at the bezel where it contacts the LCD. Eventually, the front bezel can be detached and carefully flipped out of the way (but not very far; it’s still tethered by a pair of non-removable ribbon cables to the power/menu button assembly on the bezel, which is plastic-welded in place and hidden behind a sticker of sorts). At this point you can either be very delicate with it until you can get the rest of the innards out and get to the removable side of the ribbon, or do what I did and just snap the button board off from the little plastic welds, then tape/glue it back in place at the end. This assembly is actually two boards, the plastic-welded menu buttons and a physical power button board, which is held on with two small screws.

Either way, with the bezel removed, you can place the unit screen-side-down (on a soft, non-marring surface!) and detach the stand to reveal four sizeable Phillips-head screws. Remove these and the back panel can be lifted straight off to reveal the LCD panel and the electronics boxes that are literally just taped to the back of it. Granted, it’s aluminum tape and not the stuff you hang postcards with, but still. Just peel the tape back to liberate these boxes, consisting of a large main box and a smaller box holding a daughterboard with an SD card reader and a couple spare USB ports. Unless you are specifically repairing the card reader, it’s easiest to just detach this and set it aside for now as it doesn’t affect the operation of anything else.

The remaining cables to the main electronics box can be removed at this point if you have nimble enough fingers; otherwise, just be gentle with them until you can flip the box over for easier access.

Ultrasharp U3014t main electronics box with 3 main boards (cardreader assembly not shown)

Inside the box are 3 large boards with well-defined functions. In the following, all locations and directions (top, bottom, etc.) are with respect to the photo above.

  • Power supply board (L2231-2M 48.7T401.02M), top-left, generates voltages needed by the rest of the system
  • Backlight driver board (L2259-1 48.7T408.011), bottom-right
  • Mainboard or “interface board” (L2121-2 48.7T409.021), top-right, handles pretty much everything else between input from video sources to output to the LCD panel.

You can do a “which board is bad” level diagnosis without further disassembly at this point, but if you do need to remove the boards, in addition to the obvious screws there is another screw mating a heatsink on the component side of the power board to the metal box for additional heatsinking, and a couple of screw-in posts in the DVI connector pinning the interface board to it as well. Removing the boards makes it much easier to disconnect the cables as needed. Beware that with the protruding video connectors and ribbon cables, getting the interface board in and out is a bit of a Tetris game – watch the little ribbon cables and don’t force it.

The “Which board is bad” diagnosis

Power supply board

The power supply board is “on” and generating all of its voltages regardless of the power button, as long as the unit is plugged in. Needless to say, mains voltage will be present on at least portions of the board when powered, and potentially for some time afterward on charged capacitors, so be extremely careful doing any diagnosis. The HV section is at least clearly marked off with a thick white line. With the interface board connected, you may hear a faint ‘ticktick, tick, tick’ sequence from somewhere near the mains cable right when it is powered up. There are a total of 4 cables exiting the righthand side, with easily accessible solder pads on the back (green/solder) side, outputting various low (<40V) DC voltages. From the back (green/solder) side, going down the line top to bottom (square solder pad first), the voltages should be:

(P804, 6 pins):

  • GND
  • GND
  • GND
  • 15.5V
  • 15.5V
  • 15.5V

(P803, 4 pins):

  • GND
  • GND
  • 12.5V
  • 12.5V

(P802, 5 pins):

  • GND
  • GND
  • 12V
  • 37.5V
  • 37.5V

(P851, 8 pins):

  • 5.0V
  • 5.0V
  • 5.0V
  • GND
  • GND
  • GND
  • 3.0V
  • 0.0/2.4V (appears to be a feedback signal telling whether anything, nominally a sound bar accessory, is plugged into the 12VDC output jack)
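For bench use, here are the same pinouts transcribed into a small Python table, plus a crude pass/fail tolerance check. This is just my own checklist sketch: connector names are from the board silkscreen, voltages are the nominal values above, and the ±10% window is an arbitrary threshold of mine, not a Dell spec.

```python
# Measured power-supply connector pinouts, square solder pad first.
PSU_PINOUTS = {
    "P804": ["GND", "GND", "GND", 15.5, 15.5, 15.5],
    "P803": ["GND", "GND", 12.5, 12.5],
    "P802": ["GND", "GND", 12.0, 37.5, 37.5],
    "P851": [5.0, 5.0, 5.0, "GND", "GND", "GND", 3.0, "0.0/2.4V (soundbar detect)"],
}

def within_tolerance(measured: float, nominal: float, pct: float = 10.0) -> bool:
    """Crude rail check: is a measured voltage within pct% of nominal?"""
    return abs(measured - nominal) <= nominal * pct / 100.0
```

For example, 15.1V measured on a nominal 15.5V pin passes; 11.0V does not.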

Note these are nominal values and could vary a bit on different units, especially the larger voltages. Adjacent pins on the same cable with the same voltage are physically bridged together on the board and don’t need to be separately probed. Finally, all boards are grounded to the large metal box when screwed down, so it can be used as a convenient probe ground. If all of these look good, have a look at…

Backlight driver board

This board accepts two cables on the left – P101 (5 pins, from P802 on the power supply) and P102 (4 pins, from the interface board), labeled on the component side only – and outputs one to the panel’s LED backlights on the right. Pinout of P101 matches P802 on the power supply; pinout of P102 (again, starting from the square solder pad) appears to be:

  • NC/Unknown (leads to a non-populated circuit on my board)
  • PWM (brightness control)
  • GND

To quickly test, assuming the power supply voltages are OK, you can disconnect P102 coming from the interface board and pull the ENABLE and PWM pins up to ~3V. I just used a resistor divider on the 12V power supply pin (12V -> 1k -> 330R -> GND) and connected its center tap to both pins. (Despite some oft-repeated Internet wisdom, simply disconnecting the control cable is not enough to force it awake, at least for this particular model.)
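If you want to double-check the divider math, or substitute resistor values you actually have on hand, the tap voltage is just Vin · Rb / (Rt + Rb):

```python
def divider_vout(vin: float, r_top: float, r_bottom: float) -> float:
    """Output voltage at the center tap of a two-resistor divider."""
    return vin * r_bottom / (r_top + r_bottom)

# The 12V -> 1k -> 330R -> GND stack from above lands just under 3V,
# and only wastes ~9mA (12V / 1330 ohms) doing it.
v_tap = divider_vout(12.0, 1000.0, 330.0)   # ~2.98 V
```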

With this temporary hack done, the panel should visibly light (easily seen in a darkened room) whenever the unit is plugged into mains power, bypassing the interface board (including power button). If it lights, next possible culprit is…

Interface board

This is the brains of the whole thing, and unfortunately, if it’s got a problem (particularly under the media processor BGA), component-level diagnosis and repair may prove to be a tall order. What follows is pretty specific to my unit, but a functioning interface board should at minimum respond to / light the power button and animate the menu LEDs, regardless of whether the backlight board, LCD panel (“t-con” or timing controller board, hiding under a thin metal bracket on the panel itself) or card reader assembly are connected.

On my particular unit, the power button and LEDs appeared completely non-responsive. (Pressing a moist finger on the bottom of the LED/button board caused the LEDs to glow dimly; this doesn’t count.) After confirming the power button physically works and had continuity at the ribbon connector, I started poking around and found a debug serial port(!) on the non-populated connector P320 (nestled between the DVI and HDMI ports), which emitted text at startup. Its pinout, again from “top to bottom” in the photo (starting from the board edge) on the back/solder side of the board is:

  • GND
  • Rx (UART in from host)
  • Tx (UART out to host)
  • Vcc (3.3V?)

Beware, after buttoning everything up I discovered an inconsistency in my notes and can’t be 100% sure the serial port isn’t 5V; double-check before hooking up anything you care about. Connection parameters are the extremely standard 115200-8-N-1. The Tx pin needs a pullup resistor to Vcc (1k worked fine for me) to produce sharp rising edges on the signal and avoid garbled serial decoding. However, even with the signal cleaned up, there are consistent stray characters in the output where it looks like linebreaks should be. In any case, comparison of the startup output between the “dead” and replacement boards shows some differences.
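If the stray bytes make a saved capture hard to read, a few lines of Python can tidy it up by keeping printable ASCII and collapsing each run of junk bytes into a line break. This is just my own quick-and-dirty filter, nothing official:

```python
def clean_serial(raw: bytes) -> str:
    """Keep printable ASCII from a captured debug log; turn each run of
    stray/control bytes into a single line break."""
    out = []
    prev_junk = False
    for b in raw:
        if 32 <= b < 127:          # printable ASCII: keep as-is
            out.append(chr(b))
            prev_junk = False
        elif not prev_junk:        # first junk byte of a run -> newline
            out.append("\n")
            prev_junk = True
    return "".join(out).strip()
```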

“Bad” board output:

Ø Dual Flash Bank is disabled.!Init PCTL. April 27 2012<DDR3 - NT5CB64M16DP-CFFine tune WDQD: 0x%xine tune RDQSD: 0x%x¹,IFM measured RC-OSC Clock Value = 0x%xÝã'Calibrated RC-OSC Trim Value = %d8%Measured RC-OSC Clock = %d000HzõeUploaded LPM: %d bytes
cOtpSramInit OKÍLPM DC off!

“Good” board output:

 Dual Flash Bank is disabled.!Init PCTL. April 27 2012<DDR3 - NT5CB64M16DP-CFFine tune WDQD: 0x%xine tune RDQSD: 0x%x¹,IFM measured RC-OSC Clock Value = 0x%x¹'Calibrated RC-OSC Trim Value = %d7%Measured RC-OSC Clock = %d000Hz/jÔUploaded LPM: %d bytes
cOtpSramInit OKÍLPM AC on!:+STHDMIRX-REL_BSP_1.7H.5 - E24CT5_HDMIRXRLPM GPIO0 : 0 => DP_MINIê%**Athena Product Version 2.%d**Ä2¼DP1.2 Lib Version 1.1È09:46:43  Nov  7 2012]!HDCP Repeater Lib Version 0.7i10:34:12  Sep  5 2012yAudio_Cus_Mute %d zAudio_Cus_Mute %d zVWD Gen3: May, 2012ìAudio_Cus_Mute %d ø	*****SYS Lib Ver - V1.0 RC0Sep. 14, 2012 8:00 PM?OSD Lib Ver - V3.0.10July 22, 2008 02:23PMÞDRV Lib Ver - V1.0.00
Oct. 25, 2012 2:30 PMBSTDP93xx Athena UaSTDP93xx RD1 BoardWApplication Ver - Beta TCuSep. 14, 2012 8:00 PM?LG_WQXGA_LM300WQ5_SLA1DRAM code execution	 Î 

Dual Flash Bank is disabled.!Init PCTL. April 27 2012<DDR3 - NT5CB64M16DP-CFFine tune WDQD: 0x%xine tune RDQSD: 0x%x¹PM restarted from SRAMÚLPM DC on!7+STHDMIRX-REL_BSP_1.7H.5 - E24CT5_HDMIRXRLPM GPIO0 : 0 => DP_MINIê%**Athena Product Version 2.%d**Ä2¼DP1.2 Lib Version 1.1È09:46:43  Nov  7 2012]!HDCP Repeater Lib Version 0.7i10:34:12  Sep  5 2012yAudio_Cus_Mute %d Audio_Cus_Mute %d VWD Gen3: May, 2012ìAudio_Cus_Mute %d 

A couple of differences to notice are the “LPM” messages and the overall length. In the “good” board output, the first block of text (ending with “DRAM code execution”) is from a single poweron (plugin); the 2nd block of text occurs after powering the monitor off and on again using the power button. The bad board appears to halt prematurely following the “LPM DC off!” message. (Needless to say, since this board is not responding to the power button, we don’t get any separate “warm restart” output either.) We’ll get into the details in a bit, but the LPM is the “Low-Power Mode” subsystem, which includes some dedicated power rails and a low-speed microcontroller core running on the media processor IC that generally handles mundane tasks such as scanning for power/menu key presses, cable detection and power management. My best guess is this hints at a power issue on the mainboard (LPM domain supply not turning on, or not being detected by the processor), but since I ultimately solved my issue by just replacing the entire mainboard, I don’t plan to dig further into it at the component level.
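For triaging future captures mechanically, the marker strings can be checked in order of how far the boot got. A toy classifier mirroring that reasoning (the marker strings are from my two boards and may differ on other firmware revisions):

```python
def lpm_progress(log: str) -> str:
    """Rough boot-progress triage from captured debug serial output.
    Marker strings observed on my units; not an official diagnostic."""
    if "DRAM code execution" in log:
        return "mission firmware running"
    if "LPM AC on" in log or "LPM restarted" in log:
        return "LPM alive"
    if "LPM DC off" in log:
        return "halted after LPM power-down message"
    return "no recognizable boot output"
```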

Button/Menu Board Assembly

I didn’t bother taking a picture of this, but it’s a small pair of PCBs connected together by ribbon cables, consisting of a physical power button with internal LED on one, and a set of 5 “soft” buttons/LEDs on the other. Operation of the power button can be confirmed pretty easily via continuity test on the ribbon cable while pressing the button. The softbuttons are a bit more complex, but don’t prevent the basic video function from working and can be diagnosed as a culprit if “everything but menus” works.

T-Con (TFT Controller or Timing Controller) Board

I actually don’t know much about this one, and never liberated it from beneath its shield, but more-or-less every LCD panel has one in some form; it’s responsible for massaging the video data stream from the mainboard into the individual row/column drive chips festooning the edges of the LCD glass itself, and in most cases generating the necessary drive voltages. Do your own web searching if this is still on the table as a possible culprit, but it sounds like common (repairable) issues with a T-Con board will affect the picture as a whole and may include color or brightness shifts or lines across the entire display. In general, T-Con board issues cannot cause glitches localized to specific image regions.

Firmware Dumping, wait, what?

Oh yeah, I guess it’s obvious in retrospect, but did you know monitors have firmware? A quick web search on the debug output strings above reveals a couple reports of this media processor in similar monitors spontaneously bricking by corrupting its firmware, which resides on a small serial Flash chip on the board. In particular, this detailed report inspired me to dump and compare the firmware between the good/bad boards and see if this was the cause of failure. Long story short, it wasn’t – although there were minor byte-level differences, possibly due to different compiler versions used to build it, there were no obviously missing or corrupted blocks, and reflashing the old board with known-good contents didn’t have any effect. However, I now have two different FW dumps from this monitor model.

Despite the byte-level differences, examining the strings in both dumps using a ‘strings’ utility reveals only one identifiable difference in version strings. The content suggests a number of separate libraries, each with their own versioning, go into the build, although the association between a version string and identifiable library name is not always evident, and there are probably more that don’t include an explicit human-readable version string in the binary.
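If you don’t have the Unix strings tool handy, it’s trivial to approximate in Python for poking at dumps like these:

```python
import re

def strings(data: bytes, min_len: int = 4) -> list:
    """Minimal reimplementation of the Unix 'strings' tool: extract runs of
    printable ASCII of at least min_len characters from a binary blob."""
    return [m.group().decode("ascii")
            for m in re.finditer(rb"[\x20-\x7e]{%d,}" % min_len, data)]

# usage: strings(open("dump.bin", "rb").read())
```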

Now that we’ve got ’em, are these dumps remotely useful? There are scattered reports of various issues with this display, in particular compatibility issues with DisplayPort 1.2+ (e.g. hanging on/off with a blank screen and requiring power cycling), that are FW version dependent. Unfortunately I can’t say if either of these versions helps with that, but both are linked below in case you want to give it a go. The only obvious version string difference is in a string near a reference to “Dp1.2”, which is promising at least. The respective identifiable version strings are:


First dump:

VWD Gen3: May, 2012
Init PCTL. April 27 2012
09:46:43  Nov  7 2012
10:34:12  Sep  5 2012
STDP93xx Athena U
STDP93xx RD1 Board
Beta TC
Sep. 14, 2012 8:00 PM
DP1.2 Lib Version 1.1
HDCP Repeater Lib Version 0.7
July 22, 2008 02:23PM
V1.0 RC0
Sep. 14, 2012 8:00 PM
Oct. 25, 2012 2:30 PM

Second dump:

VWD Gen3: May, 2012
Init PCTL. April 27 2012
09:46:43  Nov  7 2012
10:34:12  Sep  5 2012
STDP93xx Athena U
STDP93xx RD1 Board
Beta TC
Sep. 14, 2012 8:00 PM
DP1.2 Lib Version 1.1
HDCP Repeater Lib Version 0.7
July 22, 2008 02:23PM
V1.0 RC0
Sep. 14, 2012 8:00 PM
Oct. 25, 2012 2:30 PM

Just beware that the reflashing procedure is not for the faint of heart as it requires desoldering the Flash chip (labeled I319 on the board; exact partnumber and capacity may differ – mine had a 4MByte Winbond 25032FVSIG) and wiring it up to custom hardware (a 3.3V Arduino will work fine). While the media processor datasheet hints at reflashing capabilities through debug serial port commands and even HDMI/DisplayPort (?!) via a software tool called “EZ_Display UP”, both methods are rather well undocumented, and even Dell will have you mail the whole thing back for replacement rather than offer a software flash tool for things like the DisplayPort update.

The chip is a standard serial Flash (NOR Flash, DataFlash, etc.) for which a number of tools exist. Personally, I used an Arduino-compatible SparkFun RedBoard and the spi-flash-programmer library by Nicholas FitzRoy-Dale. With this combination, dumping the whole chip takes about 8 minutes; writing it takes quite a while longer (I just let it run overnight). If you do end up doing this, working out which pins correspond to the requisite serial lines on your chosen board is left as an exercise to the reader. On the RedBoard and likely others, the serial pins on the 6-pin ICSP header are the library default, and any available digital pin can be chosen for /CS (D8 being the default). Just beware that on most of these chips the /WP and /HOLD pins need to be pulled high (3.3V) and cannot be left floating. The dumps linked above are 32Mbit, or 4,194,304 bytes (addresses from 0 to 0x3FFFFF).
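Before soldering the chip back in, it’s also worth sanity-checking the dump file size, since a truncated read is easy to miss. A couple of lines of Python cover it (4 MiB, matching the address range above):

```python
FLASH_BYTES = 4 * 1024 * 1024   # 4 MByte (32 Mbit) serial Flash

def dump_sane(size_bytes: int) -> bool:
    """A dump is only trustworthy if it spans the full chip."""
    return size_bytes == FLASH_BYTES

top_address = FLASH_BYTES - 1   # highest byte address in a full dump
```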

Some More Nerdy Details

Media Processor

The heatsink-enshrouded media processor on the mainboard is the STMicro STDP9320-BB or similar, also known as Athena (“Premium high resolution multimedia monitor controller with 3D video”). While some fairly useless “product brief” information about this chip is officially published, the actual datasheet is a national secret and probably available only under the strictest of NDAs, although it’s also available from unofficial sources online if you know where to look.

stdp93xx-92xx-73xx-datasheet.pdf, Doc ID C93x0-DAT-01 Rev F, wink nudge say no more

The chip documentation is a mildly interesting read, for those of us into that sort of thing. Of particular – if now mostly hypothetical – relevance to my board’s issue was the boot process of this chip and what could cause it to halt prematurely. To wrangle the various high-horsepower on-chip video hardware, the processor includes two separate microcontrollers (OCM – On-Chip Microcontroller), a 250MHz jobbie (“Mission”) handling the bulk of the operation, and a smaller, “LPM” (Low Power Microcontroller) at a paltry 27MHz handling basic user interface and power management tasks. The single serial Flash contains firmware images for both. Interestingly, the powerful Mission OCM boots first, controlling the process and in turn booting the LPM, rather than the other way around.

On my particular board, the evidence points to an issue with the power management signal from the LPM – “(MAIN_POWER_ON), which is used to control power to the mission power domain through external switching circuitry.” I don’t know how common this failure mode is, but as it started to sound like a tail-chasing exercise of finding the lifted BGA ball or shotgunning FETs at random without a schematic, a fresh board off the flea-est of Bays sounded too good to pass up.

A minor thing that stuck out to me is that with all this grunt and spare Flash on board, the PCB is still festooned with small EEPROMs (24Cxxxx) around the individual video and USB interface ICs – or that there are even discrete chips for that at all. On a quick look, the one near the HDMI port is tied directly to the port, and several existing video cable standards including HDMI explicitly use small I2C EEPROMs to store display identification data (EDID). It’s probably cheaper overall to stuff discrete chips for this than to try to emulate them on demand.
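For the curious, the EDID format those little EEPROMs hold is simple enough to check by hand: a fixed 8-byte header, then descriptor data, with the final byte chosen so the whole 128-byte block sums to zero mod 256. A minimal sketch (the synthetic block below is illustrative, not a real monitor’s EDID):

```python
# EDID base blocks are 128 bytes: a fixed 8-byte header, payload, and a
# checksum byte making the whole block sum to 0 mod 256.
EDID_HEADER = bytes([0x00, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0x00])

def is_valid_edid(block: bytes) -> bool:
    return (len(block) == 128
            and block[:8] == EDID_HEADER
            and sum(block) % 256 == 0)

# Build a minimal synthetic block: header + zero padding + checksum byte.
body = EDID_HEADER + bytes(119)
block = body + bytes([(-sum(body)) % 256])
print(is_valid_edid(block))   # True
```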

Display Glass

The actual LCD in this monitor is made by LG, part number 0X3CH4, for which online replacements seem to exist (the sticker also contains the part-numbery value LM300WQ6, which turns up more in the way of likely replacements). In fact, the marriage between the display and the video “brains” board is a loose one and the signaling to the T-Con board is fairly well standardized; many of the images online for the LCD panel show various other video boards in use. YMMV of course, but if anyone reading this succeeds cobbling a working screen together using the mainboard from an unrelated one, I’d love to hear about it.

Solved: YouTube Watch Later ‘Remove Watched’ missing 2021

TL;DR: Watch one of the first 10 videos in your “Watch Later” list and see if it magically reappears.

Doing a web search for this problem reveals it has been an issue for some time, but possibly for varying reasons in the past. The above is working for me on Web + native clients as of 6/2021.

YouTube has a neat feature, ‘Watch Later’, which does exactly as it says on the tin. The Watch Later list has a very handy context menu option to ‘Remove Watched’, which will appropriately enough prune the list of already-watched videos. Or at least it’s supposed to. This option, found on the… ugh… “three vertical dots menu” (is that really what you call these things?) on the current Android mobile client, seems to randomly appear and disappear from the native mobile client and Web versions of YouTube on a whim.

After more fruitless searching and plinking around than I’d like to admit, it appears this is not telegraphing the future removal of this option, nor part of a grand conspiracy to wean users off of it for more ‘engagement’ via manually deleting every watched video off the queue. Instead, it’s a dastardly combination of programmer cleverness and laziness. Cleverness, in the sense of programmatically adding and 1984’ing* away the menu item itself depending on an algorithmic assessment of whether it would currently be useful (vs. just leaving the menu item alone and having it do nothing if there are no watched videos to remove), and laziness, in the sense of this assessment only checking whether there is a watched video in the first ‘n’ (dozen or few) entries. Don’t ask me the exact value of ‘n’ or what factors it may vary on, but it does at least appear to differ between Web and native clients, so you may run into cases where the option appears on the Web version of YouTube but not the mobile app, and on another day disappears from both. Watching something in the topmost (oldest-added) 10 or so videos seems to pretty reliably fix it, though. (Just don’t ask me what these extreme menu-decluttering shenanigans are meant to accomplish, other than having you Google in circles at len…oh.)

* This menu option does not exist. This menu option has never existed.
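My guess at the client-side check, sketched out in Python. To be clear, this is pure speculation from observed behavior – the function name and the value of n are my invention:

```python
# Speculative reconstruction: 'Remove Watched' only shows if a watched
# video appears in the first n entries of the playlist (oldest first).
def should_show_remove_watched(watched_flags, n=10):
    """watched_flags: list of booleans, oldest-added entries first."""
    return any(watched_flags[:n])

# Watched video buried at position 16: option hidden.
print(should_show_remove_watched([False] * 15 + [True]))   # False
# Watched video at the top: option appears.
print(should_show_remove_watched([True] + [False] * 15))   # True
```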

Sticky Nano Heater, a small fishtank heater that stays dry

While procrastinating on my current year+ long “couple long weekends” project, in which I’ve clearly bitten off more than a post-kids me has time to chew, I set my sights on a stupid-simple project I could actually complete :-)

If you’ll recall a couple posts back, I found that it’s curiously hard to find small heaters intended for small (nano/pico) aquariums that don’t suck. Preset temperatures (or no temperature regulation whatsoever), not being invisible, short life span, not being invisible, sketchy mains lead going right into the water, and not being invisible are but a few ways in which they suck. So, I made an invisible one that doesn’t go in the water.

Ugly test tank, mostly storage for unruly marimo balls and Utricularia

Okay, so it’s not technically invisible, but this one can be easily concealed and does take up -zero- space inside the tank.

This is not a product for sale, but the design files are published under a Creative Commons license if you want to roll your own.

In a nano tank, every cc is precious, and well, your typical heater options for small tanks are a big ugly hunk of plastic or a big ugly tube of glass intruding into the already limited space. Everyone has the silly peel-and-stick liquid crystal thermometer on the outside of the glass though, because however sketchy it looks, they actually work pretty well. So, why not a heater that works the same way?

So to test it out I made the Sticky Nano Heater (not a real product name, suggestions welcome), a skinny circuit board with a 7-10W resistive heating element (PCB trace heater), MOSFET, and a pair of resistor-adjustable thermostat chips to control it. The board is adhered with double-sided tape to the outside of the glass in an unobtrusive location, such as along the substrate, and uses the glass itself as a heat spreader.

Besides “dirt simple project I could bang out quickly”, some design details are:

Dryness: No parts touch the water. Not the heating element, not the thermal sensor, not the power cord. This simplifies the sealing requirements to “nonexistent” (a splash-proof cover or coating is still not a bad idea for those water-change oopsies that never happen.)

Power: Just to keep things simple and avoid having to muck around with mains voltage near things that get wet, the power plug is USB and runs off one of the numerous spare phone chargers found sitting in a drawer somewhere. This keeps the really energetic wall pixies a good 3-6ft away. A common 5V/2A USB charger nominally delivers 10W, but I initially aimed to draw closer to ~7.5W just because I don’t trust the ratings on cheap phone chargers and neither should you. (I ended up cranking it back up to 10W for the 2.5-gallon test tank, but more on that later.)
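The target trace resistance follows directly from the supply voltage and desired wattage (P = V²/R):

```python
# Back-of-envelope targets for a 5V USB-powered trace heater.
V = 5.0  # nominal USB supply voltage

for target_watts in (7.5, 10.0):
    r = V**2 / target_watts        # required trace resistance
    i = target_watts / V           # current the charger must supply
    print(f"{target_watts:.1f} W -> {r:.2f} ohm at {i:.1f} A")
# 7.5 W needs ~3.33 ohm (1.5 A); 10 W needs 2.5 ohm (2.0 A).
```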

Temperature Sensing: On one end of the PCB, away from the heating element, is a simple thermostat IC, the cheap and cheerful MCP9510 (or TMP709). It provides a digital output that cycles on/off as the temperature exceeds a resistor-set threshold, with a ~2C hysteresis. This is coupled to the glass by a copper thermal pour and via-stitching on the PCB. On this prototype version, the temperature setpoint is adjusted by twiddling a small pot, but providing a few discrete options is probably simpler for real-world users. A second thermostat on the heating element itself limits the maximum surface temperature to a user-set value (useful for very tiny setups and/or plastic tanks).

Safety Features:

  • The main one, pun intended, is no mains wire going into the water.
  • The second sensor, coupled directly to the heating trace by a via and small thermal pad, limits maximum surface temperature in case of detachment from the tank, tank materials with poor thermal conductivity, or other adversities. While some informal testing with an outdoor outlet, some dry leaves and a warm day showed that 10W over this surface area doesn’t really pack enough punch to be a serious fire hazard, this feature cheaply protects the heater and nearby surfaces from heat damage, keeps double-stick tape happy, and allows the heater to be freely used on those plastic tanks that are inexplicably the rage these days (YMMV on actual heat transfer of course).
  • A surface-mount fuse inline with the power supply. While the USB charger “should” current-limit in the event of a board-level fault, this will blow if it doesn’t, or if the charger faults in a way that sends wall voltage down the wire. (*the prototype shown uses a tiny 0603 fuse that is not rated for 120VAC; this is fixed in the published revision).

Looks like a snakeoil Kickstarter pitch – does this actually work?

Surprisingly well, albeit with a couple caveats I’ll explain in a moment.

Given I just bashed out a suck-it-and-see prototype rather than do any kind of actual thermal modeling, I was a little worried that the glass and tape would not be thermally conductive enough, that the tape would immediately overheat and lose its stick, or that the glass would prove too conductive relative to water, letting the proximity of the heating element to the temperature sensor disturb the measurement. On the 2.5-gallon glass betta tank used for testing, the outer surface of the heater part of the PCB gets barely warm to the touch on a filled tank (vs. scalding in free air), and the glass immediately outside the contact area is not perceptibly warm to the touch. The water very effectively sinks the heat; no noticeable heat transfers across the inch or so of glass between the heating element and temperature sensor. So far, I’ve just used plain Scotch-brand double-stick tape to attach the boards to glass and plastic, and it has stayed very well stuck while seeming to have negligible effect on heat transfer. Likewise, the tank sinks heat effectively enough that I’ve seen no need to insulate the outer surface of the heater or surrounding glass against excess heat loss.

The only major caveat to the prototype shown is that the oft-quoted “3-5 watts per gallon” (or ~1 watt per liter) rule of thumb kind of breaks down for nano tanks as surface area begins to dominate volume, so the 10W (max) available from a standard USB charger just isn’t a lot. On my 2.5 US gallon (~9.5L) test tank, 10W is really just enough to take the edge off a drafty windowsill and chilly winter thermostat setting, raising the temperature only a handful of degF on average. Of course, slapping on a second one to double the wattage is easy enough, but with 20+ gallons still being considered “nano”, festooning a tank with them kind of defeats the purpose. If I were to do up another one, I’d probably base it on a beefier power source of ~25W or so.

7-10W will keep one of these little plastic Triops kits toasty under pretty much any conditions, though!

This leads to the second, more minor caveat, which is that PCB trace resistors are rather low precision: the actual resistance (and thus heat output) from a given supply will vary between batches and even between boards. Across two batches of 3 OSH Park boards each, one had a resistance spread of about 4% and the other about 50%, and I can’t tell you which of those cases is the outlier, if any. As long as the power supply has enough grunt to cover reasonable variations, this is not a big deal, and the thermostatic control (a way to not suck, remember?) will maintain the right temperature regardless. But it becomes an issue if you’re trying to eke every last drop of power from a marginal wall-wart without toasting it.
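Since power scales as 1/R at a fixed supply voltage, the resistance spread maps directly onto a heat-output spread. A quick illustration (the nominal 2.5-ohm value is my assumption for a 10W/5V design, not a measured board):

```python
# Effect of trace-resistance tolerance on heat output at a fixed 5V supply.
V, r_nominal = 5.0, 2.5   # 2.5 ohm nominal -> 10 W at 5 V (assumed values)

for spread in (0.04, 0.50):
    r_high = r_nominal * (1 + spread)
    print(f"+{spread:.0%} resistance -> {V**2 / r_high:.2f} W "
          f"(vs {V**2 / r_nominal:.1f} W nominal)")
# A +50% board only delivers about two-thirds of the design wattage.
```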

Design Details and Lessons Learned

Below is the full schematic. There’s not a lot to it (and the long zigzag at right represents the PCB trace heater, of course), but it’s still “complex” compared to the industry standard. Hey, at least it doesn’t have IoT and a phone app!

This is something an EE1 can probably design in their sleep, but there are a few minor real-world gotchas worth noting.

  • USB phone chargers are switching regulators and can be noisy, and USB cables vary dramatically in quality. In particular, long or poorly-made cables will exhibit a significant voltage drop when the heating element is on. Besides reducing output power, this can cause the tank temperature sensor to oscillate as its interpretation of the setpoint resistor is somewhat voltage-dependent, at least in the short term (the voltage drop when the heater turns on causes the setpoint to appear slightly lower, turning the heater off, which causes the voltage to rise again, etc.). The RC filter (R8, C2) on its power rail mitigates both the inherent noisiness of value-engineered USB chargers and the voltage ripple of heater operation. Note that I didn’t bother with filtering the overtemp sensor, since (you shouldn’t be tripping it anyway and) its exact setpoint is less critical.
  • The outputs from the two sensors are ANDed together with an explicit gate vs. using some more clever scheme with diodes. Mainly this allows the all-important tank temperature sensor to have minimal output loading by not driving the indicator LED directly. This avoids self-heating of the tiny sensor IC, which can be significant even at a handful of mA for the tiny die inside. Again, the overtemp sensor is less critical and does drive an LED directly, although at a fairly low current.
  • It doesn’t show on the schematic, but I designed the trace resistor to err on the high resistance side, and added a few sets of trim jumpers in the form of closely-spaced vias that could be soldered across to short neighboring traces. This came in handy as the trace resistors had a fair bit of tolerance and it became clear that I needed to extract as much heating power from the power source as possible.
  • Remember that the heater side needs to press flush against the tank, so all components (including indicator LEDs) are on the other side of the board and likely facing away from the viewer. To make the LED indication visible I just added openings in the soldermask on both sides of the board and mounted the SMT LEDs upside-down. This produces a pretty neat, subdued effect with the LEDs producing a diffuse glow through the board material in the shape of the soldermask opening. You can buy SMT LEDs specially designed for downward-firing, but simply flipping most standard ones over seems to work fine too, and shining into the PCB material they are plenty visible from both sides.
  • Yes, this could totally be done on a flex PCB for contoured tanks. All of mine happened to have flat surfaces handy, so I didn’t bother.
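On the RC filter mentioned above: its corner frequency is the standard f = 1/(2πRC). The component values below are placeholders for illustration, not the actual R8/C2 values from the board:

```python
# First-order RC low-pass corner frequency: f_c = 1 / (2*pi*R*C).
import math

def rc_corner_hz(r_ohms: float, c_farads: float) -> float:
    return 1.0 / (2 * math.pi * r_ohms * c_farads)

# e.g. 1 kOhm + 10 uF rolls off everything above ~16 Hz, comfortably
# below typical switching-supply ripple frequencies.
print(f"{rc_corner_hz(1_000, 10e-6):.1f} Hz")
```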

Potential Safety Flaw in Home Depot HDX brand wire shelving units

Asbestos undergarments? Check.

Lawyer-proof socks? Check.

Here we go.

I got a small safety lesson over the weekend I wanted to share. Officially, it’s about an extremely common design of wire-rack shelving units, but the real safety lesson is to double-check the workmanship of load-bearing products with a critical eye, because the manufacturer may not have!

Here, a simple cheap design choice combines with lax quality control inspection to produce a potentially unsafe product. The TL;DR is that insufficient and off-center welds are used as a primary load-bearing element on the Home Depot shelf design indicated below, allowing the shelves to fail in a way that dumps the contents (for the product described below, up to 350lbs, ~160kg, per shelf) off the shelf, and potentially onto the person who just put them there.

The basic wire shelving unit design I’m referring to dates back to at least 1970 and can be seen in US patent 3,523,508. While the one I purchased, depicted below, came from The Home Depot, you can buy a substantially similar product from most big-box retailers. They consist of a set of typically 3-5 rectangular wire-rack shelves that slip over a set of four corner support poles with grooves at intervals. Each shelf has tubular metal sleeves in the corners to receive the poles, and assembly consists of snapping a set of plastic rings (clamshell or mating half-rings) onto the poles at the groove corresponding to the desired shelf height, then sliding the shelf down the poles until it snugs in against the plastic rings. The plastic rings and/or the matching corner sleeves in the shelf are very slightly tapered, causing the shelf to wedge firmly into place and support heavy loads without sliding down. If you remember your grade school science class, a wedge is a classic simple machine, in this case converting the downward force of objects on the shelf into an outward force at the support sleeves, at a substantial mechanical advantage. With the narrow wedge angle and small size of these sleeves, the forces they must bear are significant to say the least.
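To put a rough number on “significant to say the least”: for an idealized wedge, the outward force is roughly the downward load divided by the tangent of the taper half-angle. I haven’t measured the actual taper on these parts, so the 2-degree figure below is a guess purely to show the order of magnitude:

```python
# Idealized (frictionless) wedge: F_out ~= F_down / tan(theta).
import math

def sleeve_outward_force(shelf_load_lbs: float, taper_deg: float,
                         corners: int = 4) -> float:
    per_corner = shelf_load_lbs / corners   # assumes evenly distributed load
    return per_corner / math.tan(math.radians(taper_deg))

# A fully loaded 350 lb shelf on a hypothetical 2-degree taper puts
# thousands of pounds of hoop load on each corner sleeve.
print(f"{sleeve_outward_force(350, 2.0):.0f} lbf per sleeve")
```

Friction and the real geometry will change the exact number, but the point stands: a shallow taper multiplies the shelf load many times over at the seam those little welds are holding shut.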

The product photos below show the “HDX 5-Tier Wire Garage Storage Shelving Unit in Black (36 in. W x 72 in. H x 16 in. D)”, model # 21656PS-1, although as of this writing the website shows dozens of nearly identical products with various names and model numbers, differing in color or number of shelves (or not at all). This unit claims a load capacity of 350lbs (~160kg) per shelf, or 1,750lbs total. While setting the first shelf with a healthy downward hand-shove in each corner, I heard a small ‘tink’ sound as welds on one of the corner sleeves gave way. With these welds popped, slight hand pressure is enough to peel the corner sleeve open and send the corner of the shelf floorward (the plastic ring should not be visible above the sleeve at all).

The first obvious thing to notice on the Home Depot version is that the load-bearing sleeves are not continuous tubes of metal as shown in the original patent, but were rolled from sheet stock and have a seam. The small pair of welds serves double duty of fixing the wire shelving to the corner sleeve and holding the sleeve closed at the seam. Kind of a penny-pinching design choice, but it’s probably OK as long as those welds are solid…

The first set of broken welds, suffered before the shelf was fully assembled (let alone loaded) prompted a quick visual inspection of the other seam welds. In the photo below, the individual shelves are stacked alongside one another to show the variability in weld quality and placement. Holy wow!

The leftmost sleeve, now marked with red tape, is the same one that’s shown peeling open in the previous photo above, but without the force of the plastic ring it springs back to its original shape and the break is nearly impossible to see. The next, marked with yellow tape, is not yet broken (no attempt was made to assemble it), but the welds are so far off the mark that there is almost no material bridging the seam at all. This one is a disaster waiting to happen. Finally, the rightmost sleeves show more trustworthy welds, centered on the seam with adequate coverage on both sides serving to hold the sleeve closed. Ironically, each sleeve is pre-stamped with an NSF certification logo.

As previously mentioned, a lot is riding on these tiny welds: they bear the entire shelf load (or technically, 1/4 of it per sleeve, assuming the weight is perfectly distributed) via the outward force exerted on the sleeve by the wedging action of the plastic support rings. A failed seam weld will cause the shelf to tilt as that corner slides freely down its supporting rod, putting all the force as well as an unexpected bending moment on the remaining corners and, whether or not this results in a cascading failure, likely tipping the shelf contents onto the floor. Or, since this is most likely to happen in dynamic loading conditions, possibly onto whoever just placed the heavy thing there.

Hoping this unit was a fluke, I tried to exchange it for another of the same model. The employee working the returns counter brought out another and even invited an inspection before I took it home. Unfortunately, while the quality of the welds on the new unit wasn’t quite as bad as the first, there were still corners where only one of the welds traversed both sides of the seam at all. I left with a refund but otherwise empty-handed, apart from the associate manager’s business card and a promise to “run it up the chain”. We’ll see.

If you, dear reader, have a shelving unit like this already in use, I urge you to inspect it for the combination of seams at the shelf corner sleeves and shoddy welds holding them together. While welds are in general nontrivial to assess visually, even by experienced professionals, the process-control issues shown above are pretty obvious to inspection. That said, don’t ask me for advice on whether yours are “good”, “good enough”, or “good enough to hold exactly 3 bags of concrete mix if set down gently”. Luckily, if in doubt, 1/2″ height pipe clamps will juuust fit between the typical shelf wires and could be used to bolster the corner sleeves – either as a permanent bit of due-diligence bodgework, or at least long enough to safely unload the unit if you’re not taking any chances.

DISCLAIMER: While the information above depicts a specific product, the underlying issue is not specific to one retailer or model number, and may occur on any unit with a similar design. This post and all statements it contains represents my personal opinion. It does not represent the opinions of my employer, my cat, The Home Depot, or any recognized safety agency. I am not an engineer (well, I am, but not a mechanical engineer), and this is not engineering, legal or any other sort of advice.

Tim Tears It Apart: SunGrow Betta Heater, a Sketchy Preset Aquarium Heater

Spoiler alert time. Did you know there are ferrous metal composites with arbitrary Curie points, extending down to room temperature and even below freezing? You never know where a teardown of a theoretically boring product will lead. Seriously, take more stuff apart, it’s good for you.

I’ll spare you the lengthy story of the goldfish that finally decided it was big enough to turn cannibal, but I needed to evict some sundew plants from a small betta tank I had on the windowsill, and make it habitable for actual fish on short notice in winter. Hurry, right? It turns out the options for tiny low-wattage heaters for small tanks are kind of limited, so click, click, buy. Soon, gracing me with its presence is this cute little heater stick. The size is right, but the hairs on the back of my neck stood up a bit on noticing this fine Chinese gadget’s 120VAC cord is meant to go from the wall socket directly into the water. If you had the kind of parents itching to slap that spoonful of raw-egg cookie dough out of your mouth, or the kind that made you do all chainsaw juggling outside, they’d probably have a thing or two to say about this too. Not only is it a fully submersible heater, but the instructions actually warn against leaving the top (cord entrypoint) of the heater out of the water (or tilting it, but that’s neither here nor there).

To my pleasant surprise, this thing worked as advertised, maintaining the tank at a more-or-less balmy 78F and not electrocuting anyone or anything. However, just shy of a whopping 6 months after purchase, I noticed the heating indicator light staying on constantly while the unit no longer actually produced any heat. So, let’s have a peek inside and see what happened! You can probably guess, but the innards are mildly interesting regardless.

According to the actual marking on the unit, this is the “Aqua Thermo Nano Plastic Aquarium Heater”. Through a cursory search on Amazon, the exact same heater is sold under a wide variety of manufacturer names, including a collection of randomish all-caps names like GOOBAT, VIBIRIT, PGFUNNY and dozens more, in some cases with claimed power ratings up to 300 Watts(!).

These small “betta heaters” seem to come in two varieties: those with a preset thermostat, permanently set to a fairly high temperature around 78F, and those that are just a fixed resistor that continuously injects a certain number of watts of heat into the tank. (There is a third kind advertising a PTC thermal control element that, according to reviews, is actually just a fixed resistor with no thermostatic element.)

Heater top
Heater bottom

The submersible unit has only one obvious seam, near the top where the “cap” receiving the power cord enters (the seam-like line around the bottom face appears to be a moulding artifact; this is all a single piece). The top cap is mostly decorative; popping it off reveals, along with a bit of trapped water, a second, rubber cap glued in place by an adhesive resembling RTV silicone caulk. This is the seal protecting the 120V cord and innards from water intrusion. Removing this inner cap reveals the simple circuit inside.

The first thing to note, aside from the complete lack of fusing, GFCI or any other safety measures, is that the inside of this compartment is dry (the small amount of water visible on the workbench was trapped between the white plug and decorative black cap, not in the innards). Leakage into this area was not the cause of failure. Cutting into the power cord reveals that it is also dry inside, with at least individually-insulated mains wires inside the outer insulation jacket. This chamber contains nearly the entire circuit, except for the heating element, buried beneath a final plug of black epoxy. Sawing the package open reveals the final component.

Sawing into the bottom, the first thing to emerge was a pile of damp(!) sand, along with a bit of fiber fill (cotton ball?) and a not-faint “fishy” smell. While many unique smells are described as fishy, this reminded me of the smell released by an electrolytic capacitor going bang in the night. Don’t ask me how the sand got damp; the plastic package appears to be a sensible one-piece design apart from the plug where the cord comes in and did not appear obviously damaged, and again, the upper components beneath the plug were bone dry.

With the sand and some heatshrink out of the way, here is the full circuit, exposed for your viewing pleasure. As you might expect, there’s not much to it. The component marked “CTR-34” is a thermal switch in series with the heating element; the only other components are a current-limiting resistor and reverse-blocking diode protecting a small red indicator LED glued into the white cap. Interestingly, the heating element is an off-the-shelf radial wirewound resistor, rather than a bespoke Nichrome wire coil or similar. You can find very similar-looking name-brand parts. This is marked with a generous 15W rating – a pleasant surprise for our 10W-rated heater (more on this in a bit) – and 780-ohm resistance. It’s also marked as… no, wait, that’s a giant crack extending most of the way down the package. This was not suffered during the disassembly effort. Gently removing the cracked-off portion reveals discoloration and corrosion, as well as an even stronger waft of “fishy” smell. If the wet sand wasn’t enough to give it away, I think we’ve found our failure mode. That said, quick math on our “10 Watt” heater has the resistor dissipating 18.5W on 120VAC North American power. For the curious, the “120V” figure quoted is an RMS value – an average of sorts that already accounts for the fact that AC voltage, and thus its resistive heating power, varies throughout its 60Hz cycle (the peak voltage from the wall is around 170V).
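The dissipation math from that paragraph, worked out explicitly:

```python
# P = V_rms^2 / R for the "10 W" heater's 780-ohm wirewound resistor.
V_RMS, R = 120.0, 780.0

power = V_RMS**2 / R          # actual dissipation on NA mains
peak = V_RMS * 2**0.5         # instantaneous peak of the AC waveform

print(f"{power:.1f} W dissipated (vs. 15 W resistor rating, 10 W label)")
print(f"peak line voltage ~{peak:.0f} V")
```

So the resistor spent its short life running about 23% over even its own generous 15W rating, which goes a long way toward explaining the crack.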

So, we’ve done it, right? Nothing left to take apart! We even had a look inside the resistor! (pause for reverent sighs)

The “CTR-34” component is a bit of a mystery to me; I’ve never seen a thermal switch of this style before, with its sleekly polished package. The ad copy makes reference to an “Intelligent Temperature Chip”. So let’s saw it open, amirite?

With only a little mangling in the process, we liberate the contents and see some surprising complexity. There is a tiny glass-encapsulated reed switch – not a bimetallic switch, but a magnetic reed switch – encased by a stack of black donuts, further encased in the sealed metal shell. The black donuts turn out to be a pair of small permanent magnets surrounding a piece of ferrite material with a carefully-engineered Curie point. One such material is known by the trade name Thermorite, although many competing vendors offer similar switches using the same approach. By carefully controlling the mix of dopants and particle size of the ferrite material, it can be engineered to have arbitrary Curie temperatures as low as -10C or lower. (Holy shit, that’s a thing?) Below the Curie temperature, the soft-iron ferrite is permeable to the permanent magnets’ field and holds the switch inside closed. Above the Curie temperature, the ferrite loses its permeability, allowing the switch to open and turn off the heat. If I were to guess, the sleek silver-colored shell is a special alloy such as mu-metal that protects it somewhat from external magnetic fields.

As far as I can find, switches made using this approach are simply known as thermal reed switches. I have to admit I am a little baffled by the design choice, as this is otherwise a “cost-optimized” overseas import product, but these switches run several dollars a pop (at least domestically; the cheapest I found on Mouser was $3.20@1000pcs). My best guess is that once you eat the actual cost of the component, it’s idiotproof and can be crammed into the case by trained monkeys without any manual adjustment, calibration or chance of getting out-of-whack due to rough handling.

Other Utricularia for aquascaping besides UG

Utricularia graminifolia (UG) is a popular foreground plant for planted aquariums, with grass-like leaves that eventually form a lush green carpet over the substrate. It is considered anywhere between easy and impossible to grow, and has some special needs that make it not a beginner’s plant. However, there are a whole range of Utricularia that might grow fully submerged and give you a different look, and might even be easier for you to grow.

Below is a dump list of suspected submersion-friendly Utricularia, mostly for my future self to try growing when the world grows out of the current pandemic, my kids start sleeping during the night and I have a lick of free time for stuff like that again. This is by no means a comprehensive list. If you want that, head to Barry Rice’s extremely comprehensive Carnivorous Plant FAQ, find the Utricularia section and start clicking. He breaks them out by subgenus (so get your clicking finger warmed up), but under each is a table listing the growth habit (those listed as affixed aquatic and possibly subaffixed aquatic are good candidates). For photo references below, I tried to find some showing the foliage – for many Utricularia growers, and specific species, it’s all about the flowers.

If you do have any experience growing these fully submersed, please sound off in the comments!

  • U. bifida – (photo) (Source: webforum post)
  • U. caerulea – (photo/description) Note, may be synonymous with U. nivea (Source: webforum post)
  • U. dichotoma – (photo) Better known as Fairy Aprons for their unique flowers. (Sources: Besides this webforum post, and this other one, Carnivorous Plant Nursery until recently sold this plant with a mention of use as an aquarium foreground plant.)
  • U. geofrayii – To a depth of 5cm before flowering ceased (Source: webforum post)
  • U. limosa – (Source: webforum post)
  • U. livida – (photos/info) “easier to grow than UG and forms nice, thick mats in the aquarium”. More bulbous, grayish foliage compared to UG. (Sources: this fishkeeping article. Carnivorous Plant Nursery sold it as an aquarium foreground plant comparable to UG, but has since stopped carrying it.)
  • U. monanthos – (Source: webforum post)
  • U. nivea – To a depth of 5cm before flowering ceased (Source: webforum post)
  • U. praelonga – “easier to grow than UG and forms nice, thick mats in the aquarium” (Source: fishkeeping article). Appears to have significantly larger foliage than others in this list.
  • U. sandersonii – (photo) “The leaves are roughly the size and shape of duckweed, with small bladders interspersed throughout. It is much easier to grow than U. graminifolia and forms a beautiful dense carpet” (Source: fishkeeping article). Outside aquascaping, this plant is widely available commercially and known for having flowers that look like a cute bunny.
  • U. tricolor – (Source: webforum post)
  • U. uliginosa – “Some varieties […] thrive at a depth of 30-40cm” (Source: webforum post)
  • U. volubilis – Twining bladderwort. “It has long leaves that are arranged in a rosette, and each rosette produces long stolons that produce additional plantlets.” (Source: The Carnivorous Plant FAQ. You can also see a picture of this growing in an aquarium with sand substrate)

For my own part, UG has been somewhere in the middle, between easy and impossible to grow (or between dead and flourishing). I did get it to grow densely (if incredibly slowly) in basically just dirty water – a bed of terrarium gravel full of runoff from watering other CP pots – and even flower, but for anything resembling fully submerged in an aquarium, it just gradually uproots (ahem, up-submerged-stems?) and floats away rather than establishing. I’m almost certainly Doing it Wrong(tm), but figure it might be easier to adapt the plant load to the growing conditions rather than the other way around.

Utricularia graminifolia flowering in a bed of river gravel

So far, my UG seems to survive in a wide range of lighting and fertilization conditions, tolerating pretty crap lighting on my windowsill and more, ahem, tank-borne nutrient (produced naturally by the fish) than I would expect for a carnivorous plant, although it doesn’t seem to love any of those conditions. Grown semi-emersed in a peat:sand mix and well protected from nibbly fish, it produces the patchy carpet shown below. This is the better part of a year’s growth starting from a couple small sprigs.

Disable automatic reboots on Windows 10 (maybe)

Look, I get it, updates are important, and so is installing them promptly. But after waking up this morning to find yet another overnight job lost to an automatic reboot, it’s the last straw. To afford nice things (you know, like computers), I need to do my job, and that requires actually using my computer and delivering the results, including the output of long-running jobs.

For Windows 10 Professional/Enterprise/whatchamacallit, this can be done through the group policy editor (gpedit.msc), but this is not available on the Home edition (and workarounds involve sketchy download sites). So, trying this to defeat auto-updates instead (from this thread). Note, these files are protected and you need admin privileges to touch them of course:

  • Open %windir%\System32\Tasks\Microsoft\Windows\UpdateOrchestrator
  • Rename ‘Reboot’ to Reboot.old
  • Create a dummy folder named Reboot (I assume this prevents the “missing” file being restored or overwritten).

We’ll see if this does anything, or for how long…!

Notes To Myself: Fixing File Sharing in Windows 10 after “Fall Creators Update” breaks it

About 2 years ago, sharing movies from my desktop (Windows 10, alas) to the old Linux Mint laptop acting as a Chromecast-that-plays-local-content-without-weird-workarounds randomly stopped working, with Gigolo reporting the very helpful error, “Connection timed out”. Fair enough, it’s not Gigolo’s job to diagnose problems caused by dodgy Windows updates. After more Googling than it should have required (and 2 years putting it off for lack of computer-fixing and movie-watching time as the laptop, a horizontal surface, slowly drowned under new clutter), it turns out that some then-recent Windows updates, installed during the night for your convenience, silently break several features non-Windows hosts (yes, even recently updated ones) depend on to access Samba/CIFS shares.

One of these changes, beginning in version 1803, breaks the ability to discover shared folders or their hosts on the network via browsing. (You can still access them via UNC paths such as \\computername\folder – or the IP-address equivalent – or their smb:// equivalents, if you know them already.) Officially, the change is described as disabling HomeGroups, a Windows-specific, post-XP discovery feature I still can’t differentiate from Workgroups, but for me it seemed to break discovery on the whole, including on Linux hosts.

Another change, introduced in the “Fall Creators Update”, disables support for the old SMB 1.x protocol version. This is what was causing the Windows share host to seemingly not exist, even after the usual rain dances (yes network is connected, yes router is powercycled, yes smbd/nmbd are running, …), on freshly-updated Linux Mint 18.2 (LTS). Killing this off is at least defensible given the age of SMB 1.x and known (if somewhat theoretical unless the hackers are already in your living room) security holes, but a heads-up would have been nice.

Props to this random page for the “Suddenly can’t connect in Linux/Android”, SMB 1.x fix. In case it disappears, the fix is to open a Run dialog (Win-R) or command prompt, and type “optionalfeatures” (minus quotes). This sounds like a directive in the world’s lamest text-based adventure, but bear with me. You should get a dialog of various random features in alphabetical order. Find the one labeled “SMB 1.0/CIFS File Sharing Support”, expand it and enable the SMB 1.0 client and server.

To fix discoverability, some discovery services need to be turned back on (natch). Amazingly, the Windows 10 support article on the subject (which often appears as a link embedded in the UI near the relevant setting) is at least partially helpful. You have to scroll down to the “How do I troubleshoot sharing files and folders?” item and expand it yourself, because including a shareable anchor link to that part was a bridge too far, but the instructions within are pure gold. The instructions under “Make sharing services start automatically” will roll back the damage caused by the 1803 update and make shared folders great discoverable again. While you’re in there, you can also follow the advice to “Turn on network discovery and file and printer sharing, and turn off password protected sharing” if you’d like. I still couldn’t connect to the share after the above incantations, but can’t be 100% sure if it was another Windows update demon, a bug on the Linux side, or just me misrecalling the username/password after years of disuse, so I enabled this (nuclear) option and finally was back to watching movies on our TV. Just be sure to check for and de-l33t any local hackers already connected to your WiFi before doing this. Check the basement, check the cupboards, and if you have a weird closet under the stairs from which crackling sounds and owl noises occasionally emanate, be sure to check there too.

Silly Subproject: Switching 7805 / 7833 TO-220 replacement

Thanks to an oopsie on a larger project that involved not doing the math before dropping a small 3.3V linear regulator into a >20V input circuit design, I had a need to swap it for something that generated less finger-burning, board-cooking, smoky-smelling heat. I’m sure everyone and their dog has done one of these already, but I was putting in for boards anyway and had the PCB editor already open, so: A tiny, low cost switching regulator board that’s a drop-in replacement for a through-hole TO-220 linear regulator such as a 7805 (5V) or LM1117/LM3940/etc. (3.3V). You can also wire it up in place of a SMT regulator of course; I just liked the reusability of the iconic 7805-style pinout and form factor.

Download Gerbers, EAGLE (7.x) files and BoM below.

You can order the bare board itself from OSH Park for a couple bucks. Note, this is a surface mount design with 0603 size components, although there are only 6 components in total.

The heart of this minimal circuit is an AP632xx buck converter from Diodes Inc., with a maximum input voltage of 32V and claimed output current of 2A, with typical efficiencies in the 80-90% range (depending on input voltage, output voltage and current). The output voltage can be switched by stuffing either the AP63203 (3.3V) or AP63205 (5V). I’d be a little cautious of trying for the claimed 2A on this board, just because it’s so tiny (0.4 x 0.635 inches) and heat dissipation from the chip itself is, at least according to the datasheet recommendations, mainly by proximity (through an air gap!) of the plastic package to the groundplane beneath. On the other hand, in the circuit I made this for (~20V input, 3.3V out at ~300mA) the chip itself is barely detectable as warm to the touch. If in doubt, adding some thermal goop on the underside of the package might help.
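The heat difference is easy to put numbers on. A quick back-of-envelope comparison for the ~20V in, 3.3V out, ~300mA case mentioned above; the 85% efficiency is an assumed mid-range figure from the datasheet curves, not a measurement:

```python
def linear_dissipation(vin, vout, iout):
    """A linear regulator burns the entire voltage drop as heat."""
    return (vin - vout) * iout

def buck_dissipation(vin, vout, iout, efficiency):
    """A buck converter only wastes the (1 - efficiency) slice of input power."""
    p_out = vout * iout
    p_in = p_out / efficiency
    return p_in - p_out

# ~20V in, 3.3V out, 300mA
p_lin = linear_dissipation(20.0, 3.3, 0.3)        # ~5.0 W -- finger-burning
p_buck = buck_dissipation(20.0, 3.3, 0.3, 0.85)   # ~0.17 W -- barely warm
```

Five watts into a TO-220 with no heatsink versus a fifth of a watt is the whole story of why the original part cooked the board.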

Regulator in action on a silly “IoT” project that I’ll never have time to complete. The soldering and lack of cleaning are atrocious, but this is prototyping, don’t judge!

Not Dead

I’ve just been busy with a couple of little things.

Now that the littlest one is figuring out this whole sleep thing, I might have time for projects again :-)

Hooah! Could your next MRE contain bug meat?

This delicious solicitation for an upcoming DARPA project rolled …er, scuttled? across my desk last week. Now, I’m no stranger to unconventional protein sources with way too much exoskeleton, but this project might be food for thought if you plan to enlist.

Excerpts, emphasis mine:

Component: DARPA
Topic #: SB172-002
Title: Improved Mass Production of Beneficial Insects
Technology Areas: Bio Medical Chem / Bio Defense

OBJECTIVE: Develop innovative … approaches [for] insect colony production to be used for a variety of purposes in agricultural production or agricultural research (e.g., edible insects, natural enemies for biological control of agricultural pests, pathogens, or weeds, etc.).

[M]anaged insect production could play a large and important role in ensuring national security through stabilization of food security or the provisioning of other essential services delivered by insects.

Removing or reducing barriers to the efficient, economical, and effective production of valuable insect species could be used to improve agricultural production, deliver novel sources of nutrition, and protect necessary ecosystem services.

Phase III projects should address the challenge of encouraging human acceptance of insects and insect-derived products for human use.

Phase III (Military): The integration of insect-derived products or ecosystem services (e.g., into the Combat Feeding Directorate or the Armed Forces Pest Management Board) is a potential option for technology transition. The objective of Phase III (Military) will be to determine feasibility, utility, and acceptance levels of these products and production systems by military personnel, especially in deployment scenarios.

The full solicitation will be available at this link for a limited time, with all the buzzwords intact. For when it inevitably crawls shuffles off this mortal coil, the fulltext is reproduced below. DARPA awards are a great cross-pollination opportunity for small businesses; let’s just hope the award doesn’t go to some fly-by-night operation.

OBJECTIVE: Develop innovative engineering (e.g., automation or bio-sensing technologies), genetic, and/or genomic approaches to reduce the negative characteristics associated with insect colony production to be used for a variety of purposes in agricultural production or agricultural research (e.g., edible insects, natural enemies for biological control of agricultural pests, pathogens, or weeds, etc.). Projects focusing on mosquito production are discouraged from applying.

DESCRIPTION: There is a DoD need to improve production systems to produce insects for food or feed, agricultural release, or entomological research in an effort to mitigate threats to agriculture stability and develop alternative methods of producing nutrients or other bio-synthesized products. Insects currently provide crucial “ecosystem services” including natural pest suppression and pollination that are under increasing strain from environmental and anthropogenic disturbance. In contrast, advances in synthetic biology provide future opportunities to bolster these roles, or create entirely new insect-delivered services altogether. Achievement of these goals will require large numbers of specific insect species to be produced at a scale that is currently difficult because of system bottlenecks. If these bottlenecks could be overcome, managed insect production could play a large and important role in ensuring national security through stabilization of food security or the provisioning of other essential services delivered by insects.

Insects are the dominant animal group on the planet, and many species are accordingly vital to the provisioning of natural capital in support of the human economy. These so-called “ecosystem services” may be calculated as the value of the services lost if insects were to disappear. Using this method, Losey and Vaughan (2006) valued wild insect ecosystem services in the United States, including pollination, pest suppression, nutrient cycling, and recreational opportunities, at no less than $57 billion USD per year. Debates continue as to the accuracy and ethics of assigning values to natural services, but few can argue that a world without insects would struggle and perhaps fail to support human economies as we know them today.

The opportunity to positively affect large-scale managed insect production requires technological advances to overcome the bottlenecks created by the feeding media or substrate, labor, post-processing, quality control, and insufficient capital to generate efficiencies of scale (Cohen et al. 1999, Grenier 2009). Many insect species, especially those used for pest control, have relatively inflexible dietary demands in terms of nutritional quality, and some natural enemy species require only certain animal species as hosts. Insect bodies are fragile, and have generally been handled by humans during husbandry and packaging, a time-consuming and often expensive endeavor. Artificial rearing sometimes produces poor quality results; for example, it can yield insects with low nutritional value or that are unable to function in the environment upon release. Too often, existing solutions are expensive, thus triggering a vicious cycle where the insect product is not economical enough to attract the very capital expansion investment that would reduce the cost-per-unit to sustainable levels.

Accordingly, innovative solutions to these problems of rearing valuable insect species en masse would prove immensely valuable. Opportunities abound to improve rearing success on artificial diets, increase automation of husbandry and processing, improve quality control, and reduce cost-to-entry barriers of novel or existing technologies that overcome the most common insect rearing hurdles. Improved genetic, genomic, and proteomic understanding and editing tools allows enhanced diet optimization on both the production (nutritional) and consumption (insect) ends of the pipeline. Vast improvements in sensors, robotics, and computing have already allowed a nascent, automated plant-farming industry to form, and similar technologies could be developed or transferred to insect rearing and processing methods. Plummeting costs in an array of molecular techniques and specialized production platforms encourage a re-evaluation of formerly cost-prohibitive processes or a re-imagination of new ones.

Removing or reducing barriers to the efficient, economical, and effective production of valuable insect species could be used to improve agricultural production, deliver novel sources of nutrition, and protect necessary ecosystem services. Innovative engineering, bio-synthetic, and/or genetic/genomic strategies will be required to improve the output, quality, and viability of large-scale insect rearing needed to meet these goals.

This SBIR topic seeks approaches to identify and address issues associated with large-scale insect rearing and/or the improvement of production outcomes. We encourage applications that use emerging engineering and genetic/genomic tools to these ends. Expected outcomes could be: rapid assessment and/or production of successful artificial diets; improved rearing efficiency and/or scale through the use of automation, strategies, or machines to rapidly assess insect quality or delicately handle live insects for post-processing; and materials or methods to speed return on investment during the scaling-up process.

PHASE I: Identify engineering objectives, molecular targets, or innovative strategies for improving production and performance of insects to improve large-scale rearing operations. Individual projects should address at least one of several challenges expected, which include: (1) artificial insect diet success, (2) increased efficiency and automation, (3) improved quality control and post-processing, (4) materials or methods to significantly improve rates of return on creating economies of scale. Example approaches could include the following:

• Artificial diets for difficult-to-produce or especially valuable beneficial insect species.
• Engineering advances in insect rearing facilities to increase energy, materials, and/or labor efficiency.
• Methods, sensors, or machines to improve insect quality and reduce post-processing time or losses.
• Novel, alternative, or streamlined solutions to especially costly insect rearing facility problems.

The key deliverable for Phase I will be the demonstration of a proof of concept that the selected challenge has been overcome and can be scaled to a larger format. These demonstrations should be performed in repeated experiments in small colonies (i.e., tens to hundreds of individuals) on single or multiple insect species where significant improvements in insect rearing success, efficiency, end product, or cost-per-unit can be shown to have significantly improved through relevant analysis.

For this topic, DARPA will accept proposals for work and cost up to $225,000 for Phase I. The preferred structure is a $175,000, 12-month base period, and a $50,000, 4-month option period. Alternative structures may be accepted if sufficient rationale is provided.

PHASE II: The small-scale, small-colony approach taken in Phase I will be transferred to and implemented in a large-scale (i.e., hundreds to thousands of individuals) insect-products-sourcing platform. The goal of Phase II is the integration of technologies used to increase the output of insect rearing facilities through the success of artificial diets, automation, quality control and post-processing, or reduced cost per unit. Therefore, the deliverable for Phase II is the demonstration of a large-scale insect production system utilizing integrated engineering, genetic, or materials technologies. Communication with the proper regulatory agencies will be a key component to determine how these technologies can be safely and ethically monitored for proper use and eventual commercialization of the anticipated product.

PHASE III DUAL-USE APPLICATIONS: Phase III (Commercial): The technologies developed in Phases I and II will be integrated into a fundamental platform to improve the production of economically or environmentally valuable insect species. These integrated technologies will serve as the foundation for further improvement. Phase III will be a demonstration of a fully adopted system that utilizes two or more technologies to improve production. In addition to the development of a plan for regulatory oversight, if applicable, Phase III projects should address the challenge of encouraging human acceptance of insects and insect-derived products for human use.

Phase III (Military): The integration of insect-derived products or ecosystem services (e.g., into the Combat Feeding Directorate or the Armed Forces Pest Management Board) is a potential option for technology transition. The objective of Phase III (Military) will be to determine feasibility, utility, and acceptance levels of these products and production systems by military personnel, especially in deployment scenarios.


1. Chambers, Darrell L. 1977. Quality Control in Mass Rearing. Annual Review of Entomology 22:289-308
2. Clarke, Geoffrey M., Leslie J. McKenzie. 1992. Fluctuating Asymmetry as a Quality Control Indicator for Insect Mass Rearing Processes. Economic Entomology 85(6):2045-2050.
3. Cohen, Allen C., Donald A. Nordlund, and Rebecca A. Smith. 1999. Mass Rearing of entomophagous insects and predaceous mites: are bottlenecks biological, engineering, economic, or cultural? Biocontrol 20(3):85N-90N.
4. Grenier, Simon. 2009. In vitro rearing of entomophagous insects – Past and future trends: a mini review. Bulletin of Insectology 62(1):1-6.
5. Losey, John E., Mace Vaughan. 2006. The Economic Value of Ecological Services Provided by Insects. BioScience 56(4):311-323.
6. Riddick, Eric W. 2009. Benefits and limitations of factitious prey and artificial diets on life parameters of predatory beetles, bugs, and lacewings: a mini-review. Biocontrol 54:325-339.
KEYWORDS: insect production, automation, molecular biology, beneficial insects, ecosystem services

Solved: Mazda (JCI) Infotainment crashes when playing OGG Vorbis files

The Fix (Tl;dr): The Mazda Connect (a.k.a. “JCI infotainment” or Johnson Controls Infotainment, etc.) firmware available as of 1/2016 has a problem with long tags in Ogg Vorbis files, specifically the tags used for album/cover art. Vorbis files containing cover art will likely cause the radio to freeze and then reboot. To fix this, you need to FULLY strip any such long tags, including any padding left behind after their removal (in some software “removing” the tags only zeroes them out, leaving blank space in the file, i.e. the filesize does not change). This is specific to Ogg files; the player seems to have no trouble with cover art in mp3 ID3 tags.

To fix this, I’m using the CleanTags script with a minor tweak to force it to fully rewrite the files (no padding) rather than simply zero out the data. There are probably off-the-shelf GUI executables that can fully strip tags too, but I don’t know any off the top of my head, and they probably don’t advertise this as a feature since there’s really no need to do it unless working around an obscure bug in a specific in-dash music player. If you know any, please reply in the comments!

To use the cleantags script, you need to have Python (2.x?) and mutagen installed on your computer. After downloading a copy of cleantags, find the line containing the plain save() call and replace it with one that forces zero padding:

save(padding=lambda x: 0)

Whitespace matters in Python, so be sure not to mess with the indentation of the line. Then, run the script as usual (see documentation) and it should fully strip more obscure tags including the problematic cover art, avoiding the bug in the infotainment system’s player.
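If you want to sanity-check whether a file was actually stripped, cover art in Vorbis comments lives under the METADATA_BLOCK_PICTURE field name, and the field name itself appears literally in the file’s comment header. Here is a crude, dependency-free yes/no check (my own throwaway helper, not part of cleantags) you can run before copying files to the USB key:

```python
def has_cover_art_field(path):
    """Scan an Ogg Vorbis file for the METADATA_BLOCK_PICTURE comment
    field name. Field names are ASCII and case-insensitive per the Vorbis
    comment spec, so upcase the raw bytes before searching."""
    with open(path, "rb") as f:
        data = f.read()
    return b"METADATA_BLOCK_PICTURE" in data.upper()
```

Remember to also compare filesizes before and after stripping: zeroed-out padding leaves the size unchanged, which is exactly the half-measure that still crashes the radio.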

Up until recently, I’d been kicking it oldschool with my music collection: artist and song name embedded right in the filename; duplicating this info in internal metadata was for sissies (or people with entirely too much time on their hands). After finally retiring my old car for a modern new one with all the bells and whistles (Antilock brakes! Traction control! It plays mp3s?!), I discovered the stereo*, for all the sweet things it can do, will simply display “Unknown Artist/Unknown Title/Unknown Album” for any song files not formally tagged with this info, rather than fail over to displaying the filename (heck, even my homebrew old one did that). That was annoying, so I finally caved on the metadata issue. Well, really, my issue was not giving enough of a crap to sit there like a monkey retyping the filename info across a zillion files into a tagging tool. Luckily, we live in the future! and the drudgery of tagging mp3s can now be crowdsourced and automated.

The tool I’m using for this is MusicBrainz Picard. Basically, think Shazam for ID3 tags. It scans the files to generate a fingerprint based on acoustic content and length/etc., pulls the nearest-matching results from a crowdsourced database and Bob’s your uncle. It even grabs… cover art! And puts it right in your file! And then your music player shows it while playing the song! There’s a STANDARD for this! Like I said, the future. Up until that moment, I didn’t even know you could do that. And here is where the problems began.

Some time passed, and files from my now-tagged music collection eventually propagated to the USB key plugged into the dash**. Some more time passed. All of a sudden, after a couple months, the car stereo started to crash sporadically when playing songs off the USB key. Due to the whole not-showing-any-filenames-ever thing, it took a while to figure out it was crashing every time it tried to play a .OGG file. OGG files comprise a fairly small minority of my music collection***, which is why the problem was so sporadic and didn’t manifest right away.

Tracking down the problem actually took a bit of fiddling, since the tagging was no longer fresh in my mind as a first-guess trouble source. My first guess was that since all the files showing the problem were fairly old, and a freshly-generated Ogg file with the current encoder played back fine, bugs in the way-back-when encoder or obsolete version of the format were the most likely culprits. This eventually led me down a bit of a rabbit hole of Ogg diagnostic tools, so the below might be useful if you’re having other Ogg-related playback troubles on other players.

What I’m calling an Ogg music file is really a “Vorbis” bitstream, i.e. the actual compressed audio stuff generated by the Vorbis codec, packaged into a file using the “Ogg” container format, so separate problems can occur at either level. If this were a buffet, the Vorbis bitstreams and tags, etc. are different foods and the Ogg container is one of those silly divided cafeteria trays.

The Oggz tools are probably the most approachable way to check for problems in the container itself. These are command-line tools, but available as ready-to-rock executables for major Linux platforms and even Windows. The oggz-validate tool will warn you of common errors or inconsistencies in the file, and other tools in the package will help you dig further. Despite its intended usage for merging multiple files, oggz-merge can be used with a single file to correct some errors like out-of-order timestamps(?) and missing end-of-stream indicators. Note though, warnings from oggz-validate do not necessarily mean your player will choke on the file (for example, missing EOS page is very common and should be fairly harmless as most players should know to stop playing when the file itself ends.)
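For the missing-EOS case specifically, you don’t even need the oggz tools: every Ogg page begins with the ‘OggS’ capture pattern, and the header-type flag byte at offset 5 of the page header has bit 0x04 set on the end-of-stream page. A minimal stdlib check (my own helper, not part of the oggz suite, and no attempt at full validation):

```python
def last_page_has_eos(data):
    """Find the final 'OggS' capture pattern in the file's bytes and test
    the end-of-stream bit (0x04) in the header-type flag byte, which sits
    right after the version byte at offset 5 of the page header."""
    pos = data.rfind(b"OggS")
    if pos < 0 or pos + 6 > len(data):
        return False                 # no complete page header found at all
    header_type = data[pos + 5]      # flags: 0x01 cont., 0x02 BOS, 0x04 EOS
    return bool(header_type & 0x04)
```

Feed it open(path, "rb").read(); a False on a file that otherwise plays fine matches the “very common and mostly harmless” caveat above.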

Assuming the container checks out, the problem could be in the codec-specific data stored in it. Ogg is really a content-agnostic container format and can hypothetically contain any sort of audio/video/etc. streams, but Vorbis audio is probably the most common by far. In this case, your debugging options may be more limited. The only tool I found for validating Vorbis bitstreams is a C program called ‘vorbose’, available only as naked source code in a SVN repo. To use this, you also need to install ‘libogg2’ from the same repository, which is different from the ‘libogg’ available in any Linux package manager and will require you fetch the source and compile that yourself too. Luckily, there are no further dependencies. Once you provide the platform-specific magic incantations to make your computer cough forth a locally-sourced, artisanal, sustainably raised executable, you can use the -w option to “report anything fishy in the stream” (don’t ask me how comprehensive this is), or even -v to learn more than you ever wanted to about the file.

* otherwise known as the “infotainment system”, since it does all sorts of other stuff, like service reminders and even GPS navigation as a (horribly overpriced in the era of cheap Garmins and Google Maps) option. As such, this thing is somewhat intricately tied into the bowels of the vehicle, so it’s generally easier to work around its problems, since replacing it is not as simple as the headunit swaps of yore.

** next to the Raspberry Pi, which is, in a first for Raspberry Pi-kind, not being used as a media player (or retro console emulator).

*** Of the many things I experimented with in college, Ogg Vorbis was one of them. It’s a great format, but fairly unheard-of at the time, so when portable music players started to come out (e.g. Nomad), it was guaranteed they wouldn’t be able to play them. This was also a time in computing history where you’d start a single song encoding into mp3, go to class, and hope it was done when you got back. Playing an mp3 (~50% CPU time on my “newish” computer at the time) meant pausing it periodically so mIRC chats could catch back up to realtime. Needless to say, re-ripping one’s CD collection (remember those?) into an obscure format with even modestly slower encode/decode speeds was a tough sell.

Free IoT Telemetry using DNS Tunneling and Other Peoples’ WiFi (Part 1)

Well, it happened. Despite all my talk about “Internet of Things” hype being teh suck and not ready for primetime yet, I’m now an official IoT Hero. I suppose I should actually do something about that. Today, we explore cheap-as-free data exfiltration for mobile IoT gadgets using a trick known as DNS tunneling.

IoT Hero

Superpowers include starting the toaster via TCP/IP and tripping over tall bandwidth bills in a single bound.

Internet of Toilets, data exfiltration and wardriving!

Suppose you are a company that rents, leases, or otherwise loans out large numbers of some mobile object that you hope to get back at some point… say, Port-a-Potties Port-a-Johns Portaloos Honey Buckets portable toilets (yes, this is based on a true story). As it happens, they often get lost, stolen, blown away, forgotten somewhere or simply lost track of somewhere in your own logistics chain. This happens more than you might think, with various computer glitches or simple human screwups leaving inventory trapped on trucks or lost right under your nose in your own warehouse.

As a world leader in mobile outhouses with over 1 meeeelion units in circulation worldwide, you can’t afford to have your product going walkabout all the time, so you’d like to tag them with a bit of battery-powered IoT smarts so they can report back their locations periodically. A FindMyCrapper app tied into your logistics would let you opportunistically round up your lost sheep on the way by and bring them home.

So, we have 2 needs:
1) Get the location of the object periodically
2) Phone it home

This applies to whatever object you’d like back, including toilets, dumpsters, lighted traffic devices, reusable shipping containers… Housepets too if you can convince them to wear it.

The tried-and-mostly-true approach for #1 is GPS, with the caveat that it won’t work well, if at all, indoors (including units lost in your distribution center) or under dense cover. For #2 (hah!) it’s cellular. You can get a uBlox module for $35USD that covers this as well as free access to their cell triangulation database, which will provide rudimentary location, even indoors. Off-brand cell modems from the usual sources can probably be had for much less. Just beware of the cost of connectivity, especially if you’re not a big enough fish to sweet-talk your way out of per-device maintenance charges (I omit satellite options for just this reason). Also, as anyone living outside metropolitan areas can attest, coverage outside metropolitan areas can be iffy. So ideally, we want a cheap backup option for both of these needs.

Seeing as you can now get ludicrously cheap WiFi modules (like the infamous $2 ESP8266) and run them indefinitely (if infrequently) with a small solar cell and rechargeable battery, they scream possibilities. If you could use random WiFi access points nearby to (a) triangulate trilaterate your location and (b) phone it home, you’d be in business. We know WiFi geolocation is a thing (Apple and Google are doing it all the time), but sneaking data through a public hotspot without logging in?

To find out, I ran a little experiment with a Raspberry Pi in my car running a set of wardriving scripts. As I went about my daily business, it continuously scanned for open access points, and for any it was able to connect to, tried to pass some basic data about the access point via DNS tunneling*, a long-known but recently-popular technique for sneaking data through captive WiFi portals. Read the footnote if you’re interested in how this works!
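The encoding side of the trick is simple enough to sketch: pack the payload with base32 (DNS names are case-insensitive, so base64 is out), chop it into labels of at most 63 characters, and append a subdomain you control; the resolver chain then delivers the query to your authoritative server even when the portal blocks everything else. A sketch under stated assumptions: the domain below is a made-up placeholder, and this ignores the real scripts’ framing details.

```python
import base64

MAX_LABEL = 63   # per-label limit from the DNS spec
MAX_NAME = 253   # overall name length limit

def payload_to_query_name(payload, tunnel_domain="t.example.com"):
    """Encode raw bytes into a DNS query name under the tunnel domain.
    tunnel_domain is a placeholder -- use a zone you actually control."""
    enc = base64.b32encode(payload).decode("ascii").rstrip("=").lower()
    labels = [enc[i:i + MAX_LABEL] for i in range(0, len(enc), MAX_LABEL)]
    name = ".".join(labels + [tunnel_domain])
    if len(name) > MAX_NAME:
        raise ValueError("payload too large for a single query")
    return name
```

The catching side just logs incoming query names for the zone, strips the labels back off and base32-decodes them.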

The Experiments
Since I’m pressed for free time and this is just a quick proof of concept, I used a Raspberry Pi, WiFi/GPS USB dongles and some bashed-together Python scripts rather than *actual* tiny/cheap/battery-able gear and clever energy-harvesting schemes (even if that is occasionally my day job). The scripts answer some key questions about WiFi geolocation and data exfiltration. All of the scripts and some supporting materials are uploaded on GitHub (WiSneak).

1) Can mere mortals (not Google) viably geolocate things using public WiFi data? How good is it compared to GPS? Is it good enough to find your stuff?

The ‘wifind’ script in scanning mode grabs the list of visible access points and signal strengths, current GPS coordinate, and the current time once per second, and adds it to a Sqlite database on the device. Later (remember, lazy and proof of concept), another script, ‘query-mls’ reads the list of entries gathered for each trip, queries the Mozilla Location Service with the WiFi AP data to get lat/lon, then exports .GPX files of the WiFi vs. GPS location tracks for comparison.

There are other WiFi location databases out there (use yer Googler), but most are either nonfree (in any sense of the word) or have very limited or regional data. MLS seemed to have an answer for every point I threw at it. The only real catch is you have to provide at least 2 APs in close physical proximity as a security measure – you can’t simply poke in the address of a single AP and get a dot on the map.
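As a sketch of how the ‘query-mls’ lookup step can work, here is a stdlib-only Python version. The request/response field names follow the MLS geolocate API (“test” is the documented low-volume API key); the helper names are mine, not the actual script’s:

```python
import json
import urllib.request

MLS_URL = "https://location.services.mozilla.com/v1/geolocate?key=test"

def build_payload(aps):
    """aps: list of (mac_address, rssi_dbm) tuples from one WiFi scan.
    MLS wants at least 2 APs in close proximity or it refuses to answer."""
    return {"wifiAccessPoints": [
        {"macAddress": mac, "signalStrength": rssi} for mac, rssi in aps
    ]}

def locate(aps):
    """POST the scan to MLS; returns (lat, lon, accuracy_meters)."""
    req = urllib.request.Request(
        MLS_URL,
        data=json.dumps(build_payload(aps)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        fix = json.load(resp)
    # Response shape: {"location": {"lat": ..., "lng": ...}, "accuracy": ...}
    return fix["location"]["lat"], fix["location"]["lng"], fix["accuracy"]
```

The accuracy value that comes back is the radius (in meters) of the 95%-confidence circle, which matters later for filtering out junk fixes.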

2) Just how much WiFi is out there? How many open / captive portal hotspots? How many of them are vulnerable to DNS tunneling?

A special subdomain and DNS record were set up on my web host (cheap Dreamhost shared hosting account) to delegate DNS requests for that subdomain to my home server, running a DNS server script (‘dnscatch’) and logging all queries. This is the business end of the DNS tunnel.

In tunnel discovery mode, ‘wifind’ repeatedly scans for unencrypted APs and checks if their tunneling status is known yet. If unknown APs are in range, it connects to the one with the highest signal strength, and progresses through the stages of requesting an address (DHCP), sending a DNS request (tunnel probe with short payload), validating the response (if any) against a known-good value, and finally, fetching a known Web page and validating the received contents against the expected contents. The AP is considered ‘known’ if all these evaluations have completed, or if they could not be completed in a specified number of connection attempts. The companion script, ‘dnscatch’ running on a home PC (OK, another RasPi… yes I have a problem) catches the tunneled probe data and logs it to a file. The probe data includes the MAC address and SSID of the access point it was sent through. Finally, ‘query-mls’ correlates the list of successfully received tunneled data with the locations where the vulnerable AP was in range, outputting another set of .GPX files with these points as POI markers.
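The evaluation ladder above can be sketched as follows. The stage callables are hypothetical stand-ins for the real DHCP/DNS/HTTP code in ‘wifind’, and the real script additionally counts attempts and gives up after a finite number of retries:

```python
def evaluate_ap(connect, dhcp, dns_probe, web_fetch):
    """One evaluation pass over an unknown AP; each stage must succeed
    before the next is attempted. Returns the AP's tunneling status."""
    if not connect() or not dhcp():
        return "blocked"      # couldn't associate, or never got an address
    if not dns_probe():
        return "blocked"      # DNS filtered, or response failed validation
    if web_fetch():
        return "open"         # known-good page fetched: truly open AP
    return "tunnelable"       # DNS passes but HTTP is captive: jackpot
```

The interesting bucket is the last one: DNS queries make it through end-to-end, but a known web page fails its content check because the captive portal substituted a login page.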

Preliminary Results

This is expected to be an ongoing project, with additional regional datasets captured and lots of statistics. I haven’t got ’round to any of that yet, but here is a look at some early results.

All of the map images below were made by uploading the resulting .GPX files into OpenStreetMaps’ uMap site, which hosts an open-source map-generation tool (also) called uMap. Until finding this, I thought generating the pretty pictures would end up being the hardest part of this exercise.

WiFi geolocation and data exfiltration points


This trip, part of my morning commute (abridged in the map image), consists of 406 total GPS trackpoints and 467 WiFi trackpoints (the lower figure for GPS is due to the time it takes to fix after a cold start). Of these, 238 (51%) were in view of a tunnelable AP. The blue line is the “ground truth” GPS track and the purple line with dots is the track estimated from WiFi geolocation showing the actual trackpoints (and indirectly, the distribution of WiFi users in the area). The red dots indicate trackpoints where a tunneling-susceptible AP was in-view, allowing (in a non-proof-of-concept) live and historical telemetry to be exfiltrated by some dirty WiFi moocher.

Overall, the WiFi estimate is pretty good in an urban residential/commercial setting, although it struggles a bit here due to the prevalence of big parking lots, cinderblock industrial parks and conservation areas. The apparent ‘bias’ in the data toward WiFi-less areas in this dataset is consistent, based on comparison to the return drive, and does not appear to be an artifact of e.g. WiFi antenna placement on the left vs. right side of the Pi.

GPS and WiFi geolocation points with tunneling-friendly points shown in red.


Here is basically the same thing, with the tunnelable points shown according to the ground-truth GPS position track rather than the WiFi location track.

How does it do under slightly more challenging WiFi conditions?

Here is another dataset, taken while boogeying down MA-2 somewhere north of the posted speed limit. Surprisingly, the location accuracy is pretty good in general, even approaching proper highway speeds. This is the closest thing I have to a “highway” dataset, due to just now finishing up the scripts and not having a chance to actually drive on any yet. It will be interesting to see how many significant locatable APs can be found on a highway surrounded by cornfields instead of encroaching urban sprawl. I suspect there are a few lurking in various traffic/etc. monitoring systems and AMBER-alert-type signage scattered about, but they may not be useful for locating (with MLS, at least) due to the restriction of requiring at least two APs in-view to return a position. This is ostensibly to prevent random Internet loonies from tracking people to their houses via their WiFi MACs, although I have no idea how one would actually accomplish that (at least in an easier way than just walking around watching the signal strength numbers, which doesn’t require MLS at all). Unfortunately, I don’t have tunneling data for this track, since I haven’t driven it with the script in tunnel discovery mode yet.

Location track on MA-2 with fairly sporadic WiFi coverage


Closeup of location track on MA-2 with fairly sporadic WiFi coverage


In this dataset, MLS would occasionally (typically at the fringes of reception, with only a couple APs visible) return a completely bogus point, centered a few towns over, with an accuracy value of 5000.0 (I presume this is the maximum value, and represents not having any clue where you are). Every unknown point produced the same value, so the track periodically jumps to an oddly-specific (and completely uninteresting) coordinate in Everett, MA, producing the odd radiants shown below. These bogus points are easily excluded by setting a cutoff on allowable accuracy values, which are intended to represent the radius of the “circle where you might actually be with 95% confidence, in meters”.
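The filter itself is a one-liner. A sketch (the dict key name is an assumption about whatever your exporter stores, not the actual script’s schema):

```python
MLS_NO_CLUE = 5000.0  # accuracy value MLS returned for the bogus Everett points

def drop_bogus(trackpoints):
    """Keep only fixes whose 95%-confidence circle is smaller than the
    'no idea where you are' sentinel value."""
    return [p for p in trackpoints if p["accuracy"] < MLS_NO_CLUE]
```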

Location track on MA-2 with fairly sporadic WiFi coverage, bogus points included



1) Can mere mortals (not Google) viably geolocate things using public WiFi data?

Totally! Admittedly, my sample set is very small and only covers a local urban area, but it’s clear that geolocating with MLS is very approachable for mortals. If you plan to make large numbers of requests at a time, they expect you to request an API key, and reserve the right to limit the number of daily requests, but API keys are currently provided free of charge. IIRC, numbers of requests that need to worry about API keys or limits are on the order of 20k/day; this requirement is mainly aimed at folks deploying popular apps and not individual tinkerers. For testing and small volume stuff, you can use “test” as an API key.

How good is it compared to GPS? Is it good enough to find your stuff?

So far, ranging from GPS-level accuracy to street-level accuracy (or “within a couple streets” if they are packed densely enough but not much WiFi is around). Generally, not as good as GPS. The estimated accuracy values ranged typically from 50-some to 250 or so meters, vs. a handful for GPS. Remember though, the accuracy circle represents a 95% confidence value, so there’s a decent chance the thing you’re looking for is closer to the middle than the edges. This might also depend on how big your thing is and how many places there are to look. In some cases, narrowing it down to your own distribution center might be enough.

2) Just how much WiFi is out there? How many open / captive portal hotspots? How many of them are vulnerable to DNS tunneling?

As mentioned above, I found about 50% of points taken in a dense residential area were in view of an access point susceptible to tunneling. In this area, the cable operator Comcast is largely – if inadvertently – responsible for this, so your mileage may vary in other areas (although I expect others to follow suit). In the last few years, Comcast has been replacing its “dumb” rental cable modems with ones that include a WiFi hotspot, which shows up unencrypted under the name ‘xfinitywifi’ and serves up a captive portal. The idea is that Comcast customers in the area can log into these and borrow bandwidth from other customers while on the go. Fortunately for us, so far it also means plenty of tunneling opportunities: ‘xfinitywifi’ represented nearly 50% of all APs in my local area, and 15% of a much larger dataset including upstate New York (this dataset has limited tunnel-probe data only and does not include location data).

It also means Comcast – and other cablecos – could make an absolute killing selling low-cost IoT connectivity if they could provide a standardized machine-usable login method and minimal/no per-device-per-month access charges. An enterprise wishing to instrument stuff buys a single service package covering its entire fleet of devices and pays by the byte. Best of all, Comcast’s cable Internet customers already pay for nearly all of the infrastructure (bandwidth bills, electricity, climate-controlled housing of the access points…), so they can double-dip like a Chicago politician.

Under The Hood
There are a few practical challenges with this test approach that had to be dealt with:

The method used to manage the WiFi dongle (a Python library called ‘wifi‘) is a bit of a kludge that relies on scraping the output of commandline tools such as iwlist, iwconfig, etc., and continually rewriting your /etc/network/interfaces file (probably via the aforementioned tools). This combination tends to fall over in a stiff wind – for example, if it encounters an access point with an empty SSID (''). Running headless, crashes are neither detectable nor recoverable (except by plugcycling), so to keep things running, I had to add a check that avoids trying to connect to those APs. It turns out there are quite a few of them out there, so I’m probably missing a lot of tunneling opportunities and location resolution, but the data collected from the remaining ones is more than adequate for a proof of concept. I also added the step of replacing /etc/network/interfaces with a known-good backup copy at every startup, as a crash will leave it in an indeterminate state of disrepair, ensuring a fair chance the script will immediately crash again on the next run.

Keeping track of trips. A headless Pi plugged into the cig lighter and being power-cycled each time the car starts/stops will quickly mix data from multiple trips together in the database, and adding clock data would only compound the problem (as the clock is reset each time). The quick solution was adding a variable called the ‘runkey’ to the data for each trip. The runkey is a random number generated at script startup and associated with a single trip. To my mild surprise, I have gotten all unique random values despite the startup conditions being basically identical each time. Maybe the timing jitter in bringing up the WiFi dongle is a better-than-nothing entropy source?
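The runkey trick amounts to something like this (a sketch; the post doesn’t show the actual generator, and the function name is mine):

```python
import random

def new_runkey():
    """Generate the per-trip 'runkey' once at script startup. Every row
    written during this trip carries it, so trips can be separated later
    without a battery-backed real-time clock."""
    return random.getrandbits(32)
```

With 32 random bits, collisions between a handful of trips are astronomically unlikely even if the entropy at boot is mediocre, which matches the "all unique so far" observation.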

Connection attempts are time-consuming and blocking. At the start of this project, I wasn’t sure that connecting to random APs would even work at all at drive-by speeds. It does, but you kind of have to get lucky: DHCP takes some seconds (timeout set to 15), then the remaining evaluations take some further seconds each. Even with the hack of always preferring to connect to the strongest AP (assuming it is the closest and most likely to succeed), the odds of connecting are iffy. Luckily, my daily commute is the same every day, so repeated drives collect more data. Ordinarily, whatever AP I encounter ‘first’ would effectively shadow a bunch of others encountered shortly after, but the finite-retry mechanism ensures they are eventually excluded from consideration so that the others can be evaluated.

The blocking nature of the connection attempts also prevents WiFi location data (desired at fixed, ideally short intervals) from being collected reliably while tunnel probing. The easy solutions are to plop another Pi on the dashboard (sorry, fresh out!) or just toggle the script operating mode on a subsequent drive (I did the latter).

The iffyness of the connections may also explain a significant discrepancy between tunnel probes successfully received at my home server and APs reported as successful by the script side (meaning they sent the probe AND received a valid reply). Of course, I also found the outgoing responses would be eaten by Comcast if they contained certain characters (like ‘=’) in the fake domain name, even though the incoming probes containing them were unaffected.

One thing that hasn’t bitten me yet, oddly enough, is corruption of the Pi’s memory card, even though I am letting the power cut every time the car stops rather than have some way to cleanly shut down the headless Pi first. You really ought to shut it down first, but I’ve been lucky so far, and my current plan is to just reimage the card if/when it happens.

*DNS Tunneling
Typically, public WiFi hotspots, aka captive portals, are “open” (unsecured) access points, but not really open to the public – you can connect to one, but the first time you try to visit a website, you’re greeted with a login page instead.

DNS Tunneling is one technique for slipping modest amounts of data through many firewalls and other network-access blockades such as captive portals. How it works is the data, and any responses, are encoded as phony DNS lookup packets to and from a ‘special’ DNS server you control. The query consists mostly of your encoded data payload disguised as a really long domain name; likewise, any response is disguised as a DNS record response. Why it (usually) works is a bit of a longer story, but the short answer is that messing with DNS queries or responses yields a high chance of the client never being able to fetch a Web page, thus the access point never having the opportunity to feed a login page in its place, so the queries are let through unimpeded. If the access point simply blocked these queries for non-authenticated users, the HTTP request that follows would never occur; likewise, since hosts (by intention and design of the DNS) aggressively cache results, the access point feeding back bogus results for these initial queries would prevent web access even after the user logged in.
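A concrete sketch of the encoding half: the payload rides in the hostname labels of the query. The delegated domain and function names here are hypothetical, not the actual scripts’. Base32 is a natural choice because DNS names are case-insensitive, and stripping the padding avoids literal ‘=’ characters, which (as noted above) some networks eat:

```python
import base64

DOMAIN = "t.example.com"  # subdomain delegated to the catcher server (hypothetical)

def encode_query(payload: bytes) -> str:
    """Disguise a payload as a hostname under our delegated subdomain.
    DNS labels max out at 63 characters, and the whole name at ~253."""
    b32 = base64.b32encode(payload).decode().rstrip("=").lower()
    labels = [b32[i:i + 63] for i in range(0, len(b32), 63)]
    name = ".".join(labels + [DOMAIN])
    assert len(name) <= 253, "payload too large for a single query"
    return name

def decode_query(name: str) -> bytes:
    """Server side: strip the domain, rejoin the labels, restore padding."""
    b32 = "".join(name[:-len(DOMAIN) - 1].split(".")).upper()
    b32 += "=" * (-len(b32) % 8)
    return base64.b32decode(b32)
```

The catcher simply logs every name it is asked to resolve and runs them through the decoder; any answer it sends back is likewise smuggled inside an ordinary-looking DNS record.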

DNS tunneling is far from a new idea; the trick has been known since the mid to late ’90s and is used by some of the big players (PC and laptop vendors you’ve most certainly heard of) to push small amounts of tracking data past the firewall of whatever network their users might be connected to. However, it’s gained notoriety in the last few years as tools like iodine have evolved to push large enough amounts of data seamlessly enough to approximate a ‘normal’ internet connection.

Isn’t it illegal?

I am not a lawyer, and this is not legal advice, but my gut says “probably not”. Whether or not this is an officially sanctioned way to use the access point, the fact is it is intentionally left open for you to connect to (despite it being trivial to implement access controls such as turning on WPA(2) encryption, and the advice screamed from every mountaintop that you should do so), accepts your connection, then willfully passes your data and any response. The main thing that leaves me guessing it’s on this side of the jail bars is that a bunch of well-known companies have been doing it for a long time on other peoples’ networks and not gotten into any sort of trouble for it. (Take that with a big grain of salt though; big companies have a knack for not getting in trouble for doing stuff that would get everyday schmucks up a creek.)

Doesn’t it open up the access point owner to all sorts of liability from anonymous ne’er-do-wells torrenting warez and other illegal stuff through it?

Not really. The ‘tunneling’ part is key; the other end of the tunnel is where the ne’er-do-well’s traffic will appear to originate from; that’s a server they control. Unless they are complete idiots, the traffic within the tunnel is encrypted, and it’s essentially the same thing as a VPN connection. Anyone with a good reason to track down the owner of that server will follow the money trail in a similar way. While a clever warez d00d will attempt to hide their trail using a torrent-VPN type service offered from an obscure country and pay for it in bitcoins, the trail will at least not point to the access point owner.

So it’s just a free-for-all then?

No; there are at least a few technical measures an access point designer/owner can take against this technique. An easy and safe one is to severely throttle or rate-limit DNS traffic for unauthenticated users. While this won’t stop it outright, it will limit any bandwidth consumption to a negligible level, and the ‘abuse’ to a largely philosophical one (somebody deriving a handful of bytes/sec worth of benefit without paying for it). The rate limit could get increasingly aggressive the longer the user is connected without authenticating.

Another is to intercept A record responses and feed back fake ones anyway (e.g. virtual IPs in a private address range, such as 10.x.x.x), with the caveat that the AP then must store the association between the fake address it handed out and the real one, and, once the user is authenticated, forward traffic for it for the life of the session. I wouldn’t recommend this approach, as it may still have consequences for the client once they connect to a different access point and the (now cached) 10.x.x.x address is no longer valid, but I’ve seen it done.

You can also target the worst offenders pretty easily, as they (via software such as iodine) are pushing large volumes of requests for e.g. TXT records (human-readable text records which can hold larger data payloads) instead of A records (IP address lookups). However, some modern internet functionality, such as email anti-spam measures (e.g. SPF), legitimately relies on TXT record lookups, so proceed with caution.

Finally, statistical methods could be used – again, the hardcore tools like iodine will attempt to max out the length AND compress the hell out of the payloads, so heuristics based on average hostname lookup length and entropy analysis of the contents could work. This is more involved though, deep-packet-inspection territory, and as with any statistical method runs the risk of false positives and negatives.
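The first countermeasure, rate-limiting DNS from unauthenticated clients, is a classic token bucket. A minimal sketch (the rate and burst numbers are illustrative, not tuned recommendations):

```python
import time

class DnsRateLimiter:
    """Token-bucket throttle for DNS queries from a single unauthenticated
    client. Refills at `rate` tokens/sec up to `burst`; each query costs one."""

    def __init__(self, rate=1.0, burst=5):
        self.rate, self.burst = rate, burst
        self.tokens = float(burst)
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at the burst size.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False  # drop or delay this query
```

A real AP would keep one bucket per client MAC and could shrink `rate` the longer the client stays unauthenticated, as suggested above.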

Notes To Myself: Starting out with OpenWSN (Part 1)

Successful toolchain setup, flashing and functional radio network! Still todo: Fix network connectivity between the radio network and host system, and find/fix why the CPUs run constantly (drawing excess current) instead of sleeping.

Over the last few weeks (er, months?), I built up and tried out some circuit boards implementing OpenWSN, an open-source low-power wireless mesh networking project. OpenWSN implements a stack of kid-tested, IEEE-approved open protocols: 802.15.4e Time-Synchronized Channel Hopping (TSCH) at the MAC layer, 6TiSCH (an interim, hardcoded channel/timeslot schedule until the smarts for deciding them on the fly are finalized), 6LoWPAN (a compressed form of IPv6 whose headers fit in a 127-byte 802.15.4 frame), RPL/ROLL for routing, and finally CoAP/HTTP at the application level. The end result is (will be) similar to Dust SmartMesh IP, but using all open-standard and open-source parts. This should not be a huge surprise; it turns out the project is headed up by the original Berkeley Smart Dust guy. Don’t ask me about the relationship between this and Dust-the-click-n-buy-solution (now owned by Linear Technology), TSCH, any patents, etc. That’s above my pay grade. My day job delves heavily into low-power wireless stuff, and SmartMesh delivers everything it promises, but it’s rather out of the price range of hobbyists as well as some commercial projects. That, and if you use it in a published hobby project Richard Stallman might come to your house wielding swords. So how about a hands-dirty crash course in OpenWSN?


At the time of this writing, the newest and shiniest hardware implementation seems to be based on the TI CC2538, which packs a low-power ARM Cortex core and radio in a single package. OpenMote is the (official?) buyable version of this, but this being the hands-dirty crash course, I instead spun my own boards. You don’t really own it unless you get solderpaste in your beard, right? The OpenMote board seems to be a faithful replication of a TI reference design (the chip, high and low-speed crystals, and some decoupling caps), so we can start from there. To save time I grabbed an old draft OpenMote schematic from the interwebs, swapped the regulator for a lower-current one and added some pushbuttons.

OpenWSN PCB based on an early OpenMote version


Here is the finished product. Boards were ordered thru OSH Park, and SMT stencil through OSHStencils (no relation). Parts were placed using tweezers and crossed fingers, and cooked on a Hamilton Beach pancake griddle. 2 out of 3 worked on the first try! The third was coaxed back to life by lifting the chip and rebaking followed by some manual touch-up.


I first smoke-tested the boards using the OpenMote firmware, following this official guide. No matter where you start, you’ll need to install the GCC ARM toolchain. Details are on that page.

This package REQUIRES Ubuntu (or something like it), and a reasonably modern version of it at that (the internets say you can theoretically get it working on Debian with some undue hacking-about, if you don’t mind it exploding in your face sometime in the future.).

If your Ubuntu/Mint/etc. version is too old (specifically, package manager version), you’ll get an error relating to the compression type (or associated file extension) used in the PPA file not being recognized. You can maybe hack about to pull in a ‘future’ version for your distro version, but who knows which step you’ll get stuck at next for the same reasons. (Maybe none, but I just swapped the hard drive and installed a fresh Mint installation on another one.)

First build the CC2538 library: In the libcc2538 folder: python
This will build libcc2538.a. Probably after you got a “libcc2538.a does not exist. Stop.” error message.

Next, try compiling a test project to make sure the toolchain works:

chmod 777

Assuming all goes well, now you can flash the resulting binary onto the board!

sudo make TARGET=cc2538 BOARD=openmote-cc2538 bsl

Needless to say, you need some kind of serial connection to the bootloader UART (PA0, PA1) on the board for this to work (I used a USB-serial dongle with 3.3V output).

Successful output from this step looks something like:

Loading test-radio into target...
Opening port /dev/ttyUSB0, baud 115200
Reading data from test-radio.hex
Connecting to target...
Target id 0xb964, CC2538
Erasing 524288 bytes starting at address 0x200000
Erase done
Writing 524288 bytes starting at address 0x200000
Write done

Now you can actually try compiling OpenWSN.

OPTIONAL STEP: If you foresee doing active development on OpenWSN, you might want to install Eclipse. NOTES:

Direct from website; even Mint package manager version as of 12/15 is still on v3. Follow the instructions on this page, for the most part.
If you installed arm-none-eabi etc. from the previous step, it “should” be ready to rock.
The options in Eclipse have changed a bit since this was written. When creating a new (blank) test project, select “Cross ARM GCC”. Create a main.c file with a hello-world main() (or copy & paste from the page above), then save and build (Ctrl-B).

You may get a linker error similar to: “undefined reference to `_exit'” . I solved this by selecting ‘Do not use standard start files (-nostartfiles)’

I’m currently getting a 0-byte file as reported by ‘size’ (actual output file has nonzero size). Not sure whether to be concerned about this or not:

Invoking: Cross ARM GNU Print Size
arm-none-eabi-size --format=berkeley "empty-test.elf"
text data bss dec hex filename
0 0 0 0 0 empty-test.elf

The actual .elf and .hex are 13k and 34 bytes on disk, respectively.

This is not actually crucial to compiling OpenWSN, so I gave up here and went back to the important stuff:

NON-OPTIONAL: Download and set up SCons. This is the build tool (comparable to an advanced version of ‘make’) used by OpenWSN.

Again, anything in your package manager is horribly out of date, so grab it from the web site, unpack and ‘sudo python install’.

Clone the ‘openwsn-fw‘ repository somewhere convenient (preferably using git, but you could just download the .zip files from GitHub), change into its directory and run scons without any arguments. This is supposed to give you help: it lists various ‘name=value’ options, along with text suggesting that the listed options are the only valid ones. However, popular options like the gcc ARM toolchain and OpenMote-CC2538 are not among them. Luckily, they still work if you google around for the magic text strings:

scons board=OpenMote-CC2538 toolchain=armgcc goldenImage=root oos_openwsn

This results in an output file of decidedly nonzero reported size:

arm-none-eabi-size --format=berkeley -x --totals build/OpenMote-CC2538_armgcc/projects/common/03oos_openwsn_prog
text data bss dec hex filename
0x16303 0x28c 0x1bd8 98663 18167 build/OpenMote-CC2538_armgcc/projects/common/03oos_openwsn_prog
0x16303 0x28c 0x1bd8 98663 18167 (TOTALS)

Add ‘bootload /dev/ttyUSB0’ (or whatever your serial device shows up as) and run it with the mote in boot mode (hold Boot button / pin PA6 low and reset), and it should Just Work. Upload takes a while. Ideally, you need to flash at least 2 boards for a meaningful test (one master, or ‘DAG root’ in OpenWSN parlance, and one edge node).

Now, need to run openvisualizer to see if anything’s actually happening.

First… currently ‘’ for this package is broken, and barfs with errors e.g.:

Traceback (most recent call last):
File "", line 34, in
with open(os.path.join('openvisualizer', 'data', 'requirements.txt')) as f:
IOError: [Errno 2] No such file or directory: 'openvisualizer/data/requirements.txt'

‘pip install’ might be another way to go, but this appears to install from an outdated repository, and barfs with some version dependency issue.

Since the actual Python code is already here, we can just try running it, which seems to be expected to go through SCons: run ‘sudo scons rungui’ in the openvisualizer directory.

Traceback (most recent call last):
File "bin/openVisualizerApp/", line 29, in
import openVisualizerApp
File "/home/cnc/workspace/openwsn-sw/software/openvisualizer/bin/openVisualizerApp/", line 17, in

from openvisualizer.eventBus import eventBusMonitor
File "/home/cnc/workspace/openwsn-sw/software/openvisualizer/openvisualizer/eventBus/", line 18, in

from pydispatch import dispatcher
ImportError: No module named pydispatch

Well, that doesn’t actually work, but it’s at least a starting point to flushing out all the unmet dependencies by hand. Install the following packages:

pip (to install later stuff)
pydispatch (pip install…)*
pydispatcher (pip install…)
python-tk (apt-get install…)

*No, wait, that one’s already installed. According to the externalized Google commandline, you actually need to install a separate package named ‘pydispatcher‘.

Finally, let’s give it a go.

cnc@razor ~/workspace/openwsn-sw/software/openvisualizer $ sudo scons rungui

The OpenWSN OpenVisualizer


It works! Sort of. I get a mote ID, and can toggle it to be DAG root, and after a while, the 2nd board appears with a long-and-similar address in the neighbor list. It’s receiving packets and displays a RSSI. So at least the hardware is working. However, I can’t interact with it, and it doesn’t show up as a selectable mote ID (presumably just an openvisualizer issue). Nor can I ping either one as described in the tutorial, even though the part of the console dump relating to the TUN interface looks exactly as it does in the example (warning messages and all):

scons: done building targets.
cnc@razor ~/workspace/openwsn-sw/software/openvisualizer $ ioctl(TUNSETIFF): Device or resource busy

created following virtual interface:
3: tun0: mtu 1500 qdisc pfifo_fast state UNKNOWN group default qlen 500
inet6 bbbb::1/64 scope global
valid_lft forever preferred_lft forever
inet6 fe80::1/64 scope link
valid_lft forever preferred_lft forever
22:43:24 INFO create instance
22:43:24 INFO create instance
22:43:24 INFO create instance

Killing openvisualizer, plugcycling both radios and restarting it returns a screenful of:

[Errno 11] Resource temporarily unavailable
device reports readiness to read but returned no data (device disconnected?)

Let’s try rebooting; it fixes stuff in Windows… Actually, it looks like the GUI window sometimes persists after openvisualizer is supposedly killed at the console and can’t be closed via the UI; there is probably a task you can kill as a less drastic measure.

That cleared up the ‘no data’ errors, but still can’t ping any motes:

cnc@razor ~ $ ping bbbb::0012:4b00:042e:4f19
ping: unknown host bbbb::0012:4b00:042e:4f19

Well, at any rate we know the radio hardware is working, so let’s see the next moment of truth: power consumption. That’s really the point of this whole timeslotted radio exercise; otherwise you’d just drop an XBee on your board and hog as much bandwidth and electrons as you like. The 802.15.4e approach is for wireless networks that run for months or years between battery changes.

Firing up the ol’ multimeter is not the best way to measure current draw of a bursty load, but as a quick first peek it’ll do. On startup, the radio node draws a steady ~28mA, which is not all that unexpected (it needs a 100% initial on-time to listen for advertisements from the network and sync up.) After a few moments, the current drops to 11mA and the node appears in OpenVisualizer. Wait a minute… 11mA you say, with an ‘m’? That’s not that low. Scoping the 32MHz crystal confirms that the CPU is running all the time, rather than sleeping between scheduled activity. Scoping the 32KHz crystal mostly confirms that you can’t easily scope a low-power 32KHz crystal (the added probe capacitance quenches it), but doing so causes the node to drop off the network, then reappear a short time after the probe is removed, so that crystal appears to be functional (not to mention important).

Now, is it software or hardware?

Back to the OpenMote (not OpenWSN) example projects, let’s try an example that ‘should’ put the CPU to sleep:

cnc@razor ~/OpenMote/firmware/projects/freertos-tickless-cc2538 $ make TARGET=cc2538 BOARD=openmote-cc2538 all
Building 'tickless-cc2538' project...
Compiling cc2538_lowpower.c...
Compiling ../../kernel/freertos/croutine.c...
Compiling ../../kernel/freertos/event_groups.c...

cnc@razor ~/OpenMote/firmware/projects/freertos-tickless-cc2538 $ sudo make TARGET=cc2538 BOARD=openmote-cc2538 bsl
Loading tickless-cc2538 into target...
Opening port /dev/ttyUSB0, baud 115200
Reading data from tickless-cc2538.hex
Connecting to target...
Target id 0xb964, CC2538
Erasing 524288 bytes starting at address 0x200000
Erase done
Writing 524288 bytes starting at address 0x200000
Write done

Sure enough, with this example project loaded, the board’s current consumption drops to the uA range (actually, I was lazy and concluded the “0.00” reading on the mA scale told me what I wanted to know), with the 32MHz crystal flatlined except for very brief (<1msec) activity periods.

That’s it for Part 1. Stay tuned, Future Tim, for the part where we track down the source of the missing milliamps!

Clutter, Give Me Clutter (or, a GUI that doesn’t use Google as an externalized commandline)

UX nightmare: Get the menu at a restaurant, and it has only 2 items: toast and black coffee. But if you spindle the corner just right, a hidden flap pops out with a dessert menu. And if you shake it side to side, a card with a partial list of entrees falls in your lap (not all of them, just the ones you’ve ordered recently).

When you eat at Chateau DeClutter, bring a friend. If you can pinch all 4 corners of the menu at the same time, you can request the Advanced menu, wherein you just yell the name of a food across the room, and if that’s something they make, it’ll appear in about 20 minutes, and if not, nothing will happen.

Tim Tears It Apart: Measurement Specialties Inc. 832M1 Accelerometer

So, yesterday the outdoors turned into this.

This much snow in a few hours gets you a travel lockdown...

Not quite the snowpocalypse, but it was enough that a travel ban was in effect, and work was closed. What happens when we’re stuck in the house with gadgets?

All right, I’d like to tell you that’s the reason, but this actually got broken open accidentally at my work a little while back. Sorry for the crummy cellphone-cam pic and not-too-exhaustive picking at the wreckage.

Today’s patient anatomical specimen is a fancy 3-axis piezo accelerometer from Measurement Specialties Inc. This puppy retails for about $150, so this is a ‘sometimes’ teardown.

The insides of the 832M1 showing two sensor orientations and the in-package charge amplifiers

One thing that comes to your attention right away is holy shit, there’s an entire circuit board in there. In retrospect, I probably shouldn’t be too surprised by this. It appears that these are full-on piezoelectric sensors (not e.g. piezoresistive), which are a bit dastardly to read from without a charge amplifier inline. On the circuit you can see three identical-ish copies of a small circuit that is almost certainly that, with a small SOT23-5 opamp in each. The part’s total quiescent current consumption is billed at 12uA, so that’s a paltry 4uA per circuit.
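For intuition on why that charge amplifier has to be there: a piezo element puts out charge, not a sturdy voltage, and the amplifier integrates that charge onto a feedback capacitor so the output is roughly Vout = Q/Cf, largely independent of cable and stray capacitance. A quick sketch with made-up component values (assumptions for illustration, not numbers from the 832M1 datasheet):

```python
# Ideal charge-amplifier output magnitude: V_out = Q / C_f
# (the usual inverting topology actually gives -Q/C_f).
# Both values below are illustrative assumptions.
SENS_PC_PER_G = 10.0  # assumed sensor charge sensitivity, pC/g
C_F_NF = 1.0          # assumed feedback capacitor, nF

def charge_amp_vout(accel_g):
    """Output voltage magnitude for an acceleration in g."""
    q_pc = SENS_PC_PER_G * accel_g           # charge in pC
    return (q_pc * 1e-12) / (C_F_NF * 1e-9)  # volts

print(charge_amp_vout(100))  # 100 g -> 1.0 V with these values
```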

Here you also get a gander at the acceleration sensors themselves. Each is ‘glued’ by what appears to be low-temperature solder paste to its own metal pad on the ceramic substrate, with more of the same used to bond together the parts of the sensor itself. These consist mainly of a layer of gray piezoceramic material sandwiched between two chunks of metal. The larger of these acts as a proof mass, compressing or tensioning the piezoceramic layer when the part is moved on the axis normal to it. The metal ‘buns’ double as electrodes. There are (were) three such sensors in different orientations, one per axis, but the middle one broke off and flew across the room when the package was cracked open.
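The proof-mass arrangement described above can be put in rough numbers: the force on the piezoceramic layer is F = m·a, and the charge generated in compression mode is Q = d33·F. The coefficient and mass below are ballpark assumptions for a small PZT sensor, not measurements from this part:

```python
# Charge from a compression-mode piezo accelerometer: Q = d33 * m * a
# d33 and the proof mass are rough assumptions, not 832M1 values.
D33_PC_PER_N = 300.0  # assumed piezoelectric coefficient for PZT, pC/N
MASS_KG = 0.5e-3      # assumed proof mass, 0.5 g
G = 9.81              # m/s^2

def charge_pc(accel_g):
    """Charge in pC for an acceleration given in g."""
    force_n = MASS_KG * accel_g * G
    return D33_PC_PER_N * force_n

print(f"{charge_pc(1):.2f} pC/g")  # ~1.47 pC per g with these numbers
```

Picocoulombs per g is why the in-package charge amplifier earns its keep: that signal would never survive a run of ordinary cable on its own.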

Like most piezoceramics, the sensors inside are affected by thermal changes, and become more sensitive with increasing temperature. The designers appear to account for this and provide some measurement headroom over the nominal value (this bad boy is a 500G accelerometer) so that the full quoted range can be measured even at the maximum specified operating temperature. This means at room temperature, where it’s less sensitive, you can actually measure accelerations maybe 30-40% higher than the nominal value before the output limits (with appropriate calibration and reduced resolution, of course). At very cold temperatures, measurements quite a bit higher still are possible, with the same caveats.
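The headroom arithmetic in that paragraph, as a quick sketch (the sensitivity-shift figure is an assumed example chosen to be consistent with the 30-40% observation, not a datasheet number):

```python
# If the 500 g full scale is calibrated at the sensitivity found at max
# operating temperature, the same output swing covers a larger range at
# room temperature, where the piezoceramic is less sensitive.
NOMINAL_RANGE_G = 500.0
SENS_RATIO_HOT_TO_ROOM = 1.35  # assumed: ~35% more sensitive when hot

room_temp_range = NOMINAL_RANGE_G * SENS_RATIO_HOT_TO_ROOM
print(room_temp_range)  # 675.0 g measurable before the output limits
```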