## Solved: YouTube Watch Later ‘Remove Watched’ missing 2021

TL;DR: Watch one of the first 10 videos in your “Watch Later” list and see if it magically reappears.

Doing a web search for this problem reveals it has been an issue for some time, but possibly for varying reasons in the past. The above is working for me on Web + native clients as of 6/2021.

YouTube has a neat feature, ‘Watch Later’, which does exactly what it says on the tin. The Watch Later list has a very handy context menu option, ‘Remove Watched’, which will, appropriately enough, prune already-watched videos from the list. Or at least it’s supposed to. This option, which lives on the… ugh… “three vertical dots menu” (is that really what you call these things?) on the current Android mobile client, seems to randomly appear and disappear from the native mobile client and Web versions of YouTube on a whim.

After more fruitless searching and plinking around than I’d like to admit, it appears this is not telegraphing the future removal of this option, nor part of a grand conspiracy to wean users off of it for more ‘engagement’ by having them manually delete every watched video off the queue. Instead, it’s a dastardly combination of programmer cleverness and laziness. Cleverness in the sense of programmatically adding and 1984’ing* the menu item itself depending on an algorithmic assessment of whether it would currently be useful (vs. just leaving the menu item alone and having it do nothing if there are no watched videos to remove), and laziness in the sense of this assessment only checking whether there is a watched video in the first ‘n’ (a dozen or few) entries. Don’t ask me the exact value of ‘n’ or what factors it may vary on, but it does at least appear to differ between Web and native clients, so you may run into cases where the option appears on the Web version of YouTube but not the mobile app, and on another day disappears from both. Watching something in the topmost (oldest-added) 10 or so videos seems to pretty reliably fix it, though. (Just don’t ask me what these extreme menu-decluttering shenanigans are meant to accomplish, other than having you Google in circles at len…oh.)

* This menu option does not exist. This menu option has never existed.

## Sticky Nano Heater, a small fishtank heater that stays dry

While procrastinating on my current year+ long “couple long weekends” project, in which I’ve clearly bitten off more than a post-kids me has time to chew, I set my sights on a stupid-simple project I could actually complete :-)

If you’ll recall a couple posts back, I found that it’s curiously hard to find small heaters intended for small (nano/pico) aquariums that don’t suck. Preset temperatures (or no temperature regulation whatsoever), not being invisible, short life span, not being invisible, sketchy mains lead going right into the water, and not being invisible are but a few ways in which they suck. So, I made an invisible one that doesn’t go in the water.

Okay, so it’s not technically invisible, but this one can be easily concealed and does take up -zero- space inside the tank.

This is not a product for sale, but the design files are published under a Creative Commons license if you want to roll your own.

In a nano tank, every cc is precious, and well, your typical heater options for small tanks are a big ugly hunk of plastic or a big ugly tube of glass intruding into the already limited space. Everyone has the silly peel-and-stick liquid crystal thermometer on the outside of the glass though, because however sketchy it looks, they actually work pretty well. So, why not a heater that works the same way?

So to test it out I made the Sticky Nano Heater (not a real product name, suggestions welcome), a skinny circuit board with a 7-10W resistive heating element (PCB trace heater), MOSFET, and a pair of resistor-adjustable thermostat chips to control it. The board is adhered with double-sided tape to the outside of the glass in an unobtrusive location, such as along the substrate, and uses the glass itself as a heat spreader.

Besides “dirt simple project I could bang out quickly”, some design details are:

Dryness: No parts touch the water. Not the heating element, not the thermal sensor, not the power cord. This simplifies the sealing requirements to “nonexistent” (a splash-proof cover or coating is still not a bad idea for those water-change oopsies that never happen.)

Power: Just to keep things simple and avoid having to muck around with mains voltage near things that get wet, the power plug is USB and runs off one of the numerous spare phone chargers found sitting in a drawer somewhere. This keeps the really energetic wall pixies a good 3-6ft away. A common 5V/2A USB charger nominally delivers 10W, but I initially aimed to draw closer to ~7.5W just because I don’t trust the ratings on cheap phone chargers and neither should you. (I ended up cranking it back up to 10W for the 2.5-gallon test tank, but more on that later.)
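The resistance needed to hit that power budget falls out of Ohm's law. A minimal sketch of the sizing math, assuming the nominal 5V USB bus voltage from the text:

```python
# Back-of-envelope sizing for the PCB trace heater resistance. A nominal
# 5V/2A USB charger supplies at most 10W; targeting ~7.5W leaves headroom
# for optimistic charger ratings and cable voltage drop.
V = 5.0  # nominal USB bus voltage, volts

def trace_resistance(target_watts, volts=V):
    """Resistance (ohms) needed to dissipate target_watts at the given voltage."""
    return volts ** 2 / target_watts

for p in (7.5, 10.0):
    print(f"{p:4.1f} W target -> {trace_resistance(p):.2f} ohm trace")
```

Note how small the target resistance is (a few ohms), which is why trace-resistance tolerance and cable drop, discussed later, matter so much.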

Temperature Sensing: On one end of the PCB, away from the heating element, is a simple thermostat IC, the cheap and cheerful MCP9510 (or TMP709). It provides a digital output that cycles on/off as the temperature exceeds a resistor-set threshold, with a ~2C hysteresis. This is coupled to the glass by a copper thermal pour and via-stitching on the PCB. On this prototype version, the temperature setpoint is adjusted by twiddling a small pot, but providing a few discrete options is probably simpler for real-world users. A second thermostat on the heating element itself limits the maximum surface temperature to a user-set value (useful for very tiny setups and/or plastic tanks).
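The on/off-with-hysteresis behavior described above can be modeled as a simple bang-bang controller. This is a toy sketch of the concept, not the actual chip logic; the setpoint value is an illustrative assumption:

```python
# Toy model of a thermostat with ~2C hysteresis, in the spirit of the
# MCP9510/TMP709 parts described above. Setpoint is an assumed example value.
def thermostat_step(temp_c, heating, setpoint_c=25.0, hysteresis_c=2.0):
    """Return the new heater state given the sensed temperature.

    The output turns on below (setpoint - hysteresis) and off at the
    setpoint, so the heater cycles rather than chattering at one threshold.
    """
    if temp_c <= setpoint_c - hysteresis_c:
        return True   # too cold: heat on
    if temp_c >= setpoint_c:
        return False  # at setpoint: heat off
    return heating    # inside the hysteresis band: hold previous state

# Walk a temperature ramp through the band and watch the state latch.
state = False
for t in (22.0, 23.5, 24.9, 25.1, 24.0, 22.9):
    state = thermostat_step(t, state)
    print(f"{t:4.1f} C -> heater {'ON' if state else 'off'}")
```

The hysteresis band is what keeps the heater from rapid-cycling right at the threshold, which matters later when supply-voltage ripple nudges the apparent setpoint around.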

Safety Features:

• The main one, pun intended, is no mains wire going into the water.
• The second sensor, coupled directly to the heating trace by a via and small thermal pad, limits maximum surface temperature in case of detachment from the tank, tank materials with poor thermal conductivity, or other adversities. While some informal testing with an outdoor outlet, some dry leaves and a warm day showed that 10W over this surface area doesn’t really pack enough punch to be a serious fire hazard, this feature cheaply protects the heater and nearby surfaces from heat damage, keeps double-stick tape happy, and allows the heater to be freely used on those plastic tanks that are inexplicably the rage these days (YMMV on actual heat transfer of course).
• A surface-mount fuse inline with the power supply. While the USB charger “should” current-limit in the event of a board-level fault, this will blow if it doesn’t, or if the charger faults in a way that sends wall voltage down the wire. (*the prototype shown uses a tiny 0603 fuse that is not rated for 120VAC; this is fixed in the published revision).

Looks like a snakeoil Kickstarter pitch – does this actually work?

Surprisingly well, albeit with a couple caveats I’ll explain in a moment.

Given that I just bashed out a suck-it-and-see prototype rather than doing any kind of actual thermal modeling, I was a little worried that the glass and tape would not be thermally conductive enough, that the tape would immediately overheat and lose its stick, or that the glass would prove too conductive relative to water, causing the proximity of the heating element to the temperature sensor to disturb the measurement. On the 2.5 gallon glass betta tank used for testing, the outer surface of the heater part of the PCB gets barely warm to the touch on a filled tank (vs. scalding in free air), and the glass immediately outside the contact area is not perceptibly warm to the touch. The water very effectively sinks the heat; no noticeable heat transfers across the only-an-inch-or-so of glass between the heating element and temperature sensor. So far, I’ve just used plain Scotch brand double-stick tape to attach the boards to glass and plastic, and this has stayed very well stuck while seeming to have negligible effect on heat transfer. Likewise, the tank sinks heat effectively enough that I’ve seen no need to insulate the outer surface of the heater or surrounding glass against excess heat loss.

The only major caveat to the prototype shown is that the oft-quoted “3-5 watts per gallon” (or ~1 watt per liter) rule of thumb kind of breaks down for nano tanks as surface area begins to dominate volume, so the 10W (max) available from a standard USB charger just isn’t a lot. On my 2.5 US gallon (10L) test tank, 10W is really just enough to take the edge off a drafty windowsill and chilly winter thermostat setting, raising the temperature only a handful of degF on average. Of course, slapping on a second one to double the wattage is easy enough, but with 20+ gallons still being considered “nano”, festooning a tank with them kind of defeats the purpose. If I were to do up another one, I’d probably base it on a beefier power source of ~25W or so.
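As a rough illustration of why the per-gallon rule breaks down: heat loss scales with glass area while the rule of thumb scales with volume. The tank dimensions below are my own illustrative guesses, not measurements of any particular tank:

```python
# Surface-area-to-volume comparison for a few roughly box-shaped tanks,
# showing why small tanks lose proportionally more heat per liter.
def sa_to_volume(w_cm, d_cm, h_cm):
    """Return (surface area per liter, volume in liters) for a box tank.

    Counts all six faces (4 walls, top, bottom) as heat-loss surface.
    """
    area = 2 * h_cm * (w_cm + d_cm) + 2 * w_cm * d_cm
    volume_l = w_cm * d_cm * h_cm / 1000.0
    return area / volume_l, volume_l

for dims in ((20, 20, 25), (40, 30, 35), (90, 45, 50)):
    ratio, vol = sa_to_volume(*dims)
    print(f"{vol:5.1f} L tank: {ratio:5.1f} cm^2 of surface per liter")
```

The ~10L nano tank ends up with well over twice the heat-shedding surface per liter of the big tank, which is why a fixed watts-per-gallon figure undershoots badly at the small end.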

This leads to the second, more minor caveat: PCB trace resistors are rather low precision, so the actual resistance (and thus heat output) from a given supply will vary between batches and even between boards. Across two batches of 3 OSH Park boards each, one had a resistance spread of 4% and the other about 50%, and I can’t tell you which of those cases is the outlier, if any. As long as the power supply has enough grunt to cover reasonable variations, this is not a big deal, and the thermostatic control (a way to not suck, remember?) will maintain the right temperature regardless. But it becomes an issue if you’re trying to eke every last drop of power from a marginal wall-wart without toasting it.
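To put numbers on that, here's a sketch of how resistance spread maps onto heat output via P = V²/R. The nominal resistance is an assumed value, and I'm treating the batch spreads as symmetric ± tolerances purely for illustration:

```python
# How trace-resistance spread translates into heat output: P = V^2 / R.
# Nominal values assumed: 5V supply, ~3.3 ohm trace (for ~7.5W nominal).
V = 5.0
R_nominal = 3.3

for tol in (0.04, 0.50):  # the two observed batch spreads, treated as +/-
    p_lo = V**2 / (R_nominal * (1 + tol))  # resistance high -> power low
    p_hi = V**2 / (R_nominal * (1 - tol))  # resistance low  -> power high
    print(f"+/-{tol:.0%}: {p_lo:.1f} W to {p_hi:.1f} W")
```

The 50%-spread case is the killer: a board at the low-resistance end would try to pull roughly double the nominal power, which is exactly the "toasting a marginal wall-wart" scenario.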

Design Details and Lessons Learned

Below is the full schematic. There’s not a lot to it (and the long zigzag at right represents the PCB trace heater, of course), but it’s still “complex” compared to the industry standard. Hey, at least it doesn’t have IoT and a phone app!

This is something an EE1 can probably design in their sleep, but there are a few minor real-world gotchas worth noting.

• USB phone chargers are switching regulators and can be noisy, and USB cables vary dramatically in quality. In particular, long or poorly-made cables will exhibit a significant voltage drop when the heating element is on. Besides reducing output power, this can cause the tank temperature sensor to oscillate as its interpretation of the setpoint resistor is somewhat voltage-dependent, at least in the short term (the voltage drop when the heater turns on causes the setpoint to appear slightly lower, turning the heater off, which causes the voltage to rise again, etc.). The RC filter (R8, C2) on its power rail mitigates both the inherent noisiness of value-engineered USB chargers and the voltage ripple of heater operation. Note that I didn’t bother with filtering the overtemp sensor, since (you shouldn’t be tripping it anyway and) its exact setpoint is less critical.
• The outputs from the two sensors are ANDed together with an explicit gate vs. using some more clever scheme with diodes. Mainly this allows the all-important tank temperature sensor to have minimal output loading by not driving the indicator LED directly. This avoids self-heating of the tiny sensor IC, which can be significant even at a handful of mA for the tiny die inside. Again, the overtemp sensor is less critical and does drive an LED directly, although at a fairly low current.
• It doesn’t show on the schematic, but I designed the trace resistor to err on the high resistance side, and added a few sets of trim jumpers in the form of closely-spaced vias that could be soldered across to short neighboring traces. This came in handy as the trace resistors had a fair bit of tolerance and it became clear that I needed to extract as much heating power from the power source as possible.
• Remember that the heater side needs to press flush against the tank, so all components (including indicator LEDs) are on the other side of the board and likely facing away from the viewer. To make the LED indication visible I just added openings in the soldermask on both sides of the board and mounted the SMT LEDs upside-down. This produces a pretty neat, subdued effect with the LEDs producing a diffuse glow through the board material in the shape of the soldermask opening. You can buy SMT LEDs specially designed for downward-firing, but simply flipping most standard ones over seems to work fine too, and shining into the PCB material they are plenty visible from both sides.
• Yes, this could totally be done on a flex PCB for contoured tanks. All of mine happened to have flat surfaces handy, so I didn’t bother.
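The RC filter mentioned in the first bullet can be sanity-checked with the standard first-order low-pass formulas. The actual values of R8 and C2 aren't given here, so the numbers below are placeholder assumptions purely to show the math:

```python
# Rough check on an RC supply filter like (R8, C2) on the sensor rail.
# Component values are illustrative assumptions, not the actual BOM.
import math

R = 100.0   # ohms, series resistor (assumed)
C = 10e-6   # farads, bypass capacitor (assumed)

# -3 dB corner frequency of a first-order RC low-pass filter.
f_c = 1 / (2 * math.pi * R * C)
print(f"corner frequency ~{f_c:.0f} Hz")

# Attenuation of switching-supply ripple well above the corner, e.g. 50 kHz:
f_ripple = 50e3
atten = 1 / math.sqrt(1 + (f_ripple / f_c) ** 2)
print(f"50 kHz ripple passed at ~{atten * 100:.2f}% of its amplitude")
```

With values in this ballpark, high-frequency charger hash is knocked down by a couple orders of magnitude, while the much slower heater on/off sag is smoothed rather than eliminated.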

## Potential Safety Flaw in Home Depot HDX brand wire shelving units

Asbestos undergarments? Check.

Lawyer-proof socks? Check.

Here we go.

I got a small safety lesson over the weekend I wanted to share. Officially, it’s about an extremely common design of wire-rack shelving units, but the real safety lesson is to double-check the workmanship of load-bearing products with a critical eye, because the manufacturer may not have!

Here, a simple cheap design choice combines with lax quality control inspection to produce a potentially unsafe product. The TL;DR is that insufficient and off-center welds are used as a primary load-bearing element on the Home Depot shelf design indicated below, allowing the shelves to fail in a way that dumps the contents (for the product described below, up to 350lbs, ~160kg, per shelf) off the shelf, and potentially onto the person who just put them there.

The basic wire shelving unit design I’m referring to dates back to at least 1970 and can be seen in US patent 3,523,508. While the one I purchased, depicted below, came from The Home Depot, you can buy a substantially similar product from most big-box retailers. They consist of a set of typically 3-5 rectangular wire-rack shelves that slip over a set of four corner support poles with grooves at intervals. Each shelf has tubular metal sleeves in the corners to receive the poles, and assembly consists of snapping a set of plastic rings (clamshell or mating half-rings) onto the poles at the groove corresponding to the desired shelf height, then sliding the shelf down the poles until it snugs in against the plastic rings. The plastic rings and/or the matching corner sleeves in the shelf are very slightly tapered, causing the shelf to wedge firmly into place and support heavy loads without sliding down. If you remember your grade school science class, a wedge is a classic simple machine, in this case converting the downward force of objects on the shelf into an outward force at the support sleeves, at a substantial mechanical advantage. With the narrow wedge angle and small size of these sleeves, the forces they must bear are significant to say the least.
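To get a feel for just how significant, here's a quick frictionless-wedge estimate. The taper angle is my own guess (the rings are only slightly tapered, so the real angle could differ substantially), and ignoring friction overstates the force, so treat this strictly as an order-of-magnitude sketch:

```python
# Rough estimate of the outward force on each corner sleeve from the
# wedging action, ignoring friction. Taper angle is an assumed guess.
import math

shelf_load_lb = 350.0                # rated load per shelf
per_corner_lb = shelf_load_lb / 4    # ideal even distribution
taper_deg = 2.0                      # assumed wedge half-angle

# A frictionless wedge multiplies force by 1/tan(angle).
outward_lb = per_corner_lb / math.tan(math.radians(taper_deg))
print(f"{per_corner_lb:.0f} lb down per corner -> "
      f"~{outward_lb:.0f} lb outward on the sleeve")
```

Even allowing for friction eating a large share of that, the hoop load trying to pop the sleeve open is plausibly an order of magnitude above the shelf load itself, which is why the seam welds matter so much.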

The product photos below show the “HDX 5-Tier Wire Garage Storage Shelving Unit in Black (36 in. W x 72 in. H x 16 in. D)”, model #  21656PS-1, although as of this writing the website shows dozens of nearly identical products with various names and model numbers, differing in color or number of shelves (or not at all). This unit claims a load capacity of 350lbs (~160kg) per shelf, or 1,750lbs total. While setting the first shelf with a healthy downward hand-shove in each corner, I heard a small ‘tink’ sound as welds on one of the corner sleeves gave way. With these welds popped, slight hand pressure is enough to peel the corner sleeve open and send the corner of the shelf floorward (the plastic ring should not be visible above the sleeve at all).

The first obvious thing to notice on the Home Depot version is that the load-bearing sleeves are not continuous tubes of metal as shown in the original patent, but were rolled from sheet stock and have a seam. The small pair of welds serves double duty of fixing the wire shelving to the corner sleeve and holding the sleeve closed at the seam. Kind of a penny-pinching design choice, but it’s probably OK as long as those welds are solid…

The first set of broken welds, suffered before the shelf was fully assembled (let alone loaded), prompted a quick visual inspection of the other seam welds. In the photo below, the individual shelves are stacked alongside one another to show the variability in weld quality and placement. Holy wow!

The leftmost sleeve, now marked with red tape, is the same one that’s shown peeling open in the previous photo above, but without the force of the plastic ring it springs back to its original shape and the break is nearly impossible to see. The next, marked with yellow tape, is not yet broken (no attempt was made to assemble it), but the welds are so far off the mark that there is almost no material bridging the seam at all. This one is a disaster waiting to happen. Finally, the rightmost sleeves show more trustworthy welds, centered on the seam with adequate coverage on both sides serving to hold the sleeve closed. Ironically, each sleeve is pre-stamped with an NSF certification logo.

As previously mentioned, a lot is riding on these tiny welds: they bear the entire shelf load (or technically, 1/4 of it per sleeve, assuming the weight is perfectly distributed) via the outward force exerted on the sleeve by the wedging action of the plastic support rings. A failed seam weld will cause the shelf to tilt as that corner slides freely down its supporting rod, putting all the force as well as an unexpected bending moment on the remaining corners and, whether or not this results in a cascading failure, likely tipping the shelf contents onto the floor. Or, since this is most likely to happen in dynamic loading conditions, possibly onto whoever just placed the heavy thing there.

Hoping this unit was a fluke, I tried to exchange it for another of the same model. The employee working the returns counter brought out another and even invited an inspection before I took it home. Unfortunately, while the quality of the welds on the new unit wasn’t quite as bad as the first, there were still corners where only one of the two welds bridged the seam at all. I left with a refund but otherwise empty-handed, apart from the associate manager’s business card and a promise to “run it up the chain”. We’ll see.

If you, dear reader, have a shelving unit like this already in use, I urge you to inspect it for the combination of seams at the shelf corner sleeves and shoddy welds holding them together. While welds are in general nontrivial to assess visually, even by experienced professionals, the process-control issues shown above are pretty obvious on inspection. That said, don’t ask me for advice on whether yours are “good”, “good enough”, or “good enough to hold exactly 3 bags of concrete mix if set down gently”. Luckily, if in doubt, 1/2″ pipe clamps will juuust fit between the typical shelf wires and could be used to bolster the corner sleeves – either as a permanent bit of due-diligence bodgework, or at least long enough to safely unload the unit if you’re not taking any chances.

DISCLAIMER: While the information above depicts a specific product, the underlying issue is not specific to one retailer or model number, and may occur on any unit with a similar design. This post and all statements it contains represents my personal opinion. It does not represent the opinions of my employer, my cat, The Home Depot, or any recognized safety agency. I am not an engineer (well, I am, but not a mechanical engineer), and this is not engineering, legal or any other sort of advice.

## Tim Tears It Apart: SunGrow Betta Heater, a Sketchy Preset Aquarium Heater

Spoiler alert time. Did you know there are ferrous metal composites with arbitrary Curie points, extending down to room temperature and even below freezing? You never know where a teardown of a theoretically boring product will lead. Seriously, take more stuff apart, it’s good for you.

I’ll spare you the lengthy story of the goldfish that finally decided it was big enough to turn cannibal, but I needed to evict some sundew plants from a small betta tank I had on the windowsill, and make it habitable for actual fish on short notice in winter. Hurry, right? It turns out the options for tiny low-wattage heaters for small tanks are kind of limited, so click, click, buy. Soon, gracing me with its presence is this cute little heater stick. The size is right, but the hairs on the back of my neck stood up a bit on noticing this fine Chinese gadget’s 120VAC cord is meant to go from the wall socket directly into the water. If you had the kind of parents itching to slap that spoonful of raw-egg cookie dough out of your mouth, or the kind that made you do all your chainsaw juggling outside, they’d probably have a thing or two to say about this too. Not only is it a fully submersible heater, but the instructions actually warn against leaving the top (cord entrypoint) of the heater out of the water (or tilting it, but that’s neither here nor there).

To my pleasant surprise, this thing worked as advertised, maintaining the tank at more-or-less a balmy 78F and not electrocuting anyone or anything. However, just shy of a whopping 6 months after purchase, I noticed the heating indicator light was staying on constantly, but the heater was no longer actually producing any heat. So, let’s have a peek inside and see what happened! You can probably guess, but the innards are mildly interesting regardless.

According to the actual marking on the unit, this is the “Aqua Thermo Nano Plastic Aquarium Heater”. Through a cursory search on Amazon, the exact same heater is sold under a wide variety of manufacturer names, including a collection of randomish all-caps names like GOOBAT, VIBIRIT, PGFUNNY and dozens more, in some cases with claimed power ratings up to 300 Watts(!).

These small “betta heaters” seem to come in two varieties: those with a preset thermostat, permanently set to a fairly high temperature around 78F, and those that are just a fixed resistor continuously injecting a certain number of watts of heat into the tank. (There is a third kind advertising a PTC thermal control element which, according to reviews, is actually just a fixed resistor with no thermostatic element.)

The submersible unit has only one obvious seam, near the top where the power cord enters through a “cap” (the seam-like line around the bottom face appears to be a moulding artifact; this is all a single piece). The top cap is mostly decorative; popping it off reveals, along with a bit of trapped water, a second, rubber cap glued in place with an adhesive resembling RTV silicone caulk. This is the seal protecting the 120V cord and innards from water intrusion. Removing this inner cap reveals the simple circuit inside.

The first thing to note, aside from the complete lack of fusing, GFCI or any other safety measures, is that the inside of this compartment is dry (the small amount of water visible on the workbench was trapped between the white plug and decorative black cap, not in the innards). Leakage into this area was not the cause of failure. Cutting into the power cord reveals that it is also dry inside, with at least individually-insulated mains wires inside the outer insulation jacket. This chamber contains nearly the entire circuit, except for the heating element, buried beneath a final plug of black epoxy. Sawing the package open reveals the final component.

Sawing into the bottom, the first thing to emerge was a pile of damp(!) sand, along with a bit of fiber fill (cotton ball?) and a not-faint “fishy” smell. While many unique smells are described as fishy, this reminded me of the smell released by an electrolytic capacitor going bang in the night. Don’t ask me how the sand got damp; the plastic package appears to be a sensible one-piece design apart from the plug where the cord comes in and did not appear obviously damaged, and again, the upper components beneath the plug were bone dry.

With the sand and some heatshrink out of the way, here is the full circuit, exposed for your viewing pleasure. As you might expect, there’s not much to it. The component marked “CTR-34” is a thermal switch in series with the heating element; the only other components are a current-limiting resistor and reverse-blocking diode protecting a small red indicator LED glued into the white cap. Interestingly, the heating element is an off-the-shelf radial wirewound resistor, rather than a bespoke Nichrome wire coil or similar. You can find very similar-looking name-brand parts. This is marked with a generous 15W rating – a pleasant surprise for our 10W-rated heater (more on this in a bit) – and 780-ohm resistance. It’s also marked as… no, wait, that’s a giant crack extending most of the way down the package. This was not suffered during the disassembly effort. Gently removing the cracked-off portion reveals discoloration and corrosion, as well as an even stronger waft of “fishy” smell. If the wet sand wasn’t enough to give it away, I think we’ve found our failure mode. That said, quick math on our “10 Watt” heater has the resistor dissipating 18.5W on 120VAC North American power. For the curious, the “120V” figure quoted is an RMS value – an average of sorts that already accounts for the fact that AC voltage, and thus its resistive heating power, varies throughout its 60Hz cycle (the peak voltage from the wall is around 170V).
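The dissipation figure quoted above is straight Ohm's law on the measured parts, and the 170V peak is just the RMS value times √2. The arithmetic:

```python
# Checking the "10W" heater claim against the measured parts: a 780-ohm
# wirewound resistor across North American 120V RMS mains.
import math

V_rms = 120.0   # RMS mains voltage
R = 780.0       # measured heating resistor, ohms

P = V_rms ** 2 / R   # RMS voltage lets us use the DC power formula directly
print(f"{P:.1f} W dissipated")  # well above both the 10W product rating
                                # and the resistor's own 15W rating

# The ~170V peak figure is V_rms * sqrt(2) for a sine wave:
print(f"peak voltage ~{V_rms * math.sqrt(2):.0f} V")
```

Running the nominally 15W-rated part at 18.5W continuously, buried in sand inside a sealed plastic tube underwater, goes a long way toward explaining the cracked package.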

So, we’ve done it, right? Nothing left to take apart! We even had a look inside the resistor! (pause for reverent sighs)

The “CTR-34” component is a bit of a mystery to me; I’ve never seen a thermal switch of this style before, with its sleekly polished package. The ad copy makes reference to an “Intelligent Temperature Chip”. So let’s saw it open, amirite?

With only a little mangling in the process, we liberate the contents and see some surprising complexity. There is a tiny glass-encapsulated reed switch – not a bimetallic switch, but a magnetic reed switch – encased by a stack of black donuts, further encased in the sealed metal shell. The black donuts turn out to be a pair of small permanent magnets surrounding a piece of ferrite material with a carefully-engineered Curie point. One such material is known by the trade name Thermorite, although many competing vendors offer similar switches using the same approach. By carefully controlling the mix of dopants and particle size of the ferrite material, it can be engineered to have arbitrary Curie temperatures as low as -10C or lower. (Holy shit, that’s a thing?) Below the Curie temperature, the soft-iron ferrite is permeable to the permanent magnets’ field and holds the switch inside closed. Above the Curie temperature, the ferrite loses its permeability, allowing the switch to open and turn off the heat. If I were to guess, the sleek silver-colored shell is a special alloy such as mu-metal that protects it somewhat from external magnetic fields.

As far as I can find, switches made using this approach are simply known as thermal reed switches. I have to admit I am a little baffled by the design choice, as this is otherwise a “cost-optimized” overseas import product, but these switches run several dollars a pop (at least domestically; the cheapest I found on Mouser was $3.20@1000pcs). My best guess is that once you eat the actual cost of the component, it’s idiotproof and can be crammed into the case by trained monkeys without any manual adjustment, calibration or chance of getting out-of-whack due to rough handling. ## Other Utricularia for aquascaping besides UG Utricularia graminifolia (UG) is a popular foreground plant for planted aquariums, with grass-like leaves that eventually form a lush green carpet over the substrate. It is considered anywhere between easy and impossible to grow, and has some special needs that make it not a beginner’s plant. However, there are a whole range of Utricularia that might grow fully submerged and give you a different look, and might even be easier for you to grow. Below is a dump list of suspected submersion-friendly Utricularia, mostly for my future self to try growing when the world grows out of the current pandemic, my kids start sleeping during the night and I have a lick of free time for stuff like that again. This is by no means a comprehensive list. If you want that, head to Barry Rice’s extremely comprehensive Carnivorous Plant FAQ, find the Utricularia section and start clicking. He breaks them out by subgenus (so get your clicking finger warmed up), but under each is a table listing the growth habit (those listed as affixed aquatic and possibly subaffixed aquatic are good candidates). For photo references below, I tried to find some showing the foliage – for many Utricularia growers, and specific species, it’s all about the flowers. If you do have any experience growing these fully submersed, please sound off in the comments! • U. 
bifida – (photo) (Source: webforum post) • U. caerulea – (photo/description) Note, may be synonymous with U. nivea (Source: webforum post) • U. dichotoma – (photo) Better known as Fairy Aprons for their unique flowers. (Sources: Besides this webforum post, and this other one, Carnivorous Plant Nursery until recently sold this plant with a mention of use as an aquarium foreground plant.) • U. geofrayii – To a depth of 5cm before flowering ceased (Source: webforum post) • U. limosa – (Source: webforum post) • U. livida – (photos/info) “easier to grow than UG and forms nice, thick mats in the aquarium”. More bulbous, grayish foliage compared to UG. (Sources: this fishkeeping article. Carnivorous Plant Nursery sold it as an aquarium foreground plant comparable to UG, but has since stopped carrying it.) • U. monanthos – (Source: webforum post) • U. nivea – To a depth of 5cm before flowering ceased (Source: webforum post) • U. praelonga – “easier to grow than UG and forms nice, thick mats in the aquarium” (Source: fishkeeping article). Appears to have significantly larger foliage than others in this list. • U. sandersonii – (photo) “The leaves are roughly the size and shape of duckweed, with small bladders interspersed throughout. It is much easier to grow than U. graminifolia and forms a beautiful dense carpet” (Source: fishkeeping article). Outside aquascaping, this plant is widely available commerically and known for having flowers that look like a cute bunny. • U. tricolor (Source: webforum post) • U. uliginosa – “Some varieties […] thrive at a depth of 30-40cm” (Source: webforum post) • U. volubilis – Twining bladderwort. “It has long leaves that are arranged in a rosette, and each rosette produces long stolons that produce additional plantlets.” (Source: The Carnivorous Plant FAQ. 
You can also see a picture of this growing in an aquarium with sand substrate) For my own part, UG has been somewhere in the middle, between easy and impossible to grow (or between dead and flourishing). I did get it to grow densely (if incredibly slowly) in basically just dirty water – a bed of terrarium gravel full of runoff from watering other CP pots – and even flower, but for anything resembling fully submerged in an aquarium, it just gradually uproots (ahem, up-submerged-stems?) and floats away rather than establishing. I’m almost certainly Doing it Wrong(tm), but figure it might be easier to adapt the plant load to the growing conditions rather than the other way around. So far, my UG seems to survive in a wide range of lighting and fertilization conditions, tolerating pretty crap lighting on my windowsill and more, ahem, tank-borne nutrient (produced naturally by the fish) than I would expect for a carnivorous plant, although it doesn’t seem to love any of those conditions. Grown semi-emersed in a peat:sand mix and well protected from nibbly fish, it produces the patchy carpet shown below. This is the better part of a year’s growth starting from a couple small sprigs. ## Disable automatic reboots on Windows 10 (maybe) Look, I get it, updates are important, and so is installing them timely. But after waking up this morning to find yet-another overnight job lost to an automatic reboot, it’s the last straw. To afford nice things (you know, like computers), I need to do my job, and that requires actually using my computer and delivering the results, including the output of long-running jobs. For Windows 10 Professional/Enterprise/whatchamacallit, this can be done through the group policy editor (gpedit.msc), but this is not available on the Home edition (and workarounds involve sketchy download sites). So, trying this to defeat auto-updates instead (from this thread). 
Note, these files are protected and you need admin privileges to touch them, of course:

• Open %windir%\System32\Tasks\Microsoft\Windows\UpdateOrchestrator
• Rename ‘Reboot’ to Reboot.old
• Create a dummy folder named Reboot (I assume this prevents the “missing” file being restored or overwritten).

We’ll see if this does anything, or for how long…!

## Notes To Myself: Fixing File Sharing in Windows 10 after “Fall Creators Update” breaks it

About 2 years ago, sharing movies from my desktop (Windows 10, alas) to the old Linux Mint laptop acting as a Chromecast-that-plays-local-content-without-weird-workarounds randomly stopped working, with Gigolo reporting the very helpful error, “Connection timed out”. Fair enough, it’s not Gigolo’s job to diagnose problems caused by dodgy Windows updates. After more Googling than it should have required (and 2 years putting it off for lack of computer-fixing and movie-watching time as the laptop, a horizontal surface, slowly drowned under new clutter), it turns out that some then-recent Windows updates, installed during the night for your convenience, silently break several features non-Windows hosts (yes, even recently updated ones) depend on to access Samba/CIFS shares.

One of these changes, beginning in version 1803, is to break the ability to discover shared folders or their hosts on the network via browsing. (You can still access them via UNC paths such as \\computername\folder or \\192.168.1.2\folder or their smb:// equivalents, if you know them already.) Officially, the change is described as disabling HomeGroups, a Windows-specific, post-XP discovery feature I still can’t differentiate from Workgroups, but for me it seemed to break discovery on the whole, including on Linux hosts.

Another change, introduced in the “Fall Creators Update”, disables support for the old SMB 1.x protocol version.
This was resulting in the Windows share host seeming to not exist, even after the usual rain dances (yes network is connected, yes router is powercycled, yes smbd/nmbd are running, …), on freshly-updated Linux Mint 18.2 (LTS). Killing this off is at least defensible given the age of SMB 1.x and known (if somewhat theoretical unless the hackers are already in your living room) security holes, but a heads-up would have been nice.

Props to this random page for the “Suddenly can’t connect in Linux/Android” SMB 1.x fix. In case it disappears, the fix is to open a Run dialog (Win-R) or command prompt, and type “optionalfeatures” (minus quotes). This sounds like a directive in the world’s lamest text-based adventure, but bear with me. You should get a dialog of various random features in alphabetical order. Find the one labeled “SMB 1.0/CIFS File Sharing Support”, expand it and enable the SMB 1.0 client and server.

To fix discoverability, some discovery services need to be turned back on (natch). Amazingly, the Windows 10 support article on the subject (which often appears as a link embedded in the UI near the relevant setting) is at least partially helpful. You have to scroll down to the “How do I troubleshoot sharing files and folders?” item and expand it yourself, because including a shareable anchor link to that part was a bridge too far, but the instructions within are pure gold. The instructions under “Make sharing services start automatically” will roll back the damage caused by the 1803 update and make shared folder discovery great again. While you’re in there, you can also follow the advice to “Turn on network discovery and file and printer sharing, and turn off password protected sharing” if you’d like.
I still couldn’t connect to the share after the above incantations, but can’t be 100% sure if it was another Windows update demon, a bug on the Linux side, or just me misrecalling the username/password after years of disuse, so I enabled this (nuclear) option and finally was back to watching movies on our TV. Just be sure to check for and de-l33t any local hackers already connected to your Wifi before doing this. Check the basement, check the cupboards, and if you have a weird closet under the stairs from which crackling sounds and owl noises occasionally emanate, be sure to check there too.

## Silly Subproject: Switching 7805 / 7833 TO-220 replacement

Thanks to an oopsie on a larger project that involved not doing the math before dropping a small 3.3V linear regulator into a >20V input circuit design, I had a need to swap it for something that generated less finger-burning, board-cooking, smoky-smelling heat. I’m sure everyone and their dog has done one of these already, but I was putting in for boards anyway and had the PCB editor already open, so: a tiny, low cost switching regulator board that’s a drop-in replacement for a through-hole TO-220 linear regulator such as a 7805 (5V) or LM1117/LM3940/etc. (3.3V). You can also wire it up in place of a SMT regulator of course; I just liked the reusability of the iconic 7805-style pinout and form factor.

Download Gerbers, EAGLE (7.x) files and BoM below. You can order the bare board itself from OSH Park for a couple bucks. Note, this is a surface mount design with 0603 size components, although there are only 6 components in total.

The heart of this minimal circuit is an AP632xx buck converter from Diodes Inc., with a maximum input voltage of 32V and claimed output current of 2A, with typical efficiencies in the 80-90% range (depending on input voltage, output voltage and current). The output voltage can be switched by stuffing either the AP63203 (3.3V) or AP63205 (5V).
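For perspective, here are the back-of-envelope numbers behind the "finger-burning heat" that motivated this swap. This is just a sketch using my circuit's values (~20V in, 3.3V out, ~300mA) and an assumed 85% buck efficiency, which is a ballpark within the datasheet's claimed 80-90% range:

```python
# Rough comparison of regulator heat: linear vs. buck.
# Assumed values: 20 V in, 3.3 V out, 300 mA load, 85% buck efficiency.

def linear_dissipation(vin, vout, iout):
    """A linear regulator burns the entire input-output drop as heat."""
    return (vin - vout) * iout

def buck_dissipation(vout, iout, efficiency):
    """A buck converter only loses (1/eff - 1) of the output power."""
    p_out = vout * iout
    return p_out * (1.0 / efficiency - 1.0)

p_lin = linear_dissipation(20.0, 3.3, 0.3)   # about 5 W -- finger-burning
p_buck = buck_dissipation(3.3, 0.3, 0.85)    # under 0.2 W -- barely warm
```

Five-ish watts into a small linear package with no heatsink is a board-cooker; the same job done by the buck is a fraction of a watt spread across the whole converter.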
I’d be a little cautious of trying for the claimed 2A on this board, just because it’s so tiny (0.4 x 0.635 inches) and heat dissipation from the chip itself is, at least according to the datasheet recommendations, mainly by proximity (through an air gap!) of the plastic package to the groundplane beneath. On the other hand, in the circuit I made this for (~20V input, 3.3V out at ~300mA) the chip itself is barely detectable as warm to the touch. If in doubt, adding some thermal goop on the underside of the package might help.

## Not Dead

I’ve just been busy with a couple of little things. Now that the littlest one is figuring out this whole sleep thing, I might have time for projects again :-)

## Hooah! Could your next MRE contain bug meat?

This delicious solicitation for an upcoming DARPA project rolled …er, scuttled? across my desk last week. Now, I’m no stranger to unconventional protein sources with way too much exoskeleton, but this project might be food for thought if you plan to enlist. Excerpts, emphasis mine:

Component: DARPA
Topic #: SB172-002
Title: Improved Mass Production of Beneficial Insects
Technology Areas: Bio Medical Chem / Bio Defense

OBJECTIVE: Develop innovative … approaches [for] insect colony production to be used for a variety of purposes in agricultural production or agricultural research (e.g., edible insects, natural enemies for biological control of agricultural pests, pathogens, or weeds, etc.). [M]anaged insect production could play a large and important role in ensuring national security through stabilization of food security or the provisioning of other essential services delivered by insects. Removing or reducing barriers to the efficient, economical, and effective production of valuable insect species could be used to improve agricultural production, deliver novel sources of nutrition, and protect necessary ecosystem services.
Phase III projects should address the challenge of encouraging human acceptance of insects and insect-derived products for human use. Phase III (Military): The integration of insect-derived products or ecosystem services (e.g., into the Combat Feeding Directorate or the Armed Forces Pest Management Board) is a potential option for technology transition. The objective of Phase III (Military) will be to determine feasibility, utility, and acceptance levels of these products and production systems by military personnel, especially in deployment scenarios.

The full solicitation will be available at this link for a limited time, with all the buzzwords intact. For when it inevitably ~~crawls~~ shuffles off this mortal coil, the fulltext is reproduced below. DARPA awards are a great cross-pollination opportunity for small businesses; let’s just hope the award doesn’t go to some fly-by-night operation.

OBJECTIVE: Develop innovative engineering (e.g., automation or bio-sensing technologies), genetic, and/or genomic approaches to reduce the negative characteristics associated with insect colony production to be used for a variety of purposes in agricultural production or agricultural research (e.g., edible insects, natural enemies for biological control of agricultural pests, pathogens, or weeds, etc.). Projects focusing on mosquito production are discouraged from applying.

DESCRIPTION: There is a DoD need to improve production systems to produce insects for food or feed, agricultural release, or entomological research in an effort to mitigate threats to agriculture stability and develop alternative methods of producing nutrients or other bio-synthesized products. Insects currently provide crucial “ecosystem services” including natural pest suppression and pollination that are under increasing strain from environmental and anthropogenic disturbance.
In contrast, advances in synthetic biology provide future opportunities to bolster these roles, or create entirely new insect-delivered services altogether. Achievement of these goals will require large numbers of specific insect species to be produced at a scale that is currently difficult because of system bottlenecks. If these bottlenecks could be overcome, managed insect production could play a large and important role in ensuring national security through stabilization of food security or the provisioning of other essential services delivered by insects.

Insects are the dominant animal group on the planet, and many species are accordingly vital to the provisioning of natural capital in support of the human economy. These so-called “ecosystem services” may be calculated as the value of the services lost if insects were to disappear. Using this method, Losey and Vaughan (2006) valued wild insect ecosystem services in the United States, including pollination, pest suppression, nutrient cycling, and recreational opportunities, at no less than $57 billion USD per year. Debates continue as to the accuracy and ethics of assigning values to natural services, but few can argue that a world without insects would struggle and perhaps fail to support human economies as we know them today.

The opportunity to positively affect large-scale managed insect production requires technological advances to overcome the bottlenecks created by the feeding media or substrate, labor, post-processing, quality control, and insufficient capital to generate efficiencies of scale (Cohen et al. 1999, Grenier 2009). Many insect species, especially those used for pest control, have relatively inflexible dietary demands in terms of nutritional quality, and some natural enemy species require only certain animal species as hosts. Insect bodies are fragile, and have generally been handled by humans during husbandry and packaging, a time-consuming and often expensive endeavor. Artificial rearing sometimes produces poor quality results; for example, it can yield insects with low nutritional value or that are unable to function in the environment upon release. Too often, existing solutions are expensive, thus triggering a vicious cycle where the insect product is not economical enough to attract the very capital expansion investment that would reduce the cost-per-unit to sustainable levels.

Accordingly, innovative solutions to these problems of rearing valuable insect species en masse would prove immensely valuable. Opportunities abound to improve rearing success on artificial diets, increase automation of husbandry and processing, improve quality control, and reduce cost-to-entry barriers of novel or existing technologies that overcome the most common insect rearing hurdles. Improved genetic, genomic, and proteomic understanding and editing tools allows enhanced diet optimization on both the production (nutritional) and consumption (insect) ends of the pipeline. Vast improvements in sensors, robotics, and computing have already allowed a nascent, automated plant-farming industry to form, and similar technologies could be developed or transferred to insect rearing and processing methods. Plummeting costs in an array of molecular techniques and specialized production platforms encourage a re-evaluation of formerly cost-prohibitive processes or a re-imagination of new ones.

Removing or reducing barriers to the efficient, economical, and effective production of valuable insect species could be used to improve agricultural production, deliver novel sources of nutrition, and protect necessary ecosystem services. Innovative engineering, bio-synthetic, and/or genetic/genomic strategies will be required to improve the output, quality, and viability of large-scale insect rearing needed to meet these goals.

This SBIR topic seeks approaches to identify and address issues associated with large-scale insect rearing and/or the improvement of production outcomes. We encourage applications that use emerging engineering and genetic/genomic tools to these ends. Expected outcomes could be: rapid assessment and/or production of successful artificial diets; improved rearing efficiency and/or scale through the use of automation, strategies, or machines to rapidly assess insect quality or delicately handle live insects for post-processing; and materials or methods to speed return on investment during the scaling-up process.

PHASE I: Identify engineering objectives, molecular targets, or innovative strategies for improving production and performance of insects to improve large-scale rearing operations. Individual projects should address at least one of several challenges expected, which include: (1) artificial insect diet success, (2) increased efficiency and automation, (3) improved quality control and post-processing, (4) materials or methods to significantly improve rates of return on creating economies of scale. Example approaches could include the following:

• Artificial diets for difficult-to-produce or especially valuable beneficial insect species.
• Engineering advances in insect rearing facilities to increase energy, materials, and/or labor efficiency.
• Methods, sensors, or machines to improve insect quality and reduce post-processing time or losses.
• Novel, alternative, or streamlined solutions to especially costly insect rearing facility problems.

The key deliverable for Phase I will be the demonstration of a proof of concept that the selected challenge has been overcome and can be scaled to a larger format. These demonstrations should be performed in repeated experiments in small colonies (i.e., tens to hundreds of individuals) on single or multiple insect species where significant improvements in insect rearing success, efficiency, end product, or cost-per-unit can be shown to have significantly improved through relevant analysis.

Seeing as you can now get ludicrously cheap WiFi modules (like the infamous $2 ESP8266) and run them indefinitely (if infrequently) with a small solar cell and rechargeable battery, they scream possibilities. If you could use random WiFi access points nearby to (a) ~~triangulate~~ trilaterate your location and (b) phone it home, you’d be in business. We know WiFi geolocation is a thing (Apple and Google are doing it all the time), but sneaking data through a public hotspot without logging in?

To find out, I ran a little experiment with a Raspberry Pi in my car running a set of wardriving scripts. As I went about my daily business, it continuously scanned for open access points, and for any it was able to connect to, tried to pass some basic data about the access point via DNS tunneling*, a long-known but recently-popular technique for sneaking data through captive WiFi portals. Read the footnote if you’re interested in how this works!

### The Experiments

Since I’m pressed for freetime and this is just a quick proof-of-concept, I used a Raspberry Pi, WiFi/GPS USB dongles and bashed together some Python scripts rather than *actual* tiny/cheap/battery-able gear and clever energy harvesting schemes (even if that is occasionally my day job). The scripts answer some key questions about WiFi geolocation and data exfiltration. All of the scripts and some supporting materials are uploaded on GitHub (WiSneak).

1) Can mere mortals (not Google) viably geolocate things using public WiFi data? How good is it compared to GPS? Is it good enough to find your stuff?

The ‘wifind’ script in scanning mode grabs the list of visible access points and signal strengths, current GPS coordinate, and the current time once per second, and adds it to a Sqlite database on the device.
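The scanning-mode logging amounts to something like the following sketch. The table layout and column names here are my own guess at a minimal schema, not the actual WiSneak code (which lives on GitHub):

```python
import sqlite3
import time

# Sketch of wifind's scanning mode: one row per visible AP per scan,
# tagged with the GPS fix and a timestamp. Schema is hypothetical.
db = sqlite3.connect(":memory:")  # the real thing uses a file on the Pi
db.execute("""CREATE TABLE IF NOT EXISTS scans
              (ts REAL, lat REAL, lon REAL,
               bssid TEXT, ssid TEXT, rssi INTEGER)""")

def log_scan(aps, lat, lon, ts=None):
    """aps: list of (bssid, ssid, rssi) tuples from one WiFi scan."""
    ts = ts if ts is not None else time.time()
    db.executemany("INSERT INTO scans VALUES (?, ?, ?, ?, ?, ?)",
                   [(ts, lat, lon, b, s, r) for (b, s, r) in aps])
    db.commit()
```

Logging every visible AP (not just the strongest) matters later, since the location service wants multiple APs per query.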
Later (remember, lazy and proof of concept), another script, ‘query-mls’, reads the list of entries gathered for each trip, queries the Mozilla Location Service with the WiFi AP data to get lat/lon, then exports .GPX files of the WiFi vs. GPS location tracks for comparison. There are other WiFi location databases out there (use yer Googler), but most are either nonfree (in any sense of the word) or have very limited or regional data. MLS seemed to have an answer for every point I threw at it. The only real catch is you have to provide at least 2 APs in close physical proximity as a security measure – you can’t simply poke in the address of a single AP and get a dot on the map.

2) Just how much WiFi is out there? How many open / captive portal hotspots? How many of them are vulnerable to DNS tunneling?

A special subdomain and DNS record were set up on my web host (cheap Dreamhost shared hosting account) to delegate DNS requests for that subdomain to my home server, running a DNS server script (‘dnscatch’) and logging all queries. This is the business end of the DNS tunnel.

In tunnel discovery mode, ‘wifind’ repeatedly scans for unencrypted APs and checks if their tunneling status is known yet. If unknown APs are in range, it connects to the one with the highest signal strength, and progresses through the stages of requesting an address (DHCP), sending a DNS request (tunnel probe with short payload), validating the response (if any) against a known-good value, and finally, fetching a known Web page and validating the received contents against the expected contents. The AP is considered ‘known’ if all these evaluations have completed, or if they could not be completed in a specified number of connection attempts. The companion script, ‘dnscatch’, running on a home PC (OK, another RasPi… yes I have a problem) catches the tunneled probe data and logs it to a file. The probe data includes the MAC address and SSID of the access point it was sent through.
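Packing a probe like that into a DNS name is straightforward: encode the payload into DNS-safe characters and chop it into labels of at most 63 octets under a subdomain you control. This is a sketch of the general technique, not the actual WiSneak probe format; the subdomain is a placeholder:

```python
import base64

# Placeholder for the delegated subdomain pointing at the 'dnscatch' server.
TUNNEL_DOMAIN = "t.example.com"

def encode_probe(mac, ssid):
    """Pack the AP's MAC and SSID into a phony DNS lookup name."""
    payload = ("%s|%s" % (mac, ssid)).encode()
    # DNS names are case-insensitive, so base32 (not base64); drop padding.
    b32 = base64.b32encode(payload).decode().rstrip("=").lower()
    # Individual DNS labels are limited to 63 octets each.
    labels = [b32[i:i + 63] for i in range(0, len(b32), 63)]
    return ".".join(labels + [TUNNEL_DOMAIN])

def decode_probe(name):
    """What the catcher side does: strip the domain, rejoin, decode."""
    data = "".join(name[:-len(TUNNEL_DOMAIN) - 1].split("."))
    pad = "=" * (-len(data) % 8)
    mac, ssid = base64.b32decode(data.upper() + pad).decode().split("|", 1)
    return mac, ssid
```

Anything that resolves names on the AP's behalf will dutifully carry the "hostname" to the authoritative server, payload and all.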
Finally, ‘query-mls’ correlates the list of successfully received tunneled data with the locations where the vulnerable AP was in range, outputting another set of .GPX files with these points as POI markers.

### Preliminary Results

This is expected to be an ongoing project, with additional regional datasets captured and lots of statistics. I haven’t got ’round to any of that yet, but here is a look at some early results. All of the map images below were made by uploading the resulting .GPX files into OpenStreetMaps’ uMap site, which hosts an open-source map-generation tool (also) called uMap. Until finding this, I thought generating the pretty pictures would end up being the hardest part of this exercise.

*WiFi geolocation and data exfiltration points*

This trip, part of my morning commute (abridged in the map image), consists of 406 total GPS trackpoints and 467 WiFi trackpoints (the lower figure for GPS is due to the time it takes to fix after a cold start). Of these, 238 (51%) were in view of a tunnelable AP. The blue line is the “ground truth” GPS track and the purple line with dots is the track estimated from WiFi geolocation showing the actual trackpoints (and indirectly, the distribution of WiFi users in the area). The red dots indicate trackpoints where a tunneling-susceptible AP was in-view, allowing (in a non-proof-of-concept) live and historical telemetry to be exfiltrated by some dirty WiFi moocher.

Overall, the WiFi estimate is pretty good in an urban residential/commercial setting, although it struggles a bit here due to the prevalence of big parking lots, cinderblock industrial parks and conservation areas. The apparent ‘bias’ in the data toward WiFi-less areas in this dataset is consistent, based on comparison to the return drive, and does not appear to be an artifact of e.g. WiFi antenna placement on the left vs. right side of the Pi.

*GPS and WiFi geolocation points with tunneling-friendly points shown in red.*
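For the curious, the .GPX files that uMap consumes are simple enough to write by hand. This is a minimal hand-rolled sketch of a GPX 1.1 track (the `creator` string is just a placeholder; the real export may differ):

```python
# Minimal GPX 1.1 track writer -- roughly what the .GPX export needs.
# uMap and most track viewers accept this bare skeleton.
def to_gpx(points):
    """points: iterable of (lat, lon) tuples -> GPX document string."""
    pts = "\n".join(
        '      <trkpt lat="%.6f" lon="%.6f"></trkpt>' % (lat, lon)
        for lat, lon in points)
    return ('<?xml version="1.0" encoding="UTF-8"?>\n'
            '<gpx version="1.1" creator="wisneak-sketch" '
            'xmlns="http://www.topografix.com/GPX/1/1">\n'
            '  <trk>\n    <trkseg>\n%s\n    </trkseg>\n  </trk>\n</gpx>\n'
            % pts)
```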
Here is basically the same thing, with the tunnelable points shown according to the ground-truth GPS position track rather than the WiFi location track.

How does it do under slightly more challenging WiFi conditions? Here is another dataset, taken while boogeying down MA-2 somewhere north of the posted speed limit. Surprisingly, the location accuracy is pretty good in general, even approaching proper highway speeds. This is the closest thing I have to a “highway” dataset, due to just now finishing up the scripts and not having a chance to actually drive on any yet. It will be interesting to see how many significant locatable APs can be found on a highway surrounded by cornfields instead of encroaching urban sprawl. I suspect there are a few lurking in various traffic/etc. monitoring systems and AMBER alert type signage scattered about, but they may not be useful for locating (with MLS, at least) due to the restriction of requiring at least two APs in-view to return a position. This is ostensibly to prevent random Internet loonies from tracking people to their houses via their WiFi MACs, although I have no idea how one would actually accomplish that (at least in an easier way than just walking around watching the signal strength numbers, which doesn’t require MLS at all). Unfortunately, I don’t have tunneling data for this track since I haven’t driven it with the script in tunnel discovery mode yet.

*Location track on MA-2 with fairly sporadic WiFi coverage*

*Closeup of location track on MA-2 with fairly sporadic WiFi coverage*

In this dataset, MLS would occasionally (typically at the fringes of reception with only a couple APs visible) return a completely bogus point, centered a few towns over, with an accuracy value of 5000.0 (I presume this is the maximum value, and represents not having any clue where you are).
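Dropping these sentinel fixes in post-processing is a one-liner. A sketch, assuming each fix is a dict carrying MLS's reported accuracy; the 1000 m cutoff is my own choice, not from the actual scripts:

```python
# Exclude MLS "no idea" fixes, which come back with a huge (~5000 m)
# sentinel accuracy radius. Cutoff value is an assumption.
MAX_ACCURACY_M = 1000.0

def plausible_fixes(fixes):
    """fixes: list of dicts like {'lat': ..., 'lon': ..., 'accuracy': ...}"""
    return [f for f in fixes if f["accuracy"] < MAX_ACCURACY_M]
```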
Every unknown point produced the same value, resulting in an odd-looking track that periodically jumps to an oddly-specific (and completely uninteresting) coordinate in Everett, MA, producing the odd radiants shown below. These bogus points are easily excluded by setting a floor on allowable accuracy values, which are intended to represent the radius of the “circle where you might actually be with 95% confidence, in meters”.

*Location track on MA-2 with fairly sporadic WiFi coverage, bogus points included*

### Summary

1) Can mere mortals (not Google) viably geolocate things using public WiFi data?

Totally! Admittedly, my sample set is very small and only covers a local urban area, but it’s clear that geolocating with MLS is very approachable for mortals. If you plan to make large numbers of requests at a time, they expect you to request an API key, and reserve the right to limit the number of daily requests, but API keys are currently provided free of charge. IIRC, numbers of requests that need to worry about API keys or limits are on the order of 20k/day; this requirement is mainly aimed at folks deploying popular apps and not individual tinkerers. For testing and small volume stuff, you can use “test” as an API key.

How good is it compared to GPS? Is it good enough to find your stuff?

So far, ranging from GPS-level accuracy to street-level accuracy (or “within a couple streets” if they are packed densely enough but not much WiFi is around). Generally, not as good as GPS. The estimated accuracy values ranged typically from 50-some to 250 or so meters, vs. a handful for GPS. Remember though, the accuracy circle represents a 95% confidence value, so there’s a decent chance the thing you’re looking for is closer to the middle than the edges. This might also depend on how big your thing is and how many places there are to look. In some cases, narrowing it down to your own distribution center might be enough.

2) Just how much WiFi is out there?
How many open / captive portal hotspots? How many of them are vulnerable to DNS tunneling?

As mentioned above, I found about 50% of points taken in a dense residential area were in-view of an access point susceptible to tunneling. In this area, the cable operator Comcast is largely – if inadvertently – responsible for this, so your mileage may vary in other areas (although I expect others to follow suit). In the last few years, Comcast has been replacing its “dumb” rental cable modems with ones that include a WiFi hotspot, which shows up unencrypted under the name ‘xfinitywifi’ and serves up a captive portal. The idea is that Comcast customers in the area can log into these and borrow bandwidth from other customers while on the go. Fortunately for us, so far it also means plenty of tunneling opportunities: ‘xfinitywifi’ represented nearly 50% of all APs in my local area, and 15% of a much larger dataset including upstate New York (this dataset has limited tunnel-probe data only and does not include location data).

It also means Comcast – and other cablecos – could make an absolute killing selling low-cost IoT connectivity if they can provide a standardized machine-usable login method and minimal/no per-device-per-month access charges. An enterprise wishing to instrument stuff buys a single service package covering their entire fleet of devices and pays by the byte. Best of all, Comcast’s cable Internet customers already pay for nearly all of the infrastructure (bandwidth bills, electricity, climate-controlled housing of the access points…), so they can double-dip like a Chicago politician.
### Under The Hood

There are a few practical challenges with this test approach that had to be dealt with:

The method used to manage the WiFi dongle (a Python library called ‘wifi’) is a bit of a kludge that relies on scraping the output of commandline tools such as iwlist, iwconfig, etc. and continually rewriting your /etc/network/interfaces file (probably via the aforementioned tools). This combination tends to fall over in a stiff wind, for example, if it encounters an access point with an empty SSID (''). Running headless, crashes are neither detectable nor recoverable (except by plugcycling), so to keep things running, I had to add a check that avoids trying to connect to those APs. It turns out there are quite a few of them out there, so I’m probably missing a lot of tunneling opportunities and location resolution, but the data collected from the remaining ones is more than adequate for a proof-of-concept. I also added the step of replacing /etc/network/interfaces with a known-good backup copy at every startup, as a crash will leave it in an indeterminate state of disrepair, ensuring a fair chance the script will immediately crash again on the next run.

Keeping track of trips. A headless Pi plugged into the cig lighter and being power-cycled each time the car starts/stops will quickly mix data from multiple trips together in the database, and adding clock data would only compound the problem (as the clock is reset each time). The quick solution to this was ~~don’t use a database~~ adding a variable called the ‘runkey’ to the data for each trip. The runkey is a random number generated at script startup and associated with a single trip. To my mild surprise, I have gotten all unique random values despite the startup conditions being basically identical each time. Maybe the timing jitter in bringing up the wifi dongle is a better-than-nothing entropy source?

Connection attempts are time-consuming and blocking.
At the start of this project, I wasn’t sure that connecting to random APs would even work at all at drive-by speeds. It does, but you kind of have to get lucky: DHCP takes some seconds (timeout set to 15), then the remaining evaluations take some further seconds each. Even with the hack of always preferring to connect to the strongest AP (assuming it is the closest and most likely to succeed), the odds of connecting are iffy. Luckily, my daily commute is the same every day, so repeated drives collect more data. Ordinarily, whatever AP I encounter ‘first’ would effectively shadow a bunch of others encountered shortly after, but the finite-retry mechanism ensures they are eventually excluded from consideration so that the others can be evaluated.

The blocking nature of the connection attempts also prevents WiFi location data (desired at fixed, ideally short intervals) from being collected reliably while tunnel probing. The easy solutions are to plop another Pi on the dashboard (sorry, fresh out!) or just toggle the script operating mode on a subsequent drive (I did the latter). The iffiness of the connections may also explain a significant discrepancy between tunnel probes successfully received at my home server and APs reported as successful by the script side (meaning they sent the probe AND received a valid reply). Of course, I also found the outgoing responses would be eaten by Comcast if they contained certain characters (like ‘=’) in the fake domain name, even though the incoming probes containing them were unaffected.

One thing that hasn’t bitten me yet, oddly enough, is corruption of the Pi’s memory card, even though I am letting the power cut every time the car stops rather than have some way to cleanly shut down the headless Pi first. You really ought to shut it down first, but I’ve been lucky so far, and my current plan is to just reimage the card if/when it happens.
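As an aside, the runkey trick described above is tiny in code terms. A sketch, assuming records are dicts and using os.urandom as the entropy source (whether the Pi actually has decent entropy right at boot is exactly the open question raised earlier):

```python
import binascii
import os

# One random token per script startup (i.e., per trip): every record
# gets tagged with it, so trips can be separated later without needing
# a trustworthy clock.
RUNKEY = binascii.hexlify(os.urandom(4)).decode()

def tag(record):
    """Attach the current trip's runkey to a data record (a dict)."""
    record["runkey"] = RUNKEY
    return record
```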
### *DNS Tunneling

Typically, public WiFi hotspots, aka captive portals, are “open” (unsecured) access points, but not really open to the public – you can connect to it, but the first time you try to visit a website, you’re greeted with a login page instead. DNS tunneling is one technique for slipping modest amounts of data through many firewalls and other network-access blockades such as captive portals. How it works is the data, and any responses, are encoded as phony DNS lookup packets to and from a ‘special’ DNS server you control. The query consists mostly of your encoded data payload disguised as a really long domain name; likewise, any response is disguised as a DNS record response.

Why it (usually) works is a bit of a longer story, but the short answer is that messing with DNS queries or responses yields a high chance of the client never being able to fetch a Web page, thus the access point never having the opportunity to feed a login page in its place, so the queries are let through unimpeded. If the access point simply blocked these queries for non-authenticated users, the HTTP request that follows would never occur; likewise, since hosts (by intention and design of the DNS) aggressively cache results, the access point feeding back bogus results for these initial queries would prevent web access even after the user logged in.

DNS tunneling is far from a new idea; the trick has been known since the mid to late ’90s and is used by some of the big players (PC and laptop vendors you’ve most certainly heard of) to push small amounts of tracking data past the firewall of whatever network their users might be connected to. However, it’s gained notoriety in the last few years as tools like iodine have evolved to push large enough amounts of data seamlessly enough to approximate a ‘normal’ internet connection.

Isn’t it illegal? I am not a lawyer, and this is not legal advice, but my gut says “probably not”.
Whether this is an officially sanctioned way to use the access point or not, the fact is it is intentionally left open for you to connect to (despite it being trivial to implement access controls such as turning on WPA(2) encryption, and the advice screamed from every mountaintop that you should do so), accepts your connection, then willfully passes your data and any response. The main thing that leaves me guessing it’s on this side of the jail bars is that a bunch of well-known companies have been doing it for a long time on other peoples’ networks and not gotten into any sort of trouble for it. (Take that with a big grain of salt though; big companies have a knack for not getting in trouble for doing stuff that would get everyday schmucks up a creek.)

Doesn’t it open up the access point owner to all sorts of liability from anonymous ne’er-do-wells torrenting warez and other illegal stuff through it? Not really. The ‘tunneling’ part is key; the other end of the tunnel is where the ne’er-do-well’s traffic will appear to originate from, and that’s a server they control. Unless they are complete idiots, the traffic within the tunnel is encrypted, and it’s essentially the same thing as a VPN connection. Anyone with a good reason to track down the owner of that server will follow the money trail in a similar way. While a clever warez d00d will attempt to hide their trail using a torrent-VPN type service offered from an obscure country and pay for it in bitcoins, the trail will at least not point to the access point owner.

So it’s just a free-for-all then? No; there are at least a few technical measures an access point designer/owner can take against this technique. An easy and safe one is to severely throttle or rate-limit DNS traffic for unauthenticated users.
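On a Linux-based AP, that rate-limiting idea is nearly a one-liner with iptables’ hashlimit module. A sketch (the interface name and rates are made up, and unauthenticated clients are assumed to traverse this chain):

```shell
# Allow each unauthenticated client ~10 DNS queries/sec with a small burst;
# drop the rest. Plenty for legitimate lookups, useless for tunneling.
iptables -A FORWARD -i wlan0 -p udp --dport 53 \
    -m hashlimit --hashlimit-name dns-throttle \
    --hashlimit-mode srcip \
    --hashlimit-upto 10/sec --hashlimit-burst 20 \
    -j ACCEPT
iptables -A FORWARD -i wlan0 -p udp --dport 53 -j DROP
```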
While this won’t stop it outright, it will limit any bandwidth consumption to a negligible level, and reduce the ‘abuse’ to a largely philosophical one (somebody deriving a handful of bytes/sec worth of benefit without paying for it). The rate limit could get increasingly aggressive the longer the user is connected without authenticating. Another is to intercept A record responses and feed back fake ones anyway (e.g. virtual IPs in a private address range, such as 10.x.x.x), with the caveat that the AP then must store the association between the fake address it handed out and the real one, and once the user is authenticated, forward traffic for it for the life of the session. I wouldn’t recommend this approach, as it may still have consequences for the client once it connects to a different access point and the (now cached) 10.x.x.x address is no longer valid, but I’ve seen it done. You can also target the worst offenders pretty easily, as they (via software such as iodine) are pushing large volumes of requests for e.g. TXT records (human-readable text records which can hold larger data payloads) instead of A records (IP address lookups). However, some modern internet functionality, such as email anti-spam measures (e.g. SPF), does legitimately rely on TXT record lookups, so proceed with caution. Finally, statistical methods could be used – again, the hardcore tools like iodine will attempt to max out the length AND compress the hell out of the payloads, so heuristics based on average hostname lookup length and entropy analysis of the contents could work. This is more involved though, deep-packet-inspection territory, and as with any statistical method it runs the risk of false positives and negatives.

## Notes To Myself: Starting out with OpenWSN (Part 1)

TL;DR: Successful toolchain setup, flashing and functional radio network!
Still todo: fix network connectivity between the radio network and host system, and find/fix why the CPUs run constantly (drawing excess current) instead of sleeping.

Over the last few weeks (er, months?), I built up and tried out some circuit boards implementing OpenWSN, an open-source low-power wireless mesh networking project. OpenWSN implements a stack of kid-tested, IEEE-approved open protocols: 802.15.4e Time-Synchronized Channel Hopping at the bottom, 6TiSCH (an interim, hardcoded channel/timeslot schedule until the smarts for deciding them on the fly are finalized), 6LoWPAN (a compressed form of IPv6 whose headers fit in a 127-byte 802.15.4 frame), RPL/ROLL for routing, and finally CoAP/HTTP at the application level. The end result is (will be) similar to Dust SmartMesh IP, but using all open-standard and open-source parts. This should not be a huge surprise; it turns out the project is headed up by the original Berkeley Smart Dust guy. Don’t ask me about the relationship between this and Dust-the-click-n-buy-solution (now owned by Linear Technology), TSCH, any patents, etc. That’s above my pay grade.

My day job delves heavily into low-power wireless stuff, and here SmartMesh delivers everything it promises. But it’s rather out of the price range of hobbyists as well as some commercial projects. That, and if you use it in a published hobby project Richard Stallman might come to your house wielding swords. So how about a hands-dirty crash course in OpenWSN?

Boards

At the time of this writing, the newest and shiniest hardware implementation seems to be based on the TI CC2538, which packs a low-power ARM Cortex core and radio in a single package. OpenMote is the (official?) buyable version of this, but this being the hands-dirty crash course, I instead spun my own boards. You don’t really own it unless you get solder paste in your beard, right?
The OpenMote board seems to be a faithful replication of a TI reference design (the chip, high- and low-speed crystals, and some decoupling caps), so we can start from there. To save time I grabbed an old draft OpenMote schematic from the interwebs, swapped the regulator for a lower-current one and added some pushbuttons.

OpenWSN PCB based on an early OpenMote version

Here is the finished product. Boards were ordered through OSH Park, and the SMT stencil through OSHStencils (no relation). Parts were placed using tweezers and crossed fingers, and cooked on a Hamilton Beach pancake griddle. 2 out of 3 worked on the first try! The third was coaxed back to life by lifting the chip and rebaking, followed by some manual touch-up.

Software

I first smoke-tested the boards using the OpenMote firmware, following this official guide. No matter where you start, you’ll need to install the GCC ARM toolchain; details are on that page. This package REQUIRES Ubuntu (or something like it), and a reasonably modern version of it at that (the internets say you can theoretically get it working on Debian with some undue hacking-about, if you don’t mind it exploding in your face sometime in the future). If your Ubuntu/Mint/etc. version is too old (specifically, the package manager version), you’ll get an error relating to the compression type (or associated file extension) used in the PPA file not being recognized. You can maybe hack about to pull in a ‘future’ version for your distro version, but who knows which step you’ll get stuck at next for the same reasons. (Maybe none, but I just swapped the hard drive and installed a fresh Mint installation on another one.)

First, build the CC2538 library. In the libcc2538 folder:

```
python libcc2538.py
```

This will build libcc2538.a – probably after you got a “libcc2538.a does not exist. Stop.” error message.
Next, try compiling a test project to make sure the toolchain works:

```
chmod 777 test-projects.sh
./test-projects.sh
```

Assuming all goes well, now you can flash the resulting binary onto the board:

```
sudo make TARGET=cc2538 BOARD=openmote-cc2538 bsl
```

Needless to say, you need some kind of serial connection to the bootloader UART (PA0, PA1) on the board for this to work (I used a USB-serial dongle with 3.3V output). Successful output from this step looks something like:

```
Loading test-radio into target...
Opening port /dev/ttyUSB0, baud 115200
Reading data from test-radio.hex
Connecting to target...
Target id 0xb964, CC2538
Erasing 524288 bytes starting at address 0x200000
Erase done
Writing 524288 bytes starting at address 0x200000
Write done
```

Now you can actually try compiling OpenWSN.

OPTIONAL STEP: If you foresee doing active development on OpenWSN, you might want to install Eclipse. Notes:

- Install direct from the website; even the Mint package manager version as of 12/15 is still on v3.
- Follow the instructions on this page, for the most part. If you installed arm-none-eabi etc. from the previous step, it “should” be ready to rock.
- The options in Eclipse have changed a bit since this was written. When creating a new (blank) test project, select “Cross ARM GCC”.
- Create a main.c file with a hello-world main() (or copy & paste from the page above), then save and build (Ctrl-B).
- You may get a linker error similar to: “undefined reference to `_exit'”. I solved this by selecting ‘Do not use standard start files (-nostartfiles)’.
- I’m currently getting a 0-byte file as reported by ‘size’ (the actual output file has nonzero size). Not sure whether to be concerned about this or not:

```
Invoking: Cross ARM GNU Print Size
arm-none-eabi-size --format=berkeley "empty-test.elf"
   text    data     bss     dec     hex filename
      0       0       0       0       0 empty-test.elf
```

The actual .elf and .hex are 13k and 34 bytes on disk, respectively.
This is not actually crucial to compiling OpenWSN, so I gave up here and went back to the important stuff.

NON-OPTIONAL: Download and set up SCons. This is the build tool (comparable to an advanced version of ‘make’) used by OpenWSN. Again, anything in your package manager is horribly out of date, so grab it from the web site, unpack and ‘sudo python setup.py install’.

Clone the ‘openwsn-fw’ repository somewhere convenient (preferably using git, but you could just download the .zip files from github), change into its directory and run scons without any arguments. This gives you help, or is supposed to. It gives some options for various ‘name=value’ options, along with text suggesting that the options listed are the only valid ones. However, popular options like the gcc ARM toolchain and OpenMote-CC2538 are not among them. Luckily, they still work if you googled around for the magic text strings:

```
scons board=OpenMote-CC2538 toolchain=armgcc goldenImage=root oos_openwsn
```

This results in an output file of decidedly nonzero reported size:

```
arm-none-eabi-size --format=berkeley -x --totals build/OpenMote-CC2538_armgcc/projects/common/03oos_openwsn_prog
   text    data     bss     dec     hex filename
0x16303   0x28c  0x1bd8   98663   18167 build/OpenMote-CC2538_armgcc/projects/common/03oos_openwsn_prog
0x16303   0x28c  0x1bd8   98663   18167 (TOTALS)
```

Add ‘bootload /dev/ttyUSB0’ (or whatever your serial device shows up as) and run it with the mote in boot mode (hold the Boot button / pin PA6 low and reset), and it should Just Work. Upload takes a while. Ideally, you need to flash at least 2 boards for a meaningful test (one master, or ‘DAG root’ in OpenWSN parlance, and one edge node).

Now we need to run openvisualizer to see if anything’s actually happening.
First… currently ‘setup.py’ for this package is broken, and barfs with errors like:

```
Traceback (most recent call last):
  File "setup.py", line 34, in <module>
    with open(os.path.join('openvisualizer', 'data', 'requirements.txt')) as f:
IOError: [Errno 2] No such file or directory: 'openvisualizer/data/requirements.txt'
```

‘pip install’ might be another way to go, but this appears to install from an outdated repository, and barfs with some version dependency issue. Since the actual Python code is already here, we can just try running it, which seems to be expected to go through SCons: run ‘sudo scons rungui’ in the openvisualizer directory.

```
Traceback (most recent call last):
  File "bin/openVisualizerApp/openVisualizerGui.py", line 29, in <module>
    import openVisualizerApp
  File "/home/cnc/workspace/openwsn-sw/software/openvisualizer/bin/openVisualizerApp/openVisualizerApp.py", line 17, in <module>
    from openvisualizer.eventBus import eventBusMonitor
  File "/home/cnc/workspace/openwsn-sw/software/openvisualizer/openvisualizer/eventBus/eventBusMonitor.py", line 18, in <module>
    from pydispatch import dispatcher
ImportError: No module named pydispatch
```

Well, that doesn’t actually work, but it’s at least a starting point for flushing out all the unmet dependencies by hand. Install the following packages:

- pip (to install later stuff)
- pydispatch (pip install…)*
- pydispatcher (pip install…)
- python-tk (apt-get install…)

*No, wait, that one’s already installed. According to the externalized Google commandline, http://stackoverflow.com/questions/17802792/no-module-named-pydispatch-when-using-pyopengl sez you actually need to install a separate package named ‘pydispatcher’.

Finally, let’s give it a go:

```
cnc@razor ~/workspace/openwsn-sw/software/openvisualizer$ sudo scons rungui
```

The OpenWSN OpenVisualizer

It works! Sort of. I get a mote ID and can toggle it to be DAG root, and after a while the 2nd board appears, with a long-and-similar address, in the neighbor list. It’s receiving packets and displays an RSSI, so at least the hardware is working. However, I can’t interact with it, and it doesn’t show up as a selectable mote ID (presumably just an openvisualizer issue). Nor can I ping either one as described in the tutorial, even though the part of the console dump relating to the TUN interface looks exactly as it does in the example (warning messages and all):

```
scons: done building targets.
cnc@razor ~/workspace/openwsn-sw/software/openvisualizer $
ioctl(TUNSETIFF): Device or resource busy
created following virtual interface:
3: tun0: mtu 1500 qdisc pfifo_fast state UNKNOWN group default qlen 500
    link/none
    inet6 bbbb::1/64 scope global
       valid_lft forever preferred_lft forever
    inet6 fe80::1/64 scope link
       valid_lft forever preferred_lft forever
22:43:24 INFO create instance
22:43:24 INFO create instance
22:43:24 INFO create instance
```

Killing openvisualizer, plug-cycling both radios and restarting it returns a screenful of:

```
[Errno 11] Resource temporarily unavailable
device reports readiness to read but returned no data (device disconnected?)
```

Let’s try rebooting; it fixes stuff in Windows… Actually, it looks like the GUI window sometimes persists after openvisualizer is supposedly killed at the console and can’t be closed via the UI; there is probably a task you can kill as a less drastic measure. That cleared up the ‘no data’ errors, but I still can’t ping any motes:

```
cnc@razor ~$ ping bbbb::0012:4b00:042e:4f19
ping: unknown host bbbb::0012:4b00:042e:4f19
```

Well, at any rate we know the radio hardware is working, so let’s see the next moment of truth: power consumption. That’s really the point of this whole timeslotted radio exercise; otherwise you’d just drop an XBee on your board and hog as much bandwidth and electrons as you like. The 802.15.4e approach is for wireless networks that run for months or years between battery changes.

Firing up the ol’ multimeter is not the best way to measure current draw of a bursty load, but as a quick first peek it’ll do. On startup, the radio node draws a steady ~28mA, which is not all that unexpected (it needs a 100% initial on-time to listen for advertisements from the network and sync up.) After a few moments, the current drops to 11mA and the node appears in OpenVisualizer. Wait a minute… 11mA you say, with an ‘m’? That’s not that low. Scoping the 32MHz crystal confirms that the CPU is running all the time, rather than sleeping between scheduled activity. Scoping the 32KHz crystal mostly confirms that you can’t easily scope a low-power 32KHz crystal (the added probe capacitance quenches it), but doing so causes the node to drop off the network, then reappear a short time after the probe is removed, so that crystal appears to be functional (not to mention important).

Now, is it software or hardware?

Back to the OpenMote (not OpenWSN) example projects, let’s try an example that ‘should’ put the CPU to sleep:

```
cnc@razor ~/OpenMote/firmware/projects/freertos-tickless-cc2538 $ make TARGET=cc2538 BOARD=openmote-cc2538 all
Building 'tickless-cc2538' project...
Compiling cc2538_lowpower.c...
Compiling ../../kernel/freertos/croutine.c...
Compiling ../../kernel/freertos/event_groups.c...
...

cnc@razor ~/OpenMote/firmware/projects/freertos-tickless-cc2538 $ sudo make TARGET=cc2538 BOARD=openmote-cc2538 bsl
Loading tickless-cc2538 into target...
Opening port /dev/ttyUSB0, baud 115200
Reading data from tickless-cc2538.hex
Connecting to target...
Target id 0xb964, CC2538
Erasing 524288 bytes starting at address 0x200000
Erase done
Writing 524288 bytes starting at address 0x200000
Write done
```

Sure enough, with this example project loaded, the board’s current consumption drops to the uA range (actually, I was lazy and concluded the “0.00” reading on the mA scale told me what I wanted to know), with the 32MHz crystal flatlined except for very brief (<1msec) activity periods.

That’s it for Part 1. Stay tuned, Future Tim, for the part where we track down the source of the missing milliamps!

## Clutter, Give Me Clutter (or, a GUI that doesn’t use Google as an externalized commandline)

UX nightmare: Get the menu at a restaurant, and it has only 2 items: toast and black coffee. But if you spindle the corner just right, a hidden flap pops out with a dessert menu. And if you shake it side to side, a card with a partial list of entrees falls in your lap (not all of them, just the ones you’ve ordered recently).

When you eat at Chateau DeClutter, bring a friend. If you can pinch all 4 corners of the menu at the same time, you can request the Advanced menu, wherein you just yell the name of a food across the room, and if that’s something they make, it’ll appear in about 20 minutes, and if not, nothing will happen.

## Tim Tears It Apart: Measurement Specialties Inc. 832M1 Accelerometer

So, yesterday the outdoors turned into this.

This much snow in a few hours gets you a travel lockdown…

Not quite the snowpocalypse, but it was enough that a travel ban was in effect, and work was closed. What happens when we’re stuck in the house with gadgets?

All right, I’d like to tell you that’s the reason, but this actually got broken open accidentally at my work a little while back. Sorry for the crummy cellphone-cam pic and not-too-exhaustive picking at the wreckage.

Today’s patient anatomical specimen is a fancy 3-axis piezo accelerometer from Measurement Specialties Inc. This puppy retails for about $150, so this is a ‘sometimes’ teardown.

The insides of the 832M1 showing two sensor orientations and the in-package charge amplifiers

One thing that comes to your attention right away is holy shit, there’s an entire circuit board in there. In retrospect, I probably shouldn’t be too surprised by this. It appears that these are full-on piezoelectric sensors (not e.g. piezoresistive), which are a bit dastardly to read from without a charge amplifier inline. On the circuit you can see three identical-ish copies of a small circuit that is almost certainly that, with a small SOT23-5 opamp in each. The part’s total quiescent current consumption is billed at 12uA, so that’s a paltry 4uA per circuit.

Here you also get a gander at the acceleration sensors themselves. Each is ‘glued’ by what appears to be low-temperature solder paste to its own metal pad on the ceramic substrate, with more of the same used to bond together the parts of the sensor itself. These consist mainly of a layer of gray piezoceramic material sandwiched between two chunks of metal. The larger of these acts as a proof mass, compressing or tensioning the piezoceramic layer when the part is moved on the axis normal to it. The metal ‘buns’ double as electrodes. There are (were) three such sensors in different orientations, one per axis, but the middle one broke off and flew across the room when the package was cracked open.

Like most piezoceramics, the sensors inside are affected by thermal changes, and become more sensitive with increasing temperature. The designers appear to account for this and provide some measurement headroom over the nominal value (this bad boy is a 500G accelerometer) so that the full quoted range can be measured even at the maximum specified operating temperature. This means at room temperature, where it’s less sensitive, you can actually measure accelerations of maybe 30-40% higher than the nominal value before the output limits (with appropriate calibration and reduced resolution of course). At very cold temperatures, even quite a bit higher measurements are possible, with the same caveats.
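Back-of-the-envelope, with sensitivity numbers invented for illustration (not from the 832M1 datasheet): if the signal chain is sized so 500G just hits full scale at the hottest, most-sensitive operating point, the measurable range at a less sensitive temperature scales up proportionally.

```python
FULL_SCALE_G = 500   # nominal range, sized to fit at maximum temperature
sens_hot = 1.00      # relative sensitivity at max rated temperature
sens_room = 0.74     # hypothetical relative sensitivity at room temperature

# The output clips at the same level either way, so the measurable range
# scales inversely with sensitivity:
room_range_g = FULL_SCALE_G * sens_hot / sens_room
print(round(room_range_g))  # ~676 G, i.e. ~35% over the nominal rating
```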

## Tim Tears It Apart: Koolpad Qi Wireless Charger (Also: how to silence it without soldering)

My wife goes to bed long before me, so when I go to bed, it behooves me to do so without significant light or racket. After countless nights of fiddling with a 3-sided micro-USB cable in the dark, I bought this neat little wireless phone charger. It’s not the cheapest, nor the priciest, but was approximately the size and shape of my phone (less risk of our cat bumping it off a tiny pad during the night and waking us up), and lives up to its promise of cool operation, especially while not charging (meaning it is not constantly guzzling power trying to charge a device that isn’t there).

But…

This charger also came with one untenable drawback: this gadget for charging my phone without any noisy fiddling about comes with a built-in noisemaker that beeps loudly every time you put a device on it to charge. That’s bad enough for a one-time event, but if the phone is placed or later bumped off-center (i.e. the cat’s been anywhere near the nightstand), it will go on beeping all through the night as the charger detects the phone intermittently. So if the ladies don’t find you handsome…

First off, the part you’re probably here for: how to silence that infernal beeping sound once and for all, without soldering. Open the charger (there are 4 small screws hidden beneath the rubber feet), find the 8-pin chip in the lower-left and use your favorite small tool (nippers, X-Acto, etc.) to cut the indicated pin. This pin directly drives the piezo buzzer (tall square part in the bottom-left corner). Of course, you can also cut, desolder or otherwise neutralize either of these parts in its entirety too if you have the time and a soldering iron, but I think cutting the one pin is easier.

Cut this pin to disable that beeping once and for all!

While we’re in here, let’s have a look at the circuitry. There’s actually rather a lot of it – I was expecting a massively integrated single-chip solution, but there’s a surprising pile of discretes in here, including a total of 12 opamps.

Koolpad innards showing charging coil and PCB

The main IC, U5 in the center, is an “ASIC for QI Wireless Charger” manufactured by GeneralPlus. At the time of this writing, their site is rather less than forthcoming with specs or datasheets. This is driving a set of 4 MOSFETs (Q1 ~ Q4) via a pair of TI TPS28225 FET drivers (U1, U4) to drive the ludicrously thick charging coil. The coil itself is mounted to a large ferrite disk resembling a fender washer, which in turn is adhered to a piece of adhesive-backed foam – probably to dampen any audible vibrations as the coil moves in response to nearby magnetic or ferrous objects as it is driven (such as components in the device being charged). The parallel stack of large ceramic caps (C12, C14, C15, C18), along with the coil thickness itself, gives a hint as to the kinds of peak currents being blasted through it.

For fans of Nikola Tesla, this planar coil arrangement should look oddly familiar. Qi inductive charging, like most contemporary twists / rehashes / re-patentings of this idea, extends it by using modern cheap-as-chips microcontrollers to let the charger and chargee talk amongst themselves. This allows them to collude to tune their antennas for maximum transfer efficiency, negotiate charging rate, perform authentication (maybe you don’t want freeloaders powering up near your charger), etc. In the case of Qi, the communication is unidirectional – chargee to charger – and used to signal the charger to increase or decrease its transmit power as needed. This provides effective voltage regulation at the receiving end and can instruct the charger to more-or-less shut down when charging is complete. Communication is achieved via backscatter modulation, similar to an RFID tag.

The chips to the left and right of the Qi ASIC, U2 and U3, are LM324 quad opamps. Without formally reverse-engineering the circuit, my gut says these opamp circuits, surrounded by RC discretes like groupies at a One Direction concert, are likely active filters, probably involved in sensing the backscatter-modulated signal and the overall power impact of the chargee, if any. Again, this is just educated guessing without actually tracing out the circuitry (which would involve more time than I care to spend).

The chip at the bottom right, U6, is an LM358 dual opamp, with what is probably an LM431-compatible voltage reference (U8) a bit to its left, acting directly as a 2.5V reference (cathode and reference pin tied together). At least one pin of the LM358 is visibly supplying power to the charge-indicator LED, so it’s a reasonable guess this circuit is there to control both LEDs in response to the voltage loading produced by a device during charge. Finally, U7 near the bottom-left, noted earlier as driving that irritating-ass beepy speaker, is a 555 timer that provides the actual oscillation to drive the charge-indication beep in response to a momentary signal from elsewhere (it disappears underneath the ASIC). Q5 is most likely acting as a power switch for U7, keeping it disabled (unpowered) between beeps.

One final note completely unrelated to any teardown of the device itself: it comes with a troll USB cable. That is to say, while it looks like a regular USB cable, and may easily get mixed in with the rest of your stash, it’s actually missing the data wires entirely and only provides power. While this is not unreasonable considering it’s just a charger, beware not to let this cable get mixed in with ‘real’ ones unless you’re pulling a prank on someone. Otherwise it’ll come back to bite you some months later when you grab the nearest cable, plug it into a gadget and it’s mysteriously stopped working.

## Tim Tears It Apart: Cheap Solar Pump

GY Solar water pump package

So, I picked up a pair of these cheapo solar pumps on fleabay for about 6 or 8 bucks a pop, to filter water for the fish in my old-lady-swallowed-a-fly lotus pot. They actually work pretty well, apart from one very occasionally getting stuck and needing a spin by hand to get going. But it’s winter, the fish have met Davy Jones (natural causes) and my “plastic twinwall and a big water tank will keep my greenhouse above freezing in a New England winter” hypothesis turned out to be way not true, so they’re just sitting around my basement for the interminable non-growing season. Winter boredom plus unused gadgets sitting around equals…

Package contents. Note the cable was not severed out of the box; that was my doing!

The mechanical end of the pump

Inside the pump end. There’s nothing more to see without destroying it.

There’s not much to see on the pump end itself. A slotted cover blocks large particulates from getting into the works, followed by a plastic baffle and a centrifugal impeller, which flings the water at the outlet port. The impeller “shaft” is a magnet and doubles as the rotor for the electric motor, allowing the coils to be in the non-rotating housing (stator). Under bright sun, this arrangement can generate a head of a few inches or a decent amount of flow, not bad for a cheap pump running from a little solar panel.

Potting compound over pump power entry side

Lifting the cover on the other end reveals where the electronics must be, a cavity completely filled in with potting compound. I declined trying to get through this mess and look at the circuitry on the pump itself. Guessing wildly though, this should probably look very similar to the circuit that drives a DC computer fan, with a small Hall effect sensor detecting the passing of the magnetic poles on the rotor and flipflopping power to the stator coils as needed to push/pull it in the desired direction.
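To flesh out that guess: single-phase brushless drives like computer fans typically just flip the coil polarity whenever the hall sensor changes state. A sketch of that logic in C (entirely hypothetical; the real circuit is buried in potting compound):

```c
#include <stdbool.h>
#include <assert.h>

/* Hypothetical single-phase commutation: energize the stator coil in
 * whichever direction pushes the magnet-rotor along, based on which
 * pole the hall sensor currently sees. Returns +1 or -1, standing in
 * for the two output states of the coil's push/pull driver. */
int coil_drive(bool hall_sees_north)
{
    return hall_sees_north ? +1 : -1;
}
```

If the rotor happens to stop dead at a magnetic null where neither polarity produces torque, you get exactly the observed symptom: a stuck pump that needs a spin by hand.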

Back of solar panel. There is a strange lump on the back.

‘Kickstarter’ PCB back side

‘Kickstarter’ PCB component side

The interesting part is a random lump on the back of the solar panel. Pop it open and, sure enough, it contains some active circuitry: a large (4700uF) electrolytic capacitor and an undervoltage lockout circuit. This circuit cuts power to the pump until the capacitor charges to several volts, giving it an initial high-current kickstart to overcome static friction. It works about like you’d expect, as a voltage comparator with a fairly large hysteresis band (on at 6V, off at 2V, for example).

Interestingly though, there’s no discrete comparator in sight. Instead, there’s an ATTINY13 microcontroller. The ATTINY does have a builtin comparator though, and the chip’s only purpose in this circuit seems to be as a wrapper around this peripheral. It’s entirely possible that from Chinese sources, this chip was actually cheaper than a standalone comparator and voltage reference. Another likely possibility is that it was competitive with or cheaper than low-power comparators, and the use of a microcontroller allows better efficiency by sampling the voltage at a very low duty cycle. For reference, the ATTINY13 runs about 53 cents @ 4000pcs from a US distributor. That’s pretty cheap, but not quite as cheap as the cheapest discrete comparator with internal voltage reference at <=100uA quiescent current, which currently comes in at ~36 cents @ 3000pcs. Noting the single-sided PCB, another possibility is that the ATTINY and other silicon were chosen for their pinouts, allowing single-sided routing and thus cost savings on the PCB itself.

Anyway, onto the circuit. R3 and D1 are an intriguing side effect of using a general-purpose microcontroller as a comparator, as the absolute maximum Vcc permitted on this part is ~5.5V. D1 is a Zener diode which, along with the 47-ohm resistor, clamps the voltage seen by the uC to a safe level.
This seems like it would leak a lot of current above 5.5V – and it does – but under normal operation, the pump motor should drag the voltage down below that when operating. R1 and R2 form the voltage divider for the comparator, which is no doubt using an on-chip voltage reference for its negative "pin". Pin 5 of the micro is the comparator output, which feeds the gate of an n-channel enhancement-mode MOSFET, U2, through R5, with a weak 100k pulldown (R4). With this circuit, the pump makes periodic startup attempts in weak sunlight until there's enough sun to sustain continuous operation, with no stalling if the power comes up very slowly (e.g. sunrise).
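So the ATtiny’s whole job is a comparator with hysteresis. In code, the entire ‘firmware’ plausibly reduces to a loop around something like this (thresholds from the 6V-on/2V-off example above; the rest is a guess, not a dump of the actual part):

```c
#include <stdbool.h>
#include <assert.h>

#define V_ON   6.0   /* connect the pump once the cap has charged to here */
#define V_OFF  2.0   /* cut it again if the load drags the cap below here */

/* One pass of the undervoltage-lockout loop: given the measured capacitor
 * voltage and the current output state, decide the new output state. */
bool uvlo_step(double v_cap, bool pump_on)
{
    if (!pump_on && v_cap >= V_ON)
        return true;    /* enough charge banked: kickstart the pump */
    if (pump_on && v_cap <= V_OFF)
        return false;   /* cap drained: disconnect and recharge */
    return pump_on;     /* inside the hysteresis band: hold state */
}
```

The wide gap between the two thresholds is what produces the observed behavior: periodic full-torque startup attempts in weak sun rather than a pump twitching at its stall voltage.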

## Notes To Myself: Cheap Feedlines for Cheap Boards

Goal: Produce reasonable impedance-matched (usually 50-ohms) RF feedlines for hobby-grade radio PCBs. Rather than get a PhD in RF engineering for a one-off project, use an online calculator and some rules of thumb to get a “good enough” first prototype.

Problem: Most RF boards and stripline calculators assume or drive toward 4-layer boards. In hobby quantities, 4-layer boards are much more expensive and have longer leadtimes. If using EAGLE, can no longer use the free/noncommercial or Lite editions (they only allow 2 layers).

The main driver of feedline impedance is its geometry and dielectric “distance” from the groundplane. The aforementioned stripline/microstrip/etc. calculators often assume there is nothing on the top (feedline) layer in its vicinity, there is just a groundplane on another PCB layer beneath it, and all proximity to the groundplane is through the FR-4 between the layers. For bog-standard 2-layer boards, that’s ~.062″ of material, which yields unacceptably wide traces (>100 mils) that cannot be cleanly terminated to most antennas or connectors, let alone a surface-mount IC pad.

Solution: Forget plain microstrip stuff; look up a “coplanar waveguide with ground” calculator instead. This takes into account a groundplane on the same layer, surrounding the feedline, in addition to a groundplane on a lower layer. Now the clearance between the feedline and the coplanar groundplane can be tweaked to get a sane trace width for various copper weights, board thicknesses and other factors less directly under your control.

More notes:
FR-4 relative dielectric constant: 4.2 (in reality it can vary quite a bit, and there are about a million material variants called “FR-4” and used interchangeably by board houses, but if you can’t be a chooser, this is probably as good an approximation as you’ll get.)
“1oz” copper: 1.37 mils thickness (scale proportionally for other copper weights).
An example calculator is here: http://chemandy.com/calculators/coplanar-waveguide-with-ground-calculator.htm

## Debugging a shorted PCB the lazy way

I recently assembled a set of prototype boards for a particular project at my day job, and ran a math- and memory-intensive test loop to try them out. Two of the boards ran the code fine, but one crashed consistently in a bizarre way that suggested corruption or other unreliability of the RAM contents.

Since the other two identical boards worked, a hardware/assembly defect was the likely explanation. These boards feature a 100-pin TQFP CPU and 48-pin SRAM with 0.5mm lead pitch, all hand-soldered of course, but a good visual inspection turned up no obvious defects.

The first thing I tried was a quick-n-dirty RAM test loop that wrote the entire RAM with an address-specific known value and immediately read it back (in this case, it was just the 32-bit RAM address itself), but this (overly simplistic, it turns out) check didn’t report any failures. However, I did notice the current reading on my bench supply occasionally spiking from 0.01A to 0.04A. This board uses a low-power ARM micro specifically chosen for efficiency, and should rarely draw more than about 15mA at full tilt, so this was a red flag.
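The test loop was along these lines (a reconstruction for illustration, not the actual project code); the comment notes why this single-pass version let the fault slip through:

```c
#include <stdint.h>
#include <stddef.h>

/* Walk a RAM region, writing each 32-bit word's own address into it
   and immediately reading it back. Returns the number of mismatches.
   NOTE: as described in the text, this single-pass write-then-read is
   overly simplistic. With shorted address lines, the write lands in an
   aliased cell, but the immediate readback hits that same aliased cell
   before anything else overwrites it, so the test still "passes".
   Writing the whole region first and verifying in a second pass would
   have caught the fault. */
size_t ram_test(volatile uint32_t *base, size_t words)
{
    size_t fails = 0;
    for (size_t i = 0; i < words; i++) {
        base[i] = (uint32_t)(uintptr_t)&base[i];   /* address as data */
        if (base[i] != (uint32_t)(uintptr_t)&base[i])
            fails++;
    }
    return fails;
}
```

On the real board, `base` points at the externally-mapped SRAM rather than a local buffer, of course.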

With this in mind, the next thing I did was get a higher-resolution look at what was going on with the current. The CPU vendor provides a sweet energy profiling tool for use with its pre-baked development kits, which also double as programmer and debugger for the kit and external boards. The feature works by sampling input current to the development kit via a sense resistor at high speed, and optionally coupling it to the running program’s instruction counter via the debugger to estimate energy usage for each function in your program. By uploading a dummy program to the kit that just puts it into a deep sleep, and tying the external board into its VMCU/GND pins, it can be used with any external target board that draws up to 50mA or so.

Running the memory test again with the profiler active, I got the following:

Current trace as reported by EnergyAware Profiler

Again, the kit can supply a max of 50mA or so, and this graph shows a repeating cycle which spends half the time somewhere a bit northward of this. Sure enough, probing the supply voltage with a scope, the voltage drops a bit whenever the overcurrent occurs. A memory test loop should draw a fairly constant current; it shouldn’t vary with time or data or address as this appears to be doing. So it’s a safe bet that one of the address or data pins to the external RAM is shorted. But where?

I could begin probing address and data lines to find the ones that toggled at the same rate as the dip on the voltage rail, but on a wild hare (or hair), I picked up our new secret weapon (not the Handyman’s Secret Weapon): an IR thermal camera.

Just after power-on with the RAM test running, I saw this:

IR thermal view of the CPU when powered and running the external RAM test

The part immediately to the left of the crosshair is the CPU. It wasn’t detectably warm to the touch, but here it’s easy enough to see where the die itself sits inside the chip package. There is also an apparent ‘hotspot’ roughly centered along the left-hand edge of the die. The I/O pins just next to this hotspot are address lines tied to the RAM. While it isn’t *always* the case due to the vagaries of chip layout, the GPIO pin drivers are almost always situated at the edges of the die, right next to the pins they drive. This is about as close to a smoking gun as you can get without the actual smoke. While it’s hard to see exactly which pin or pins this hotspot corresponds to, it does narrow the search quite a bit! For reference, here is how it looks unpowered.

IR thermal view of the CPU when unpowered.

Scoping a handful of adjacent pins, the issue becomes clear.

Probing near the hotspot seen on IR.

Suspicious ‘digital’ address line voltages near the hotspot.

Two adjacent address lines to the memory, immediately next to the hotspot, show this decidedly not-quite-digital-looking waveform (bottom trace), lining up pretty well with the voltage droop (top trace). This points to not one pin shorted (to GND etc.), but two adjacent pins shorted together, their on-chip drivers fighting one another to produce these intermediate voltages and consuming excess current in the process. A quick beep-test confirms the short.

It turned out to be a hair-thin solder bridge between two adjacent pins on the SRAM, pictured below. Do you see it?

Take my word for it, there’s a solder bridge in this photo.

Yeah, neither did I at first. It was only visible at a very specific angle…

The short is just visible at this angle

Note the other apparent “stuff” crossing pins in this angle wasn’t solder, but remnants of a cotton swab used to clean flux from around the pins.

Mapping the I/O drivers and other stuff

Just for fun, I purposely shorted all the GPIOs available on headers to ground and ran a loop that briefly flipped each one high. Here is the result! Note that not all GPIOs on this board were available on a header (many go to the RAM chip), and they are not necessarily pinned out in a logical numeric order. I haven’t specifically tested it yet, but the same method should be usable to unintrusively map out the location of on-chip modules (core/ALU, voltage regulators, AES engines, etc.) that can be exercised individually.

<pipedream>The day when thermal imaging gets good enough we can use IR attacks instead of power analysis to figure out what a chip is doing (encryption keys, etc.) without decapping it…</pipedream>

## Notes To Myself: EFM32 and heaps of external SRAM

Goal:
Use the EFM32 microcontroller’s External Bus Interface (EBI) to place a large external SRAM and work with data larger than the chip’s internal memory will allow. Support dynamic memory allocation via standard malloc()/calloc() calls probably present in whatever 3rd-party code-snarfed-from-the-internet you are trying to integrate.

Solution:
First off, ignore any notes about needing to ground the 0th address bit on the memory and shift the remaining address lines, as stated in the EFM32 appnotes/manuals. Unless very explicitly stated otherwise, 1 address increment == 1 address change at the memory’s word size. For example, changing A[0] on a 16-bit SRAM generally addresses the next 16-bit memory location.

Sidenote about external memory address lines: if they are actually numbered in the RAM’s datasheet, that numbering is an extremely polite suggestion only. In practice, it doesn’t matter if A[0..n] from the MCU map to A[0..n] of the memory in order; if the address lines are swapped around, they are swapped around for both read and write, so it doesn’t matter one bit (har!) to the MCU. Incidentally, the same goes for the data lines. So feel free to run them however makes the PCB routing easier.
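If you want to convince yourself of the swapped-lines claim, here’s a toy model in C (my own illustration, not from any datasheet): addresses pass through the same scrambling on both write and read, so readback always matches.

```c
#include <stdint.h>
#include <stdbool.h>

/* Toy model of an SRAM whose address lines are wired to the MCU in a
   scrambled order. The same scrambling applies on every access, so the
   MCU can't tell the difference. 8 address bits = 256 words here. */
static uint8_t sram[256];

/* Example rewiring: swap address bits A0<->A7 and A1<->A6. Any
   consistent one-to-one rewiring works the same way. */
static uint8_t scramble(uint8_t a)
{
    uint8_t out = a & 0x3C;                  /* A2..A5 unchanged  */
    out |= (a & 0x01) << 7 | (a & 0x80) >> 7; /* A0 <-> A7 */
    out |= (a & 0x02) << 5 | (a & 0x40) >> 5; /* A1 <-> A6 */
    return out;
}

static void    mem_write(uint8_t addr, uint8_t d) { sram[scramble(addr)] = d; }
static uint8_t mem_read(uint8_t addr)             { return sram[scramble(addr)]; }

/* Fill the whole array through the scrambled bus, then verify it. */
bool readback_ok(void)
{
    for (int a = 0; a < 256; a++)
        mem_write((uint8_t)a, (uint8_t)(a ^ 0x5A));  /* arbitrary pattern */
    for (int a = 0; a < 256; a++)
        if (mem_read((uint8_t)a) != (uint8_t)(a ^ 0x5A))
            return false;
    return true;
}
```

The same argument covers swapped data lines: as long as the permutation is fixed, every value is un-permuted on the way back out.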

Setting up the heap in external memory:
You probably want bulk data to go to the external RAM, but your stack and most of your code’s internal housekeeping in the internal memory, which is faster and likely eats less juice. Especially if that code is using malloc() and friends to access that memory, this means creating the heap in external RAM.

The EFM32’s internal RAM starts at 0x20000000. Unless you do something funky, memory on the EBI maps in starting at 0x80000000.

Step 1: Linker has to know about the external memory.
This means tweaks to the vendor-supplied linker file (*.ld) to…

a) Tell it about the memory:
```
MEMORY
{
  FLASH (rx)  : ORIGIN = 0x00000000, LENGTH = 262144
  RAM (rwx)   : ORIGIN = 0x20000000, LENGTH = 32768
  EXRAM (rwx) : ORIGIN = 0x80000000, LENGTH = 0x00200000
  /* Add the EXRAM line above. Don't touch the CPU-specific FLASH/RAM
     base address or length from the original linker file. */
}
```

b) Tell it to place the heap there:

```
.heap :
{
  __end__ = .;
  end = __end__;
  _end = __end__;
  *(.heap*)
  __HeapLimit = .;
} > EXRAM  /* Change 'RAM' to the 'EXRAM' section you just defined */
```

BUT… As mentioned above, the external RAM has a much higher physical address than the internal RAM. This will confuse a check later in the vendor linker file, which assumes all the memory is allocated in the same segment, the stack is allocated starting from the end of RAM (grows downward) and thus is the highest RAM address anywhere. Since this is no longer true, this check needs to be modified so as not to generate a false stack collision warning:

Change this line

```
ASSERT(__StackLimit >= __HeapLimit, "region RAM overflowed with stack")
```

to this:
```
/* The above assumes heap will always be at the top of (same) RAM section.
   Since it's now in its own section, simply check that the STACK did not
   overflow RAM. This modified check assumes the '.bss' section is the last
   one allocated (i.e. highest non-stack allocation) in main RAM. */
ASSERT(__StackLimit >= __bss_end__, "region RAM overflowed with stack")
```

Step 2: Tell the Compiler.
Now that we’ve told the linker, we need to tell the compiler/assembler. If you just build the code now, you will get a heap starting at 0x80000000 as expected, but with some tiny default size chosen by the vendor. This magic value is defined in the ‘startup_efm32wg.S’ (or part-specific equivalent) file buried in the SDK. This will be at e.g. “X:\path\to\SDK\version\developer\sdks\efm32\v2\Device\SiliconLabs\EFM32WG\Source\GCC\startup_efm32wg.S” . What’s the difference between the ‘.S’ file here (uppercase S) and the ‘.s’ (lowercase s) file located in ‘g++’? Don’t ask me. What’s the difference between either of these in /Device/SiliconLabs vs. the same files in /Device/EnergyMicro ? Don’t ask me. There are also compiler-specific variants (Atollic, etc.) and an ‘ARM’ version. Don’t ask me…

Anyway, once you figure out which one your project is actually using, open it and you should find a line like:
```
    .section .heap
    .align   3
#ifdef __HEAP_SIZE
    .equ     Heap_Size, __HEAP_SIZE
#else
    .equ     Heap_Size, 0xC00
#endif
```

The specifics might vary depending on your exact CPU and its memory size of course (assuming the vendor selects a larger default value for those with larger internal memory, but I could be wrong.) So we just have to define __HEAP_SIZE somewhere and bob’s your uncle, right?

Er, sort of. There are two nuances to notice, in case your situation slightly differs from mine. One is that the double underscore before HEAP_SIZE looks like a standard compiler-added decoration (i.e. name mangling). Does the compiler expect you to supply the mangled, unmangled or some semi-mangled version of this name? The other is that the ‘.s’ (or ‘.S’) file is an assembler file, not a C file. So in this case you actually need to pass the magic value to the assembler, not the compiler (and beware that the two may in fact have different name mangling conventions). What a mess!

I figured the easiest way to figure out exactly what was expected was experimentally. If using the Simplicity Studio GUI/IDE, you can mousedance your way into Project -> Properties -> Settings -> Tool Settings -> toolname -> Symbols -> Defined symbols and add the symbol definitions there. So I created six versions in total: all three mangling permutations (HEAP_SIZE, _HEAP_SIZE and __HEAP_SIZE) for both the assembler and the compiler, with a different size value for each, then fished in the .map file after compilation to see which one ‘took’. In my particular case, it was the version passed to the assembler, with the fully mangled (double underscore) name. YMMV. Are there any cases where it must be passed to both the compiler and the assembler? Don’t ask me. When you find out which your particular setup is expecting, set the value to match the external memory size and delete the extra definitions.

Step 3: Fix any remaining braindead checks.
When using dynamic memory allocation (malloc() and friends), they (usually, probably) call a deep internal library function called _sbrk. Among other things, this function performs a check similar to the one we just fixed in the EFM32 linker file, failing nastily if it ever allocates heap memory with a higher address than the lowest stack allocation (at least in GCC). To get around this, you have to override the builtin _sbrk with a fixed copy. If you are using the vendor’s ‘retargetio.c’ for anything (e.g. delivering printf output to the SWO debug pin), this file redefines a bunch of internal functions including _sbrk. Failing that, is ‘just’ creating a function any-old-place with the same name sufficient to reliably override the internal function in all cases? Don’t ask me.

The vendor-supplied copy in retargetio.c looks like the below. Here I’ve modified it crudely to just remove the check entirely. In my case, the external RAM contains only the heap and nothing else, so this should be OK.

```c
caddr_t _sbrk(int incr)
{
  static char       *heap_end;
  char              *prev_heap_end;
  static const char  heaperr[] = "Heap and stack collision\n";

  if (heap_end == 0) {
    heap_end = &_end;
  }
  prev_heap_end = heap_end;

  // HACK HACK HACK: This check assumes stack and heap in same memory
  // segment; remove it...
  //if ((heap_end + incr) > (char*) __get_MSP())
  //{
  //  _write(fileno(stdout), heaperr, strlen(heaperr));
  //  exit(1);
  //}

  heap_end += incr;
  return (caddr_t) prev_heap_end;
}
```

Now your malloc() calls should stop failing! After performing the above steps, I was able to get a ‘complex’ piece of code with dynamic memory allocation (the SHINE mp3 encoder) running on an EFM32 microcontroller, with a few changes to be reported soon…

BONUS: SHINE particulars:
The encodeSideInfo() function in l3bitstream.c appears to build the mp3 header incorrectly. Try…
```c
//shine_putbits( &config->bs, 0x7ff, 11 );               // wrong
shine_putbits( &config->bs, 0xfff, 12 );                 // right
//shine_putbits( &config->bs, config->mpeg.version, 2 ); // wrong
shine_putbits( &config->bs, 1, 1 );                      // right
```

It also seems to fail outright (generate incorrect, unplayable bitstreams) for certain input files, depending (probably) on mono vs. stereo and/or bitrate. A stereo .wav file (PCM 16-bit signed LE) at 44100Hz worked.