Archive for the ‘General’ Category

Notes To Myself: Migrating legacy Microchip C18 projects to MPLAB X + XC8 toolchain, Windows 7

First note to myself: NEVER USE MICROCHIP AGAIN. If I didn’t just need to make “a couple tiny updates” to an already selling, on-the-shelf project I’d just scrap the PIC18 for an EFM32TGxxx part, gcc (shaft of light from the sky, harps playing melodically) and be done with this entire shit-show. Insert whining about the month+ long circlejerk with Microchip Support about the bug in the PICKit3 programmer that is now corrupting the config bits on said product here. Of course, if the code from 5 years ago, even with no changes, still compiled and fit onto the chip it was written for and used to fit on 5 years’ worth of versions ago, and current MCC18 did not insist on dragging in the gargantuan (>4KByte) ‘.code_vfprintf.o’ even if it is not used or referenced anywhere in the code, I wouldn’t even have to bother trying it with the new compiler in the first place….

Soooo…. Install MPLAB X (make tea, a sandwich, possibly a baby or two while waiting for the crunching sounds from your harddrive to finish) and XC8. NB: Licensing is done via a Windows batchfile, completely outside any of the devtools OR their installers. If you have the license file, ignore absolutely anything to do with licensing and install as if you want the “free” version.

License: Run said batchfile. Voodoo happens and it should “Just Work”. (It did. Quite surprised.)

Make XC8 “C18 Compatibility Mode” findable:

The fake “C18” that currently serves as the compatibility layer must first be manually set up in MPLAB X (apparently no autodetect). But first-first, you need to work around a stupid MPLAB X bug that has gone unfixed for going on two years now. The bug is that you are arbitrarily forbidden from having two toolchains set up whose executables are in the same directory. Unfortunately this is EXACTLY WHAT MICROCHIP’S OWN XC8 COMPILER DOES (of course that directory is already used for XC8 itself, which IS autodetected somehow). So you have to create a fake instance of this directory (symlink or hardlink) with a different name to fool MPLAB X.

NB: The below workaround only works if your filesystem is NTFS. If not, you could also try just copypasta-ing the entire contents somewhere else, and hope this doesn’t break a path dependency somewhere or whatever. I haven’t tried this, but worth a shot.

To do this, you have to first-first-first somehow get a Windows console with Administrator privileges. The way I found that works is to create a batchfile with the contents “cmd <carriage return> pause”, then right-click it and “Run As Administrator”. (Using the ‘runas’ command, Windows 7’s answer to sudo, apparently does not work for this, as it forces you to know the actual administrator password and will not accept your user password even if you have administrator privileges.)

At the console, cd into the XC8 directory directly above the binaries directory (e.g. “C:\Progra~1(X86)\Microchip\xc8\v1.31\”) and type:

mklink /D _c18bin_ bin

This should result in a message indicating a symbolic link named “_c18bin_” was created.

Now you can actually set up the devtool. Ignore anything on the splash page and go to Tools -> Options -> Embedded -> Build Tools tab. Press “Add…” and enter the fake directory you just created. Specify the location of each build tool (if it exists). NB: For some reason the individual devtool settings ‘disappear’ after specifying them (close and re-open this dialog and “C Compiler” is blank again!). Does this mean it doesn’t need to be specified, or this is another MPLAB X bug and your dev tool will never, ever work? Will soon find out…

Now, try to build project (it will fail).

In “Output -> Configuration Loading Error” tab: “Could not generate makefiles for configuration default.” “XMLBaseMakefileWriter::createRuntimeObjectForMakeRule: null”

In “projectname (Build, Load)” tab: make[1]: *** No rule to make target ‘.build-conf’. Stop.

FIXME: Fix this error…

The Internet, things, and you

Ha ha, apparently proselytizing about the “Internet of Things” is trendy again. Don’t hold your breath kids; until IPv6 is a thing that’s really a thing, enjoy your “small home network of things”, where your game console, thermostat and toaster have 192.168.x.x IP addresses dangling from your cablemodem, and require a 3rd-party cloud service to mediate contact with your neighbor’s toaster*.

Seriously though… if anybody but major datamining companies are going to get remotely enthusiastic about this IoT business, two things need to happen: IPv6 and dirt-cheap low-bandwidth wireless uplinks (think cellphone plan with pay-by-the-byte or 512kb/month dataplans and low/no monthly maintenance fees) so that all the applications (smart stoplights, weather/pollution sensors, whatever) that would benefit from not dangling off someone’s cell plan or cablemodem don’t have to do so. Maybe on the 3rd revival of the IoT hype, about 10 years from now, it’ll really catch on and be actually kind of useful. (See also: “M2M”.)

* The latter shit-uation is due in equal parts to headaches around NAT traversal, service/peer discovery, and the fact that nobody serious (read: businesses) wants to throw in for an open platform when there’s a snowflake’s chance they could parlay their own proprietary stuff into the One True IoT Service. Even with IPv6 and cheap-as-free radio/cell/satellite pipes, the “IoT ecosystem” (I cringe just saying that) won’t be completely free of the need for a centralish service/peer discovery mechanism and (for power-limited systems) somebody acting as mailbox/aggregator/push notifier/whatchamacallit so that the low-power endnodes can talk to one another despite randomly popping onto the network for just a few seconds at a time. Still, a backend you can download and drop on your own cheapo web hosting account if you didn’t want to be tied to said 3rd-party cloud service would be huge in making this, well, A Thing.

Haptic hackery for fun and profit

My day-job employer makes fancy piezoelectric actuators. Not long ago I was asked out of the blue: “Hey, the Haptics Symposium is in less than 2 weeks… It’s in Houston, TX. Want to go?”

“*looks out window at yet more falling snow* Hell yeah.”

“Oh yeah, and we’re going to need some demos so…”

Of course, I had no shortage of regularly scheduled urgent worky stuff to do, so any demos had to be done with some haste. In the end I got not-one-but-two cheesy demos going, one of which didn’t even break during the show! In addition, my newest coworker put together an incredibly sweet haptic texture-rendering demo, but I’m sure he’s writing it up on his own blog as I speak :p

Super cheesy heightfield mouse

One of the fun things about piezoelectric bimorphs is that, unlike coin motors, LRAs and voice coil drivers, they can be deflected statically. So it’s possible to set and hold arbitrary linear positions. With this in mind, I scavenged an old ball mouse from the IT junkpile, removed the PS/2 cable and ball, and hacked it up so that the left mouse button now raises and lowers in response to the brightness of the surface directly beneath it. An arbitrary grayscale image placed under the mouse now becomes a tactile experience, felt rather than seen.

The replacement guts consist of a SHIVR actuator, photodetector, 3x AAA battery holder and a small driver board. The driver board consists of a TI DRV8662 piezo driver and a handful of supporting discretes. The DRV8662 functions as a voltage booster and amplifier, stepping up a 3V-5VDC input to 100V and driving a bipolar output in response to a low-voltage (0-3V or so) input signal. The photosensor and an LED were glued up inside the hole where the ball used to be, and the connection between the sensor and a 100k bias resistor was wired directly to the DRV8662 analog input. The actuator was stood off on a piece of scrap metal to match the height of the button. A mechanical stop feature on the underside of the mouse button was Dremeled a bit to give the actuated button a bit larger range of motion. Last but not least, the top shell was spraypainted black to slightly disguise its origins as an old Microsoft ballmouse from about the Soviet era.

Haptic heightfield mouse demo guts partially assembled, actuator shown

Haptic heightfield mouse demo guts

The purple amplifier board was fabbed using OSH Park sometime prior (for experiments just such as this) and pretty much follows the application example in the DRV8662 datasheet, except for the DC modification as follows: Remove the DC blocking capacitors from the IN+/IN- pins, connect your input signal directly to IN+ and connect a midscale reference to IN-. For a typical 3.3V supply voltage and appropriate setting of the gain selects, a 10k/10k resistor divider between 3.3V and GND is just about right. Note that although the datasheet warns against continuous operation of the DRV8662 to avoid overheating, at such low frequencies it doesn’t so much as get warm to the touch. (Actually, I found it nearly impossible to get the evaluation kit into overtemperature even under continuous, harsh drive conditions.)

DRV8662 circuit with PWM input and modification for DC operation

Slightly less-cheesy thumpin’ phablet

Another up-and-coming use for linear actuators lately is to provide inertial haptic effects in handheld gadgets. Most folks are familiar with the kind emanating from the weighted motors used in phones and game controllers, but these are fairly limited: they can only shake “all around” (not in a specified direction), the amplitude and frequency cannot be independently controlled (the only way to get more oomph is to spin it faster), and neither can the phasing of the actuator (let alone between multiple actuators) be controlled. Oh yeah, and the spin-up and spin-down times are on the order of 100-400ms depending on the size of the motor, so forget about any sharp, rapid-onset effects. For these reasons, folks are experimenting with linear actuators, which can provide much more precisely controlled sensations (a good example is the proposed Steam controller, which features two touchpads with a linear voicecoil driver under each.)

A fun thing about the piezo bimorphs is they are extremely lightweight (less than 0.5 grams) – so when adding mass at the end to make an inertial driver, it’s basically all payload: that mass isn’t fighting against the dead weight of magnets or metal shielding components. So I decided to make a demo resembling a big phat phablet*, which could be a flashy quadcore phone or some kind of aesthetically addled game controller. Or, you know, a rounded rectangle hogged out of a piece of Delrin. Hey, rounded corners! This demo featured two actuators, one on each side. I slapped a total of 10g tip mass on each, held in place by a stylin’ dab of epoxy.

Handheld inertial haptic demo in phablet-like form factor

For this demo I laid out a made-for-purpose PCB (not just a carve-up of what I already had hanging around) and sent it off to Gold Phoenix. It arrived juuuust in time, but that’s another story. The board layout had a total of 3 copies of the same DRV8662 circuit, with a spot for a small PIC12 microcontroller at each to supply the waveforms. (The 3rd circuit was to be for a third, surface-bonded actuator, but I didn’t have time to implement it.) The program on each PIC consisted of a simple arbitrary wavetable generator (a handful of basic waveforms such as sine, square and directional “punch” were generated by a python script and slapped into lookup tables) and a series of calls to the waveform generator function with varying amplitudes, frequencies and waveform index to generate the demo effects. The waveform output itself was driven on a PWM pin and filtered to provide a proper analog input to the driver, and a GPIO pin was used for master-slave synchronization between the PICs.

As before, the static deflection capability of the actuators was (ab)used to produce directional effects, such as making the device lunge toward or away from the user (fast drive stroke followed by a slower position-and-hold return stroke), or wiggle by driving them with out-of-phase square waves. With the 10g of mass, the usable frequency range was from about 30Hz to a few hundred Hz. Above 350Hz or so, the drivers reached their power limit and the output waveforms began to distort, producing significant audible noise in addition to motion. Qualitatively, this frequency range goes from a deep rumble to the sensation that there’s a very pissed-off mosquito trapped under your hand. You can’t feel it over the internet, but you can see the actuators throwing in the video below.

If this had been a real smartphone/etc. with a touchscreen, the actuation could respond to touch activity to produce effects like:

  • Simulate surface texturing, i.e. give different screen areas different feels or make areas feel “pushed in” or “popped out”
  • Simulate sticky and slippery spots on the screen by vibrating the screen at high frequencies to modify stiction
  • Create the sensation of inertia or heaviness in the device, resisting as the user shakes or moves it around
  • Create the feeling of compliance, i.e. make the rigid glass screen feel like rubber and bounce when touched
  • Create the illusion of tackiness, where the screen gets pulled with the user’s finger as they let go, along with a vibratory kiss as it pulls free

Demo Code Details

This was for a day-job project, so I can’t provide the actual sourcecode… but can at least describe a bit of how it works.

The waveform generator is pretty straightforward, with one slight tweak to allow for arbitrary amplitude control. The waveform data is stored raw at a 256-word-aligned boundary. Starting from the beginning of the table, each entry is moved to the PWM register, followed by a wait loop on a timer overflow flag, and then the table pointer is incremented. One call to the waveform output function outputs one complete cycle. The total duration of waveform output is controlled (at the next level up) by how many times this function is called in succession (i.e. how many complete cycles are output at the configured frequency).
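As a rough illustration of that structure — in Python rather than the actual PIC C/assembly, with a list standing in for the PWM register and the timer waitloop reduced to a comment:

```python
# Python mock of the cycle-per-call playback described above. The real code
# runs on the PIC and writes a PWM duty register; here a list stands in for
# the hardware so the control flow is visible.
def output_one_cycle(wavetable, pwm_writes):
    """Emit exactly one complete waveform cycle, one entry per timer tick."""
    for entry in wavetable:
        pwm_writes.append(entry)  # real code: PWM duty register <= entry
        # real code: spin here until the timer overflow flag sets, then clear it

def play(wavetable, cycles):
    """Duration control: call the one-cycle routine 'cycles' times in a row."""
    pwm_writes = []
    for _ in range(cycles):
        output_one_cycle(wavetable, pwm_writes)
    return pwm_writes
```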

The PIC’s 16-bit timer is used to control timing of waveform traversal. It is a little odd, providing a configurable prescale, postscale and period register (PR) setting. The pre/postscale divide the timer by (1:1, 1:4, 1:16) and (1:1 to 1:16) respectively. The PR configures the ‘top’ or rollover value to any value between 1 and 255. Between these settings, a wide variety of rates are possible. Once again, I used a python script (natch!) to build a lookup table which maps a frequency (2Hz increments) to the pre/post/PR combination which comes closest to it. With the on-chip 32MHz oscillator and 128-point wavetable, the realizable waveform frequencies are from 2Hz to somewhere upward of 1KHz.
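A sketch of what that table-building script might look like. The timer clock (Fosc/4 = 8MHz) and Timer2-style prescale/postscale/PR ranges are my assumptions from the description above, not the real script:

```python
# Sketch of the frequency -> (prescale, postscale, PR) table generation.
# Assumptions (not verified against the real script): timer clock = Fosc/4
# = 8 MHz, 128-point wavetable, prescale in (1,4,16), postscale 1..16,
# period register PR in 1..255.
FOSC4 = 32_000_000 // 4   # timer input clock, Hz
TABLE_LEN = 128           # wavetable points per waveform cycle

def achieved_freq(prescale, postscale, pr):
    """Waveform frequency one settings combination actually produces."""
    return FOSC4 / (prescale * postscale * (pr + 1) * TABLE_LEN)

# Enumerate every legal combination once, with its resulting frequency.
COMBOS = [(achieved_freq(pre, post, pr), (pre, post, pr))
          for pre in (1, 4, 16)
          for post in range(1, 17)
          for pr in range(1, 256)]

def closest_settings(target_hz):
    """Pick the combination whose frequency lands nearest the target."""
    return min(COMBOS, key=lambda c: abs(c[0] - target_hz))[1]

# The real table maps every 2 Hz step from 2 Hz on up; a few spot checks:
for f in (2, 100, 440, 1000):
    pre, post, pr = closest_settings(f)
```

Brute force is fine here since the table is built offline on the PC, not on the PIC.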

Each wavetable entry stores one complete cycle of the waveform at 8-bit resolution, full amplitude (i.e. the waveform goes from 0 at its lowest point to 255 at its highest point; 128 is midscale). To achieve variable amplitude without storing scaled copies of the waveform or performing expensive math, some binary arithmetic is used. I describe the actual algorithm in this forum post. To avoid any audible clicking when switching waveforms, the tables are constructed so that the last point in each wavetable ends at midscale, and only multiples of complete wave cycles are output.
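The forum post has the exact trick; the textbook fixed-point form of the idea (which may differ in detail from the shift-and-add version the PIC actually runs) looks like this:

```python
# Integer-only amplitude scaling of an 8-bit, midscale-centered sample.
# Standard fixed-point form of the idea; the actual PIC routine described
# in the forum post may differ in detail.
def scale_sample(sample, amplitude):
    """sample: 0..255 wavetable entry (128 = midscale).
    amplitude: 0..255 volume (255 ~ full scale).
    Returns the scaled 0..255 value using only subtract/multiply/shift."""
    return 128 + (((sample - 128) * amplitude) >> 8)
```

Note that midscale stays exactly at midscale for any amplitude, which is what makes the end-at-midscale table construction click-free.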

The sine, square and triangle waveforms are pretty straightforward. The ‘punch’ waveform consists of a very short quarter-sinewave drive stroke (from -fullscale to +fullscale) followed by a linear ramp back to negative fullscale. As with the others, this waveform is time-shifted so that the midscale crossing of the ramp-down occurs at the last point in the table.
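A Python sketch of how such a table might be generated. The attack length and exact ramp endpoints here are my guesses, not values pulled from the real generator script:

```python
import math

# Hedged sketch of generating the 'punch' table. N matches the 128-point
# tables above; ATTACK (the drive-stroke length) and the exact ramp
# endpoints are guesses, not taken from the real generator script.
N = 128
ATTACK = 16

# Quarter-sine drive stroke: fast swing from -fullscale up to +fullscale.
drive = [2 * math.sin((math.pi / 2) * i / (ATTACK - 1)) - 1
         for i in range(ATTACK)]
# Linear return stroke: slow ramp from +fullscale back down to -fullscale.
ramp = [1 - 2 * (i + 1) / (N - ATTACK) for i in range(N - ATTACK)]
wave = drive + ramp

# Rotate the cycle so the ramp's midscale (zero) crossing lands on the last
# table entry, keeping cycle-to-cycle transitions click-free as noted above.
zero_idx = ATTACK + min(range(len(ramp)), key=lambda i: abs(ramp[i]))
wave = wave[zero_idx + 1:] + wave[:zero_idx + 1]

# Quantize to the 8-bit 0..255 format the PIC tables use (128 = midscale).
table = [round((v + 1) * 127.5) for v in wave]
```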

Sinusoidal waveform

Punch waveform

Punch waveform, consisting of a rapid drive stroke and slow return stroke. In these waveforms the green trace indicates polarity and the red indicates inflection (not used in the demo code).

The Show

The first day consisted entirely of workshops, no exhibitions. Unfortunately our scheduling didn’t permit getting there early and seeing all of them, but we did get to check out a couple. One of these was a sweet haptic texture-rendering talk from researchers at UPenn. The math behind their approach will make your eyeballs spin backwards into their sockets a bit, but the results were incredibly realistic. On display were a tablet computer and stylus combo that faithfully recreated the sensation of drawing on sandpaper, cardboard and dozens of other textures on the slick glass surface, and the same algorithm implemented in a force-feedback arm for texturing 3D virtual surfaces. The algorithm and texture database are open source and published online.

The other was probably the most badass-looking Brain-Computer Interface setup known to mankind. You think you’re cute with your little Emotiv headset and its 3 thumbtacks touching your mop? Yeah, this one has 128 saline-soaked electrodes for your mind-reading pleasure. You’ll look like a lunchlady wearing it, but everyone will know how ridiculous you think you look. Actually, the takeaway message I got from the BCI stuff on display was don’t believe the hype. The 128-node BCI demo was impressive in that a fresh-from-the-crowd individual could begin to steer a ball left or right onscreen by thinking (specifically, visualizing moving either their left or right arm) within about 10 minutes, without a lengthy training/calibration period. However, even with this very formal setup, those stories you hear about typing with your mind at normal speed, or guessing which picture you’re thinking about, etc… reality isn’t really there yet. (There is a character input scheme known as a P300 speller that does work, but it’s not nearly as straightforward as thinking about a letter and having it show up in your document. Input speeds are measured in characters per minute – not all that many – and require intense concentration.)

A volunteer is wired up to a Brain-Computer Interface consisting of a skull cap with dozens of electrodes

Serious Brain-Computer Interface

The next several days we were exhibiting. Unfortunately that meant we were stuck in our own booth, and couldn’t go check out the demo sessions. On the bright side, many of the demos were left set up between sessions, so we could at least sneak a peek around during e.g. poster sessions (when the exhibition section was a ghost town) and get some idea what they were about. Oh yeah, this is definitely a University-research-heavy conference, so Arduinos everywhere! In retrospect, probably not the ideal venue to try and hawk raw actuators to (nonexistent) smartphone-company scouts, but highly informative regardless.

That said, there were a few industry folks milling around. A few from names you might expect to see here, and even more from companies you’ve probably heard of, but would not expect to be interested in this stuff. Oh yeah, and a haptics show would not be complete without a scout from the notorious “I” company there. Not the rounded corners one, I mean the other “I” company, that seems to elicit groans from the entire haptics community. A guy from there walked up to our booth, so I tried to ask (ahem, tactfully) what exactly they did/sold, what value they added.

“So what do you guys do? Some kind of… software, right?”
“Oh yeah, there’s a software package… we champion the cause of haptics… and mainly, you know, licensing…”

He wasn’t carrying a clipboard or obvious spy camera, so at least there’s a chance we won’t find vague patents filed against everything in our booth appearing in exactly 18 months’ time.

“I am Patent Trollio! I need IP for my bunghole! I would hate for my portfolio to have a holio…”

Murphy Factor

No such trip is complete without at least something going wrong. In this case it was a problem with the handheld demo. I had built two boards as a precaution (baking on the leadless DRV8662s is fiddly enough, and there are multiple of them on the design – amazingly, both boards worked on the first try) and ordered a couple small LiPol battery samples. There was no time to charge them in the scramble to code the demo the night before, but no problem, we can just recharge this board with any USB cable, right? So the first morning of the show, I started one running and just left it out. After the first hour or so, it started to make an unhappy clicking sound and soon went silent. So I swapped in the 2nd board and plugged the dead 1st one in to charge. After a while, the 2nd board also started to go, so I switched the now-charged (harhar) first back in. Yyyyeah, not so much.

It turns out (in later investigation back at the office) I had swapped two resistors during assembly, and the one that should have been standing in for a 10k thermistor was actually a 100k, causing the charger IC to think it was way out of a safe temperature range and shut down. (Hey, they’re all 0402 and completely unmarked; don’t judge.)

Lunch involved a bolt to the nearest Radio Shack and MacGyvering a close-enough LiPol charger. Yeah, that’s a bundle of low-value resistors with their ends twisted together (to limit current to ~1C, or about 50mA for this battery), a rectifier diode to drop the 5V from a hacked-off USB cable down closer to 4.2V, and some alligator clips to strategic points on the board. The Shack’s cheapest/only multimeter was used to check the battery voltage periodically to avoid overcharging. Not exactly pretty, but it worked.

handheld_demo_oops3

handheld_demo_oops4

Dinosaurs!

Wait, what? Yes, actually dinosaurs. On the 2nd night there was a banquet held at the Houston Natural History Museum. So, dinner and drinks under the gaping maw of T. rex and a dozen of his closest dead friends. Pretty cool.

Dining with dinos. Bites with trilobites. Ordovician hors d’…Someone please stop me now.

* Don’t you just love that word – the uneasy intersection of fat flabby phone and a phony wannabe tablet…

Springtime brew

So, I just discovered I’m completely out of Pumpkin Fail (euphemistically, Cinnamon Cream Ale – I realized only too late that I’d forgotten to add the pumpkin to the boil… yeah, don’t ask). Time for a new brew!

Haven’t decided what to make yet, but at least I know just what to call it…

F*CK SNOW - Seriously, enough of this crap already

(With apologies to Hunger Games fans)

Coming soon to a dominated world near you!

Yep…it’s finally happened!

minion

checklist

Notes to myself: Minibog improvements

I’ve been growing some small carnivorous plants (mainly Nepenthes and sundews) in a tank indoors, but a couple years ago, just for fun I started some Sarracenia (pitcher plant) seeds. Well, what do you know, the darned things actually grew. So this spring, when it was clear I wouldn’t be able to keep them in small pots or winter them over in a fishtank anymore, I put together this freestanding outdoor minibog.

For this first attempt, I picked up a pretty standard (you could even say, “bog standard”, harhar) rectangular planter without drainage holes, put a couple inches of gravel in the bottom, stood a couple pieces of 4″ PVC pipe on the ends, then filled it up with sphagnum peat moss. Well, since the planter was much deeper than the plant pots I was sticking in, I put a couple large rocks at the bottom to take up space too. A small overflow hole drilled just below the soil line prevents it turning into a pond.

Minibog with some S. leucophylla, S. flava and a mystery type. The Nepenthes has already been brought inside due to frost hazard.

This first cut worked out all right. The peat stayed plenty wet with only a bit of watering during dry spells, and water wicked into the pitcher pots (through peat-peat contact via the drainage holes) well enough despite being plastic. The PVC pipes, intended to take up space and provide a place to sink a couple 1L pop bottles as an emergency gravity-watering system, turned out to be unnecessary, and mostly a good place to grow mosquitoes.

Notes for next year:

On watering: The scheme with the PVC pipes (besides taking up the extra space in the planter) was that if the water table ever got to the very bottom and I was on vacation (or just forgetting to water the thing, because that’s something I would do), water-filled pop bottles notched at the bottom would slowly release their contents to maintain 1/4″ or so of water at the bottom (enough to keep things moist enough to stay alive, without immediately evaporating). It turns out the gravel alone works just as well for this. But I did like the fact that the pipes gave an easy visual indication of the water level in the minibog, and a quick way to fill ‘er up. So for next year, a mosquitoproof watering hole / level indicator: It turns out that 1.5″ PVC and a pingpong ball are an almost perfect fit: juuuust enough clearance for the ball to float freely, but not enough to let flying pests reach the water. Brilliant!

1.5″ PVC pipe and pingpong ball as water level indicator and mosquito excluder.

On wintering (to move or not to move): So far I’ve been providing winter dormancy for various plants by keeping them in a fishtank in a cold room of the house. Except for one S. purpurea (Home Depot rescue), I’ve got nothing that’s winter-hardy where I live (USDA zone 5). Reading up on the subject, it looks like people are successfully wintering not-so-hardy pitchers here if they are dug into the ground (no freestanding pots) and mulched. So next year I might take this bog and sink it into the ground, for plants that can be wintered outdoors, and make a smaller one that can easily be brought inside for the tropical stuff (I stuck a Nepenthes in the bog for the season, and it seemed to like it). Keep the pingpong ball watering scheme for both. I’m hoping with the infrastructure of a bog underneath, the fishtank will no longer be necessary for maintaining humidity or keeping things wet over the intervals I remember to water them. (That, and it looks like the fishtank is needed for actual fish now.)

More on moving: Another reason to bury this one is this design of planter was apparently not really intended to be full of water: after a couple seasons, it’s really starting to bow out in the middle.

On wildlife: Something(s) just love to go in there and dig little holes everywhere. I can’t really tell whether it’s squirrels, birds or both. Likewise, when I tried starting some live Sphagnum on the top, it was quickly dug out and removed (probably by nesting birds). In the end I had to cover most of the surface with good sized rocks to keep it from looking like a minefield. Sprinkling the surface with cinnamon and cayenne pepper powder had no effect. Next year, possibly some better wildlife protection scheme…

PID control stove hack for better brewing, sous vide, etc.

“Cook all the foods, hack all the things /
Make all the booze, hack all the things”

In beer brewing, the temperature profile during mashing has important implications for the beer’s eventual body and “mouthfeel”. The requirements for partial mash brewing are a bit more relaxed than all-grain, but still matter (the enzymatic reactions occur over the range of ~150-160F, but progress differently depending on the time spent in various portions of that range.) More to the point, manually babysitting this for a large pot of not-yet-beer on a stovetop is a pain in the arse, so the other night I just modded our electric range for external closed-loop temperature control.

The hack is pretty simple and should be straightforward to apply to most electric stoves. The supplies needed are a temperature controller (typically a PID controller), a temperature probe (typically thermocouple or RTD), and a beefy solid-state relay. I’d also highly recommend a jack for the control signal connection so it can be easily unplugged and stashed somewhere when not in use. In my case, I also added a beefy bypass switch that allowed the mod to be done in a reversible and failsafe way.

I am very fortunate to have a wife who puts up with my relentless tweaking around and warranty-voiding :-p She didn’t even flip a shit when she went for breakfast one morning and saw this:

“But honey… I’m making it *better*…”

Even so, a mod that involved loose ugly equipment and snarls of exposed wiring was probably not going to fly – so this had to be done with no or minimal visible changes.

Electric range guts in brief
Your parents’ (or maybe grandparents’) generation put a man on the Moon, so you might think a modern electric stove is full of microcontrollery precision and closed-loop systems targeting a specific glass temperature (non-contact IR temp sensors being about a buck fifty in quantity now). But nothing could be further from the truth. Typically, any and all temperature regulation is contained in the control knob itself, using a hacky-sounding mechanical assembly known as an infinite switch. It’s basically a bimetallic strip, like in an old thermostat, stuck on top of a variable resistor set by the knob position. The heat from the resistor makes/breaks the circuit at some interval proportional-ish to knob position, but it doesn’t know or care what kind of thermal load is on the burner. More relevant to this application, it also means there is no nice relay controlling everything, ready to have its nice low-voltage control signal hijacked with an Arduino or similar. This mod requires working directly with mains voltage and current, typically 240VAC (for North America at least), and obviously at enough amps to make things glow.

The mod

The stove I have features several burners of various sizes. I targeted the largest one for modification as this is the only one large enough for the brewpot. This burner features three independent elements that can be enabled in increments (inner, inner+middle, or all) to heat things of various sizes. Upon removing the back panel of the stove, I found a wiring diagram folded up and tucked inside behind the control knobs. This provided a good starting point, and also helpfully reported that the burner pulls 3,000W when all 3 elements are engaged, at a resistance of only 19.2 ohms across the 240V mains. Yikes!
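Those numbers check out with plain Ohm’s law (assuming an ideal 240V RMS at the wall; real line voltage and hot-element resistance will wander a bit):

```python
# Sanity-checking the burner numbers from the wiring diagram.
V = 240.0      # mains RMS voltage
R = 19.2       # all-three-elements burner resistance, ohms

I = V / R      # current draw in amps
P = V * V / R  # power in watts, should match the rated 3,000 W
```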

Assuming a full 240V (RMS) from the wall, the burner should pull 12.5A (Ohm’s law). For a safety margin, and (mainly) in the belief that random solid-state relays, usually made in China, may tend to be “optimistic” about their current ratings, I bought one rated for 40A, along with a PID temperature controller (Auber Instruments SYL1512A) and a waterproof RTD (resistance temperature detector) probe. PID control mathematically anticipates how the process will respond, providing more accurate control (e.g. reducing overshoot), especially in an application like this with plenty of thermal inertia. The output of this controller is directly compatible with typical solid-state relays, which are controlled by a DC voltage (typically accepting a wide range such as 3-24V). To avoid any possibility of overstressing the elements from rapid cycling, the controller cycle time setting was lengthened to be closer to the cycle times observed from the manual control, about 15 seconds.
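The SYL1512A itself is a sealed box, but the control law such a unit implements can be sketched generically: a discrete PID loop whose clamped output becomes the SSR’s on-fraction within each cycle-time window. The gains and numbers below are placeholders of mine, not anything from Auber’s firmware:

```python
# Generic discrete PID sketch (placeholder gains, nothing from the actual
# SYL1512A). The output is clamped to 0..1 and read as the fraction of each
# cycle-time window that the solid-state relay conducts.
class PID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def update(self, setpoint, measured, dt):
        """One control step; dt is the cycle time in seconds."""
        error = setpoint - measured
        self.integral += error * dt
        deriv = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        out = self.kp * error + self.ki * self.integral + self.kd * deriv
        return max(0.0, min(1.0, out))  # clamp to a 0..1 duty fraction

# Example step: mash setpoint 154 F, probe reads 150 F, 15 s cycle time.
pid = PID(kp=0.05, ki=0.001, kd=0.1)
duty = pid.update(setpoint=154.0, measured=150.0, dt=15.0)
on_time = duty * 15.0   # seconds the SSR conducts in this window
```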

A simplified diagram for the 3-element burner, before and after modification, is shown below. The dotted boxes in the top figure represent various internal switches on the control knob. Note that despite being drawn as completely independent switches, they are actually sequenced by the knob in a manner that isn’t entirely clear (and I didn’t care to reverse-engineer with a meter). I felt the most failsafe approach would be to wire the relay in series with the main switch, with a bypass switch in parallel with the relay, rather than bypass the knob switch with the relay. This provides “normal” (relay bypassed) vs. “external control” (bypass switch open) modes, and ensures that the burner can still be used normally if the relay ever fails, either open or short. The knob must still be turned to one of the ON positions to operate in either mode, so a during-the-night relay or controller failure cannot activate the burner unattended.

Wiring diagram

(Top) Original circuit, (Bottom) Modification for external control

In this stove a fat red bus wire delivers mains power to each infinite switch via a 1/4-inch quick connect, making it easy to patch custom circuitry in between.
I made a couple splitters to distribute power between the SSR and bypass switch and back to the infinite switch. Quick-connects make connecting it up, well, quick.

Passing 3000W is no small potatoes, so external heatsinking on the solid-state relay is needed. For this I bolted the relay to a scrap aluminum plate, drilled a few holes in the stove’s back panel and bolted it to the inside. Thermal grease on all mating surfaces aids heat transfer. The plate acts as a heat spreader, making the entire back panel one big (if not particularly efficient) heatsink. The relay was positioned to fit into a gap between the control knobs and the main electronics assembly (clock / oven control) inside the front panel.

Solid-state relay installed onto inside of rear cover, allowing the entire cover to act as its heatsink.

The relay and its wiring fits nicely into a gap between the infinite switches and the oven control assembly.

Rear cover reassembled. You can hardly tell it's been modified.

Wiring for the bypass switch and relay control input was snaked through small gaps underneath the front panel, and the switch and a phono jack for the relay control were mounted tucked underneath the front panel using J-B Weld. In the end, unless you get up close and squint just right, you probably couldn’t tell the stove didn’t come this way.

Bypass switch wiring routed through a small opening underneath the front panel.

Hey, I needed some way to hold the switch in position while the epoxy hardened...

Testing the thermal regulation using a small pot of water. The controller will be put into a nice enclosure soon.

End result. It looks pretty normal when the external controller isn't in use.

Sidenote: The hacks that didn’t get done
While I had the unit open, there was also the remote possibility of fixing a few cringeworthy design flaws, I mean “features”, in the oven portion, which IS microprocessor-controlled. One is that the temperature display actively lies to you: as soon as the oven meets or exceeds your set temperature, it will lock the displayed “actual” temperature to this value regardless of any subsequent, including major, fluctuations in the actual temperature. This is probably done to hide the poor bang-bang thermal regulation from the customer, but can be very annoying if you’re furiously fanning the door to cool it off in a hurry, or need to lower the set temperature for any reason. The other one is that you cannot – I mean you are actively forbidden to – set an oven temperature lower than 175F (a temperature where bacteria cannot thrive). Doubtless this is the work of lawyers trying to stop stupid people from cooking up Montezuma’s Revenge casseroles, but tough noogies if you wanted to dry some herbs or keep some bread warm.

As an embedded programmer by day, I figure oven firmware should be simple enough that this could be a short afternoon reverse engineering, right? Alas, the microcontroller turns out to be an obscure Renesas part with a rather obtuse-looking in-circuit programming protocol (not PIC, AVR or anything else I can easily borrow a programmer for and try to suck the firmware out). I decided I didn’t care enough to try implementing my own reader on the off chance the firmware was left unprotected.

Notes to myself: Lulzbot TAZ n00b guide

A work in progress…

Critical Specs:
Bed size / Build envelope: 298mm (X) * 275mm (Y) * 250mm (Z)
Nozzle diameter: 0.35mm (shipping default; others available)
Filament diameter: The TAZ’s extruder expects 3mm filament. However, not all off-the-shelf filament will be exactly this diameter; the discrepancy could affect print quality in severe cases (see notes later).

The Parts:

Printer:
- RAMBo – all-in-one Arduino-compatible controller + stepper/heater/etc. driver board (all-in-one variant of RAMPS)
-- Marlin – firmware used by RAMPS/RAMBo boards
- Extruder (motor/gearing for filament control, plus hot end)
-- Hot End (nozzle, barrel and heater): Budaschnozzle

PC Software:
For open-source 3D printers, this comes typically in 2 parts: a printer interface program and a slicing program. The slicer converts STL to G-code, and the interface is what actually drives the printer (usually includes homing, viewing/adjusting temperatures, and sending the generated G-code to the printer).

Printer controller / interface: TAZ ships with Pronterface, but others are available.

Slicer: TAZ ships with the aptly-named Slic3r, but again, others are available.

Common Problems

Printer “shriek” followed by severe height problems when printing (head crashing into bed or printing in midair) – This is the ONE issue that gave me the most grief. The issue in brief is that the TAZ’s Z-axis (2-motor gantry on high-friction leadscrews) must be driven quite slowly compared to the other two axes, or else the motors will stall. However, most slicers (including the Slic3r configuration that comes with the printer) insist on moving every axis at the maximum possible speed for all non-printing moves (and often even printing moves). In general, this is accomplished by generating G-code specifying arbitrarily high feedrates (F999999 or whatever) and expecting the firmware to limit these moves to a sane speed. For some reason, these limiting values are either missing or incorrectly set on the printer firmware end, so the Z axis motors stall when performing the initial home-raise-position-lower move at the beginning of the print. This results in a number of highly irritating problems such as printing in midair (if it stalls more during the lowering portion), or the nozzle dragging across the bed and shredding the crap out of any films/coatings thereupon (stalled more/entirely during the raise portion), or putting the entire gantry out of whack (if the motors on either side stalled by different amounts – have fun with this one!).

Some more discussion of this issue at http://forum.lulzbot.com/viewtopic.php?f=36&t=91 .

To fix it: At the time of this writing (10/2013), you have to rebuild and reflash the firmware after tweaking some settings. In brief: Download the appropriate firmware sourcecode for your printer here, along with the Arduino IDE from the same page (they specifically endorse and host v. 1.01, but any recent version should in theory work). Oh yeah, and for you Windoze users, some kind of tool for opening .bzip2 archives. I recommend 7-Zip. (Great utility, but be wary of fake/spam download links from their file host when downloading…) Unzip the firmware files, go into the configuration.h file and uncomment the //#define EEPROM_SETTINGS and //#define EEPROM_CHITCHAT lines. This will let you set maximum speeds using an M code (to be explained later). While you’re in there, may as well sane up the default values too. Scroll up and lower the Z value for DEFAULT_MAX_FEEDRATE (the values are X, Y, Z and Extruder, respectively). The default is 10, which for my printer is too fast. On mine, a value of 5 was slow enough but excited some nasty resonance that caused the whole printer to buzz angrily. In the end I ended up with 4.2, which provided much smoother operation. (YMMV of course.)
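The resulting Configuration.h edits look roughly like this (a fragment sketched from memory of that Marlin era; only the Z feedrate of 4.2 comes from my notes above – the other three values are placeholders, so check them against your own file):

```cpp
// Marlin Configuration.h (fragment)
// Maximum feedrates in mm/sec, ordered {X, Y, Z, E} -- lower the 3rd (Z) entry
#define DEFAULT_MAX_FEEDRATE   {500, 500, 4.2, 45}

// Uncomment both so limits can be tweaked and reported at runtime via M-codes
#define EEPROM_SETTINGS
#define EEPROM_CHITCHAT
```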

Open the Arduino IDE, and from there open “marlin.ino” in the firmware files. Set the board as “Arduino Mega 2560” (Tools -> Board) and the serial port used by your printer (Tools -> Serial Port), press the Upload button and cross your fingers.

Assuming everything worked, you can now use M commands to set speed limits for each axis. Use:

M203 X### Y### Z### E###

to set speeds, and M503 to display the current values (if EEPROM_CHITCHAT is enabled). You can add this code to your slicer’s startup routine to ensure the speeds are not overridden by other users or software settings (most slicers have a place to add custom G-code before and after the slicer output.)

NOTE: Pay careful attention to expected units. The M command expects (by default?) values in mm/min., whereas if you set a default in the firmware as above, this value is expected in mm/sec. Particularly braindead slicers and/or host software might even set your machine into Imperial mode, blech.
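For instance, a start-G-code fragment along these lines (illustrative, and assuming your firmware takes M203 in mm/min as described above, so Z252 = 4.2 mm/s × 60):

```gcode
M203 Z252  ; clamp Z max feedrate (mm/min here; 4.2 mm/s * 60)
M503       ; echo current settings back to verify the value stuck
```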

Print does not stick to bed / breaks loose – Very common, and semi-related to the curling issue below. The big thing to check is the nozzle height when printing the first layer. It should be basically touching, smooshing the initial layer down nice and flat. The output should not look like a round bead of toothpaste being squeezed out of a tube. Check bed leveling, Z screw and repeatability, etc. Next things are to play around with the bed temperature and possibly extrusion temperature. Ensure the bed is fully up to temp before printing. For parts with low bed contact area or small protrusions that get caught and lifted, you can try various software-generated adulterations (raft or brim) to help with adhesion. These are typically autogenerated by the slicer program at your command. A raft is a solid-fill first layer covering typically a bounding box around the footprint of the print, followed by a sparse layer that allows for breaking the raft off afterward. A brim is a widening of all features on the first layer (as if your print were a bit melty and thrown at the bed with some force) to increase the surface area and move any “peel point” away from your actual part geometry. Brims are good to help stick down thin features that are prone to being lifted during the early print passes. After removal, the thin brim can be easily cut off with an X-Acto knife.

UPDATE: I tried the “lulzjuice” (acetone glue) trick, and it seems to work beautifully – completely solved my adhesion problems! Basically, for printing ABS, dissolve a bit of filament in some acetone, and wipe/brush a thin layer onto the buildplate before printing. Acetone has a low boiling point, so if you have a heated buildplate, do this *before* heating it up to avoid a bubbly mess. The flip side is that unsticking the print (intentionally) can be a little harder – if all else fails, try hitting it with coldspray (or an inverted can of computer duster).

Edges of print peel / curl away from bed while printing – Caused by a combination of poor adhesion to the bed and changing temperature. Check all of the stuff for the previous problem, then see what you can do about controlling the rate of cooling (overall or between layers). Try lowering the extruder temp slightly or raising the bed temp(?), dealing with any sources of cold breezes (enclosing the build area if necessary), or fiddling with software-based thermal controls (delays between layers, etc.).

Fat, floppy, and/or blobby print (edges extending beyond where they “should have been”) – Often a side-effect of the edges peeling upward and pushing the outer surfaces up toward the nozzle (the excess material has nowhere to go but outward…). If there is no peeling problem, most likely too much material is being extruded. See “Filament Diameter Problems” below…

Filament Diameter Problems – Use good, red-handled calipers to measure the filament diameter in several places – it may be a bit fatter or thinner than the nominal dimension, especially extra-cheap or noname-brand stuff. A fraction of a mm difference may not sound like much, but considering that’s being squeezed down from, say, 3mm to 0.35mm, it can become significant! Most slicers have a means for compensating for the filament diameter.

Popping sounds from extruder during print – Trapped humidity in the filament (think popcorn) – try gently baking it dry, or storing it in a drybag when not used for extended periods.

Trouble bridging – The extruded filament should be getting “pulled” slightly during print; not “pushed” (i.e. travel rate should slightly exceed the extrusion rate). See “Filament Diameter Problems” or related compensations offered by your slicer.

Hole diameters in printed parts – plastic may “squish out” a bit during extrusion, causing your hole diameters to be slightly narrower than expected. Assuming the amount of extrusion is actually correct (see several of the above), I’ve heard of folks compensating with an Excel table of expected vs. achieved hole size (it might calculate this) – don’t know where to get this though.
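In the spirit of that spreadsheet idea, here is a small interpolation sketch (the table values below are made up – you would measure modeled-vs-printed sizes from your own test prints):

```python
# (modeled_mm, printed_mm) pairs from hypothetical test prints -- calibrate your own!
TABLE = [(3.0, 2.7), (5.0, 4.7), (8.0, 7.6), (10.0, 9.6)]

def model_size_for(printed_target: float) -> float:
    """Linearly interpolate what hole diameter to model so the printed
    hole comes out at the target size."""
    pts = sorted((printed, modeled) for modeled, printed in TABLE)
    for (p0, m0), (p1, m1) in zip(pts, pts[1:]):
        if p0 <= printed_target <= p1:
            t = (printed_target - p0) / (p1 - p0)
            return m0 + t * (m1 - m0)
    raise ValueError("target outside calibrated range")
```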

Tim Tears It Apart: Sensitech TempTale4(R) data logger

One of these devices appeared in a large shipment of temperature-sensitive raw materials at my work, amid a pile of dry ice chips. While I don’t know the MSRP or actual retail price of this gadget, the shipper packs one in with every order and tacks on $60-70 for it as a line-item; nonreturnable as far as I know.

So, we can’t return it and we can’t read it out, so what do we do?

I think you know what we do :p

TempTale4 Front Panel

The device is aimed at exactly this application – telling if your temperature-sensitive stuff stayed within a defined temperature band during all phases of shipping and handling. With timestamps, you could probably tell exactly which party in the shipment chain screwed the pooch. There is a fancy term for this kind of tracking – cold chain certification.

As far as interfaces go, it doesn’t get much more simple. A button to start logging, a button to stop logging, a simple LCD display, and a couple blanks to enter a shipper’s name and the PO# of the shipment. The current temperature and recording status (started / stopped) is displayed on the LCD. If the temperature went outside the allowed band while recording, an alarm symbol (bell) also appears. A pair of LEDs exposed through the front panel allow the device to be configured and recorded data to be read out to a PC via an optical doohicky (more on this later).

TempTale4 rear cover removed, showing the plastic 'cage' that holds the battery pack in place

The enclosure is a basic 2-piece sandwich, with the half-thickness (.031″) PCB fixed into the front by 3 screws. A separate plastic ‘cage’ holds the battery pack in place; fingers on the backside clip around the edges of the PCB. The entire weight of the battery pack – and impact loads it imparts during rough shipping – are borne solely by the PCB and ultimately those 3 screw points. They might not anticipate extremely rough shipping for cold-chain cargoes (although a broken logger could be spun as a rough-shipping-detection “feature”).

On the plus side, note the O-ring ensuring a water-resistant seal between the case halves. Any other case openings are covered by the front overlay “sticker”, eliminating any fluid intrusion paths.

Battery cage removed

Battery pack removed. That piece of double-stick tape really ties the room together.

The battery pack is Zip-tied to the plastic “cage”, although not to the PCB or any part of the case. A piece of foamy double-stick tape helps hold and cushion the battery pack where it sits against the PCB. This and the overlay sticker on the front suggest this device is intended for enforcing low temperatures. High-temperature overlays and even double-stick tape exist, but it’s pretty fancy stuff. This tape isn’t fancy.

The battery pack consists of two series-connected Tadiran lithium primary cells (AA size), with a nominal 7.2V output. Now, in an industry that is constantly pushing toward ever lower voltages to reduce power consumption, this is weird! Especially for a gadget that needs to go a long time without a battery change. A single 3.6V cell will easily power a 1.8-3.3V device, and maintain this output voltage until nearly depleted. Some wild guesses at the reason behind the unusually high voltage:

1) High-current operation at very low temperatures. This shouldn’t be an issue during normal operation (just datalogging should use very little current on average), but if someone wanted to read it out over the optical interface in the Arctic, it may be another story. This may provide extra headroom against the inevitable cold-battery voltage sags that would occur as the transmit LED fires.

2) Voltage-happy LCD? Some LCDs require a higher voltage such as this to operate (usually generated by an onboard charge pump circuit), but these are typically graphical matrix LCDs (many independent rows/columns) – the rows/cols are typically energized one at a time, and to refresh the entire display faster than the eye can perceive flicker, each one is on for only a very short time – higher voltages help them reach their final dark/light state within that time and retain it until the next refresh. I can’t imagine this little segment LCD having such a requirement.

3) They needed 2 batteries to get the milliamp-hours up regardless, and wiring them in series was easier/cheaper (no worries about cell balancing). This is not an ideal way to get more mAh (it increases resistive losses in the series-connected pair, and conversion losses in any regulator, especially linear/LDO), but I suppose it works well enough, and the price is right.

Bottom side of PCB exposed

With the battery pack out of the way, we can see the bottom side of the PCB. Not much there! A couple things to note though:

1) A secret button hidden inside the device. As it turns out, this button resets/clears the device for reuse. The entire LCD will blink every segment for a couple minutes, then the device is factory-fresh again (probably).

2) No components apart from the secret button, but a fair number of empty pads (non-stuffed components).

PCB removed - rear of LCD accessible

Top of PCB. Notable points include "secret" magnetic switch, accessible programming header, and an overall dearth of actual components.

Finally, we get to the topside of the PCB, the actual meat of the device! Erm, wait a minute, where’s all the meat?

The $70 pricetag notwithstanding, it should be becoming clear that this device is cheap-cheap-cheap to make. The bill of materials consists mainly of a few pushbuttons, a small handful of discretes and a glob-top MCU/ASIC. The segment LCD and an EEPROM in the top-left of the photo (a note on the manufacturer’s web site says it could be 2KB or even a whopping 16KB of storage) complete the ensemble. I have to snicker a bit about that after testing a 32GByte uSD card in my own day-job datalogger design the same day, but again, this device is designed to be throwaway cheap, and 640k (ahem, 16k) ought to be enough for anybody – for temperature data, anyway. (The astute reader will see “32K” stamped on the chip; either a clever misdirection or these loggers have grown more spacious than the web site lets on.)

Some notable points:

1) The temperature sensing element appears to be a simple RTD – no thermocouple or even brand-name digital sensor, but probably accurate enough.

2) The glass tube designated S3 is a magnetic reed switch. This almost certainly is used to trigger entry into the download/configure mode, either with a magnetic wand or a magnet built into a monolithic reader device that aligns to the LEDs.

3) The LCD is affixed to the top shell, not the PCB, and contact is made by an elastomer strip (zebra strip). Don’t lose this!

4) The neat row of capacitors at the bottom of the photo (C2, C3, C10 ~ 13) are probably part of a charge pump circuit for the LCD. There goes that theory about the battery voltage.

5) As with the bottom side, note the prevalence of non-stuffed component pads. Aside from a good handful of discretes, there are spots that appear to accept a second RTD temperature sensor and a humidity sensor. Most likely, this same board and ASIC become the “TempTale4 Humidity” with the addition of these components.

6) Besides the holes for extra sensors, pay particular attention to the two – two! – sets of non-stuffed headers (J1, J2). The latter pins directly into the globtop, suggesting the likely possibility of an in-circuit programming header (or even JTAG, holiest of holy grails).
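Regarding point 1: the exact RTD type isn’t known, but assuming a garden-variety PT100, recovering temperature from resistance is nearly a one-liner (linear approximation using the standard IEC 60751 coefficient – a sketch, not what the ASIC necessarily does):

```python
ALPHA = 0.00385  # ohm/ohm/degC, standard PT100 temperature coefficient
R0 = 100.0       # PT100 resistance at 0 degC, ohms

def pt100_temp(r_ohms: float) -> float:
    """Approximate temperature (degC) from PT100 resistance, linear model."""
    return (r_ohms / R0 - 1.0) / ALPHA

print(pt100_temp(100.0))  # 0 degC
print(pt100_temp(138.5))  # ~100 degC
```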

See-thru view of front case, showing button flexures and a small opening in the plastic to bring the temperature sensor closer to the outside environment.

In this final photo, you can see the shell cutouts as they relate to the overlay sticker. The temperature sensor normally sits in the small notch in the middle, leaving only the thin bit of sticker between the sensor and the outside environment.

A new feature: “Tim Tears It Apart”!

So, as you might have guessed, I’m an electronics engineer, and I like to tear things apart – especially gadgets. I don’t usually post about it, because a) someone else has probably already posted a teardown of that gadget, and b) I’m lazy as balls.

But then I realized a good teardown is not all about the pretty pictures, but reverse-engineering the mind and intentions of the original designer. After about a deca*cough* some time in the industry, at the age where I tell kids to get off my lawn*cough* pull up their damn pants, I’m getting a pretty decent feel for not just how a gadget works, but why it works the way it does – i.e. the budgetary constraints, schedule pressures and technical constraints behind specific design decisions. So maybe it is worth posting those teardowns after all :p

I can’t guarantee it’ll be a frequent feature, but there are a few torn-apart gadgets I could throw my 2 cents in on.

Notes to myself: Test a Bluetooth Low Energy device on Raspberry Pi, the quick way

Testing whether the new nRF8001-based Mosquino BLE shield I built actually works.
With the unmodified library and example code, it presents itself as a Nordic heart-rate monitor.

mq-shield-ble

Much of the below based on Michael Saunby’s blog post on checking out a TI SensorTag.

Install bluez and hcitool (plus any dependencies). As of today, current version available from a stock raspbian is 4.99-2. NOTE: “gksu synaptic” from the console to get a working graphical package manager, if you’re into that sort of thing. (“Gtk-WARNING **: cannot open display: :0” probably means you used sudo instead of gksu; bad dog.)

Then…

$ sudo hciconfig hci0 up
$ sudo hcitool lescan

If all goes well, output like:

LE Scan …
DF:32:3A:73:A3:1C Nordic HRM V1.0
DF:32:3A:73:A3:1C (unknown)

If that’s your device, congratulations, it’s working!

Connecting to it…

$ gatttool -b DF:32:3A:73:A3:1C --interactive
[ ][DF:32:3A:73:A3:1C][LE]> connect
[CON][DF:32:3A:73:A3:1C][LE]> char-read-hnd 0x01
[CON][DF:32:3A:73:A3:1C][LE]>
Characteristic value/descriptor: 00 18
[CON][DF:32:3A:73:A3:1C][LE]>

Don’t ask me how to find out the handles your device supports or what the resulting data means; that’s an exploration for another day…

Bonus trick: make the computer beep everytime it gets an advertising packet:

$ sudo stdbuf -oL hcitool -i hci0 lescan | while read; do beep -l 20 -f 1000; done

Good for range testing (I have not tested it).

Refraction Fail

Poor promotional poster placement

Fun expression of the day: “flip a coil”

There is supposedly an Afrikaans expression that translates as “flipping a coil” or “flipping the cone”, etc. Kind of a more evocative equivalent to “shitting a brick”. It refers to the act of turning one’s underwear inside-out to dump out the results of having shit one’s pants, e.g. due to extreme rage or surprise. The imagery of course is of the canonical cartoon representation of a pile of poop as a conical coil, bearing an uncanny resemblance to the top of a soft-serve ice cream.

“Johnson is totally gonna flip a cone when the paternity test results come back.”

I wonder if this is somehow the origin of similar English expressions such as “flip out” or “flip your shit”.

Rez Trance Vibrator mystery LEDs, and how to control them

ASCII Trance Vibrator with the LEDs populated

So, a friend of a friend recently managed to buy the original Rez, complete with new-in-box Trance Vibrator. But the peripheral didn’t work – it would be detected, but otherwise not do anything. Knowing I had produced a compatible open source version of this hardware, she brought it over for a look. The motor was seized up, probably due to not having spun in over a decade, and easily fixed with a drop of sewing machine oil. But while I had it open, I decided to resolve a longstanding mystery about the device. You know I’m a sucker for LEDs.

Mystery LEDs

Looking at the PCB reveals footprints for 3 non-placed LEDs and associated current limiting resistors. Facing them on the front side of the case are 3 rectangular pockets of an appropriate size to receive such LEDs, but no openings exposing them to the outside world. (Similar walled structures are used around LEDs in close proximity in other gadgets to block light from the neighboring LEDs.) It’s possible such openings were originally intended to be drilled later… or the mold design was hurriedly changed to remove them, but the pockets themselves remained.

So I stuck some 3mm LEDs there to see what would happen. I assume they were probably meant to be Red/Green/Blue, but I just used the colors I had on hand, along with a trio of 1k 0805 surface-mount resistors.

I don’t have Rez or a PS2, so I fired it up with the test utility I wrote for the Drmn’ Trance Vibrator project. The LEDs light in a changing pattern in response to changes in motor drive level. I wasn’t sure if it was intrinsically tied to motor intensity or some other data, so a quick test was in order. A faithful implementation of the ASCII vibrator’s USB protocol includes some redundant data, which could be omitted without affecting device operation, so I suspected the smoking gun would lie in these extra bytes.

Sure enough, the LEDs can be set arbitrarily via a byte in the USB packet, independently of motor control. The low byte of wIndex, specifically the 3 LSBs, directly set the LED states. The LSB controls “A1” and the next two control “A2” and “A3”, respectively. Although the LEDs can be controlled independently of the motor, the official Rez game always sets the low byte of wIndex equal to the low nibble of the motor power level it is commanding. Observing a USB packet dump of a level playthrough reveals no attempt to drive the LEDs independently. (Since that feature never made it into the released Trance Vibrator itself, it’s not surprising that the game’s developers didn’t agonize over what they could do with it. I’m a little surprised the game passes any data for that byte at all.)

LED control is all-or-nothing. At this point I suspected the high byte of wIndex (always sent as 0x03) might control intensity, but fuzzing around with this value or the others (e.g. the bRequest of 0x00) has no effect.

Here is a quick Python script demonstrating blinking the trance vibe LEDs.
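In the same spirit, here is a minimal PyUSB sketch of the framing described above. The VID/PID pair matches the Linux trancevibrator driver; the bmRequestType value and overall packing are my own reading of the behavior above, so treat the whole thing as an assumption to verify against a real capture:

```python
try:
    import usb.core  # PyUSB -- only needed to actually talk to the hardware
except ImportError:
    usb = None

VID, PID = 0x0B49, 0x064F  # ASCII Trance Vibrator IDs (per the Linux driver)

def build_windex(leds: int) -> int:
    """Pack wIndex as observed: high byte 0x03, low 3 bits = LEDs A1..A3."""
    return 0x0300 | (leds & 0x07)

def set_outputs(dev, power: int, leds: int) -> None:
    # Vendor OUT control transfer: bRequest 0x00, wValue = motor power,
    # wIndex carries the LED bits. bmRequestType 0x41 is an assumption.
    dev.ctrl_transfer(0x41, 0x00, power & 0xFF, build_windex(leds))

if __name__ == "__main__" and usb is not None:
    dev = usb.core.find(idVendor=VID, idProduct=PID)
    if dev is not None:
        set_outputs(dev, power=0, leds=0b101)  # A1 + A3 on, motor off
```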

And a video of them in action:

But why were the LEDs removed?
The first answer that may spring to mind is cost cutting… but that doesn’t make much sense at first, because LEDs are pretty cheap. Or are they? Simply poking them out of holes in the front of the enclosure would wreck its watertightness without some kind of gaskets, sealants, or clear windows to cover the openings. Even though the game’s creator says the device was not intended to be sexual, if you put humans near something that vibrates, the first thing they will do is stick it on their fun bits. No. Exceptions. Having openings in an ultimately line-powered device meant to be used that close to the body might have opened them up to regulatory headaches, e.g. medical-grade power supply isolation, and might just not have been a good idea overall.

Tickle_me_elmo

Oh boy… that tickles!

The most logical reason for the removal of the LEDs, though, is that they just wouldn’t really work. Unless you are a contortionist, there are very few ways one could use this device as intended and still have them in your line of sight. I suppose it’s possible they were meant to work as a very early Ambilight-type system if played in a darkened room, but you’d still need some pretty bright LEDs and a pretty small room. It also wouldn’t work from a shirt or pants pocket, or with the protective cover on.

The rest of the circuit

Operationally, the circuit inside is actually pretty close to Drmn’ Trance Vibe, although approached in a very circuitous way. Both generate a PWM signal that controls a transistor to modulate the motor speed. In the ASCII vibrator though, rather than directly generate the PWM signal from the microcontroller, it instead outputs an 8-bit value via I/O pins to a crude DAC (R-2R ladder), which feeds into an opamp (buffer?) and eventually a TL5001 discrete PWM generator. This drives a transistor, which may in turn drive another identical transistor to drive the motor. I have no idea what the 3rd large transistor, next to the PWM generator, is for – I didn’t feel like digging to that level of detail on borrowed hardware that I had to return in working condition :-)

There is a non-stuffed mystery jumper near the large inductor at center, which receives (or would) its output from a similarly non-stuffed diode, and leads to an I/O pin on the microcontroller. In my quick testing, this appeared to be an analog signal which remained at or near ground through all normal operation. I suspect it connects to the overcurrent signal on the PWM generator, and would cause the CPU to halt the motor if tripped, but don’t bet the farm on this. It doesn’t appear to invoke a bootloader, diagnostic mode or anything similarly interesting.

Here are hi-res pics of the PCB, if anyone is interested:

Top of ASCII Trance Vibrator PCB

Bottom of ASCII Trance Vibrator PCB

Solar seed warmer to get a jump on spring

Spring is coming… here is a tiny little hacklet from the bench of Tim.

I live in New England. I don’t mind it here, but the growing season is a bit short. So here is a scheme to give outdoor direct-seeding a little head start.

Seeds for many food plants, such as melons and peppers, will not germinate until temperatures rise above a certain point consistently. They could in theory be started indoors, but I’ve never had good luck with this in a prison-windowed New England house: even if I remember to water the seedlings consistently (hint: I don’t), regardless of how well I position them in my best south-facing window, they still end up weak and spindly for lack of sunlight. If they don’t outright die when hardened off and transplanted, they seem to go into some kind of shock and stop growing for several critical weeks. I’ve found direct-seeding outdoors is a lot more reliable overall. The tradeoff is that by the time it gets warm enough long enough to trigger germination, they will not set fruit until the tail end of the growing season. What a pain!

The previous homeowner was nice enough to leave behind some things, including some solar garden lights. A quick tweak to them makes them into seed warmers.

Solar garden light converted into a nighttime seed warmer

String of low-valued resistors insulated with black heatshrink

Assembled view of the seed warmer element

Ingredients:
1 old solar garden light
Heat shrink tubing
A few low-value resistors (5-10 ohms)
Small bit of copper tubing

Process:
Should be pretty self-explanatory from the pictures.
Solder one to a few of the resistors in series to achieve the desired length. Attach thin insulated wire to each end of the resistor string. Heat-shrink this assembly so that the wires exit on the same side (the heat shrink will prevent the bodies of the resistors from wearing through against the tubing and shorting out). Cut a piece of copper tubing to a sufficient length that the assembly can be stuffed inside; for best results, it should be a snug fit inside the tubing to ensure good thermal contact. Finally, seal the ends of the tubing with RTV or similar watertight material, and optionally coat the copper with something to prevent corrosion.

Remove the LED from the solar light, and wire the heating tube in its place. DONE!

Now, when you plant a hill of outdoor seeds, drive the heating tube into the center of the hill, and place the solar light off to the side a bit (so it is not shading the hill). The sun will help keep them warm during the day, and the heater will take over during the cold nights.

Notes:
How much heating will you get? Short answer is “it depends”. In theory, you can measure the voltage output by the light unit and use Ohm’s Law to calculate the power dissipated over your chosen resistors (power in Watts = I*V = V^2 / R). Depending on the design of your solar lights, the output voltage may not be remotely constant or easily characterized, and the circuit inside may have its own current-sourcing limit, reducing your total output. The actual amount of heating you get may be pretty modest. You will not (and should not) find the tube uncomfortably warm to the touch during operation. Fortunately, dirt is a pretty good insulator.
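The back-of-the-envelope math above is easy to script. Here is a tiny sketch; the 3 V output and two 5-ohm resistors are made-up example values, since your light's actual output depends on its battery and circuit:

```python
def heater_power(v_supply: float, r_total: float) -> float:
    """Power dissipated in the resistor string: P = V^2 / R, in watts."""
    return v_supply ** 2 / r_total

# Example (assumed values): ~3 V from the light into two 5-ohm
# resistors in series (10 ohms total)
print(heater_power(3.0, 10.0))  # 0.9 (watts)
```

Note this is an upper bound: if the light's circuit current-limits, or the battery sags overnight, you'll get less.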

Some plants just plain don’t like to grow in the cold, even if you can trick them into germinating early. For better results, combine with a coldframe to keep the aboveground bit a little warmer too.

So, it appears MakerBot have gone full evil now…

We shuddered when it was announced that MakerBot were taking the next version of their RepRap-based printer design closed-source. We crossed our fingers when the CEO responded to the flap saying they’d be “as open as possible”. We watched with popcorn the various flaps about Thingiverse, legal mumbojumbo, attribution and moral rights.

But now this. MakerBot has been awarded a patent on the conveyor belt. (Specifically, use of a conveyor belt “with a 3D printer”.)

I don’t know about you, but I can’t possibly think of any device for converting a computer file to a tangible work product that uses rollers to clear its work product from the work area to make room for subsequent work product. Certainly no such analogous device exists, or else 3D printers wouldn’t have such a clever and unique name.

While I am here, to forestall successful patent attempts on other obvious means of clearing work product from a work area, I hereby disclose the following novel invention:

1) A work producing system and method comprising a work producing machine, a means of executing stored instructions (sometimes called a “computer”), a set of instructions (sometimes called a “program”, or “software”) that instructs the work producing machine to produce a work product responsive to a description (sometimes called a “file”) describing the work product, a means of conveying said description to said system, and a means of conveying said instructions to said machine. (The system description may optionally include such novel and non-obvious components as RAM, a CPU, wires, wifi, power from the power company, etc.)

2) The claims in Claim 1 where the work-producing machine further includes a method of clearing prior work products from its work area.

3) The claims in Claim 2 where the work-clearing means includes a pushing means to push the old work products from the work area.

4) The claims in Claim 2 where the work-clearing means includes a pulling means to pull the old work products from the work area.

5) The claims in Claim 2 where the work-clearing means includes a scraping means to scrape the old work products from the work area.

6) The claims in Claim 2 where the work-clearing means includes a gravitational means to remove the old work products from the work area. An example of such a means is a tilting means which tilts the work surface, a rotational means which rotates the work surface to a nonhorizontal position, or an antigravity device which causes a local gravity inversion in the vicinity of the work surface.

7) The claims in Claim 2 where the work-clearing means includes a vibrational means to shake loose the old work products.

8) The claims in Claim 2 where the work-clearing means includes additional work surfaces which can be exchanged with the work surface on which work products have previously been produced, and a means of exchanging said work surfaces. (The unused work surface may, for example, be physically exchanged with the used work surface of the same machine, or exist in a second work-producing machine which takes over work production jobs while the first work surface is full.)

9) The claims in Claims 3-7 inclusive, where zero or more said means are combined in such a way as to improve the reliability of clearing work products from the work area.

10) The claims in Claim 9 where the pushing means further comprises a solid object configured to move across the work area, thereby pushing work product out of the work area. Compare “broom”, “push bar”, “squeegee”, “bulldozer”. Since patent examiners have the imagination of a goldfish, I should point out at this time that moving the work surface with respect to the pushing device is the same as moving the pushing device with respect to the work surface.

11) The claims in Claim 9 where the pulling means further includes a magnet. Magnets are magical. (Computer-controlled electromagnets are even more magical because computers are magical and electricity is magical.)

12) The claims in Claim 9 where the pulling means further comprises a suction mechanism and a means of moving said mechanism into contact with the work product and to a location outside the work area. (Compare: “vacuum pick and place”)

13) The claims in Claim 9 where the combined pushing and pulling means further comprises a robot arm and a means of moving said mechanism into contact with the work product and to a location outside the work area. (Compare: Industrial pastry sorting robots). Since I may have been unfair toward goldfish in Claim 10, I should point out that the non-difference between moving the work surface vs. the pushing device also exists for a *pulling* device. Or basically any other device or combination of such devices.

14) “Pushing”, “bumping”, “kicking”, “nudging”, etc. are the same thing. Just throwing it out there.

15) The claims in Claim 2 where the work-producing machine is configured to produce works which are of a 3-dimensional nature.

16) The claims in Claim 15 where Claims 3-14 are restated here by reference.

17) The claims in Claim 16 where the system further includes a means of collecting the removed work products (sometimes called a “bin” or “bucket”).

The IOC, Trademark Law and You

Yes, it’s true! Under the London Olympic Games and Paralympic Games Act 2006 (UK), and similar laws pushed through in other countries as a condition of hosting a certain large quadrennial event (US: Amateur Sports Act of 1978; Canada: Olympic and Paralympic Marks Act), any infringing use of IOC ‘properties’ (similar to, but stronger than, trademarks), such as combinations of the words ‘summer/games’, ‘summer/2012’, or the interlocked rings, is a criminal (not civil) matter.

The following Venn diagram explains in more detail.

Not Dead

Really! Just busy with some real-life stuff, namely wedding related and home renovations. I haven’t forgotten about this pick & place stuff! Lately I’ve been spending most of my project time on getting Mosquino toward an official 1.0 release. The rev2 boards just came in, so once all the parts are in I should have one ready to test soon. Here they are!

As usual, click for fullsize. Clockwise from the bottom-left are a bistable display shield, microSD shield, low power boost board (as low as 0.6V to 3.3V), Peltier shield (thermal to electricity, ~20mV to 4.1V), vibration energy harvesting shield, a stackable LiPol / thinfilm battery power shield, and of course the Mosquino mainboard itself. You may have seen early versions of some of these on the Mosquino page already, but these implement bug fixes and the latest/finalized Mosquino pinout. Can’t wait to get playing with these!

(And no, purple is not the official / final color – The PCBs were made via Laen’s sweet batch PCB service; he likes to experiment with the colors from time to time. It’s not a ripoff of Lilypad…although the Peltier board can potentially harvest from typical bodyheat gradients (>=2degC), which is an interesting development for wearable computing projects to say the least!)

How to use your own modem with Comcast

Typical frog-in-a-hot-pot scenario: when I joined Comcast the modem lease was like $1.50 a month, and I didn’t even think about it. Recently it’s crept up to $7.00 a month, which kind of made me sit up in shock. How much do those things actually cost, anyway?

Answer: $16 on eBay!

Ditching the leased Comcast cable modem in favor of your own is a surprisingly simple process. In my experience, Comcast won’t even try to (intentionally) stonewall your request or tell you it can’t be done in order to keep the revenue. Unfortunately, their techs are not exactly the brightest lights in the harbor, so you might have to train them a little on how to do it. Here’s how…

Step 1: Buy the modem
Go to your favorite new or used equipment source and buy the modem. Make sure your purchase includes any necessary power cord (wall wart); if not, buy that too. Theoretically, any DOCSIS 2 or later modem will work with most cable Internet packages, but to be sure, check this list for modems tested and approved by Comcast for compatibility. Extremely fast or fancy internet packages might have special requirements. Personally, I just searched eBay for the exact model # of my existing leased modem, and bought that one. My total cost was about $21 for the modem and a 12V wall adapter.

Step 2: Plug in the modem
Before you go connecting anything, turn the new modem over and copy down the “HFC MAC” number printed on the bottom to someplace more convenient. Note, there may be several different numbers printed on the modem; the “HFC MAC” is what you want. Technically the “number” is in hexadecimal, so it can also include the letters A-F. Double- and triple-check that you copied it correctly!
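If you want a sanity check beyond eyeballing it, a valid MAC is exactly 12 hexadecimal digits (0-9 and A-F). Here's a quick sketch of a checker (my own helper, not anything Comcast provides) that tolerates common separators:

```python
import re

def looks_like_mac(s: str) -> bool:
    """True if the string is 12 hex digits, ignoring separators
    like ':', '-', '.' or spaces that may appear on the label."""
    digits = re.sub(r"[:\-\s.]", "", s).upper()
    return bool(re.fullmatch(r"[0-9A-F]{12}", digits))

print(looks_like_mac("00:1A:2B:3C:4D:5E"))  # True
print(looks_like_mac("00:1A:2B:3C:4D:5G"))  # False ('G' is not hex)
```

This won't catch a transposed digit, of course, so double-check against the label anyway.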

Disconnect your leased modem and plug the new one in its place. Verify that the lights come on and blink just as with your old one. (It will still ‘see’ a modem signal when connected, even if it’s not activated yet.) Once it’s lighting and blinking, power-cycle your wireless router (or whatever attaches to the modem Ethernet cable) to make sure it picks up a fresh IP address from the new modem. Just to be safe, reboot your computer(s) after this to make sure the newly rebooted router gives them a fresh address too. Now your modem, wireless router and computer will be “connected” to one another as far as your home network is concerned, although they won’t be able to reach the Internet through the new modem yet.

Step 3: Activate the modem
The one and only piece of information you (and Comcast) will need for this is the “HFC MAC” number you wrote down earlier. Call the Customer Service # on your Comcast bill, and say to them:
“Hi, I’d like to use my own modem and return my leased modem.” When I did this, the main customer service rep gave me a separate phone # dedicated to handling this request. Call that # and repeat the request.

The Comcast person on the phone will ask for the number from your modem. Not all of them are smart or well-trained, so they may not know which number, nor tell you the correct number to provide. Whatever they say (or don’t), give the “HFC MAC” you copied earlier. Now, this is important: have the Comcast person input the number and then recite it back to you, to make SURE they input it correctly.

Before you hang up, start accessing Web sites and see if they start working. If the Comcast person input the # correctly, your Internet should start working again almost immediately. If not, log in to your wireless router’s status page (consult its manual for how to do this) and make sure it obtained an IP address, gateway IP and DNS servers from the modem. This information may be listed under a section titled “DHCP” (a protocol for devices to request and assign IP addresses). Try power-cycling the router again while the modem is activated to make sure it gets an address.

Step 4: Return the old modem
Hopefully, everything is working now! The last thing to do is to pack up the old modem and its wall plug in a box and return it to Comcast. If it came in an official Comcast box (e.g. “self-install kit”) and you still have that box, use that box – but if not, my experience is they aren’t that picky (I used a shoe box). There is a brick-and-mortar Comcast service/payment center by my house, so I just returned it in person. If this is not an option, ask the Comcast person how to return it by mail. My experience at the Comcast payment center was very positive – just handed the modem over, they scanned a barcode on the bottom and it was automatically credited to my account. They handed back a receipt with my name/account # and the modem details on it and I was on my way. My next bill had a partial refund for the part of the month I was no longer leasing the modem. Done and done!

If all does not go well…

If your new modem isn’t delivering the Internet goods after activation, the Comcast person (billing department) will transfer you to a separate department (tech support), who have the power to ‘ping’ your modem and make sure it is visible on their end. ‘Your’ modem in this case is defined as the modem matching the HFC MAC # linked to your account, which is why it’s very important that the billing person has input the correct #, and input it correctly, BEFORE this point. Otherwise they can ‘ping’ all day and not get any result because their system is looking for the wrong damn modem! The Tech Support person has the power to ping but NOT the power to add or correct MAC #s on your account, so this sucks. Likewise, the billing department has the power to enter MAC #s, but NOT to ping the modem! If this magic number entry gets cocked up somehow, it will take a 3-way conference call between you, tech support and billing to sort it out, and not all Comcastic techs know how to pull this off with all those complicated phone buttons. I spent two hours bouncing between departments because the barely-English-speaking billing person miskeyed the # the first time.

For the insanely bored or curious…

“MAC” number stands for Medium Access Control number, which is a globally unique number (burned into the device by the manufacturer) that identifies YOUR device among the millions of others out there just like it. The “medium” referred to is the physical cable. Since your block’s local cable segment is a shared resource, this number is necessary to identify you as a paying customer and route the right bits to and from YOUR specific modem. The difference between the separate “HFC” and “CPE” MAC #s is that the HFC number (I’m told this stands for “Hybrid Fiber-Coax”, i.e. residential cable networks) is the one that’s visible on the coax (cable) side of the modem that your ISP sees, and the other (“Customer Premises Equipment”) is the Ethernet-side number that’s visible to your equipment (e.g. wireless router). Don’t tell Comcast that number by mistake; they can’t see it on the cable end.

I P, U P, everybody (DHC)Ps…

My page that tells you your IP address is up and running again, after a PHP configuration change by my web host knocked it out. Anyway, enjoy the glory of finding out your external IP address without getting socked by porn popups!