Fun with 3D Printing: Print a Parametric Peristaltic Pump

So, I’ve been playing around with the Lulzbot we got at work. Inspired by emmett’s sweet planetary gear bearing design, I adapted the design to be not a bearing but a peristaltic pump. Like the original bearing design, the pump prints as a single piece – no assembly required! – with captive rollers and no rotary bearing/wear surfaces. The only extra part needed is a length of surgical tubing to thread through the mechanism. This initial print is a proof-of-concept and is driven by a standard 1/4″(?) hex nut driver or Allen key: for a real application, you’d want to add a mount for an electric motor or similar. This one (or one like it) will probably end up attached to a small gearmotor and solar panel in my greenhouse to slowly trickle water through an aquaponics tower.

Printable Peristaltic Pump with captive rollers and minimal wear surfaces

Pump with latex surgical tubing installed

The pump design is written in OpenSCAD and pretty much fully parametric: the desired diameter, height, tubing geometry and a few other parameters can be tweaked as needed. There are a couple warts I’ll discuss later on.

Download:
You can download the OpenSCAD file here.

Video:
Here is a video of the pump in operation.

General:
Peristaltic pumps operate on the same principle as your esophagus and intestines (yes, really – yuck…) – a squishy length of hose is squeezed starting from one end and ending at the other, forcing any contents along for the ride. This type of pump has several properties that make it useful in certain applications:

  • Self-priming – can pump air or fluid reasonably well
  • Able to pump viscous, chunky or otherwise particulate-filled liquids that would gum up or damage an impeller pump
  • Gives great head – Ehhem… “head” refers to the maximum height the pump can push fluid. For a comparable energy input, a peristaltic pump can generally push fluid up much larger elevation gains than typical impeller types. Flowrate is another story of course.
  • Precise volume delivery – the amount of fluid (or air, etc.) dispensed per rotation of the motor is much more predictable than with an impeller pump. Using a servo or stepper motor, the volume pumped can be very accurately controlled (see the rough dosing sketch after this list). For this reason, peristaltic pumps are commonly used in medical equipment to meter out IV fluids, handle body fluids or dispense drugs.
  • Corrosion-free, isolated fluid path – Also of great relevance to medical applications, the fluid makes contact only with the tubing, making it very self-contained and minimizing the risk of contamination – e.g. all the nooks and crannies where foreign matter and bacteria could hide in other pumps. Very important when pumping bodily fluids out of someone and then back in (e.g. dialysis). Likewise, if your pump guts were metal, pumping corrosive fluids would be OK, as the two never touch.
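
For the curious, here is a back-of-envelope dosing estimate in Python. This is my own first-order approximation (not anything pulled from the .scad file): each rotor revolution displaces roughly one roller-track circumference worth of tube contents, assuming full occlusion and no slip or backflow.

import math

def ml_per_rev(tube_id_mm, track_dia_mm):
    """First-order peristaltic dosing estimate: tube bore cross-section
    times roller track circumference, per rotor revolution."""
    bore_mm2 = math.pi * (tube_id_mm / 2) ** 2  # tube bore cross-section
    swept_mm = math.pi * track_dia_mm           # roller track circumference
    return bore_mm2 * swept_mm / 1000.0         # 1000 mm^3 = 1 mL

# e.g. 3.2mm (1/8") ID tubing on a ~60mm roller track:
print(round(ml_per_rev(3.2, 60), 2))            # -> ~1.52 mL per revolution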

I really can’t stress the medical angle enough: in a hospital setting, peristaltic pumps are everywhere. Being able to print them off for practically free is huge.

Of course they are not without drawbacks; among them are fairly low flowrates, often “spurty” output, added friction losses and finite tubing life.

Assembly:
The pump prints out pretty much preassembled, but you still have to supply the tubing. Latex or Tygon surgical tubing is ideal, but most any pliable tubing (PVC fishpump tubing, etc.) can be used. To install, poke the tubing into one of the holes on the side of the mechanism (move the rollers if necessary so it is not blocked), pull through the desired amount of slack, then slowly advance the rollers to push the tubing up against the inner wall. When it reaches the other hole, push it through and pull out any remaining slack. Note, the design is symmetric, so which port is the “inlet” and which the “outlet” just depends on which direction you turn the rollers.

Design Considerations:

The diameter and wall thickness of the tubing dictate the pump geometry to some degree: the rollers and corresponding track must be wide enough to accommodate the tubing’s width when squished flat, and the clearance between the two must be enough to squeeze it flat without applying excessive force. This can be adjusted via the tubing_squish_ratio variable. The pump shown used a value of 0.5 with good results, but if you don’t need excessive pressure/head, lower values should work fine and reduce friction.
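
To make that concrete, here's a hypothetical sketch of the geometry involved. The variable names and the interpretation of the squish ratio (as the fraction by which the flattened walls get compressed) are my own assumptions; check the .scad source for the real formula.

import math

tube_od = 6.0              # tubing outer diameter, mm (example values)
tube_wall = 1.0            # tubing wall thickness, mm
tubing_squish_ratio = 0.5  # assumed: wall compression fraction when pinched

flat_width = math.pi * tube_od / 2  # roughly half the circumference: how wide
                                    # the tube gets when squeezed flat
flat_stack = 2 * tube_wall          # two walls pressed together
roller_gap = flat_stack * (1 - tubing_squish_ratio)  # assumed interpretation

print(flat_width, roller_gap)       # track must be wider than flat_width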

In general, a larger overall pump diameter will minimize wear on the tubing.

When using an FDM (plastic-extruding) printer, crazy overhangs in the geometry can’t be printed without support material (which defeats the purpose of a print-n-play design). The parameter allowed_overhang controls the level of overhang in the output based on what your printer can print, between 0 (no overhang whatsoever) and 1 (“infinite”, i.e. 90-degree overhang). Of course a ‘0’ setting is not very practical. 45-60 degree overhang should be OK for most FDM printers (I used a raw value of 0.75 for this one).
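
For what it's worth, a linear reading of that parameter (an assumption on my part; the .scad may map it differently) makes 0.75 work out to a 67.5-degree overhang:

def allowed_overhang_for(max_angle_deg):
    """Convert a printer's max printable overhang angle (degrees from
    vertical) to the 0..1 parameter, assuming a linear mapping."""
    return max_angle_deg / 90.0

print(allowed_overhang_for(45))    # -> 0.5
print(allowed_overhang_for(67.5))  # -> 0.75, the value used for this print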

Warts / Future Improvements:

In the current version, the final OD will actually be slightly larger than the value you enter (specifically, by the calculated squished thickness of the tubing). This is a result of laziness on my part; keep this in mind or fix it if you need a very exact OD on the outer ring.

When operating at high speed, I’ve noticed the tubing sometimes has a tendency to slowly “walk” in the direction opposite of travel, being slowly pulled through the pump. A compression / baffle feature at the inlet and outlet would help prevent this by friction-locking the tubing in place. Alternately, it could probably just be fixed in place with a bit of glue.

Tim Tears It Apart: Kidde KN-COB-B Carbon Monoxide Alarm

Of course it happens this way: stuff works for you, but breaks as soon as you have guests and drives them crazy. In this case, the missus and I were out of the house having a baby and her folks were in to hold down the fort. A carbon monoxide detector had failed in the most irritating possible way, emitting a very short low-battery chirp just often enough to drive everyone batty, but intermittently enough to be very time-consuming to track down. My poor father-in-law eventually managed to find the source of the racket, and changed the batteries.

The chirping continued.

He then trashed those batteries and put in another set of fresh batteries, from a new package.

The chirping continued.

And then took the damn thing off the ceiling and removed the batteries for good.

The chirping continued.

Oh yeah, it turns out that not one, but TWO detectors had failed simultaneously. And not for want of batteries, either.

It turns out the detector elements in most modern CO detectors have an indeterminate-but-finite lifespan, and are programmed to self-destruct when their time’s up. The actual sensor lifespan depends on the usual factors like operating temperature, humidity, CO exposure, etc., but most manufacturers take the easy way out and simply define a conservative time value where it may need replacing. In this case, it is 7 years. (I bought the house about 7 years ago, hmmm…)

Self-destruct timer disclaimer on back of detector

Although design-to-fail schemes are occasionally on legally shaky ground, this product-death-timer is actually required by UL for CO detector products whose detector has a limited lifespan (which is most of them).

While they still power-on and blink (it’s not clear if the timer expiration also explicitly disables CO detection, but the labeling on the back suggests so), these units are basically landfill fodder now. I think you know what that means…

Front of detector with battery door removed. The marking indicating the direction to pull to release it from the nails in the ceiling is NOT factory stock :-)

Top side of PCB

Top side of PCB with piezo horn removed

Bottom side of PCB

Main parts:
CPU: PIC16LCE625 – One-time programmable 8-bit microcontroller with 2k ROM / 128 byte RAM, 128 byte EEPROM.

MCP6042I/P – Dual Low power (0.6uA) opamp – guard ring attached to pin 7

LM385-1.2 (package marking 385B12) – 1.2V voltage reference with minimum operating current of 15uA.

Noisemaker: Ningbo East Electronics EFM-290ED piezoelectric horn claiming 90dB(A) sound output @ 9V/10mA @ 30cm.
Has GND, main and feedback connection.

Ningbo East Electronics ELB-38 or ELB-74 (?) – 3-terminal inductor (autotransformer) generating a stepped-up AC voltage to drive the horn.

A scattering of bog-standard transistors (2n3904/3906) rounds out the silicon ensemble.

The detector is a large metal cylinder marked with a Kidde part number and has a silica gel (desiccant) package shrink-wrapped to the front of the detection end. The detector is soldered to the board and not replaceable.

Business end of CO sensor showing silica gel desiccant covering aperture

Some points of interest:

Idiot resistance: One thing to notice even before taking the unit apart is the set of little red spring-loaded tabs underneath each battery socket. I couldn’t find anything on the purpose of these in a quick web search, but my guess is they are there to block you from putting the battery door back on with no batteries in – e.g. after pulling them to silence a chirping alarm at 3am and then forgetting to put new ones in.

Horn drive: Piezo horns are resonant systems with a very high Q; they must be driven at resonance to produce anywhere near their maximum sound output. However, due to manufacturing tolerances the exact resonant frequency may differ significantly between individual units. Another issue for this device is that piezo horns need comparatively high voltages to operate: this one has a rated voltage of 9V, but can probably go a fair amount higher (>100V drive signals for larger piezo sounders are not uncommon). But, the 3x AA batteries in this device can deliver a maximum of only ~4.5V. The self-resonant oscillator formed by Q2 and L1 efficiently solves both problems. The ‘feedback’ pin connects to a small patch of piezo material on the horn that acts as a sensor, translating deflection to voltage (more or less). Using this as the control signal for a simple oscillator allows it to automatically pull in to the piezo’s resonant frequency. The autotransformer coil, L1, is basically a step-up transformer with one end of its primary and secondary windings tied together and connecting to the 2nd pin. (You can think of it as a single winding with an asymmetric center-tap if you prefer.)

Detector analog frontend: The FR4 material the PCB is made of is a pretty good insulator, but its resistance is not infinite. With sensitive high impedance signals in the tens of Megaohm or more, even the tiny leakage currents across the PCB can induce a measurement error – especially when dust, finger oils from manufacture, other residue and humidity from the air combine on the surface. Notice the exposed silver trace that completely circumscribes the PCB area occupied by the sensor, with its green soldermask covering purposely omitted. This is almost certainly a guard ring intended to intercept such PCB leakage currents before they reach the connection points of the chemical CO sensor. The trace will be attached to a low-impedance circuit node whose voltage is as close to the sensor terminal voltage as possible, minimizing the voltage difference between them, and thus the current that can leak across. The trace is tied to pin 7 of the opamp.

Closeup of guard ring trace surrounding analog frontend

End-of-life-lockout: As mentioned previously, this device is programmed to commit suicide after 7 years. There is no battery backup inside the device, nor any discrete realtime clock or other means of telling the time. How does it know when 7 years have elapsed? The CPU is clocked by a 32.768KHz crystal oscillator, otherwise known just as a “watch crystal” due to their ubiquitous use in watches, clocks and other timekeeping applications. While running the CPU at such a low speed also has certain power advantages relevant to a battery-powered system, this crystal is providing an accurate timebase. Needless to say, it is counting 7 years of power-on time, not wall time (even if it sat on the shelf quite a while, your alarm will not be dead and chirping the moment you remove it from the package). The CPU sports 128 bytes of EEPROM, which are used to store the peak CO reading (over the product’s lifetime or since the last alarm; not sure which) and most likely periodically count down its remaining lifetime. Basic operation of a CO detector is to stick batteries in and forget about it (unexpected powercycles will be infrequent), so the timekeeping can be very coarse, e.g. decrementing a couple-byte EEPROM countdown every time a very long counter rolls over some preprogrammed value.
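
In pseudo-Python, the sort of scheme I'm imagining looks like the following. Purely illustrative, with made-up names and constants — the real firmware is code-protected, as I found out next.

XTAL_HZ = 32768                   # watch-crystal timebase
TICKS_PER_DAY = XTAL_HZ * 86400   # one powered-on day's worth of crystal cycles
LIFETIME_DAYS = 7 * 365           # ~7 years of power-on time

def main_loop(eeprom_days_left=LIFETIME_DAYS):  # persisted couple-byte counter
    ticks = 0
    while eeprom_days_left > 0:
        ticks += 1                    # bumped by the timer interrupt
        if ticks >= TICKS_PER_DAY:    # the very long counter rolled over
            ticks = 0
            eeprom_days_left -= 1     # one cheap EEPROM write per day
    chirp_and_refuse_to_detect_co()   # end-of-life lockout

def chirp_and_refuse_to_detect_co():
    print("*chirp*")                  # hello, 3am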

I pulled the CPU, hooked it up to an ancient PIC programmer and tried dumping the firmware to see exactly how this worked, just in case they had left it unprotected, but no such luck. The code protect fuses are all set and readout attempts return all 0s. The EEPROM in this particular chip is actually implemented as a separate I2C “part”, either on the same die or a separate die copackaged with the CPU die, with the two I2C control pins and a power control line memory-mapped into a register. So there is no access to the EEPROM contents through a PIC programmer either.

Enclosure: At first glance, it’s about what you expect from a low cost consumer product that is designed to be thrown away periodically. There is not a screw to be found anywhere – everything, from the PCB to the enclosure halves themselves, clicks together via little plastic tabs. But wait a minute… hold this up to the light just right, and you can see hand-finishing marks where extra plastic (e.g. overmold) from the injection molding process has been filed or sanded off. On the *inside* of the enclosure, where nobody will see it! And yes, these marks appear to be from work applied to the finished enclosure itself, not the master mold it came from – the sanded portions go slightly in, not out.

Manual finishing marks on inside of plastic enclosure

Hidden Features: There are a few hidden features suggesting this same PCB, CPU and firmware are used for several models of alarm, including a fancier one. The most obvious is a non-stuffed footprint for another pushbutton switch, marked ‘PEAK’. When pressed, it causes the green test LED to flash a number of times in a row (presumably corresponding to the peak CO level ever measured by this detector – my 2 dead units show 9 and 10 blinks, respectively). Near the center of the board is a non-stuffed 6-pin header, with the outer two being power & ground, and the middle four signals going straight to CPU pins. Scoping these reveals unidirectional SPI signalling on 3 of the pins (CS\, CLK, DATA) that would probably drive an LCD readout on a more expensive version of this detector. Capturing the data in various modes doesn’t produce any obvious pattern (e.g. ASCII, numeric, BCD or raw 7-segment data). Finally, there are two mystery pads on the back of the PCB. Shorting them causes both the alarm and test LEDs to light, and the green LED to produce 5 extremely rapid blinks every few seconds. Doing this does not reset the timer-of-death, clear the PEAK reading or have any other long-term effects that I can ascertain. Both the PEAK switch and mystery jumper noticeably change the data pattern sent to the nonexistent LCD.

BUT… I did find a sequence of inputs that put the detector into some kind of trick mode permanently (persisting across powercycles). I believe the exact sequence of events that triggered it was to have S2 shorted at powerup, then short PEAK once the blinking sequence starts. It’s not clear if S2 must remain shorted during this time or only at powerup. The unit this sequence occurred on is now permanently in a mode where it emits long, repeating rapid blink sequences on the green LED (red lit continuously) and draws some 40mA continuously. The repeating sequence is 1 (pause) 63 (pause) 68 (pause) 24 (pause) 10 (last blink is longer) (pause) 21 (pause) 82 (pause) 82 (pause) 14 (long pause).

It’s official – I have spawn!

So…
That day I never thought would come, and a younger me once *hoped* would never come…has come!

Our first spawn, Max Charles G, was born 7/26/2014 at around 6:40am.

Glad that’s over!

I kid, but seriously, the hospital part is the only real sucky part (for us, at least – Max is pretty chill). The part where you’ve both already been awake for a day and a half, the Mrs. is just all sorts of tore up, and the Mr. is camped out on this “thing” pretending to be a chair pretending to be a fold-out bed and failing outright at both. In other words, a medieval sleep-deprivation torture experiment of some kind. And then comes this tiny human that starts a one-sided screaming match every couple hours while you newbs haven’t more than a few books and a Google’s worth of a clue what to do about it. And about the second or third day of this you lay there with the tiny human in your arm, rubbing your eyes, thinking: Oh shit. This is what our lives will be now.

(Sometime on day 2-or-something of hospital)
T: I don’t even know what day this is.
K: It’s Monday.
T: It FEELS like a Monday.
K: Get ready, every day’s going to feel like a Monday.

But… then you go home, sleep in your own bed, start getting the hang of all this feeding and changing business, and find out: those first several days are a fluke, and hey, this ain’t so bad after all! In fact, much like marriage for a dude, either I won the wife/baby lottery or the hype is BS: this is turning out to be much better than I expected.

Ah yes, and the tiny human is growing on me. I wasn’t expecting that either.

Anyway, here he is. If you don’t give a flying frog about pictures of Other Peoples’ Kids, you should probably look away now.

Max’s first day…with that tasty, tasty hand

That’s my boy.

Either a yawn or an audition

Max and the proud parents

Notes To Myself: Fix for Windows 7 can’t delete file/folder on network drive (“in use”)

Problem: When trying to delete or rename a folder, typically on a network drive, Windows 7 reports the action can’t be performed because a file is in use, even when you definitely don’t have any files open in that folder (or even have a subfolder displayed in another Explorer window), and haven’t for quite some time. Typical error message popup:

“The action can’t be completed because the file is open in Windows Explorer. Close the file and try again.”

Apparently it is a longstanding bug in Windows Explorer (that M$ has known about but will not fix!) where Windows creates hidden files (thumbs.db) to cache image thumbnails, but sometimes forgets to close them.

Workaround: Disable thumbnail caching:

  • Run ‘gpedit.msc’ (Click Start -> Run, type gpedit.msc in the search box and hit enter)
  • Drill down to User Configuration -> Administrative Templates -> Windows Components -> Windows Explorer
  • Highlight “Turn off the caching of thumbnails in hidden thumbs.db files”. Right-click this entry and choose ‘Edit’, and then enable this setting.

You probably have to reboot for this to take effect (mainly to clear any existing thumbs.db files that are already locked open). Don’t fiddle with any other gpedit settings.

It may also help to disable thumbnails on network drives entirely – folders with images will display much faster! To do this, enable the setting named “Turn off the display of thumbnails and only display icons on network drives” in the same location. Note there are two similarly named options (one omits the “…on network drives” part), so be sure to select the one you want.

This fix comes from a rather lengthy exchange about the bug on Microsoft’s forums.

How to Fragment Your File System

Here is a tiny little python script to generate file system fragmentation.

“But Tiiiiiiim! Tools are supposed to defragment your filesystem! Why would you ever want a script to fragment one?”

In one of the gadgets I’m working on, I had a need to evaluate disk (well, memory card) performance in real-world and worst-case scenarios. If you are sampling high-speed data with a puny microcontroller, you cannot afford your disk going into lalaland while your puny buffer RAM runneth over. While file fragmentation is – in theory – not a big deal for Flash media as it is for spinning rust drives (no mechanical heads to reposition), your filesystem driver still needs to grovel through the filesystem to find the next free block to write. In a typical implementation, writing to a FAT filesystem with a giant file in the middle of it could incur a significant write hiccup as the FS driver encounters the file and has to seek through its entire FAT chain, potentially fetching and parsing numerous sectors of allocation data before finally finding a free cluster for the next write. This script allows testing of such scenarios.

What it does:

Give it the name of a disk to fragment, and it will begin creating files full of junk data on the disk until it receives a write error (normally indicating the disk is full). It will then delete a random subset of these files, leaving free-space holes scattered throughout the disk. For most filesystems and OSes, the freespace will not be automatically consolidated, and will remain fragmented until the remaining files are deleted (or the disk formatted, etc.) or a defragmentation utility is run. You can then evaluate the performance of your (software, device, etc.) on this disk on a realistic simulation of a well-used drive.

Configuration options:

path – set this to the directory to generate the fragments in. On FAT filesystems, it is necessary to use a subdirectory and not the root directory, due to a limitation on the number of files that can be stored in the root directory.

filldensity – this value, ranging from 0.0 to 1.0, sets the percentage of junk files to remain at the end of operation. A higher value means more files left behind, i.e. less freespace gaps.

minfragsize, maxfragsize – this sets the size range of junk files to create. The size of each file created will be selected at random from this range.
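
If you'd rather read the algorithm than download it, here's a minimal sketch of the same idea (this is not the original script — grab that from the link above). One deliberate difference: it writes the junk in chunks instead of holding it all in RAM, dodging one of the caveats below.

import os
import random

path = "E:/frag"           # directory to fill (not the FAT root -- see above)
filldensity = 0.5          # fraction of junk files left behind at the end
minfragsize = 64 * 1024    # junk file size range, bytes
maxfragsize = 1024 * 1024

os.makedirs(path, exist_ok=True)
junkfiles = []
try:
    while True:                      # fill the disk until a write error
        name = os.path.join(path, "junk%06d.bin" % len(junkfiles))
        remaining = random.randint(minfragsize, maxfragsize)
        with open(name, "wb") as f:
            while remaining > 0:     # chunked junk, easy on RAM
                chunk = min(65536, remaining)
                f.write(os.urandom(chunk))
                remaining -= chunk
        junkfiles.append(name)
except OSError:                      # normally "disk full"
    pass

random.shuffle(junkfiles)            # delete a random subset of the files,
for name in junkfiles[int(len(junkfiles) * filldensity):]:
    os.remove(name)                  # leaving freespace holes scattered around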

CAVEATS:

I only tested this on a Windows PC, for reasonable file sizes (MB, not GB) and card sizes (a few GB). If your device’s size is measured in rooms, gigaquads or Libraries of Congress, it may not work, or your device may be obsolete by the time it finishes. The “junk” to make the junk files is stored in RAM out of laziness; you probably want to fix this if making multi-GB junkfiles.

This script was written to test a FAT-based device. Not all filesystems respond the same way to fragmentation, so YMMV on other filesystem types.

This emulates fragmentation only. Many other factors could affect your embedded Flash media performance, such as Flash cell wear (aka hot count, or total number of write/erase cycles performed), write amplification, operating temperature and/or voltage (depending on the memory technology and controller), phase of the moon, etc. This script does not emulate any of these other factors. On the bright side, it should be a more faithful test for other memory technologies, e.g. FRAM/MRAM, that are fast and relatively insensitive to cell wear, and will better reflect software delays due to filesystem parsing.

Notes To Myself: ‘Paste Plain Text’ keyboard shortcut/macro for Excel

Very common need: Copy some data into an Excel cell from an arbitrary other source (including another Excel sheet, or webpage, etc.). In the process, strip any external formatting, HTML tables, etc. with extreme vengeance and only paste the plain text.

Traditional way: Mouse fandango (Excel 2013: Home -> Paste -> Paste Special…->Text->OK) for every time you want to do this.

Better: Create “PastePlainEffingText” macro activated by a nice fast keyboard shortcut equivalent to Ctrl-V. Store this macro persistently in the Excel “Personal Workbook”, not the currently open document, so it is available in any open document.

Steps:
1) View -> Unhide -> Personal etc. (The ‘Personal’ workbook is hidden by default. Attempting to save a macro to it generates an extremely helpful message saying to use the ‘unhide’ option, without giving the option to just do this, nor telling you where this setting is.)
2) View -> Record Macro
3) Mousedance as above (Paste Special etc.)
4) View -> Stop Recording
5) Assign keyboard shortcut. I just assigned it to “Ctrl-B” since it’s right next to Ctrl-V. This means I can no longer Ctrl-B to make text bold, but for the once-a-year I’d actually want to do this, it’s a plenty acceptable tradeoff.
6) Optional: Re-hide the “Personal” workbook.

Caveats:

When assigning the keyboard shortcut, the “Ctrl” portion is mandatory and cannot be changed. Excel will automatically insert a ‘Shift’ in addition to this if you happened to type an uppercase letter in the sole letter box provided (the way keyboard shortcuts are usually represented in text). This is somewhat unintuitive and does not mean Excel is blocking you from overwriting an existing shortcut – just change the letter to lowercase and it’ll go away. There is no warning if you do overwrite an existing shortcut, so you’ll have to check on this yourself.

At the time of this writing, Excel does not allow writing an actual macro (code) in the Personal Workbook directly. You have to ‘Record Macro’ and physically do whatever action/mousedance to initially generate the equivalent code. But once this is done, you can edit the actual code. To write/paste an arbitrary code macro, you can probably just “Record Macro” some trivial dummy operation (paste some text, etc.) then just replace the autogenerated code with your own.

The equivalent code for this macro is:

Sub PastePlainEffingText()
'
' PastePlainEffingText Macro
' Strip formatting when pasting buffer contents
'
ActiveSheet.PasteSpecial Format:="Text", Link:=False, DisplayAsIcon:= _
False
End Sub

Notes To Myself: J-Link SWOViewer with Silabs/EnergyMicro EFM32 CPUs

The EFM32 SWO port operates from a 14MHz timebase regardless of the current CPU frequency. Autodetection of actual frequency is feasible, but irrelevant. Manually specify 14MHz for “CPUFreq” in SWO Viewer. The corresponding calculated SWOFreq should be 900KHz. Tested and working as of SWOViewer version 4.84f.

Notes To Myself: Fixing TortoiseCVS breakage (permissions, crashes, missing overlays) on Windows 7 64-bit

Problem 1) TortoisePlink.exe crashes when attempting CVS operations.

Win7 throws the error message “A problem caused this program to stop working correctly” (gee, thanks, that’s a most helpful crash dump) and checks The Interclouds for solutions (finding none). Groveling down to the actual crash report (Control Panel -> Administrative Tools -> Event Viewer -> Windows Logs -> Application, scrolllll down to the most recent relevant “Error” entry, and bathe your mouse-clicking finger in icewater) reveals:

Faulting application name: TortoisePlink.exe, version: 1.12.5.6, time stamp: 0x4d3d6cef
Faulting module name: MSVCR90.dll, version: 9.0.30729.4940, time stamp: 0x4ca2ef57
Exception code: 0xc0000417
Fault offset: 0x00051380
Faulting process id: 0xfc4
Faulting application start time: 0x01cf7b616bc5e4c9
Faulting application path: C:\Program Files\TortoiseCVS\TortoisePlink.exe
Faulting module path: C:\Windows\WinSxS\x86_microsoft.vc90.crt_1fc8b3b9a1e18e3b_9.0.30729.4940_none_50916076bcb9a742\MSVCR90.dll
Report Id: ab10ecd7-e754-11e3-aa78-b8ca3abe82c0

Solution: At the time of this writing, the version of TortoisePlink that comes with TortoiseCVS is several years old; even for the experimental “new” (2012) RC1 build, the datestamp claims 2011 and the filesize is 200-some KB. A related project, TortoiseSVN, has a much newer version (400-some KB; datestamp claims 4/2014). Unfortunately I found no trustworthy places to download a standalone copy. So, download and install TortoiseSVN, copy-pasta its TortoisePlink.exe over the copy in TortoiseCVS, and you can then uninstall TortoiseSVN if you like.

Alternate solution: TortoiseCVS now has internal SSH support. If you don’t need to pass any external arguments to the SSH stuff (e.g. the “avoid re-entering password” trick (-pw mypassword)) or use the SSH-keypair-in-place-of-password thing, you can go into all your ‘Root’ files (inside the hidden .CVS directories added all over the place) and change every occurrence of :ext: to :ssh: , which will use the internal support instead of fobbing it off to the crashing TortoisePlink. Note that you will have to do this for EVERY. SINGLE. FILE.
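
If you have a lot of checkouts to fix, something like this Python sketch (my own quick hack, not battle-tested — try it on a scratch copy first) will do the rounds for you:

import os

def ext_to_ssh(workdir):
    """Rewrite every CVS/Root file under workdir from :ext: to :ssh:."""
    for dirpath, dirnames, filenames in os.walk(workdir):
        if os.path.basename(dirpath).upper() == "CVS" and "Root" in filenames:
            rootfile = os.path.join(dirpath, "Root")
            with open(rootfile, "r") as f:
                text = f.read()
            if ":ext:" in text:
                with open(rootfile, "w") as f:
                    f.write(text.replace(":ext:", ":ssh:"))
                print("fixed", rootfile)

ext_to_ssh(r"P:\my_checkout")  # hypothetical checkout path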

Problem 2) Permission Denied error when trying to “CVS Commit” and possibly other operations.

Some other operations (“CVS Diff”) might still work. Example error message:

In P:\WVR_RIF\04_Design\Electronic\Software\wvr_workspace\wvr_navy_v1: “C:\Program Files
(x86)\CVSNT\cvs.exe” commit -M .cproject
CVSROOT=:ext:username@example.com:/home/username/cvsroot

cvs [commit aborted]: cannot open file .cproject for comparing: Permission denied
cvs commit: Committed on the Free edition of March Hare Software CVSNT Client
Upgrade to CVS Suite for more features and support:

http://march-hare.com/cvsnt/

Error, CVS operation failed

My own repositories happen to be on a network drive (my employer’s setup), so I don’t know if this error is unique to this situation.

Solution: This error seems to have been introduced in a more recent version. The solution is similar to that above, except you need to downgrade to a version without the bug. TortoiseCVS actually ships with two separate collections of programs, TortoiseCVS proper (32- and 64-bit) and a separate “CVSNT” (32-bit only, at least the version that comes with TortoiseCVS), which does some of the underlying dirty work. The bug is in the “CVSNT” portion of this matryoshka. I don’t know the exact version where the bug was introduced, but copying the version from my old PC (cvs.exe dated 7/5/2006; identifying as “cvsnt 2.5.03 (Scorpio) Build 2382”, and the rest of the folder) did the trick.

Sidenote: Notice also that recent versions accompany this specific error message with a smarmy note about updating to a paid version for “support”. Indeed, TortoiseCVS appears to be somewhat abandoned in favor of the paid/professional “CVSNT” by the same author. Makes one wonder…

Problem 3) File/folder icon overlays do not appear, or only appear sometimes but not always (e.g. every other reboot).

Solution: Windows Explorer provides a limited number of ‘slots’ (16 to be exact?) for programs to define icon overlays. In Win7 x64 (at least), about a half-dozen of these are eaten up by “SkyDrive”, Microsoft’s foray into cloud file hosting. (What, you did not voluntarily install SkyDrive, and possibly never even heard of it? Welcome to the club.) Anyway, to fix:

Open registry editor and navigate to HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\ShellIconOverlayIdentifiers . Now start nuking entries that seem least likely to be useful to you (SkyDrive, Offline Files, …) until the total is down to 16 or less.
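
To see what's currently squatting in those slots without all the clicking, here's a read-only Python sketch (the nuking itself I'd still do by hand in regedit as described above; note that a 32-bit Python on 64-bit Windows may show you the WOW64 view of this key):

import winreg

KEY = (r"SOFTWARE\Microsoft\Windows\CurrentVersion"
       r"\Explorer\ShellIconOverlayIdentifiers")

with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY) as key:
    i = 0
    while True:
        try:
            # Entries are taken in sort order; only the first ~16 get a slot
            print(winreg.EnumKey(key, i))
            i += 1
        except OSError:              # no more subkeys
            break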

Note, if you’ve done any version mix-n-match and/or reinstalled TortoiseCVS (I’m not sure exactly what triggers it), you may have a bunch of obsolete entries in there from Tortoise itself. For example, my machine has a TortoiseNormal and a 1TortoiseNormal, etc. It appears that the current version of TortoiseCVS (1.12.5 stable, 1.12.6 beta) uses the unnumbered ones – start by trying to nuke those. If this doesn’t work, just nuke ALL the Tortoise entries from orbit and then uninstall-reinstall the program “TortoiseOverlays” (may be available standalone from some other source, e.g. TortoiseSVN, or by fully uninstalling TortoiseOverlays and reinstalling TortoiseCVS, which includes it).

Problem 4) ” end of file from server (consult above messages if any)”

Solution (maybe): The hits just keep coming, don’t they? This error could mean just about anything (server side, client side, bad-behaving firewall or network appliance, sunspot activity, voodoo curse…), but one likely culprit is a crash in an external program (namely, TortoisePlink.exe) used to perform the connection. One easy thing to check is to run TortoisePlink.exe on its own (e.g. doubleclick) and see if it crashes. In my case, this threw the error:

“The program can’t start because MSVCR110.dll is missing from your computer. Try reinstalling the program to fix this problem.”

In theory, installing TortoiseCVS also installs the necessary runtimes, but somehow during the circle-jerk of uninstall-reinstall cycles to diagnose the other problems above (or some other app I installed the next day, or who knows really), this file got wiped out. Installing it from Microsoft cleared that up.

Alternate solutions: I’ve had this problem with previous TortoiseCVS installs, but the “end of file from server” message came not immediately, but only after replying to the password prompt. In this case, it was “fixed” by supplying the “-pw mypassword” argument to the external SSH tool, bypassing the password dialog (and presumably crash). Your IT folks may frown on you doing this however, since it leaves your password in cleartext on the machine.

Another thing you can try (assuming it’s a client side problem) is as above, change all the “:ext:” to “:ssh:” in all your CVSROOTs. Well, try it on ONE first and see if it fixes the problem before spending the rest of your day updating the rest of them.

Palram Mythos Greenhouse Hacks / Improvements

Palram Mythos 6×8 Greenhouse. Pretty nice overall, but could use a bit of shoring-up for longevity.

My brother-in-law and I put this together over a long afternoon. Much of that time was spent building and leveling a 4×4 frame – the actual construction went pretty smoothly.

On the other hand…

It stayed intact for about 24 hours. The very next day, a typical springtime storm rolled through with a bit of wind (the weather report claimed 30mph gusts). When I got home from work, the door side of the greenhouse was crumpled in, some of the horizontal supports bent backwards on themselves and a few twinwall panels were blowing around the yard.

The window panels are standard-ish, 4mm polycarbonate twinwall (mostly 2ft x 4ft? sections) and can be sourced easily online, but the metal structural parts are custom and replacement parts can’t be bought separately – so wrecking any is a big deal!

Anti-Flex / Anti-Fall-Apart-In-A-Stiff-Wind Fixes
This revealed the main apparent design flaw: Many of the structural components are joined together by nothing more than the friction of a bolt head – not even passed through complete holes in both parts (which would somewhat fix the parts together even if the bolt were to loosen), but often via a U-shaped notch in one or both parts, or with the bolt sliding freely in a t-slot. Major places this appears to be a problem are:

  • where the vertical rails for the walls slot into the base
  • where the upper and lower halves join together at the ends (mainly the upper bolts in the horizontal metal supports about halfway up either end)
  • where the verticals around the door bolt into the horizontal near the ceiling

Add to this the fact that many of these end bolts must be tightened only after installing the twinwall panels (which renders the heads nearly inaccessible), plus the flimsy cross-bracing (more on that later), and you end up with a major structural problem. Each time a gust of wind hits, the top of the greenhouse can sway back and forth a bit with respect to the base (the diagonal support straps simply flex). Each time this happens is an opportunity for these friction-held bolts to very slightly work themselves apart. Enough cycles of this (a day’s worth, depending on the day) are enough to separate the vertical wall rails from the base, or the bolted notches at the above-indicated spots from one another.

If you live in a breezy location, one of the best favors you can do for yourself is scrap these flimsy diagonal straps on either end in favor of some sturdy aluminum angle or U-channel stock from your nearest hardware store. One catch: I’ve only seen such stock for sale in the US in 4-ft and 8-ft lengths, while the pieces for the greenhouse are 51″. So to do it proper you’d have to get 8ft pieces and have nearly half of each piece as scrap. Not a huge problem if you have other uses for this material, but otherwise it’s annoying. Since the lower bolt each one mates to is in a slot in the greenhouse’s vertical rails and can slide freely, you can maybe cheat and use 4-ft lengths by not having them go all the way to the bottom. Probably still better than the straps it came with.

Original diagonal brace (left) and one cut from aluminum U extrusion. Stiffening these prevents wind gusts from rocking the greenhouse back and forth and working the bolts loose.

In addition, I found the following small tweaks very helpful in keeping the thing together:

  • Ditch that silly tube-thing that comes with the greenhouse and is supposed to act as a nut driver. Use a proper nut driver. You just can’t torque them down tight enough with that tube-thing.
  • Wherever those U-shaped notches occur on the endwall pieces, replace the standard nut with a locknut and (on the head side) lockwasher. The square-headed bolts that come with the greenhouse appear to be 1/4″, but with a non-standard thread pitch (non-standard = not what the Home Depot sells). So you may as well replace the bolt too (these end ones don’t require the square heads for anything) – preferably with the widest head you can find. Locknuts tend to have a wide flange around them…and, well, be locking. This should help them get a better grip on those U-shaped notchy bits.
  • Find, buy or fashion some thin tool you can slip between the horizontal supports and the twinwall panels to hold the bolt heads in place while you torque them down. I got extremely lucky and found a thin stamped-metal “crescent wrench” (from some Ikea furniture, I think) lying around that was a perfect fit, that I could slip in and juuust grab the edge of those square-headed bolts. You can probably fashion something using a hacksaw and any thin piece of metal (like one of those useless diagonal straps).

One final comment on this. After it blew apart the first time and things shifted a bit, I discovered the vertical members on either side of the door were now “too short” (or the ceiling assembly “too tall”) for the two to bolt together reliably anymore. On further inspection, the stamped metal base on this side seems to have “sagged”, so when the vertical wall supports were bolted to it, they no longer adequately reached the part it’s supposed to bolt to. Of course, anyone stepping or even brushing their feet against the base on the way in/out will just make this worse. To remedy, I cut some braces out of some aluminum stock I had handy and wedged them under the lip to prop it up at the edges of the doorframe.

Where important bolts pass through U-shaped notches instead of proper holes, replace the standard bolt and washer to add a lockwasher and flanged lock nut for added grip. Somehow hold the bolt head so you can tighten the everloving shit out of these.

More questionable U-notch attachments, above the door. In addition, you may find (now or in the future) that these verticals near the door have become too short to fully mate with this horizontal support near the ceiling.

To avoid the eventual “too short” problem, wedge something underneath them to prop up the lip of the base and prevent it from sagging over time.

Spare Parts
After completing assembly, I found I had at least a half-dozen square-headed bolts left over. The instructions make oblique reference to there being spares of some parts, but if I had known I’d have this many, I’d have dropped the extras down the vertical wall supports to provide extra attachment points. This could be handy to double-up the cross-brace straps along the sidewalls (if you followed the very strong recommendation above, you should have 4 spare ones now), or provide a way to hang small tools, etc.

More Windproofing
The doorhandle is pretty loose and can be easily lifted by the wind, letting the door fly open and thrash itself and everything it touches into oblivion. If you bought the accessory plant-hanging hooks (little plastic doohickies that twist-lock into the t-slots along the walls and ceiling), you can insert one on the inside of the door behind the handle, providing a convenient place to hook a spring or rubber band to maintain some downward tension on the handle.

Online reviews for a cheaper greenhouse from another vendor (sounds like ‘Hazard Fraught‘) recommend caulking in the twinwall panels to prevent them being popped out by the wind. I haven’t done this yet, but plan to.

A hanging plant hook (optional accessory) is a convenient place to hook a spring or rubber band to prevent winds from lifting the door latch.

Tim Tears It Apart: Honeywell R8184 Oil-fired boiler controller

Honeywell R8184G oil burner control

Its official designation is “R8184 Intermittent Ignition Oil Primary”.

“But Tiiiim! That sounds booooorrrring. Why this thing, and not one of those fancy cloud-enabled thermostats containing more RAM than the desktop computer you had in college and not less than five processors capable of running Angry Birds at a playable framerate?”

Yes, excitement-wise this one sounds right up there with having your toenails waxed, but there are a few interesting bits regardless. Also, I have a broken one sitting in my basement right now, and what do we do with broken gadgets?…

Underside of oil burner controller

Here is the underside showing the PCB. This should give some sense as to the age of this design. These curvaceous traces are something you just don’t see in the era of computer-aided PCB design. This board may very well have been laid out literally by hand, the master trace pattern drawn in magic marker. Speaking of which, I drew an arrow in marker pointing to the likely culprit for this unit’s failure: a cold solder joint on one of the relay terminals, specifically, the one that energizes the orange wire leading to the burner and motor. You can also see some strategic cuts in the board itself, providing a physical air gap to isolate the low-voltage stuff from the line-powered sections nearby.

Oil burner topside

Here is the topside. There’s really nothing much to it! You can probably take a stab at how this all works just by inspection, but in case not, Honeywell provides the actual schematic on their website.

The fat transformer at top-left steps the 120VAC from the line down to around 24VAC to drive its own circuitry and the thermostat (red and white wire normally connected to the “T” terminals). I peeled back the tape on the primary winding a bit so you can see the difference in wire diameter, allowing for many more turns on the primary side. Without documentation or proper test equipment, you could use this to visually determine its function as a step-down transformer and maybe even make a loose guesstimate of the turns ratio (120V in for ~24V out works out to roughly 5:1).

Oil burner 24VAC relay

Kitty-corner from this transformer is a big honkin’ relay, armed with a similarly fat bundle of wire. This coil is powered right from the AC off the transformer; notice the large metal weight clamped to the top end of the part that actually moves. I suspect this is to provide added inertia to keep the contactor in-place and prevent buzzing during the low periods in the AC cycle where the magnetic force ordinarily holding it drops out. Energizing this relay closes two separate pairs of contacts; one (with the cold solder joint) powers up the boiler via the orange wire, and the other completes the circuit (transformer center tap, or ~12VAC) for the safety lockout logic, which I’ll get to in a moment.

In an oil burning boiler, turning on the boiler engages a large motor that both blows air into the combustion chamber and forces oil through an atomizing nozzle. The oil is ignited by a spark plug of sorts, formed by a high voltage transformer and a conductor near the nozzle. Home heating oil is otherwise known as diesel fuel. Needless to say, you want this atomized fuel to burn away in a quick and controlled way, not let large quantities of it accumulate and then go up suddenly.

To prevent your basement turning into a Super Mario Bros. boss level if the fuel doesn’t ignite in a timely fashion, there is a “flame sensor” (photocell) and lockout timer built in. The label on the front of the unit specifies a lockout time of 45 seconds. As you probably noticed, there are no microcontrollers, quartz crystals, counters or any other obvious timing devices on this board, so how does this work?

The answer may wow you, either with its ghetto-ness or its ingenious simplicity. Much like the electric stove guts described in an earlier post, the timer is thermal. The top-right component contains a heating element attached to a bimetallic strip, which in turn connects to some contacts and a mechanical latch. This is attached to the bit of circuitry at the bottom-left, which connects to the photocell (flame sensor) normally connected at the ‘F’ terminals. For the grisly details, look at the schematic linked above. Ordinarily, when the thermostat is on, 24VAC flows through R1 and R2 to the “bilateral switch” (there’s a symbol and part you don’t see every day), which trips the TRIAC and ultimately begins warming the heating element, eventually curling the metallic strip inside the lockout mechanism enough to trip it and cut power to the boiler. Note, the schematic shows the gate of the “bilateral switch” not connected to anything, but in reality it is shorted back to the first terminal (at R2), turning this device into basically a voltage threshold detector. Light falling on the sensor lowers its resistance from near-infinite down to the k-Ohm range or less, forming a resistor divider with R1. This lowers the voltage at the bilateral switch below its turn-on threshold, cutting power to the heating element before it trips the lockout.

Protectorelay(R) thermal safety / lockout switch with latching feature

A look through the clear plastic case of this device shows the heating element is an ordinary 1W flameproof resistor. A metal slug, no doubt carefully sized to provide the right thermal inertia for the desired lockout time, is clamped around it. On the side of the device is an access hole for a setscrew, which applies pressure to a spring-loaded plate behind the bimetallic element. This most likely sets the initial position/tension of the strip against the pushbutton latch, and so allows fine-tuning the trip time.

Here is a video of the mechanism in action.

If the previous TTIA installment was any indication, the burning question is how much the thing cost to manufacture. As before, the off-the-shelf parts are pretty cheap but the presence of complex custom parts makes it hard to pin down a number. A comparable relay can be had for about $5-8 on Digikey. The discretes would run probably another buck total, and give another $3-5 for wiring, the solder-on screw terminals and the blank PCB itself. The transformer is a bit harder – it’s a custom Honeywell part and can’t be sourced off the shelf – but comparably sized transformers might run in the $20 range in onesies. Now for that lockout switch assembly, that’s a real piece of work. Not heavy on any expensive metals, but plenty of NRE sunk into this part, and plenty of mechanical parts to assemble (possibly some or all by hand). I’ll pull a $15 out of my ass for that component. Tally it all up and you land somewhere in the $45-50 neighborhood.

Notes To Myself: Migrating legacy Microchip C18 projects to MPLAB X + XC8 toolchain, Windows 7

First note to myself: NEVER USE MICROCHIP AGAIN. If I didn’t just need to make “a couple tiny updates” to an already selling, on-the-shelf project I’d just scrap the PIC18 for an EFM32TGxxx part, gcc (shaft of light from the sky, harps playing melodically) and be done with this entire shit-show. Insert whining about the month+ long circlejerk with Microchip Support about the bug in the PICKit3 programmer that is now corrupting the config bits on said product here. Of course, if the code from 5 years ago, even with no changes, still compiled and fit onto the chip it was written for and used to fit on 5 years’ worth of versions ago, and current MCC18 did not insist on dragging in the gargantuan (>4KByte) ‘.code_vfprintf.o’ even if it is not used or referenced anywhere in the code, I wouldn’t even have to bother trying it with the new compiler in the first place….

Soooo…. Install MPLAB X (make tea, a sandwich, possibly a baby or two while waiting for the crunching sounds from your harddrive to finish) and XC8. NB: Licensing is done via a Windows batchfile, completely outside any of the devtools OR their installers. If you have the license file, ignore absolutely anything to do with licensing and install as if you want the “free” version.

License: Run said batchfile. Voodoo happens and it should “Just Work”. (It did. Quite surprised.)

Make XC8 “C18 Compatibility Mode” findable:

The fake “C18” that currently serves as the compatibility layer must first be manually set up in MPLAB X (apparently no autodetect). But first-first, you need to work around a stupid MPLAB X bug that has been unfixed going on two years now. The bug is that you are arbitrarily forbidden from having two toolchains set up whose executables are in the same directory. Unfortunately this is EXACTLY WHAT MICROCHIP’S OWN XC8 COMPILER DOES (of course that directory is already used for XC8 itself, which IS autodetected somehow). So you have to create a fake instance of this directory (symlink or hardlink) with a different name to fool MPLAB X.

NB: The below workaround only works if your filesystem is NTFS. If not, you could also try just copypasta-ing the entire contents somewhere else, and hope this doesn’t break a path dependency somewhere or whatever. I haven’t tried this, but worth a shot.

To do this, you have to first-first-first somehow get a Windows console with Administrator privileges. The way I found that works is to create a batchfile with the contents “cmd <carriage return> pause”, then right-click and “Run As Administrator”. (Using the ‘runas’ command, Windows 7’s answer to sudo, apparently does not work for this, as it forces you to know the actual administrator password and will not accept your user password even if you have administrator privileges.)

At the console, cd into the XC8 directory directly above the binaries directory (e.g. “C:\Progra~1(X86)\Microchip\xc8\v1.31\”) and type:

mklink /D _c18bin_ bin

This should result in a message indicating a symbolic link named “_c18bin_” was created.

Now you can actually set up the devtool. Ignore anything on the splash page and go to Tools -> Options -> Embedded -> Build Tools tab. Press “Add…” and enter the fake directory you just created. Specify the location of each build tool (if it exists). NB: For some reason the individual devtool settings ‘disappear’ after specifying them (close and re-open this dialog and “C Compiler” is blank again!). Does this mean it doesn’t need to be specified, or this is another MPLAB X bug and your dev tool will never, ever work? Will soon find out…

Now, try to build project (it will fail).

In “Output -> Configuration Loading Error” tab: “Could not generate makefiles for configuration default.” “XMLBaseMakefileWriter::createRuntimeObjectForMakeRule: null”

In “projectname (Build, Load)” tab: make[1]: *** No rule to make target ‘.build-conf’. Stop.

FIXME: Fix this error…

The Internet, things, and you

Ha ha, apparently proselytizing about the “Internet of Things” is trendy again. Don’t hold your breath kids; until IPv6 is a thing that’s really a thing, enjoy your “small home network of things”, where your game console, thermostat and toaster have 192.168.x.x IP addresses dangling from your cablemodem, and require a 3rd-party cloud service to mediate contact with your neighbor’s toaster*.

Seriously though… if anybody but major datamining companies are going to get remotely enthusiastic about this IoT business, two things need to happen: IPv6 and dirt-cheap low-bandwidth wireless uplinks (think cellphone plan with pay-by-the-byte or 512kb/month dataplans and low/no monthly maintenance fees) so that all the applications (smart stoplights, weather/pollution sensors, whatever) that would benefit from not dangling off someone’s cell plan or cablemodem don’t have to do so. Maybe on the 3rd revival of the IoT hype, about 10 years from now, it’ll really catch on and be actually kind of useful. (See also: “M2M”.)

* The latter shit-uation is due in equal parts to headaches around NAT traversal, service/peer discovery, and the fact that nobody serious (read: businesses) wants to throw in for an open platform when there’s a snowflake’s chance they could parlay their own proprietary stuff into the One True IoT Service. Even with IPv6 and cheap-as-free radio/cell/satellite pipes, the “IoT ecosystem” (I cringe just saying that) won’t be completely free of the need for a centralish service/peer discovery mechanism and (for power-limited systems) somebody acting as mailbox/aggregator/push notifier/whatchamacallit so that the low-power endnodes can talk to one another despite randomly popping onto the network for just a few seconds at a time. Still, a backend you can download and drop on your own cheapo web hosting account if you didn’t want to be tied to said 3rd-party cloud service would be huge in making this, well, A Thing.

Haptic hackery for fun and profit

My day-job employer makes fancy piezoelectric actuators. Not long ago I was asked out of the blue: “Hey, the Haptics Symposium is in less than 2 weeks… It’s in Houston, TX. Want to go?”

“*looks out window at yet more falling snow* Hell yeah.”

“Oh yeah, and we’re going to need some demos so…”

Of course, I had no shortage of regularly scheduled urgent worky stuff to do, so any demos had to be done with some haste. In the end I got not-one-but-two cheesy demos going, one of which didn’t even break during the show! In addition, my newest coworker put together an incredibly sweet haptic texture-rendering demo, but I’m sure he’s writing it up on his own blog as I speak :p

Super cheesy heightfield mouse

One of the fun things about piezoelectric bimorphs is that, unlike coin motors, LRAs and voice coil drivers, they can be deflected statically. So it’s possible to set and hold arbitrary linear positions. With this in mind, I scavenged an old ball mouse from the IT junkpile, removed the PS/2 cable and ball, and hacked it up so that the left mouse button now raises and lowers in response to the brightness of the surface directly beneath it. An arbitrary grayscale image placed under the mouse now becomes a tactile experience, felt rather than seen.

The replacement guts consist of a SHIVR actuator, photodetector, 3x AAA battery holder and a small driver board. The driver board consists of a TI DRV8662 piezo driver and a handful of supporting discretes. The DRV8662 functions as a voltage booster and amplifier, stepping up a 3V-5VDC input to 100V and driving a bipolar output in response to a low-voltage (0-3V or so) input signal. The photosensor and an LED were glued up inside the hole where the ball used to be, and the connection between the sensor and a 100k bias resistor was wired directly to the DRV8662 analog input. The actuator was stood off on a piece of scrap metal to match the height of the button. A mechanical stop feature on the underside of the mouse button was Dremeled a bit to give the actuated button a bit larger range of motion. Last but not least, the top shell was spraypainted black to slightly disguise its origins as an old Microsoft ballmouse from about the Soviet era.

Haptic heightfield mouse demo guts partially assembled, actuator shown

Haptic heightfield mouse demo guts

The purple amplifier board was fabbed using OSH Park sometime prior (for experiments just such as this) and pretty much follows the application example in the DRV8662 datasheet, except for the DC modification as follows: Remove the DC blocking capacitors from the IN+/IN- pins, connect your input signal directly to IN+ and connect a midscale reference to IN-. For a typical 3.3V supply voltage and appropriate setting of the gain selects, a 10k/10k resistor divider between 3.3V and GND is just about right (putting the 1.65V midpoint on IN-). Note that although the datasheet warns against continuous operation of the DRV8662 to avoid overheating, at such low frequencies it doesn’t so much as get warm to the touch. (Actually, I found it nearly impossible to get the evaluation kit into overtemperature even under continuous, harsh drive conditions.)

DRV8662 circuit with PWM input and modification for DC operation

Slightly less-cheesy thumpin’ phablet

Another up-and-coming use for linear actuators lately is to provide inertial haptic effects in handheld gadgets. Most folks are familiar with the kind emanating from the weighted motors used in phones and game controllers, but these are fairly limited: they can only shake “all around” (not in a specified direction), the amplitude and frequency cannot be independently controlled (the only way to get more oomph is to spin it faster), and neither can the phasing of the actuator (let alone between multiple actuators) be controlled. Oh yeah, and the spin-up and spin-down times are on the order of 100-400ms depending on the size of the motor, so forget about any sharp, rapid-onset effects. For these reasons, folks are experimenting with linear actuators, which can provide much more precisely controlled sensations (a good example is the proposed Steam controller, which features two touchpads with a linear voicecoil driver under each.)

A fun thing about the piezo bimorphs is they are extremely lightweight (less than 0.5 grams) – so when adding mass at the end to make an inertial driver, it’s basically all payload: that mass isn’t fighting against the dead weight of magnets or metal shielding components. So I decided to make a demo resembling a big phat phablet*, which could be a flashy quadcore phone or some kind of aesthetically addled game controller. Or, you know, a rounded rectangle hogged out of a piece of Delrin. Hey, rounded corners! This demo featured two actuators, one on each side. I slapped a total of 10g tip mass on each, held in place by a stylin’ dab of epoxy.

Handheld inertial haptic demo in phablet-like form factor

For this demo I laid out a made-for-purpose PCB (not just carved up from what I already had hanging around) and sent it off to Gold Phoenix. It arrived juuuust in time, but that’s another story. The board layout had a total of 3 copies of the same DRV8662 circuit, with a spot for a small PIC12 microcontroller at each to supply the waveforms. (The 3rd circuit was to be for a third, surface-bonded actuator, but I didn’t have time to implement it.) The program on each PIC consisted of a simple arbitrary wavetable generator (a handful of basic waveforms such as sine, square and directional “punch” were generated by a python script and slapped into lookup tables) and a series of calls to the waveform generator function with varying amplitudes, frequencies and waveform index to generate the demo effects. The waveform output itself was driven on a PWM pin and filtered to provide a proper analog input to the driver, and a GPIO pin was used for master-slave synchronization between the PICs.
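
I can’t share the actual generator script (day-job code, as noted below), but a minimal sketch of the idea might look like the following – the 128-point table size matches the demo, while the names and C-array output format are my own stand-ins:

    # wavegen.py - minimal sketch of a wavetable builder (NOT the original
    # day-job script; names and output format are stand-ins). Emits
    # 128-point, 8-bit tables (0..255, midscale = 128) as C arrays.
    import math

    N = 128  # points per table = one complete waveform cycle

    def sine():
        # Phase-shifted so the last point lands on midscale (128),
        # avoiding clicks when waveforms are switched between cycles.
        return [round(127.5 + 127.5 * math.sin(2 * math.pi * (i + 1) / N))
                for i in range(N)]

    def square():
        # Half high, half low, with the final sample parked at midscale.
        return [255] * (N // 2 - 1) + [0] * (N // 2) + [128]

    def emit(name, table):
        vals = ", ".join(str(v) for v in table)
        print("const unsigned char %s[%d] = { %s };" % (name, N, vals))

    emit("wave_sine", sine())
    emit("wave_square", square())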

As before, the static deflection capability of the actuators was (ab)used to produce directional effects, such as making the device lunge toward or away from the user (fast drive stroke followed by a slower position-and-hold return stroke), or wiggle by driving them with out-of-phase square waves. With the 10g of mass, the usable frequency range was from about 30Hz to a few hundred Hz. Above 350Hz or so, the drivers reached their power limit and the output waveforms began to distort, producing significant audible noise in addition to motion. Qualitatively, this frequency range goes from a deep rumble to the sensation that there’s a very pissed-off mosquito trapped under your hand. You can’t feel it over the internet, but you can see the actuators throwing in the video below.

If this had been a real smartphone/etc. with a touchscreen, the actuation could respond to touch activity to produce effects like:

  • Simulate surface texturing, i.e. give different screen areas different feels or make areas feel “pushed in” or “popped out”
  • Simulate sticky and slippery spots on the screen by vibrating the screen at high frequencies to modify stiction
  • Create the sensation of inertia or heaviness in the device, resisting as the user shakes or moves it around
  • Create the feeling of compliance, i.e. make the rigid glass screen feel like rubber and bounce when touched
  • Create the illusion of tackiness, where the screen gets pulled with the user’s finger as they let go, along with a vibratory kiss as it pulls free

Demo Code Details

This was for a day-job project, so I can’t provide the actual sourcecode… but can at least describe a bit of how it works.

The waveform generator is pretty straightforward, with one slight tweak to allow for arbitrary amplitude control. The actual waveform data is stored as raw samples starting at a 256-word-aligned boundary. Starting from the beginning of the table, each entry is moved to the PWM register, followed by a waitloop on a timer overflow flag, then the table pointer is incremented. One call to the waveform output function outputs one complete cycle. The total duration of waveform output is controlled (at the next level up) by how many times this function is called in succession (i.e. how many complete cycles are output at the configured frequency).
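
In pseudo-Python (the real thing is on-chip firmware I can’t post), the control flow amounts to roughly this:

    # Pseudo-Python model of the waveform output function (the real
    # firmware runs on the PIC and isn't public; this is control flow only).
    class PWM:                      # stand-in for the PWM duty register
        duty = 0

    class Timer:                    # stand-in for the timer overflow flag
        def overflowed(self):
            return True             # real code polls the hardware flag bit

    def play_cycle(table, pwm, timer):
        # One call = one complete cycle: one table entry per timer rollover
        for sample in table:              # 128 entries per table
            pwm.duty = sample             # current entry -> PWM register
            while not timer.overflowed(): # waitloop on the overflow flag
                pass

    # Duration is set one level up: N successive calls = N complete cycles
    # for _ in range(n_cycles): play_cycle(wave_sine, PWM(), Timer())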

The PIC’s 8-bit timer is used to control timing of waveform traversal. It is a little odd, providing a configurable prescale, postscale and period register (PR) setting. The pre/postscale divide the timer by (1:1, 1:4, 1:16) and (1:1 to 1:16) respectively. The PR configures the ‘top’ or rollover value to any value between 1 and 255. Between these settings, a wide variety of rates are possible. Once again, I used a python script (natch!) to build a lookup table which maps a frequency (2Hz increments) to the pre/post/PR combination which comes closest to it. With the on-chip 32MHz oscillator and 128-point wavetable, the realizable waveform frequencies are from 2Hz to somewhere upward of 1kHz.
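
The table-builder itself is simple enough to sketch from scratch. The assumptions below – a timer ticking at Fosc/4 = 8MHz, as is typical for these PICs, and one timer rollover per wavetable entry – are mine:

    # Sketch of the frequency -> (prescale, postscale, PR) table builder
    # (a from-scratch rewrite, not the original script). Assumes the timer
    # ticks at Fosc/4 = 8MHz, as is typical for these PICs, and that one
    # waveform cycle = 128 timer rollovers (one per wavetable entry).
    TICK_HZ = 32_000_000 / 4
    POINTS = 128

    def best_settings(freq):
        best = None
        for pre in (1, 4, 16):
            for post in range(1, 17):
                pr = round(TICK_HZ / (pre * post * POINTS * freq)) - 1
                if not 1 <= pr <= 255:
                    continue
                actual = TICK_HZ / (pre * post * POINTS * (pr + 1))
                if best is None or abs(actual - freq) < abs(best[0] - freq):
                    best = (actual, pre, post, pr)
        return best

    # Build the table in 2Hz increments, as in the demo
    table = {f: best_settings(f) for f in range(2, 1026, 2)}
    print(table[30])  # closest achievable settings to 30Hz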

Each wavetable stores one complete cycle of the waveform at 8-bit resolution, full amplitude (i.e. the waveform goes from 0 at its lowest point to 255 at its highest point; 128 is midscale). To achieve variable amplitude without storing scaled copies of the waveform or performing expensive math, some binary arithmetic is used. I describe the actual algorithm in this forum post. To avoid any audible clicking when switching waveforms, the tables are constructed so that the last point in each wavetable ends at midscale, and only multiples of complete wave cycles are output.
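
The forum post has the exact method; as a stand-in, here’s the general shift-and-add idea (a software binary multiply applied to the sample’s excursion about midscale):

    # The general shift-and-add idea (see the forum post for the exact
    # algorithm used): scale the sample's excursion about midscale by an
    # amplitude value using a software binary multiply - no MUL
    # instruction, no pre-scaled table copies.
    def scale_sample(sample, amp):
        # sample: table entry, 0..255 (128 = midscale)
        # amp:    amplitude, 0..256 (256 = full scale)
        offset = sample - 128            # signed excursion about midscale
        acc = 0
        for bit in range(9):             # shift-and-add multiply
            if amp & (1 << bit):
                acc += offset << bit
        return 128 + (acc >> 8)          # rescale, re-center on midscale

    print(scale_sample(255, 128))        # half amplitude -> 191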

The sine, square and triangle waveforms are pretty straightforward. The ‘punch’ waveform consists of a very short quarter-sinewave drive stroke (from -fullscale to +fullscale) followed by a linear ramp back to negative fullscale. As with the others, this waveform is time-shifted so that the midscale crossing of the ramp-down occurs at the last point in the table.
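
Sketching that construction in Python (the drive-stroke length and exact shaping here are my guesses, not the demo’s actual numbers):

    # Sketch of the punch table construction (drive-stroke length and
    # exact shaping are my guesses, not the demo's actual numbers).
    import math

    N, DRIVE = 128, 16   # 128-point table; short drive stroke

    # Quarter-sine drive stroke, -fullscale (0) up to +fullscale (255)
    drive = [round(127.5 + 127.5 * (2 * math.sin((math.pi / 2) * i / (DRIVE - 1)) - 1))
             for i in range(DRIVE)]
    # Slower linear ramp back down to -fullscale
    ramp = [round(255 * (1 - (i + 1) / (N - DRIVE))) for i in range(N - DRIVE)]

    punch = drive + ramp
    # Time-shift so the ramp's midscale crossing lands on the last entry
    mid = min(range(DRIVE, N), key=lambda i: abs(punch[i] - 128))
    punch = punch[mid + 1:] + punch[:mid + 1]
    assert punch[-1] == 128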

Sinusoidal waveform

Punch waveform, consisting of a rapid drive stroke and slow return stroke. In these waveforms the green trace indicates polarity and the red indicates inflection (not used in the demo code).

The Show

The first day consisted entirely of workshops, no exhibitions. Unfortunately our scheduling didn’t permit getting there early and seeing all of them, but I did get to check out a couple. One of these was a sweet haptic texture rendering talk from researchers at UPenn. The math behind their approach will make your eyeballs spin backwards into their sockets a bit, but the results were incredibly realistic. On display was a tablet-and-stylus combo that faithfully recreated the sensation of drawing on sandpaper, cardboard and dozens of other textures on the slick glass surface, plus the same algorithm implemented in a force-feedback arm for texturing 3D virtual surfaces. The algorithm and texture database are open source and published online.

The other was probably the most badass-looking Brain-Computer Interface setup known to mankind. You think you’re cute with your little Emotiv headset and its 3 thumbtacks touching your mop? Yeah, this one has 128 saline-soaked electrodes for your mind-reading pleasure. You’ll look like a lunchlady wearing it, but everyone will know how ridiculous you think you look. Actually, the takeaway message I got from the BCI stuff on display was don’t believe the hype. The 128-node BCI demo was impressive in that a fresh-from-the-crowd individual could begin to steer a ball left or right onscreen by thinking (specifically, visualizing moving either their left or right arm) within about 10 minutes, without a lengthy training/calibration period. However, even with this very formal setup, those stories you hear about typing with your mind at normal speed, or guessing which picture you’re thinking about, etc… reality isn’t really there yet. (There is a character input scheme known as a P300 speller that does work, but it’s not nearly as straightforward as thinking about a letter and having it show up in your document. Input speeds are measured in characters per minute – not all that many – and require intense concentration.)

A volunteer is wired up to a Brain-Computer Interface consisting of a skull cap with dozens of electrodes

Serious Brain-Computer Interface

The next several days we were exhibiting. Unfortunately that meant we were stuck in our own booth, and couldn’t go check out the demo sessions. On the bright side, many of the demos were left set up between sessions, so we could at least sneak a peek around during e.g. poster sessions (when the exhibition section was a ghost town) and get some idea what they were about. Oh yeah, this is definitely a University-research-heavy conference, so Arduinos everywhere! In retrospect, probably not the ideal venue to try and hawk raw actuators to (nonexistent) smartphone-company scouts, but highly informative regardless.

That said, there were a few industry folks milling around. A few from names you might expect to see here, and even more from companies you’ve probably heard of, but would not expect to be interested in this stuff. Oh yeah, and a haptics show would not be complete without a scout from the notorious “I” company there. Not the rounded corners one, I mean the other “I” company, that seems to elicit groans from the entire haptics community. A guy from there walked up to our booth, so I tried to ask (ahem, tactfully) what exactly they did/sold, what value they added.

“So what do you guys do? Some kind of… software, right?”
“Oh yeah, there’s a software package… we champion the cause of haptics… and mainly, you know, licensing…”

He wasn’t carrying a clipboard or obvious spy camera, so at least there’s a chance we won’t find vague patents filed against everything in our booth appearing in exactly 18 months’ time.

"I am Patent Trollio! I need IP for my bunghole! I would hate for my portfolio to have a holio..."

“I am Patent Trollio! I need IP for my bunghole! I would hate for my portfolio to have a holio…”

Murphy Factor

No such trip is complete without at least something going wrong. In this case it was a problem with the handheld demo. I had built two boards as a precaution (baking on the leadless DRV8662s is fiddly enough, and there are multiple of them on the design – amazingly, both boards worked on the first try) and ordered a couple small LiPol battery samples. There was no time to charge them in the scramble to code the demo the night before, but no problem, we can just recharge this board with any USB cable, right? So the first morning of the show, I started one running and just left it out. After the first hour or so, it started to make an unhappy clicking sound and soon went silent. So I swapped in the 2nd board and plugged the dead 1st one in to charge. After a while, the 2nd board also started to go, so I switched the now-charged (harhar) first back in. Yyyyeah, not so much. It turns out (in later investigation back at the office) I had swapped two resistors during assembly, and the one that should have been standing in for a 10k thermistor was actually a 100k, causing the charger IC to think it was way out of a safe temperature range and shut down. (Hey, they’re all 0402 and completely unmarked; don’t judge.) Lunch involved a bolt to the nearest Radio Shack and MacGyvering a close-enough LiPol charger. Yeah, that’s a bundle of low-value resistors with their ends twisted together (to limit current to ~1C, or about 50mA for this battery), a rectifier diode to drop the 5V from a hacked-off USB cable down closer to 4.2V, and some alligator clips to strategic points on the board. The Shack’s cheapest/only multimeter was used to check the battery voltage periodically to avoid overcharging. Not exactly pretty, but it worked.

Dinosaurs!

Wait, what? Yes, actually dinosaurs. On the 2nd night there was a banquet held at the Houston Museum of Natural Science. So, dinner and drinks under the gaping maw of T. rex and a dozen of his closest dead friends. Pretty cool.

Dining with dinos. Bites with trilobites. Ordovician hors d’…Someone please stop me now.

* Don’t you just love that word – the uneasy intersection of fat flabby phone and a phony wannabe tablet…

Springtime brew

So, I just discovered I’m completely out of Pumpkin Fail (euphemistically, Cinnamon Cream Ale – the pumpkin ale where I realized only too late that I’d forgotten to add the pumpkin to the boil… yeah, don’t ask). Time for a new brew!

Haven’t decided what to make yet, but at least I know just what to call it…

F*CK SNOW - Seriously, enough of this crap already

(With apologies to Hunger Games fans)

Coming soon to a dominated world near you!

Yep…it’s finally happened!

Notes to myself: Minibog improvements

I’ve been growing some small carnivorous plants (mainly Nepenthes and sundews) in a tank indoors, but a couple years ago, just for fun I started some Sarracenia (pitcher plant) seeds. Well, what do you know, the darned things actually grew. So this spring, when it was clear I wouldn’t be able to keep them in small pots or winter them over in a fishtank anymore, I put together this freestanding outdoor minibog.

For this first attempt, I picked up a pretty standard (you could even say, “bog standard”, harhar) rectangular planter without drainage holes, put a couple inches of gravel in the bottom, stood a couple pieces of 4″ PVC pipe on the ends, then filled it up with sphagnum peat moss. Well, since the planter was much deeper than the plant pots I was sticking in it, I put a couple large rocks at the bottom to take up space too. A small overflow hole drilled just below the soil line prevents it turning into a pond.

Minibog with some s. leucophylla, s. flava and a mystery type. The Nepenthes has already been brought inside due to frost hazard.

This first cut worked out all right. The peat stayed plenty wet with only a bit of watering during dry spells, and water wicked into the pitcher pots well enough (through peat-to-peat contact via the drainage holes), despite the pots being plastic. The PVC pipes, intended to take up space and provide a place to sink a couple 1L pop bottles as an emergency gravity-watering system, turned out to be unnecessary, and mostly a good place to grow mosquitoes.

Notes for next year:

On watering: The scheme with the PVC pipes (besides taking up the extra space in the planter) was that if the water table ever got to the very bottom and I was on vacation (or just forgetting to water the thing, because that’s something I would do), water-filled pop bottles notched at the bottom would slowly release their contents to maintain 1/4″ or so of water at the bottom (enough to keep things moist enough to stay alive, without immediately evaporating). It turns out the gravel alone works just as well for this. But I did like the fact that the pipes gave an easy visual indication of the water level in the minibog, and a quick way to fill ‘er up. So for next year, a mosquitoproof watering hole / level indicator: It turns out that 1.5″ PVC and a pingpong ball are an almost perfect fit: juuuust enough clearance for the ball to float freely, but not enough to let flying pests reach the water. Brilliant!

1.5" PVC pipe and pingpong ball as water level indicator and mosquito excluder.

1.5″ PVC pipe and pingpong ball as water level indicator and mosquito excluder.

On wintering (to move or not to move): So far I’ve been providing winter dormancy for various plants by keeping them in a fishtank in a cold room of the house. Except for one S. purpurea (Home Depot rescue), I’ve got nothing that’s winter-hardy where I live (USDA zone 5). Reading up on the subject, it looks like people are successfully wintering not-so-hardy pitchers here if they are dug into the ground (no freestanding pots) and mulched. So next year I might take this bog and sink it into the ground, for plants that can be wintered outdoors, and make a smaller one that can easily be brought inside for the tropical stuff (I stuck a Nepenthes in the bog for the season, and it seemed to like it). Keep the pingpong ball watering scheme for both. I’m hoping with the infrastructure of a bog underneath, the fishtank will no longer be necessary for maintaining humidity or keeping things wet over the intervals I remember to water them. (That, and it looks like the fishtank is needed for actual fish now.)

More on moving: Another reason to bury this one is this design of planter was apparently not really intended to be full of water: after a couple seasons, it’s really starting to bow out in the middle.

On wildlife: Something(s) just love to go in there and dig little holes everywhere. I can’t really tell whether it’s squirrels, birds or both. Likewise, when I tried starting some live Sphagnum on the top, it was quickly dug out and removed (probably by nesting birds). In the end I had to cover most of the surface with good sized rocks to keep it from looking like a minefield. Sprinkling the surface with cinnamon and cayenne pepper powder had no effect. Next year, possibly some better wildlife protection scheme…

PID control stove hack for better brewing, sous vide, etc.

“Cook all the foods, hack all the things /
Make all the booze, hack all the things”

In beer brewing, the temperature profile during mashing has important implications for the beer’s eventual body and “mouthfeel”. The requirements for partial mash brewing are a bit more relaxed than all-grain, but still matter (the enzymatic reactions occur over the range of ~150-160F, but progress differently depending on the time spent in various portions of that range.) More to the point, manually babysitting this for a large pot of not-yet-beer on a stovetop is a pain in the arse, so the other night I just modded our electric range for external closed-loop temperature control.

The hack is pretty simple and should be straightforward to apply to most electric stoves. The supplies needed are a temperature controller (typically a PID controller), a temperature probe (typically thermocouple or RTD), and a beefy solid-state relay. I’d also highly recommend a jack for the control signal connection so it can be easily unplugged and stashed somewhere when not in use. In my case, I also added a beefy bypass switch that allowed the mod to be done in a reversible and failsafe way.

I am very fortunate to have a wife who puts up with my relentless tweaking around and warranty-voiding :-p She didn’t even flip a shit when she went for breakfast one morning and saw this:

"But honey... I'm making it *better*..."

“But honey… I’m making it *better*…”

Even so, a mod that involved loose ugly equipment and snarls of exposed wiring was probably not going to fly – so this had to be done with no or minimal visible changes.

Electric range guts in brief
Your parents’ (or maybe grandparents’) generation put a man on the Moon, so you might think a modern electric stove is full of microcontrollery precision and closed-loop systems targeting a specific glass temperature (non-contact IR temp sensors being about a buck fifty in quantity now). But nothing could be further from the truth. Typically, any and all temperature regulation is contained in the control knob itself, using a hacky-sounding mechanical assembly known as an infinite switch. It’s basically a bimetallic strip, like in an old thermostat, stuck on top of a variable resistor set by the knob position. The heat from the resistor makes/breaks the circuit at some interval proportional-ish to knob position, but it doesn’t know or care what kind of thermal load is on the burner. More relevant to this application, it also means there is no nice relay controlling everything, ready to have its nice low-voltage control signal hijacked with an Arduino or similar. This mod requires working directly with mains voltage and current, typically 240VAC (for North America at least), and obviously at enough amps to make things glow.

The mod

The stove I have features several burners of various sizes. I targeted the largest one for modification as this is the only one large enough for the brewpot. This burner features three independent elements that can be enabled in increments (inner, inner+middle, or all) to heat things of various sizes. Upon removing the back panel of the stove, I found a wiring diagram folded up and tucked inside behind the control knobs. This provided a good starting point, and also helpfully reported that the burner pulls 3,000W when all 3 elements are engaged, at a resistance of only 19.2 ohms across the 240V mains. Yikes!

Assuming a full 240V (RMS) from the wall, the burner should pull 12.5A (Ohm’s law). For a safety margin and (mainly) the belief that random solid state relays, usually made in China, may tend to be “optimistic” about their current ratings, I bought one rated for 40A, along with a PID temperature controller (Auber Instruments SYL1512A) and a waterproof RTD (resistance temperature detector) probe. PID control acts on the error’s magnitude, accumulated history and rate of change, letting it anticipate the system’s response and provide more accurate control (e.g. reducing overshoot), especially in an application like this with plenty of thermal inertia. The output of this controller is directly compatible with typical solid state relays, which are controlled by a DC voltage (typically accepting a wide range such as 3-24V). To avoid any possibility of overstressing the elements from rapid cycling, the controller cycle time setting was lengthened to be closer to the cycle times observed from the manual control, about 15 seconds.
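
As an aside, what the controller box does internally boils down to something like the sketch below – emphatically not the Auber firmware, with made-up gains and stand-in I/O helpers – compute a PID output each cycle, clamp it to 0-100% duty, and time-proportion the SSR across the long cycle time instead of fast-PWMing the elements:

    # Sketch of PID + time-proportioned SSR control (NOT the Auber
    # firmware; gains and I/O helpers here are invented for illustration).
    import time

    KP, KI, KD = 8.0, 0.02, 120.0     # made-up gains; tune for your own pot
    CYCLE = 15.0                      # seconds; the lengthened cycle time

    def read_rtd():                   # stand-in for reading the real probe
        return 145.0                  # degrees F

    def ssr(on):                      # stand-in for the SSR control output
        pass

    integral, last_err = 0.0, 0.0
    for _ in range(4):                # a few control cycles
        err = 152.0 - read_rtd()      # setpoint minus measured temperature
        integral += err * CYCLE
        derivative = (err - last_err) / CYCLE
        last_err = err
        out = KP * err + KI * integral + KD * derivative
        duty = max(0.0, min(1.0, out / 100.0))      # clamp to 0..100% duty
        ssr(True);  time.sleep(duty * CYCLE)        # element on
        ssr(False); time.sleep((1 - duty) * CYCLE)  # element off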

A simplified diagram for the 3-element burner, before and after modification, is shown below. The dotted boxes in the top figure represent various internal switches on the control knob. Note that despite being drawn as completely independent switches, they are actually sequenced by the knob in a manner that isn’t entirely clear (and I didn’t care to reverse-engineer with a meter). I felt the most failsafe approach would be to wire the relay in series with the main switch, with a bypass switch in parallel with the relay, rather than bypass the knob switch with the relay. This provides “normal” (relay bypassed) vs. “external control” (bypass switch open) modes, and ensures that the burner can still be used normally if the relay ever fails, either open or short. The knob must still be turned to one of the ON positions to operate in either mode, so a during-the-night relay or controller failure cannot activate the burner unattended.

(Top) Original circuit, (Bottom) Modification for external control

In this stove a fat red bus wire delivers mains power to each infinite switch via a 1/4-inch quick connect, making it easy to patch custom circuitry in between.

I made a couple splitters to distribute power between the SSR and bypass switch and back to the infinite switch. Quick-connects make connecting it up, well, quick.

Passing 3000W is no small potatoes, so external heatsinking on the solid-state relay is needed. For this I bolted the relay to a scrap aluminum plate, drilled a few holes in the stove’s back panel and bolted it to the inside. Thermal grease on all mating surfaces aids heat transfer. The plate acts as a heat spreader, making the entire back panel one big (if not particularly efficient) heatsink. The relay was positioned to fit into a gap between the control knobs and the main electronics assembly (clock / oven control) inside the front panel.

Solid-state relay installed onto inside of rear cover, allowing the entire cover to act as its heatsink.

The relay and its wiring fits nicely into a gap between the infinite switches and the oven control assembly.

Rear cover reassembled. You can hardly tell it’s been modified.

Wiring for the bypass switch and relay control input was snaked through small gaps underneath the front panel, and the switch and a phono jack for the relay control were mounted tucked underneath the front panel using J-B Weld. In the end, unless you get up close and squint just right, you probably couldn’t tell the stove didn’t come this way.

Bypass switch wiring routed through a small opening underneath the front panel.

Hey, I needed some way to hold the switch in position while the epoxy hardened…

Testing the thermal regulation using a small pot of water. The controller will be put into a nice enclosure soon.

End result. It looks pretty normal when the external controller isn’t in use.

Sidenote: The hacks that didn’t get done
While I had the unit open, there was also the remote possibility of fixing a few cringeworthy design flaws, I mean “features”, in the oven portion, which IS microprocessor-controlled. One is that the temperature display actively lies to you: as soon as the oven meets or exceeds your set temperature, it will lock the displayed “actual” temperature to this value regardless of any subsequent (even major) fluctuations in the actual temperature. This is probably done to hide the poor bang-bang thermal regulation from the customer, but can be very annoying if you’re furiously fanning the door to cool it off in a hurry, or need to lower the set temperature for any reason. The other one is that you cannot – I mean you are actively forbidden to – set an oven temperature lower than 175F (a temperature where bacteria cannot thrive). Doubtless this is the work of lawyers trying to stop stupid people from cooking up Montezuma’s Revenge casseroles, but tough noogies if you wanted to dry some herbs or keep some bread warm.

As an embedded programmer by day, I figure oven firmware should be simple enough that this could be a short afternoon of reverse engineering, right? Alas, the microcontroller turns out to be an obscure Renesas part with a rather obtuse-looking in-circuit programming protocol (not PIC, AVR or anything else I can easily borrow a programmer for and try to suck the firmware out). I decided I didn’t care enough to try implementing my own reader on the off chance the firmware was left unprotected.

Notes to myself: Lulzbot TAZ n00b guide

A work in progress…

Critical Specs:
Bed size / Build envelope: 298mm (X) * 275mm (Y) * 250mm (Z)
Nozzle diameter: 0.35mm (shipping default; others available)
Filament diameter: The TAZ’s extruder expects 3mm filament. However, not all off-the-shelf filament will be exactly this diameter; the discrepancy could affect print quality in severe cases (see notes later).

The Parts:

Printer:
- RAMBo – all-in-one Arduino-compatible controller + stepper/heater/etc. driver board (all-in-one variant of RAMPS)
  - Marlin – firmware used by RAMPS/RAMBo boards
- Extruder (motor/gearing for filament control, plus hot end)
  - Hot End (nozzle, barrel and heater): Budaschnozzle

PC Software:
For open-source 3D printers, this typically comes in 2 parts: a printer interface program and a slicing program. The slicer converts STL to G-code, and the interface is what actually drives the printer (usually includes homing, viewing/adjusting temperatures, and sending the generated G-code to the printer).

Printer controller / interface: TAZ ships with Pronterface, but others are available.

Slicer: TAZ ships with the aptly-named Slic3r, but again, others are available.

Common Problems

Printer “shriek” followed by severe height problems when printing (head crashing into bed or printing in midair) – This is the ONE issue that gave me the most grief. The issue in brief is that the TAZ’s Z-axis (2-motor gantry on high-friction leadscrews) must be driven quite slowly compared to the other two axes, or else the motors will stall. However, most slicers (including the Slic3r configuration that comes with the printer) insist on moving every axis at the maximum possible speed for all non-printing moves (and often even printing moves). In general, this is accomplished by generating G-code specifying arbitrarily high feedrates (F999999 or whatever) and expecting the firmware to limit these moves to a sane speed. For some reason, these limiting values are either missing or incorrectly set on the printer firmware end, so the Z axis motors stall when performing the initial home-raise-position-lower move at the beginning of the print. This results in a number of highly irritating problems such as printing in midair (if it stalls more during the lowering portion), or the nozzle dragging across the bed and shredding the crap out of any films/coatings thereupon (stalled more/entirely during the raise portion), or putting the entire gantry out of whack (if the motors on either side stalled by different amounts – have fun with this one!).

Some more discussion of this issue at http://forum.lulzbot.com/viewtopic.php?f=36&t=91 .

To fix it: At the time of this writing (10/2013), you have to rebuild and reflash the firmware after tweaking some settings. In brief: Download the appropriate firmware sourcecode for your printer here, along with the Arduino IDE from the same page (they specifically endorse and host v. 1.01, but any recent version should in theory work). Oh yeah, and for you Windoze users, some kind of tool for opening .bzip2 archives. I recommend 7-Zip. (Great utility, but be wary of fake/spam download links from their file host when downloading…) Unzip the firmware files, go into the configuration.h file and uncomment the //#define EEPROM_SETTINGS and //#define EEPROM_CHITCHAT lines. This will let you set maximum speeds using an M code (to be explained later). While you’re in there, may as well sane up the default values too. Scroll up and lower the Z value for DEFAULT_MAX_FEEDRATE (the values are X, Y, Z and Extruder, respectively). The default is 10, which for my printer is too fast. On mine, a value of 5 was slow enough but excited some nasty resonance that caused the whole printer to buzz angrily. In the end I ended up with 4.2, which provided much smoother operation. (YMMV of course.)
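
For reference, the relevant configuration.h lines end up looking something like this (the X/Y/E numbers below are placeholders – keep whatever values your copy shipped with; only the Z entry and the two uncommented defines change):

    #define EEPROM_SETTINGS       // was: //#define EEPROM_SETTINGS
    #define EEPROM_CHITCHAT       // was: //#define EEPROM_CHITCHAT

    // {X, Y, Z, E} in mm/sec - only Z is lowered (stock 10 stalls my Z motors)
    #define DEFAULT_MAX_FEEDRATE {800, 800, 4.2, 50}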

Open the Arduino IDE, and from there open “marlin.ino” in the firmware files. Set the board as “Arduino Mega 2560” (Tools -> Board) and the serial port used by your printer (Tools -> Serial Port), press the Upload button and cross your fingers.

Assuming everything worked, you can now use M commands to set speed limits for each axis. Use:

M203 X### Y### Z### E###

to set speeds, and M503 to display the current values (if EEPROM_CHITCHAT is enabled). You can add this code to your slicer’s startup routine to ensure the speeds are not overridden by other users or software settings (most slicers have a place to add custom G-code before and after the slicer output.)

NOTE: Pay careful attention to expected units. The M command expects (by default?) values in mm/min., whereas if you set a default in the firmware as above, this value is expected in mm/sec. Particularly braindead slicers and/or host software might even set your machine into Imperial mode, blech.
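
For example, to pin the Z axis at the 4.2mm/sec used above, convert to mm/min: 4.2 × 60 = 252. So, assuming (per the note) your build takes mm/min here, the line to add to the slicer’s custom start G-code would be:

M203 Z252

A follow-up M503 will confirm what the firmware actually stored.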

Print does not stick to bed / breaks loose – Very common, and semi-related to the curling issue below. The big thing to check is the nozzle height when printing the first layer. It should be basically touching, smooshing the initial layer down nice and flat. The output should not look like a round bead of toothpaste being squeezed out of a tube. Check bed leveling, Z screw and repeatability, etc. Next things are to play around with the bed temperature and possibly extrusion temperature. Ensure the bed is fully up to temp before printing. For parts with low bed contact area or small protrusions that get caught and lifted, you can try various software-generated adulterations (raft or brim) to help with adhesion. These are typically autogenerated by the slicer program at your command. A raft is a solid-fill first layer covering typically a bounding box around the footprint of the print, followed by a sparse layer that allows for breaking the raft off afterward. A brim is a widening of all features on the first layer (as if your print were a bit melty and thrown at the bed with some force) to increase the surface area and move any “peel point” away from your actual part geometry. Brims are good to help stick down thin features that are prone to being lifted during the early print passes. After removal, the thin brim can be easily cut off with an X-Acto knife.

UPDATE: I tried the “lulzjuice” (acetone glue) trick, and it seems to work beautifully – completely solved my adhesion problems! Basically, for printing ABS, dissolve a bit of filament in some acetone, and wipe/brush a thin layer onto the buildplate before printing. Acetone has a low boiling point, so if you have a heated buildplate, do this *before* heating it up to avoid a bubbly mess. The flip side is that unsticking the print (intentionally) can be a little harder – if all else fails, try hitting it with coldspray (or an inverted can of computer duster).

Edges of print peel / curl away from bed while printing – Caused by a combination of poor adhesion to the bed and changing temperature. Check all of the stuff for the previous problem, then see what you can do about controlling the rate of cooling (overall or between layers). Try lowering the extruder temp slightly or raising the bed temp(?), dealing with any sources of cold breezes (enclosing the build area if necessary), or fiddling with software-based thermal controls (delays between layers, etc.).

Fat, floppy, and/or blobby print (edges extending beyond where they “should have been”) – Often a side-effect of the edges peeling upward and pushing the outer surfaces up toward the nozzle (the excess material has nowhere to go but outward…). If there is no peeling problem, most likely too much material is being extruded. See “Filament Diameter Problems” below…

Filament Diameter Problems – Use good, red-handled calipers to measure the filament diameter in several places – it may be a bit fatter or thinner than the nominal dimension, especially extra-cheap or noname-brand stuff. A fraction of a mm may not sound like much, but the volume extruded scales with the square of the diameter – e.g. 3.1mm filament where the slicer assumes 3.0mm means roughly 7% extra plastic – and all of it is being squeezed down from, say, 3mm to 0.35mm, so it can become significant! Most slicers have a means for compensating for the filament diameter.

Popping sounds from extruder during print – Trapped humidity in filament (think popcorn) – try gently baking it dry, or storing it in a drybag when not used for extended periods.

Trouble bridging – The extruded filament should be getting “pulled” slightly during print, not “pushed” (i.e. the travel rate should slightly exceed the extrusion rate). See “Filament Diameter Problems” or related compensations offered by your slicer.

Hole diameters in printed parts – plastic may “squish out” a bit during extrusion, causing holes to come out slightly narrower than designed. Assuming the amount of extrusion is actually correct (see several of the above), I’ve heard of folks compensating with an Excel table of designed vs. achieved hole sizes – I don’t know where to get it, though.
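
I don’t know where that spreadsheet lives, but rolling your own compensation is only a few lines of Python – the measurements below are made up, so substitute numbers from your own test prints:

    # Rough stand-in for that spreadsheet: measure a few printed holes,
    # fit designed-vs-achieved linearly, then solve for the diameter to
    # draw in CAD. The measurements here are made up - use your own.
    designed = [3.0, 5.0, 8.0, 10.0]   # mm, as drawn
    achieved = [2.6, 4.5, 7.4, 9.3]    # mm, as printed

    n = len(designed)
    mx, my = sum(designed) / n, sum(achieved) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(designed, achieved))
             / sum((x - mx) ** 2 for x in designed))
    intercept = my - slope * mx

    def cad_diameter(target):
        # invert: achieved = slope * designed + intercept
        return (target - intercept) / slope

    print(round(cad_diameter(5.0), 2))  # diameter to draw for a true 5.0mm hole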

Tim Tears It Apart: Sensitech TempTale4(R) data logger

One of these devices appeared in a large shipment of temperature-sensitive raw materials at my work, amid a pile of dry ice chips. While I don’t know the MSRP or actual retail price of this gadget, the shipper packs one in with every order and tacks on $60-70 for it as a line-item; nonreturnable as far as I know.

So, we can’t return it and we can’t read it out, so what do we do?

I think you know what we do :p

TempTale4 Front Panel

The device is aimed at exactly this application – telling if your temperature-sensitive stuff stayed within a defined temperature band during all phases of shipping and handling. With timestamps, you could probably tell exactly which party in the shipment chain screwed the pooch. There is a fancy term for this kind of tracking – cold chain certification.

As far as interfaces go, it doesn’t get much simpler: a button to start logging, a button to stop logging, a simple LCD display, and a couple blanks to enter a shipper’s name and the PO# of the shipment. The current temperature and recording status (started / stopped) are displayed on the LCD. If the temperature went outside the allowed band while recording, an alarm symbol (bell) also appears. A pair of LEDs exposed through the front panel allows the device to be configured and recorded data to be read out to a PC via an optical doohicky (more on this later).

TempTale4 rear cover removed, showing the plastic ‘cage’ that holds the battery pack in place

The enclosure is a basic 2-piece sandwich, with the half-thickness (.031″) PCB fixed into the front by 3 screws. A separate plastic ‘cage’ holds the battery pack in place; fingers on the backside clip around the edges of the PCB. The entire weight of the battery pack – and the impact loads it imparts during rough shipping – are borne solely by the PCB and ultimately those 3 screw points. The designers might not have anticipated extremely rough shipping for cold-chain cargoes (although a broken logger could be spun as a rough-shipping-detection “feature”).

On the plus side, note the O-ring ensuring a water-resistant seal between the case halves. Any other case openings are covered by the front overlay “sticker”, eliminating any fluid intrusion paths.

Battery cage removed

Battery pack removed. That piece of double-stick tape really ties the room together.

The battery pack is Zip-tied to the plastic “cage”, although not to the PCB or any part of the case. A piece of foamy double-stick tape helps hold and cushion the battery pack where it sits against the PCB. This and the overlay sticker on the front suggest this device is intended for enforcing low temperatures. High-temperature overlays and even double-stick tape exist, but it’s pretty fancy stuff. This tape isn’t fancy.

The battery pack consists of two series-connected Tadiran lithium primary cells (AA size), with a nominal 7.2V output. Now, in an industry that is constantly pushing toward ever lower voltages to reduce power consumption, this is weird! Especially for a gadget that needs to go a long time without a battery change. A single 3.6V cell will easily power a 1.8-3.3V device, and maintain this output voltage until nearly depleted. Some wild guesses at the reason behind the unusually high voltage:

1) High-current operation at very low temperatures. This shouldn’t be an issue during normal operation (just datalogging should use very little current on average), but if someone wanted to read it out over the optical interface in the Arctic, it may be another story. This may provide extra headroom against the inevitable cold-battery voltage sags that would occur as the transmit LED fires.

2) Voltage-happy LCD? Some LCDs require a higher voltage such as this to operate (usually generated by an onboard charge pump circuit), but these are typically graphical matrix LCDs (many independent rows/columns) – the rows/cols are typically energized one at a time, and to refresh the entire display faster than the eye can perceive flicker, each one is on for only a very short time – higher voltages help them reach their final dark/light state within that time and retain it until the next refresh. I can’t imagine this little segment LCD having such a requirement.

3) They needed 2 batteries to get the milliamp-hours up regardless, and wiring them in series was easier/cheaper (no worries about cell balancing). This is not an ideal way to get more mAh (it increases resistive losses in the series-connected pair, and conversion losses in any regulator, especially linear/LDO), but I suppose it works well enough, and the price is right.

Bottom side of PCB exposed

With the battery pack out of the way, we can see the bottom side of the PCB. Not much there! A couple things to note though:

1) A secret button hidden inside the device. As it turns out, this button resets/clears the device for reuse. The entire LCD will blink every segment for a couple minutes, then the device is factory-fresh again (probably).

2) No components apart from the secret button, but a fair number of empty pads (non-stuffed components).

PCB removed – rear of LCD accessible

Top of PCB. Notable points include “secret” magnetic switch, accessible programming header, and an overall dearth of actual components.

Finally, we get to the topside of the PCB, the actual meat of the device! Erm, wait a minute, where’s all the meat?

The $70 pricetag notwithstanding, it should be becoming clear that this device is cheap-cheap-cheap to make. The bill of materials consists mainly of a few pushbuttons, a small handful of discretes and a glob-top MCU/ASIC. The segment LCD and an EEPROM in the top-left of the photo (a note on the manufacturer’s web site says it could be 2KB or even a whopping 16KB of storage) complete the ensemble. I have to snicker a bit about that after testing a 32GByte uSD card in my own day-job datalogger design the same day, but again, this device is designed to be throwaway cheap, and 640k (ahem, 16k) ought to be enough for anybody – for temperature data, anyway. (The astute reader will see “32K” stamped on the chip; either a clever misdirection or these loggers have grown more spacious than the web site lets on.)

Some notable points:

1) The temperature sensing element appears to be a simple RTD – no thermocouple or even brand-name digital sensor, but probably accurate enough.

2) The glass tube designated S3 is a magnetic reed switch. This almost certainly is used to trigger entry into the download/configure mode, either with a magnetic wand or a magnet built into a monolithic reader device that aligns to the LEDs.

3) The LCD is affixed to the top shell, not the PCB, and contact is made by an elastomer strip (zebra strip). Don’t lose this!

4) The neat row of capacitors at the bottom of the photo (C2, C3, C10 ~ 13) are probably part of a charge pump circuit for the LCD. There goes that theory about the battery voltage.

5) As with the bottom side, note the prevalence of non-stuffed component pads. Aside from a good handful of discretes, there are spots that appear to accept a second RTD temperature sensor and a humidity sensor. Most likely, this same board and ASIC become the “TempTale4 Humidity” with the addition of these components.

6) Besides the holes for extra sensors, pay particular attention to the two – two! – sets of non-stuffed headers (J1, J2). The latter pins directly into the globtop, suggesting the likely possibility of an in-circuit programming header (or even JTAG, holiest of holy grails).

See-thru view of front case, showing button flexures and a small opening in the plastic to bring the temperature sensor closer to the outside environment.

In this final photo, you can see the shell cutouts as they relate to the overlay sticker. The temperature sensor normally sits in the small notch in the middle, leaving only the thin bit of sticker between the sensor and the outside environment.

A new feature: “Tim Tears It Apart”!

So, as you might have guessed, I’m an electronics engineer, and I like to tear things apart – especially gadgets. I don’t usually post about it, because a) someone else has probably already posted a teardown of that gadget, and b) I’m lazy as balls.

But then I realized a good teardown is not all about the pretty pictures, but reverse-engineering the mind and intentions of the original designer. After about a deca*cough* some time in the industry, at the age where I tell kids to get off my lawn*cough* pull up their damn pants, I’m getting a pretty decent feel for not just how a gadget works, but why it works the way it does – i.e. the budgetary constraints, schedule pressures and technical constraints behind specific design decisions. So maybe it is worth posting those teardowns after all :p

I can’t guarantee it’ll be a frequent feature, but there are a few torn-apart gadgets I could throw my 2 cents in on.
