Monday, May 10, 2010

No Impact Man

We watched "No Impact Man" a few days ago. I was really looking forward to the movie, as I've often thought about the same exact themes in my own life.

I found the movie informative and entertaining, but I also felt that it fell short in a number of important ways.

The basic idea of the movie is to ask if it's possible to live in such a way that you produce as little impact as possible on the environment around you. Impact is defined in a number of ways:
  • Trash
  • Personal transportation (= direct pollution)
  • Food transportation, electricity generation (= indirect pollution)
  • Buying stuff (= consumer culture, which also leads to direct and indirect pollution)
The movie explores how much we need in order to live a happy life vs. how much we want for the sake of convenience, or because modern society has conditioned us to want it. The protagonist and his family take some of the following steps to reduce their impact:
  • Don't buy new things as much as possible.
    • Instead, buy old things that someone else no longer wants.
    • For instance, no new clothes, buy all clothes used.
    • This reduces direct impact (no packaging trash) and indirect impact (no resources consumed to produce new items).
    • This is a reaction to modern consumer culture.
  • Reuse things as much as possible.
    • For instance, no kleenex (use a handkerchief), no toilet paper (use textile rags that can be washed and reused).
    • This is a reaction to the culture of using something once and throwing it away.
  • Buy food locally.
    • Locally here is defined as a 250 mile radius around NY.
    • This is a reaction to the fact that modern agriculture is very oil-intensive: food is produced using fertilizer (generally, oil-derived) and transported from far away (also using oil).
  • Stop using electricity.
    • Live by sunlight alone, use candles at night.
    • Electricity generation is very dirty, more than 50% of electricity in the world today comes from coal.
  • Don't drive anywhere.
    • Bike or walk.
Overall, the family manages to pull through this year long experiment and find that their life, while radically changed in many ways, was still largely happy and enjoyable. For instance, they traded TV for more quality time with friends and family; they lost weight and got into much better physical shape from eating less sugar-rich highly-processed foods and biking/walking everywhere; and so on.

What the movie did not address, unfortunately, is that such a lifestyle, while possible, depends on a number of unstated assumptions:
  1. Time. You need lots more time to walk everywhere, cook meals from raw materials (as opposed to buying them pre-processed), and so on. In my own life, time is a scarce commodity, even though I'm keenly aware of it and try to budget it carefully.
  2. Money. You have to pay the rent, pretty much no matter where you live. The movie hardly explored the fact that the wife had a high-paying job that covered their bills, and allowed the husband to basically not work for a year and stay home to conduct this experiment (with all that entails).
  3. Distance. To make such a lifestyle possible, you have to be able to walk or bike reasonable distances to get food, or to go to work, etc. This is possible in NY, since it's one of the densest cities in the world. This may not be possible in a more rural, or even less dense city somewhere else.
  4. Luck. Giving up the fridge turned out to be very difficult because their food spoiled fast. In my opinion, the family was lucky that they didn't get sick during the second half of the movie. They probably mitigated this by buying their food daily or every other day and not storing it over any length of time. This is possible, but requires even more time investment.
Some of these issues could be addressed by living on a self-sufficient farm -- a mostly closed-loop system that provides for most of your needs, without needing to go outside it for other stuff. It's much less clear to me if an impact-free life is possible in a modern urban environment, especially one that depends on fossil fuel for energy. After all, your food must come from outside the city, and for that you basically need oil for transportation.

Even with these shortcomings, the movie was still entertaining and informative. I liked the fact that the movie took a very optimistic tone and genuinely tried to look at these problems and see what solutions might exist.

The movie also highlighted the fact that one person's actions do matter. Many people get discouraged by the fact that they might be alone in a sea of other people who don't care or are unwilling to change, so why bother? The protagonist answers, and I agree: "Being optimistic [...] is the most radical political act there is."

In terms of our own life, it prompted me to think harder about what other changes we could make to reduce our impact:
  • Could we reduce single-use items (like Kleenex, shaving cream cans) in favor of multiple-use items (like handkerchiefs, shaving soap)?
  • Could we go to the farmers market down the street every week instead of buying so much packaged food at grocery stores?
  • Could we reduce TV/Internet use in favor of other activities?
  • Could we buy more stuff used (craigslist, antique stores, etc.) instead of new?
Given where we live and where my job is located, it is unlikely that I will be able to reduce the impact of transportation, at least for the time being. But I remain optimistic.

Sunday, May 09, 2010

Diodes and demodulation

As we described before, a diode lets current flow in only one direction. This is essential for demodulation: the process of extracting information from a signal. To understand demodulation we need to first understand the first and simplest kind of radio transmission -- amplitude modulation.

The goal of radio is, ultimately, to transmit sound over very long distances. A band is playing in New York and I would like to hear it in San Francisco. One way to do this is to build a sound amplifier so loud that the sound waves themselves travel directly from the origin to my ear. This is obviously impractical: it would be intolerably loud at the origin and barely audible at the destination; furthermore, you could not have multiple radio stations broadcasting at the same time, they would all clobber each other in a cacophony of noise.

Another way to do this is:
  • Convert the sound wave (roughly 20 Hz - 20 kHz) to another, equivalent wave at a much higher frequency (hundreds of kHz or even MHz).
  • The equivalent wave, or modulated wave, contains the original sound wave information but in a different representation.
    • The carrier wave is not audible to the human ear since it lies in a totally different frequency spectrum.
    • Sound waves travel by making the air vibrate. The amount of energy required to make air vibrate over long distances is enormous (think loud rock concert).
    • The modulated wave travels as a high-frequency electromagnetic wave, which can cover much longer distances using much less energy.
  • Once the modulated wave arrives at my radio's antenna, the radio translates this modulated wave back into a sound wave that I can hear. This is called demodulation, and it is made possible by diodes.
Let's first see what a modulated wave looks like (courtesy of yourdictionary.com):


The carrier wave is basically the radio station frequency. When you tune into KFRC, you tell the radio to look for sound information embedded in carrier frequency 1550 kHz.

The modulating wave is the sound. This is what is embedded into the carrier frequency and, ultimately, the "information" we want to hear.

The modulated wave is the combined wave that travels from the radio tower to my radio. Visually, it looks roughly like a combination of the carrier wave and the modulating wave, which should hopefully agree with your intuition and the descriptions above.
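To make the three waves concrete, here is a minimal numerical sketch in Python. All the numbers here are my own illustrative choices (the frequencies are scaled down; real AM carriers sit in the hundreds of kHz):

```python
import math

fs = 200_000   # sample rate, Hz (chosen for illustration)
fc = 10_000    # carrier frequency, Hz (scaled down from real AM bands)
fm = 440       # modulating (sound) frequency, Hz
m  = 0.5       # modulation depth

t = [i / fs for i in range(2_000)]  # 10 ms of signal
carrier    = [math.cos(2 * math.pi * fc * ti) for ti in t]
modulating = [math.cos(2 * math.pi * fm * ti) for ti in t]
# The modulated wave: the carrier's amplitude follows the modulating wave
modulated  = [(1 + m * mo) * ca for mo, ca in zip(modulating, carrier)]
```

Plotting `modulated` reproduces the textbook picture: a fast carrier whose envelope traces the slow modulating wave.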

Now, we wish to turn this modulated wave into sound. To understand how that works we need to first understand how a loudspeaker works, as shown on this diagram (courtesy of soundonmind.com):


  • The magnet provides a fixed, constant magnetic field.
  • The signal input provides the sound wave we wish to ultimately hear.
  • When the signal input goes into the voice coil, the voice coil becomes an electromagnet.
  • The voice coil's magnetic field "pushes against" the magnet's field, based on the strength of the signal input.
  • The voice coil is attached to the diaphragm, which is basically a piece of cardboard.
  • When the voice coil moves, the diaphragm moves, and pushes air to varying extents, generating a sound wave we can hear.
    • If you've ever touched a loudspeaker that was playing music, you can actually feel the movement of the diaphragm with your fingers.
Let's step back and look at the complete picture:
  • We have a modulated wave that contains the sound information embedded in a carrier wave. This modulated wave has very high frequency, measured in hundreds or thousands of kHz, so it is not audible by the human ear.
  • We have a loudspeaker that can convert a wave into sound by vibrating a piece of cardboard.
What would happen if we feed the modulated wave directly into the loudspeaker? Think about it for a minute before reading on.

The answer is: absolutely nothing:
  • The modulated wave has a very high frequency, which means that the "peaks" and "troughs" come in rapid succession, one after the other.
  • When a "peak" arrives at the voice coil, it starts to move the voice coil out; this takes a bit of time, as the voice coil has to physically move in order to push the diaphragm and make a sound wave.
  • However, a "trough" quickly follows the "peak" and starts to pull the voice coil back in the opposite direction.
  • The "peaks" and "troughs" effectively cancel each other out as far as the diaphragm is concerned, and no sound comes out of the speaker.
What would happen if we feed the modulated wave through a diode first, and then feed the output from the diode to the speaker? A diode lets current flow in only one direction, so the modulated wave would basically be cut in "half":


Now, if we feed the demodulated wave into the speaker:
  • The first peak will start to push the voice coil out.
  • There is no trough following this peak, simply an empty space, or "absence of signal".
    • How is absence of signal different from a trough?
    • A trough is a negative signal -- it starts to pull the voice coil in the opposite direction.
    • Absence of signal is no signal -- the voice coil is left where it is, and will at most react based on its own inertia or the elasticity of the diaphragm.
  • The next peak will start to push the voice coil further out.
  • As you can see, the voice coil (and the attached diaphragm) react only to the peaks; in other words, they both move according to a wave that follows the top of the peaks.
The wave that follows the top of the peaks is our original sound wave! Look back at the first diagram to see it, or just trace the peaks above with your finger.

The coil and diaphragm end up vibrating the air in accordance with the original sound wave, producing sound we can hear. The diode demodulates the modulated wave back into audible sound.
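The whole chain -- diode rectification plus the speaker's mechanical sluggishness acting as a low-pass filter -- can be sketched in a few lines of Python. This is a rough model under assumed numbers: frequencies are scaled down, and the speaker's inertia is modeled crudely as an exponential moving average:

```python
import math

fs, fc, fm, m = 200_000, 10_000, 440, 0.5  # illustrative values
t = [i / fs for i in range(4_000)]          # 20 ms of signal
modulated = [(1 + m * math.cos(2 * math.pi * fm * ti))
             * math.cos(2 * math.pi * fc * ti) for ti in t]

# The diode: lets current flow in only one direction, cutting the wave in "half"
rectified = [max(0.0, x) for x in modulated]

# The voice coil can't follow the fast carrier; model its inertia as a
# simple low-pass filter (exponential moving average)
alpha = 0.05
audio, y = [], 0.0
for x in rectified:
    y += alpha * (x - y)
    audio.append(y)
```

`audio` ends up tracing the envelope of the peaks -- a constant offset plus the original 440 Hz tone -- which is exactly the "wave that follows the top of the peaks" described above.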

Tuesday, April 20, 2010

My first month with a Mac

About a month ago I upgraded my company laptop from a Thinkpad to a MacBook Pro. Although I've used Apple computers in the past, it's never been my "main" computer or even close to it. I've always used Thinkpads as my work laptop and felt they were and still are exceptional laptops. I'm pretty proficient with the Mac now; I still miss the Thinkpad every so often, but overall I'm happy. Here are some of my impressions thus far from the transition.

My most important requirement for a computer is that it lets me be "productive". I define productivity as "doing my job as fast as I know how using the given tool". If the machine gets in the way (by being unreliable, slow, lacking software, etc.) then that's a deal breaker and I'm not interested. I don't have particular allegiances to certain companies or hardware manufacturers, so long as I can be productive. In that respect, the Mac got off to a surprisingly good start, and I can now safely say that I'm about 95% as productive on it as I used to be on my Thinkpad. I honestly did not expect this, so it was a rather pleasant surprise.

In terms of software, the applications I run most on my Mac are:
  • Chrome = duh
  • iTerm = SSH client
  • TextMate = general purpose programming text editor
  • OpenOffice = word processing, spreadsheet, presentations
  • Eclipse/Android = work
  • iTunes = streaming radio
  • Picasa = photos
  • VLC/QuickTime/Flip4Mac = media
  • Skype = video conferencing
  • Solitaire etc. = blow off steam
All these programs work almost flawlessly. I hardly notice any difference from the Thinkpad: they run fast, they're reliable, and they're as bug-free as they are on the PC.

The only application that took some work was iTerm -- I run Emacs in screen, and getting the keybindings to work was a bit painful. In particular, by force of habit I'd often hit Command+W to copy in Emacs, which would close the current tab. Fortunately, it's easy to re-bind the keys for any menu to some other key combination, so that's what I did. This is actually a remarkable feature of OSX, and one that I'm not sure exists on Windows -- you can specify any app and any menu entry in that app (by name), and then give it another key combination. Cool!

The main drag was getting used to the new key combinations. Command+{C, X, Z, TAB} all work the same and are fairly intuitive for someone used to PCs. But that's where the similarities end:
  • I used Home/PgUp/PgDn/End/Del a lot on the PC, and the Mac is inadequate here. Having to hit Fn+{Left, Right, Up, Down} just isn't the same. Not to mention that different apps interpret the key combinations differently: sometimes Fn+arrow is recognized, other times Command+arrow.
  • I still find it confusing to have to deal with Control, Alt, and Command. I know how they work, but I still have to think about it. Why do we need 3 keys to do what a PC does with just Control and Alt?
  • Switching between applications is done via Command+TAB, while switching between multiple windows of the same application is done via Command+~. Is this really necessary? Command+TAB should be enough IMO, why the extra aggravation?
Another minor drag is connecting an external monitor. I sometimes use an external digital monitor (DVI), other times an external analog monitor (VGA). Apple decided that a single adapter wouldn't do: you need two dongles, one for DVI (DVI-D, to be precise) and another for VGA. This is crazy. The Mini DisplayPort can clearly put out both digital and analog signals, and DVI connectors are perfectly capable of relaying both. I suspect the reason Apple insisted on a DVI-D dongle is to speed the demise of VGA monitors. This is part of the general company culture (we know what's best for you, trust us), which I personally find patronizing.

Now that we got the bad out of the way, there's plenty of stuff to like:
  • The touchpad is fabulous. Scrolling with 2 fingers, clearing the desktop with 4 fingers -- it's just beautiful. Furthermore, I simply could not use the touchpad on my Thinkpad, I always used the little red knob instead; the PC touchpad is so sensitive it's useless. The Mac touchpad simply works, and works well. If you've never tried to use one, I do recommend you try it.
  • The laptop feels extremely sturdy and well built. I used to think that Thinkpads were the best built laptops out there, but the unibody design wins hands down. It's not even a comparison.
  • The keyboard feels good, the screen looks beautiful. I'd say on par with the Thinkpad.
  • The battery life is amazing. I get 4+ hours easily, which was never possible with any Thinkpad I've ever used. The magnetic power connector is really neat and has saved wire accidents a few times already.
  • The wireless configuration just works. As soon as the laptop comes out of standby, it's connected to the wireless, which is fantastic (the Thinkpad used to take a good 15-20 seconds, which gets old fast). The laptop also knows to stay connected to the current SSID, unlike the Thinkpad which would switch between SSIDs seemingly randomly if more than one "preferred" network was in range (I guess they thought this was a feature?)
  • Last but not least, going in and out of standby is remarkably fast and reliable -- it's almost instant ON/instant OFF. The Thinkpad would sometimes take 1 minute (yes, 60 seconds) to do this, and lock-up in the process. I've experienced this with every Thinkpad I've used.
All in all, I think the Mac has come a long way in the area I care the most about -- productivity. I would go so far as to say that if I had some Windows-only apps I needed to run, I'd run them in VMWare (which is very fast and nice to use on the Mac).


The Mac is a remarkably beautiful piece of engineering. The rumours are true, there's no laptop quite like it. It's also expensive, almost double the price of an equivalent Thinkpad. This is the reason I'd have trouble justifying the price if it came out of my own pocket. But I do love using it, and I can understand why some people never settle for less.

Sunday, March 14, 2010

Vacuum Tubes

Part of the reason I worked on restoring this old radio is because I wanted to learn how vacuum tubes work, as well as how radio transmission works.

The simplest vacuum tube is a diode -- a device that allows current to pass only in one direction:
  • Green element = the heater. This is a filament that gets hot when current flows through it. Its only purpose is to radiate heat onto the red element = the cathode.
  • Red element = the cathode, or the emitter. This is a filament coated in a special substance that can emit electrons when it's heated up.
  • Blue element = the anode, the collector, or the plate. This is a flat piece of metal which collects the electrons emitted by the cathode.
The key feature of the diode is that current can only go from red to blue, not the other way. In other words, if the heater is hot, then current can flow from red to blue as shown above. However, if we flip the polarity of the battery (the + would be connected to the red element), no current can flow.
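As a toy model, this one-way behavior fits in a single function (an idealized sketch of my own -- a real tube also has a forward voltage drop and internal resistance):

```python
def diode(voltage):
    """Idealized vacuum-tube diode: signal passes only when the anode
    (plate) is positive relative to the cathode."""
    return voltage if voltage > 0 else 0.0
```

So `diode(5.0)` passes the signal through, while `diode(-5.0)` gives 0: flipping the battery's polarity stops the current entirely.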

Notes:
  1. The heater circuit operates at low voltage, around 5V.
  2. The anode/cathode circuit operates at much higher voltage, frequently above 200V (in my radio, the anode/cathode rail goes up to 750V). This is because, even in a vacuum, the voltage has to be high enough to force electrons to jump across the small gap between the two leads. This high voltage makes older appliances dangerous; they can definitely kill you if you're not careful.
  3. The reason tubes are evacuated is that air molecules would get in the way of electrons "jumping" from the cathode to the anode.
  4. All vacuum tubes "wear out" in time: the substance that covers the cathode is literally stripped away and eventually stops emitting electrons entirely. This is why vacuum tubes in old appliances had to be replaced every so often (typically measured in years, but it depends on how heavily the device is used).
This is it. All vacuum tubes are variations on this theme. They contain various additional elements that serve to amplify or dampen the flow of current between the cathode and the anode, but the basic principle is the same.

So why is a diode useful? Who cares that we have a device that lets current flow in only one direction?

To answer that question we need to look at the simplest form of radio transmission: Amplitude Modulation, or AM. More on that in the next post.

Friday, March 12, 2010

Radio Frankenstein

About 6 months ago I wrote the first post about my classic radio restoration project. The project is done, the radio is functional, but I somehow never got around to writing anything more about it. I'm going to try to fix that in the next series of posts.

I read a few useful guides on how to get started restoring an old radio. Bringing a radio back to life for the first time is risky business, since the radio's components can become compromised over time, and you run a real risk of (literally) burning up the radio if you're not careful.

The most useful guide I found was from Phil's Old Radios, which recommends roughly the following:
  • Spot gross defects first: leaks, stains, smells, etc.
  • The electrolytic capacitors are probably dead: replace them.
  • Use a variac to slowly turn the radio on.
At a high level, my Philco was in reasonable shape: it was missing two vacuum tubes, but aside from that it wasn't leaking or showing other signs of gross physical damage.

I proceeded to test all the capacitors in the radio: there are about 10-15 of them in total, so it's not that hard. The most important test for a capacitor is to verify that it's not shorted; in other words, the resistance between its two leads should be infinite. Over time, electrolytic capacitors (literally) dry up and end up short-circuiting their leads, which can be catastrophic (it can burn up the power transformer, which is hard and expensive to replace).

My Philco has two kinds of capacitors:

1. Two very large electrolytic capacitors (8 µF each). These are visible on the top of the main board, and serve to smooth the DC coming out of the rectifier bridge. It is essential that these not be shorted, as I explained above. Sure enough, when I measured them, one was shorted and the other was on its way. I bought two replacement Sprague Atoms, disconnected the leads from the existing electrolytics, and connected the new capacitors in their place. Incidentally, the Spragues are very solid, high-quality capacitors, great to work with. Here's a sample:


2. Bakelite capacitors: these look like a little "vat" with 6-7 leads. Inside are 2-3 capacitors, sealed in a hard, black, gunky insulator. Fortunately, none of the bakelites were bad; they all tested good on the multimeter and ended up working in the end.

Serious radio enthusiasts try to "hide" the new capacitors inside the old electrolytic shells, to make the restoration look even more authentic. This process is called "re-capping", and it works like this:
  1. Use a Dremel to cut open the bottom of the electrolytic capacitor
  2. Gut its contents, leaving an empty aluminum shell
  3. Insert the new electrolytic capacitor in the old shell, solder each end to the two leads
  4. Glue the bottom of the electrolytic back with epoxy glue
I got so far as step 2 above: the "gunk" inside my electrolytics was a very nasty, black, foul smelling tar-like substance that wouldn't come out no matter what. I read that some people use a hair-dryer to literally melt this stuff out of the shell, but I decided that was too much for me, so I just put the electrolytics back on the circuit board (without the bottom) and left it at that.

The next step was to replace the missing vacuum tubes. I got the complete set of schematics and repair bulletins from the excellent Philco Repair Bench. It was easy to detect that the two missing tubes were:
These are ancient tubes; in fact, the names alone should give you an idea: the tube manufacturers simply started counting at 1 and worked their way up to around 100 or so, each number representing one type of tube. There were only a handful of tube manufacturers at the time (Philco, Tung-Sol, RCA), and they made the majority of all the tubes on the market.

I thought I'd be totally out of luck finding replacements for them. In fact, it turned out quite the opposite: there is a vibrant community of old-timers who sell tubes like these for cheap. I was able to find both tubes for around $15. Not only that, but I was actually able to find them NOS = "New Old Stock", which means the tubes were in mint condition -- they had never been used! Imagine: these are tubes made 80 years ago, and they still work. Here's what the rectifier looks like:


Notice how it says "Made in USA" on it. When was the last time you saw that printed on anything?

With both tubes in their sockets, now came the real test: turning on the radio. I don't have a variac, and I thought it too expensive to buy one. So instead I built a cheap home-made one: a dim-bulb tester. I first tested the radio with a large 100W bulb, and then with a smaller 45W bulb. In both cases the vacuum tubes slowly started to glow, and no weird smells or pops came out of the radio, so I decided it was safe to plug it directly into the wall.

Imagine my surprise and awe when, after 15 seconds of warm-up, the radio actually went on and started hissing! I kept a finger on the antenna lead, and was able to tune into a few AM radio stations (one was broadcasting a baseball commentary, and another had a show about aliens invading the earth).

A beautiful thing.

Tuesday, March 09, 2010

Shuffling tricks

I recently read this interesting article about an error in the EU Microsoft Browser Ballot. Briefly:
  • In the EU, Microsoft must give users a choice of browsers in Windows.
  • This is done via a message box with 5 top-choices (Firefox, Opera, Safari, MSIE, and Chrome), and a bunch of other intermediate-choices.
  • The box should display the 5 top-choices in random order.
As it turns out, the order is not exactly random! For instance, MSIE is more likely than not to appear towards the right-end of the list, and Chrome towards the left-end.

The reason for this is programmer error, as described in the article. The programmer implemented the shuffle algorithm incorrectly by assuming that JavaScript uses QuickSort, when in reality it uses some other algorithm.

It turns out that shuffling is notoriously hard to get right.


Consider the following two simple shuffling algorithms:
  1. For each entry Ex in the array: swap Ex with any entry in the array.
  2. For each entry Ex in the array: swap Ex only with itself or an entry after it.
Intuitively, it seems like either approach should produce a good shuffle. However, your intuition would be wrong: the first approach produces a non-uniform shuffle, while the second (the Fisher-Yates shuffle) produces a uniform one.

Better writers than I have explained how and why this happens. One good explanation of this is at Coding Horror: The Dangers of Naivete. The crux of the proof is:
  • The total number of possible orderings of n cards is n! (n factorial).
  • The naive algorithm, which swaps each card with any of the n cards, has n^n equally likely ways to execute.
  • n^n is not evenly divisible by n!, so the n^n execution paths cannot map evenly onto the n! orderings -- the shuffle cannot be uniform.
Prove to yourself that the second algorithm does not suffer from this problem. For extra credit, also prove that it is uniform.
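Here's a quick empirical check of both algorithms in Python. For n = 3, the naive algorithm has 3^3 = 27 equally likely execution paths but only 3! = 6 possible orderings, so some orderings must come up more often than others:

```python
import random
from collections import Counter

def naive_shuffle(a):
    # Algorithm 1: swap each entry with ANY entry -- biased
    a = a[:]
    for i in range(len(a)):
        j = random.randrange(len(a))
        a[i], a[j] = a[j], a[i]
    return a

def fisher_yates(a):
    # Algorithm 2: swap each entry only with itself or a later entry -- uniform
    a = a[:]
    for i in range(len(a) - 1):
        j = random.randrange(i, len(a))
        a[i], a[j] = a[j], a[i]
    return a

trials = 300_000
naive = Counter(tuple(naive_shuffle([0, 1, 2])) for _ in range(trials))
fair  = Counter(tuple(fisher_yates([0, 1, 2])) for _ in range(trials))
```

Running this, the naive counts cluster into two groups, roughly 4/27 and 5/27 of the trials each (a 25% spread), while the Fisher-Yates counts all hover around trials/6.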

In thinking about this problem, I was struck by one non-obvious fact: how can it be that, in the first (non-uniform) algorithm, some configurations occur more often than others? I mean, think about it:
  • Start with an array in some order.
  • For each element, randomly swap it with some other element.
Somehow, this makes some configurations occur more often than others, and reliably so! Although the algorithm is random, it favors certain configurations. This is quite surprising. Really, spend a minute to let this sink in.

I haven't found a good "intuitive" explanation for why this favoritism happens. This explanation, from stackoverflow.com, comes close: "And that's the "intuitive" explanation: in your first algorithm, earlier items are much more likely to be swapped out of place than later items, so the permutations you get are skewed towards patterns in which the early items are not in their original places."

Magician-turned-statistics-professor Persi Diaconis has done a lot of very interesting work in randomness, nature, and our perception thereof. One interesting question he answered was: how many times do you have to shuffle a deck of cards so that it's "random"? Another is: how random is a coin flip, really? Both have surprisingly non-obvious and interesting answers.

Wednesday, February 24, 2010

Vampire energy

Standby power (a.k.a. "vampire energy") is the power consumed by devices that are plugged in but not actually turned on. For example, a device with a remote control uses a trickle of power to listen for the remote's signal so that it can fully turn on. The amount of standby power is typically small. For example, a modern TV will consume 0.5W in standby, but can consume over 200W when fully turned on.


So why care about vampire energy? The reason is that when you add it up over all the electrical devices in your household, it turns out to be a really big number. In the UK in 2006, for example, an estimated 8% of all electrical energy consumed went to standby power. In the US, this would amount to almost 20 average-size power-plants!

I was curious to see how much standby power we use at home. The public utility recently upgraded our power-meter to a digital smart-meter, so it's easy to just read out the wattage consumed at any point in time. I waited until everyone was asleep and everything in the house was turned off (but still in standby). I also made sure that the fridge did not have its compressor running at the time.

The number? 85W.

Wow. This is surprisingly high! At 24 h/day, 30 days/month, this works out to about 61 kWh (around $8/month at today's energy prices). Given that we consume 300-400 kWh total per month on average, this is between 15-20% of our total consumption! For an average US household, which consumes closer to 1000 kWh/month, this would be around 5%, which is about what the British study revealed.
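The arithmetic, as a reusable snippet (the $/kWh rate is my assumption -- roughly what backs out of the $8/month figure; plug in your own utility's rate):

```python
def standby_cost(watts, rate_per_kwh=0.13):
    """Monthly energy and cost of a constant standby load.

    rate_per_kwh is an assumed price (~2010 figures); adjust as needed.
    """
    kwh_per_month = watts * 24 * 30 / 1000
    return kwh_per_month, kwh_per_month * rate_per_kwh

kwh, dollars = standby_cost(85)
# kwh ≈ 61.2 kWh/month, dollars ≈ $8/month
```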

How could we possibly consume 85W in standby mode? I broke it down by circuit, by switching the circuit breakers on and off and studying the devices connected to each circuit:
  • 29W for the DSL modem + wireless router.
  • 11W for the gas heater
  • 8W for various stuff in the bedrooms (alarm clock, cell phone chargers, night-light, etc.)
  • 5W for the garage door remote
  • 4W for the PC
  • 4W for the microwave oven
  • 3W for the washing machine
  • 3W for the electric oven
  • 18W for other stuff I couldn't track down precisely
The modem + wireless router are an interesting case: while I could turn them off at night (from, say, midnight to 6am), they do need to be on during the day given the level of internet use at our place. So the 29W really needs to be pro-rated down to 7W to indicate it's standby power for only about 6 hours. Another option is to get a combination router + modem that consumes less power by itself, but that's harder since I'm picky about the routers I like. :)

The astonishing one is the gas heater, at 11W. I suspect this is because the heater uses an electric element to fire up the gas burner, and this electric element has to be always on. There isn't much I can do about that, and I don't really feel like messing with the heater, since it is a large, expensive, and scary device.

The remaining ones are relatively small. For things like the microwave oven and washer, I could get a power strip with an external switch that fully turns off the appliance, but the cost of the power strip would easily outweigh the electricity savings for at least a few years.

I should also probably track down the remaining 18W and see where it's wasted, but I have a feeling it's going to be small amounts here and there, not one significant consumer.

The conclusion? The best way to eliminate standby power is to do it at the source (that is, design the device to consume as little standby energy as possible). The second best way is to use timers or switched power strips to force devices off. This only makes sense for certain kinds of devices (indoor devices that don't suffer problems when turned off -- unlike the gas heater). For the remaining devices, the ideal solution is to combine them all on one (or a few) power strips and force them off with a switch. If the devices are spread throughout the house (like the microwave oven in the kitchen and the washing machine in the garage), this is not really possible.

In the end there's little I can do about this. How frustrating. :(

Friday, September 11, 2009

How I chose the radio

Why did I choose this particular Philco model? The reasoning was not particularly scientific, but in retrospect it turned out to be a very good choice.

I'd spent some time browsing (and drooling over) various old radio galleries, to get a sense for what's available. Roughly speaking, here's my impression about the lay of the land:
  • Wood radios vs. plastic radios. As much as some plastic radios are beautiful pieces of Art-Deco inspired art, I just didn't like them as much as the wood radios. Some particular brands of plastic radios, those made of the Catalin brand of plastic material, are extremely sought-after and expensive -- as much as $2000 or more for a radio in good condition!
  • Cathedral vs. tombstone vs. console vs. tabletop. The console radios are quite large and unwieldy, so they were out of consideration fairly quickly. I was somewhat torn between the cathedral and tombstone designs; I like them both, but in the end I went with a cathedral since I find the design more appealing and timeless. The tabletop radios, while very nice in their own right, didn't quite have the vintage look I enjoy.


  • Manufacturer: Philco vs. Zenith vs. Motorola vs. RCA vs. Sparton. There is a very large number of manufacturers that produced radios in the "golden era", most of them gone today. In the end, I decided to go with a Philco because it was one of the best and largest radio producers, it made many of its own radio parts (similar to RCA), and I quite liked its designs (the Philco 90, in particular, is considered by some "the" classic cathedral design).
  • Last but not least: complexity. Since this was the first radio I'd be trying to restore, it was important that the electronics weren't too complicated. I definitely wanted a vacuum-tube radio (as opposed to transistor), made in the late 20's to early 30's (any earlier might have meant too much compromise on audio quality, which would have been a problem since I'd actually like to use the radio around the house).
In the end, the Philco 80 turned out to be a surprisingly good choice:
  • It's a Philco wooden cathedral design. While by no means the most ornate, it still looks very good and embodies many of the design qualities I like.
  • It's very simple electronically. It only has 4 tubes, compared to 7+ tubes for many of the "fancier" models. It has modest energy consumption (46 watts), and it is fairly well documented and understood.
  • As it turns out, the Philco 80 Jr. was meant to be one of the cheapest cathedral radios of its time, which explains the simplicity and modest power consumption noted above. The radio was introduced at $18.75, whereas other radios would start at double that.
Depending on how this restoration goes, I may look into a tombstone design next, as there are some very good looking radios in that category as well.

An old project about an old radio

For a while now I have been thinking of getting an old radio and restoring it back to life. The reasons: I love the look of old radios, it would be a fun electronics project, and it's a good excuse to learn more about how radio broadcasting works.

I finally took the plunge a few weeks ago and bought an old Philco 80 Jr from eBay. The radio is in great physical condition, especially for a radio made in 1933:


The internals also looked good (the seller made it clear that the radio was not plugged in and may or may not work upon arrival):

Upon closer inspection, the radio had a number of issues:
  • It was missing two tubes: the power pentode (tube 42) and the full-wave rectifier (tube 80).
  • The cap connector on one of the oscillator tubes (tube 36) had come off.
  • It had no power plug, just two strange wires coming out of the back of the radio.
You can see the real state of the radio in this photo I took at my workbench (notice the empty tube sockets and the broken cap):


Before getting the radio, I'd done some high-level reading about how to restore an old radio, in particular Phil's excellent guide, so I knew roughly what to expect; in particular, not to plug it in until I could run a few tests.

One of the things I find amazing about such old radios (besides the fact that they're 80 years old) is the fact that this particular radio may well have played broadcasts from WW2, from the lunar landing, and other momentous times in history. I think there is something incredibly cool about that.

This post will be the first in a series that will document how I'm going to restore this old radio and what I learn along the way. Hope you enjoy reading it as much as I will enjoy writing about it!

Thursday, September 03, 2009

Lead paint

Lead is a heavy metal with lots of industrial and commercial uses, ranging from car batteries to shielding nuclear physicists from radiation. One particularly colorful and sad chapter in the history of this otherwise boring metal is the use of lead in gasoline (tetraethyl lead) to prevent engine knock. Bill Bryson writes about it eloquently in his book A Short History of Nearly Everything.

Another common use of lead is in household paint. The reason? It looks good: paint with lead in it is shiny and pleasing to the eye. It's also quite dangerous: lead is a neurotoxin that binds to neurons and prevents the normal formation of synapses. This is particularly bad for young children, who apparently can retain up to 100% of the lead that enters their system (adults, as it turns out, are better at eliminating the bad stuff: only about 20% of the lead that enters an adult body stays, the rest is eliminated).

The most common way for children to be exposed to lead is, you guessed it, household paint. Specifically, in older homes the lead paint can naturally chip and fall on the floor, where a child can ingest it. One easy solution is to paint over the areas with lead paint, thereby "trapping" the bad stuff and preventing it from flaking or otherwise coming off the walls. In general, good housekeeping (cleaning, vacuuming, and maintaining the surfaces, whether through washing or painting over) does a lot of good, cheaply.

San Francisco's housing stock is old: some of it built before 1900, and a lot of it built after the earthquake of 1906. There are few modern houses, and even fewer built after lead paint was made illegal. Ironically, the more expensive and "fancy" a house, the more likely it is to have lead paint in it; some of the highest concentrations of lead paint in San Francisco are apparently in the fancy mansions of Pacific Heights. Besides paint, which is by far the most common place for lead, another relatively common place is the glazing on tiles, for example shower tiles. A good rule of thumb: if the paint or the tiles look nice and shiny, they're probably leaded.

It may be tempting to get very worried about lead paint and decide to "strip" it out. This is not only costly -- it involves sanding most surfaces that can contain lead paint -- but it also frees the stuff into the air, and it can become a true nightmare to get it all out.

How would you know if you have lead paint in your house? You can hire someone to test it, or you can even do some of it yourself.

One kind of test involves taking flakes of paint and sending them to a lab to be analyzed. To cover the house, the technician has to take a sample from each wall, or at least each room, since some rooms may have been painted with lead paint while others weren't. If this sounds like a pain, it is. You can also use a home lead-test kit in areas where the paint is exposed. The kit is basically a "brush" with a reactant in it: you brush it over the paint, and if lead is present, it turns red. The test is controversial and has a notable false-positive rate, but it can give you some idea.

Another kind of test uses an XRF (X-ray fluorescence) gun. The gun fires X-rays at the surface, and atoms of dense elements like lead fluoresce back at a characteristic energy that the gun detects. The gun approach is far easier, since you can just take "readings" from all the surfaces you care about and the results are instant.

As a side note, it is remarkable to what extent people have ignored common sense or made bad choices in the name of "looking good".

Repeater on the cheap

The wireless signal in our house has difficulty reaching a few spots. On top of that, the PowerBook's metal shell interferes with the (already weak) signal, and makes it impossible to connect.

One solution is to use a repeater to boost the signal. Linksys already makes a Range Expander that seems to do the right thing, but at $80 it costs more than the router itself!

DD-WRT, which I already use for QoS, can also turn a $50 router into a repeater (and more!)

The generic name for this is Linking Routers. At a high-level, there are 4 ways to link two (or more) routers together:
  • Client -- Router 1 is on the WAN, Router 2 extends the range of Router 1. The catch is that all clients can only connect to Router 2 via a wired connection. Additionally, Router 1 and Router 2 sit on different networks, so it is not possible to do things like "broadcast" between them.
  • Client Bridged -- same as Client, but the two routers sit on the same network.
  • Repeater -- same as Client, but the clients can connect to Router 2 via wired and wireless. The two routers sit on different networks.
  • Repeater Bridged -- same as Repeater, but the two routers sit on the same network.
The easiest and most convenient configuration is the Repeater. Basically, Router 2 acts as just another computer that connects to Router 1, but creates a separate network (with a separate SSID) in its area of influence.

One drawback (common to all these configurations) is that the wireless bandwidth is roughly cut in half, because of collisions within the wireless broadcast between the two routers. This is apparently unavoidable. In our case, 802.11g is already fast enough for what we use it for that this does not create noticeable slowdowns (and SpeedTest confirms it).

The Client functionality has existed for a while (it even exists in my ancient v23 on my original router), but the Repeater functionality is new, only in v24, and (confusingly) only works right in some versions but not others. Read the documentation very carefully!

While I was researching repeaters, I also briefly looked into whether an 802.11n router might help; they are supposed to be "faster" and have increased range. Unfortunately, Linksys's own N products get mixed-to-bad reviews: the range is not much increased, neither is the speed, and the installation process leaves much to be desired. Add to that the fact that, to use it, I'd have to get compatible 802.11n PCMCIA cards, and I'm not all that interested or excited about this.

DD-WRT continues to be an exceptional product, especially when combined with a solid piece of hardware like the WRT54GL.

Tuesday, September 30, 2008

Quake busting

California is earthquake country. From the iconic 1906 earthquake, to the daily reminders that we live on top of 3 major tectonic faults, earthquakes are a big part of the local consciousness. You would imagine that, after San Francisco was almost leveled in 1906, the building codes would have changed to take into account such large tremors and require stronger houses. Unfortunately, that's not the case. It's true that a majority of buildings in San Francisco are wood-frame houses with at most 3 floors, which tend to be pretty flexible and withstand earthquakes reasonably well (although they can easily succumb to fire). Still, lots of older houses are vulnerable: they can shift and fall off the foundation, or the lower stories can crumble under the weight above them. This is especially true of soft-story homes, which are prevalent in the Sunset and Richmond districts.

To reduce earthquake risk, it is possible to retrofit an older home to better resist a large earthquake. There is a lot of good evidence that such retrofitting can make a big difference. So what goes into a retrofit?
  1. Foundation work. Many older houses literally "sit" on the foundation with no additional reinforcement. When the ground shakes, it can "push" the house off the foundation. Even a small push, say a few inches, can have disastrous consequences: it can sever water, sewer, and gas pipes, start a fire, or worse. To mitigate this, the house can be bolted to the foundation, so that the ground and the house move as one.
  2. Cripple wall work. Most houses don't sit directly on the foundation, to prevent the wood from getting damaged or weakened by natural ground moisture and the like. Houses are either built on top of a narrow "crawl space", or, in the case of soft-story houses, the living quarters are built above the garage. In an earthquake, the heavy part of the house above carries a lot of inertia, and if the underside is bolted to the ground, the house above can shatter the walls underneath and fall. To mitigate this, the lower walls can be reinforced with plywood shear walls that strengthen them and essentially stiffen the house, preventing the upstairs from swinging wildly in a quake.
  3. Garage opening reinforcement. Even with bolts and shear walls, the garage door opening remains a major weak point, as it weakens one of the key structural walls of the house. This can be reinforced with a steel frame or a steel beam to give it the same strength as the other 3 walls.
Here's what it looks like (courtesy of seismicsafety.com):

For most houses, the retrofit work can be done with the help of a skilled contractor who's qualified to install bolts and shear walls. For some houses, notably those on a steep hill or with soft-story designs, the help of an engineer is needed to design the retrofit and compute the appropriate material strengths, after which the contractor can install it.

ABAG has a wealth of information to help people assess the risk of an earthquake and its impact in their specific area. In particular, the maps of shake intensity and liquefaction risk are extremely useful. In San Francisco, having your house on top of a hill can dramatically reduce both shaking and liquefaction, though it does increase the chances of a landslide. Sadly, the Marina, with its gorgeous houses and amazing views, is built on top of landfill from the 1906 earthquake, so it's very vulnerable to an earthquake; it's no accident that in the relatively minor 1989 Loma Prieta earthquake, the Marina suffered the most damage (map courtesy of thefrontsteps.com).

Another way to mitigate earthquake risk is to get earthquake insurance. In California, this is of questionable utility: if a major earthquake were to hit, the CEA would probably run out of money, at which point people say that FEMA would have to step in and help the reconstruction. During that time, people would probably have to live in temporary housing, like the ready-made earthquake shacks of yesteryear. Still, having some earthquake insurance can provide a good cushion in the case of a major natural disaster.

I personally found that learning about earthquakes made me worry about them less and take the necessary steps to increase our safety. I realize there are no guarantees, but doing even little things can go a long way towards helping deal with this reality. And I certainly wouldn't trade living in San Francisco for anywhere else, earthquakes and all.

Here comes the Sun

Renewable energy is becoming increasingly visible in our society. The recent oil and food price spikes, the impending opening of the Northwest Passage, and the coral bleaching in the oceans all point to the fact that we consume fossil fuels at unsustainable rates, and are changing our environment for the worse. Switching to renewable energy makes both economic and moral sense.
Of the many ways to produce renewable energy, solar is a big focus these days. In the US, the federal government has a generous subsidy, which looks to be extended in the following years. In California, there is an important state subsidy, and a generous San Francisco subsidy on top of that. (Lest you wonder, even foggy San Francisco gets plenty of sun.)

In California, grid electricity is produced by PG&E, mostly using natural gas. The solar incentives aim to encourage private individuals and businesses to install solar panels and feed electricity back into the grid, thereby offsetting some of their consumption. If the solar installation produces more than the individual consumes, their PG&E bill can be negative (they get a check each month). In most cases, the solar panels offset some fraction of the consumption, typically the expensive kWh's; more on this below. Here's what this looks like (video credit Solar City):

One natural question at this point is: why feed the electricity back into the grid, instead of running your house directly on it? For one, the solar panels only work during the day. To have electricity at night, you would need to install a fairly large set of batteries to store excess energy, and batteries are very costly and often an environmental nightmare (containing acid or rare metals that are expensive to synthesize or extract). Second, solar panels' energy output varies considerably between seasons (in the northern hemisphere, the sun's intensity is very different in winter vs. summer) and even between days (on a cold, stormy day with cloudy skies, the output is quite different than on a warm, sunny day). Third, most electrical appliances expect a steady electrical supply (110V, with small error margins), which is difficult to maintain even from a good battery bank. The goal of solar is not necessarily to replace the grid entirely, but rather to offset enough to substantially reduce our pollution and dependence on fossil fuels.
Solar cells convert sunlight into electrical current. The conversion is pretty inefficient: around 20% of the sunlight gets transformed into electricity. However, given that sunlight delivers on average 120 W per square meter, on an 8-hour sunny summer day we can recover almost 200 Wh of electricity from a modest 1 square meter solar cell array.

Solar panels produce DC current, but the grid operates on AC, so the output from the panels has to be converted to AC using an inverter. This exacts another efficiency penalty (around 20%), and the inverter has to be sized to match the solar array.

Solar panels are expensive (largely because they're not yet mass produced, so they can't leverage economies of scale). Absent generous subsidies, in order for them to make financial sense, they have to be sized as a function of household consumption. As of the time of this posting, PG&E uses a tiered price structure for electricity: the first 256 kWh are the cheapest, at 11c. If you consume more than 256 kWh, the price increases quickly, up to more than triple:
  • 11c/kWh - 0 -100% baseline (256 kWh)
  • 13c/kWh - 100-130% baseline
  • 22c/kWh - 130-200% baseline
  • 31c/kWh - 200-300% baseline
  • 35c/kWh - over 300% baseline
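To see how quickly the tiers bite, here is a sketch of a bill calculation using the schedule above (the marginal-tier interpretation, i.e. each block of kWh billed at its own rate, is my assumption about how the schedule applies):

```python
BASELINE_KWH = 256  # from the schedule above

# (upper bound as a multiple of baseline, $/kWh)
TIERS = [(1.0, 0.11), (1.3, 0.13), (2.0, 0.22), (3.0, 0.31),
         (float("inf"), 0.35)]

def monthly_bill(kwh):
    """Bill each block of consumption at its own tier's rate."""
    cost, lower = 0.0, 0.0
    for multiple, rate in TIERS:
        upper = multiple * BASELINE_KWH
        if kwh > lower:
            cost += (min(kwh, upper) - lower) * rate
        lower = upper
    return cost

print(round(monthly_bill(256), 2))  # 28.16, all in the cheapest tier
print(round(monthly_bill(600), 2))  # 104.85, reaching into the 31c tier
```

Notice that going from 256 to 600 kWh (2.3x the energy) roughly quadruples the bill, which is exactly why offsetting the top tiers with solar pays off first.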
For a residence, it makes sense to look at a year's worth of electricity bills and figure out the consumption pattern. In a warm area like California, odds are you'll use lots of electricity in the summer (A/C) and less in the winter, when it's cooler but not cold enough to require heating. One solar strategy is to size the array to offset only the expensive kWh in the summer (those at 30c or more). There is no magic formula here, each house is different, although in very broad terms a 2.5-3.5 kW solar array should do the trick for a lot of average-size homes.

Before embarking on a solar project, it makes sense to first optimize your consumption using the cheapest tools: replace all incandescent bulbs with CFLs, configure computers and TVs to go into standby when not used, increase the temperature of the fridge and freezer, insulate the attic to keep cool air in, and so on. This can have a dramatic effect on your electrical consumption, as much as a 30% reduction! At this point, take stock of your usage and size the solar array as a function of the new consumption numbers. Go solar!
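As a back-of-the-envelope check on the 2.5-3.5 kW figure: annual array output is roughly array size x equivalent full-sun hours x a system derate factor. The 5 sun-hours/day and 0.77 derate below are my own rough assumptions, not utility figures:

```python
def annual_output_kwh(array_kw, sun_hours_per_day=5.0, derate=0.77):
    """Rough yearly energy from a solar array, after system losses
    (inverter, wiring, dirt), under assumed average sun hours."""
    return array_kw * sun_hours_per_day * derate * 365

# A 3 kW array under these assumptions:
print(round(annual_output_kwh(3)))  # ~4216 kWh/year
```

Compare that number against the kWh you would otherwise buy in the expensive tiers to decide whether the array is sized sensibly.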