Tuesday, April 20, 2010

My first month with a Mac

About a month ago I upgraded my company laptop from a Thinkpad to a MacBook Pro. Although I've used Apple computers in the past, one has never been my "main" computer, or even close to it. I've always used Thinkpads for work and felt they were, and still are, exceptional laptops. I'm pretty proficient with the Mac now; I still miss the Thinkpad every so often, but overall I'm happy. Here are some of my impressions from the transition so far.

My most important requirement for a computer is that it lets me be "productive". I define productivity as "doing my job as fast as I know how using the given tool". If the machine gets in the way (by being unreliable, slow, lacking software, etc.) then that's a deal breaker and I'm not interested. I don't have particular allegiances to certain companies or hardware manufacturers, so long as I can be productive. In that respect, the Mac got off to a surprisingly good start, and I can now safely say that I'm about 95% as productive on it as I used to be on my Thinkpad. I honestly did not expect this, so it was a rather pleasant surprise.

In terms of software, the applications I run most on my Mac are:
  • Chrome = duh
  • iTerm = SSH client
  • TextMate = general purpose programming text editor
  • OpenOffice = word processing, spreadsheet, presentations
  • Eclipse/Android = work
  • iTunes = streaming radio
  • Picasa = photos
  • VLC/QuickTime/Flip4Mac = media
  • Skype = video conferencing
  • Solitaire etc. = blow off steam
All these programs work almost flawlessly. I notice hardly any difference from the Thinkpad: they run fast, they're reliable, and they're as bug-free as they are on the PC.

The only application that took some work was iTerm -- I run Emacs in screen, and getting the keybindings to work was a bit painful. In particular, by force of habit I'd often hit Command+W when trying to copy in Emacs, which would close the current tab instead. Fortunately, it's easy to re-bind any menu entry to some other key combination, so that's what I did. This is actually a remarkable feature of OSX, and one that I'm not sure exists on Windows -- you can pick any app and any menu entry in that app (by name), and give it a different key combination. Cool!

The main drag was getting used to the new key combinations. Command+{C, X, Z, TAB} all work the same and feel fairly intuitive to someone used to PCs. But that's where the similarities end:
  • I used Home/PgUp/PgDn/End/Del a lot on the PC, and the Mac is inadequate here. Having to hit Fn+{Left, Right, Up, Down} just isn't the same. Not to mention that different apps interpret the key combinations differently: some respond to Fn+arrow, others to Command+arrow.
  • I still find it confusing to have to deal with Control, Alt, and Command. I know how they work, but I still have to think about it. Why do we need 3 keys to do what a PC does with just Control and Alt?
  • Switching between applications is done via Command+TAB, while switching between multiple windows of the same application is done via Command+~. Is this really necessary? Command+TAB should be enough IMO; why the extra aggravation?
Another minor drag is connecting an external monitor. I sometimes use an external digital monitor (DVI), other times an external analog monitor (VGA). Apple decided that a single adapter couldn't cover both: you need two dongles, one for DVI (DVI-D, to be precise) and another for VGA. This is crazy. The Mini DisplayPort can clearly put out both digital and analog signals, and DVI connectors are perfectly capable of relaying both. I suspect the reason Apple insisted on a DVI-D dongle is to speed the demise of VGA monitors. This is part of the general company culture (we know what's best for you, trust us), which I personally find patronizing.

Now that we got the bad out of the way, there's plenty of stuff to like:
  • The touchpad is fabulous. Scrolling with 2 fingers, clearing the desktop with 4 fingers -- it's just beautiful. Furthermore, I simply could not use the touchpad on my Thinkpad and always used the little red knob instead; the PC touchpad is so sensitive it's useless. The Mac touchpad simply works, and works well. If you've never used one, I recommend you try it.
  • The laptop feels extremely sturdy and well built. I used to think that Thinkpads were the best built laptops out there, but the unibody design wins hands down. It's not even a comparison.
  • The keyboard feels good, the screen looks beautiful. I'd say on par with the Thinkpad.
  • The battery life is amazing. I get 4+ hours easily, which was never possible with any Thinkpad I've ever used. The magnetic power connector is really neat and has already saved me from a few cable accidents.
  • The wireless configuration just works. As soon as the laptop comes out of standby, it's connected to the wireless, which is fantastic (the Thinkpad used to take a good 15-20 seconds, which gets old fast). The laptop also knows to stay connected to the current SSID, unlike the Thinkpad, which would switch between SSIDs seemingly at random if more than one "preferred" network was in range (I guess they thought this was a feature?).
  • Last but not least, going in and out of standby is remarkably fast and reliable -- it's almost instant ON/instant OFF. The Thinkpad would sometimes take a minute (yes, 60 seconds) to do this, and lock up in the process. I've experienced this with every Thinkpad I've used.
All in all, I think the Mac has come a long way in the area I care the most about -- productivity. I would go so far as to say that if I had some Windows-only apps I needed to run, I'd run them in VMWare (which is very fast and nice to use on the Mac).


The Mac is a remarkably beautiful piece of engineering. The rumours are true: there's no laptop quite like it. It's also expensive, almost double the price of an equivalent Thinkpad, which is why I'd have trouble justifying it if the money came out of my own pocket. But I do love using it, and I can understand why some people never settle for less.

Sunday, March 14, 2010

Vacuum Tubes

Part of the reason I worked on restoring this old radio is that I wanted to learn how vacuum tubes work, as well as how radio transmission works.

The simplest vacuum tube is a diode -- a device that allows current to pass only in one direction:
  • Green element = the heater. This is a filament that gets hot when current flows through it. Its only purpose is to radiate heat onto the red element = the cathode.
  • Red element = the cathode, or the emitter. This is a filament coated in a special substance that can emit electrons when it's heated up.
  • Blue element = the anode, the collector, or the plate. This is a flat piece of metal which collects the electrons emitted by the cathode.
The key feature of the diode is that current can only go from red to blue, not the other way. In other words, if the heater is hot, then current can flow from red to blue as shown above. However, if we flip the polarity of the battery (so the + is connected to the red element), no current can flow.

Notes:
  1. The heater circuit operates at low voltage, around 5V.
  2. The anode/cathode circuit operates at much higher voltage, frequently above 200V (in my radio, the anode/cathode rail goes up to 750V). This is because, even in a vacuum, the voltage has to be high enough to force electrons to jump across the small gap between the two leads. This high voltage is what makes older appliances dangerous: they can definitely kill you if you're not careful.
  3. The reason tubes are evacuated is that air molecules would get in the way of the electrons "jumping" from the cathode to the anode.
  4. All vacuum tubes "wear out" over time: the substance that coats the cathode is literally stripped away and eventually stops emitting electrons entirely. This is why vacuum tubes in old appliances had to be replaced every so often (typically after years of service, depending on how heavily the device was used).
This is it. All vacuum tubes are variations on this theme. They contain various additional elements that serve to amplify or dampen the flow of current between the cathode and the anode, but the basic principle is the same.

So why is a diode useful? Who cares that we have a device that lets current flow in only one direction?

To answer that question we need to look at the simplest form of radio transmission: Amplitude Modulation, or AM. More on that in the next post.

Friday, March 12, 2010

Radio Frankenstein

About 6 months ago I wrote the first post about my classic radio restoration project. The project is done, the radio is functional, but I somehow never got around to writing anything more about it. I'm going to try to fix that in the next series of posts.

I read a few useful guides on how to get started restoring an old radio. Bringing a radio back to life for the first time is risky business: components degrade over time, and you run a real risk of (literally) burning up the radio if you're not careful.

The most useful guide I found was from Phil's Old Radios, recommending roughly the following:
  • Spot gross defects first: leaks, stains, smells, etc.
  • The electrolytic capacitors are probably dead: replace them.
  • Use a variac to slowly turn the radio on.
At a high level, my Philco was in reasonable shape: it was missing two vacuum tubes, but aside from that it wasn't leaking or showing other signs of gross physical damage.

I proceeded to test all the capacitors in the radio: there are about 10-15 of them in total, so it's not that hard. The most important test for a capacitor is to verify that it's not shorted; in other words, the resistance between its two leads should be infinite. Over time, electrolytic capacitors dry up (literally) and end up shorting their leads, which can be catastrophic (it can burn up the power transformer, which is hard and expensive to replace).

My Philco has two kinds of capacitors:

1. Two very large electrolytic capacitors (8 µF each). These are visible on the top of the main board, and serve to smooth the DC coming out of the rectifier bridge. It is essential that these not be shorted, as I explain above. Sure enough, when I measured them, one was shorted, and the other was on its way. I bought two replacement Sprague Atoms, disconnected the leads from the existing electrolytics, and connected the new capacitors in their place. Incidentally, the Spragues are very solid, high-quality capacitors, great to work with. Here's a sample:


2. Bakelite capacitors: these look like a little "vat" with 6-7 leads. Inside each are 2-3 capacitors, sealed in a hard, black, gunky insulator. Fortunately, none of the bakelites were bad: they all tested good on the multimeter and ended up working in the end.

Serious radio enthusiasts try to "hide" the new capacitors inside the old electrolytic shells, in order to make the restoration even more authentic. This is a process called "re-capping", and it works like this:
  1. Use a Dremel to cut open the bottom of the electrolytic capacitor
  2. Gut its contents, leaving an empty aluminum shell
  3. Insert the new electrolytic capacitor in the old shell, solder each end to the two leads
  4. Glue the bottom of the electrolytic back with epoxy glue
I got as far as step 2 above: the "gunk" inside my electrolytics was a very nasty, black, foul-smelling, tar-like substance that wouldn't come out no matter what. I read that some people use a hair dryer to literally melt this stuff out of the shell, but I decided that was too much for me, so I just put the electrolytics back on the circuit board (without the bottom) and left it at that.

The next step was to replace the missing vacuum tubes. I got the complete set of schematics and repair bulletins from the excellent Philco Repair Bench. It was easy to determine that the two missing tubes were the type 42 (the power pentode) and the type 80 (the full-wave rectifier).

These are ancient tubes; in fact, the names alone should give you an idea: tube manufacturers simply started counting at 1 and worked their way up to around 100 or so, each number representing one type of tube. There were only a handful of tube manufacturers at the time (Philco, Tung-Sol, RCA), and they made the majority of the tubes on the market.

I thought I'd be totally out of luck finding replacements for them. In fact, it turned out quite the opposite: there is a vibrant community of old-timers who sell tubes like these for cheap. I was able to find both tubes for around $15. Not only that, but I was actually able to find them NOS = "New Old Stock", which means the tubes were in mint condition; they had never been used! Imagine: these are tubes made almost 80 years ago and they still work. Here's what the rectifier looks like:


Notice how it says "Made in USA" on it. When was the last time you saw that printed on anything?

With both tubes in their sockets, now came the real test: turning on the radio. I don't have a variac, and I thought it too expensive to buy one. So instead I built a cheap, home-made substitute: a dim-bulb tester. I first tested the radio with a large 100W bulb, and then with a smaller 45W bulb. In both cases the vacuum tubes slowly started to glow, and no weird smells or pops came out of the radio, so I decided it was safe to plug it directly into the wall.

Imagine my surprise and awe when, after 15 seconds of warm-up, the radio actually went on and started hissing! I kept a finger on the antenna lead, and was able to tune into a few AM radio stations (one was broadcasting a baseball commentary, and another had a show about aliens invading the earth).

A beautiful thing.

Tuesday, March 09, 2010

Shuffling tricks

I recently read this interesting article about an error in the EU Microsoft Browser Ballot. Briefly:
  • In the EU, Microsoft must give users a choice of browsers in Windows.
  • This is done via a message box with 5 top choices (Firefox, Opera, Safari, MSIE, and Chrome), and a number of other, less prominent choices.
  • The box should display the 5 top-choices in random order.
As it turns out, the order is not exactly random! For instance, MSIE is more likely than not to appear towards the right-end of the list, and Chrome towards the left-end.

The reason for this is programmer error, as described in the article. The shuffle was implemented by sorting the list with a comparison function that returns a random result, on the assumption that the underlying sort (QuickSort, supposedly) would turn that into a fair shuffle. In reality, JavaScript leaves the sort algorithm up to each implementation, and sorting with a random comparator does not produce a uniform shuffle in any case.

It turns out that shuffling is notoriously hard to get right.


Consider the following two simple shuffling algorithms:
  1. For each entry Ex in the array: swap Ex with a randomly chosen entry anywhere in the array (possibly itself).
  2. For each entry Ex in the array: swap Ex with a randomly chosen entry from Ex's own position to the end of the array (possibly itself).
Intuitively, it seems like either approach should produce a good shuffle. Intuition is wrong here, though: the first approach produces a non-uniform shuffle, while the second (the classic Fisher-Yates shuffle) produces a uniform one.

Better writers than I have explained how and why this happens. One good explanation of this is at Coding Horror: The Dangers of Naivete. The crux of the proof is:
  • The total number of possible orderings (permutations) of n cards is n! (n factorial).
  • The naive algorithm makes n random choices, one per card, each among n possibilities, so it has n^n (n to the power n) equally likely ways to run.
  • n! does not divide evenly into n^n (for n = 3, that's 6 permutations against 27 equally likely runs, and 27/6 is not a whole number), so those runs cannot map evenly onto the permutations and the shuffle cannot be uniform.
Prove to yourself that the second algorithm above does not suffer from this problem. For extra credit, also prove that it is uniform.

In thinking about this problem, I was struck by one non-obvious fact: how can it be that, in the first (non-uniform) algorithm, some configurations occur more than others? I mean, think about it:
  • Start with an array in some order.
  • For each element, randomly swap it with some other element.
Somehow, this makes it so that some configurations occur more often than others, and reliably so! Although the algorithm is random, somehow it favors certain configurations. This is quite surprising indeed. Really, spend a minute to let this sink in.
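To see the favoritism concretely, here is a minimal Python sketch (my own illustration, not from the original article) that runs both shuffles on a 3-element list many times and tallies how often each ordering comes up. The "swap with anything" version reliably over- and under-produces certain orderings, while the Fisher-Yates version comes out flat:

    import random
    from collections import Counter

    def naive_shuffle(a):
        # For each position, swap with ANY position (possibly itself) -- biased.
        for i in range(len(a)):
            j = random.randrange(len(a))
            a[i], a[j] = a[j], a[i]

    def fisher_yates(a):
        # For each position, swap with itself or any LATER position -- uniform.
        for i in range(len(a)):
            j = random.randrange(i, len(a))
            a[i], a[j] = a[j], a[i]

    def tally(shuffle, trials=600_000):
        counts = Counter()
        for _ in range(trials):
            a = [0, 1, 2]
            shuffle(a)
            counts[tuple(a)] += 1
        return counts

    for name, fn in [("naive", naive_shuffle), ("fisher-yates", fisher_yates)]:
        print(name)
        counts = tally(fn)
        total = sum(counts.values())
        for perm in sorted(counts):
            print("   ", perm, round(counts[perm] / total, 3))

With 3 elements, the naive frequencies hover around 4/27 and 5/27 for different orderings, while Fisher-Yates gives each ordering about 1/6.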

I haven't found a good "intuitive" explanation for why this favoritism happens. This explanation, from stackoverflow.com, comes close: "And that's the "intuitive" explanation: in your first algorithm, earlier items are much more likely to be swapped out of place than later items, so the permutations you get are skewed towards patterns in which the early items are not in their original places."

Magician-turned-statistics-professor Persi Diaconis has done a lot of very interesting work in randomness, nature, and our perception thereof. One interesting question he answered was: how many times do you have to shuffle a deck of cards so that it's "random"? Another is: how random is a coin flip, really? Both have surprisingly non-obvious and interesting answers.

Wednesday, February 24, 2010

Vampire energy

Standby power (a.k.a. "vampire energy") is the power consumed by devices that are plugged in but not really turned on. For example, a device with a remote control uses a trickle of power to listen for the signal from the remote so it can turn fully on. The amount of standby power is typically small: a modern TV will consume 0.5W in standby, but can consume over 200W when fully turned on.


So why care about vampire energy? Because when you add it up over all the electrical devices in a household, it turns out to be a really big number. In the UK, for example, 2006 estimates are that 8% of all electrical energy consumed goes to standby power. In the US, this would amount to almost 20 average-size power plants!

I was curious to see how much standby power we use at home. The public utility recently upgraded our power-meter to a digital smart-meter, so it's easy to just read out the wattage consumed at any point in time. I waited until everyone was asleep and everything in the house was turned off (but still in standby). I also made sure that the fridge did not have its compressor running at the time.

The number? 85W.

Wow. This is surprisingly high! At 24 h/day, 30 days/month, this works out to 61 kWh (around $8/month at today's energy prices). Given that we consume 300-400 kWh total per month on average, this is between 15% and 20% of our total consumption! For an average US household, which consumes closer to 1000 kWh/month, it would be around 6%, which is in the same ballpark as the British figure above.
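As a quick sanity check on these numbers, here's a back-of-the-envelope calculation in Python (my own arithmetic, using the figures from this post; the 13c/kWh rate is an assumption that matches the "around $8/month" estimate):

    standby_watts = 85                 # measured at the meter with everything "off"
    hours_per_month = 24 * 30          # the 24 h/day, 30 days/month approximation above
    price_per_kwh = 0.13               # assumed rate

    kwh_per_month = standby_watts * hours_per_month / 1000
    print(f"{kwh_per_month:.0f} kWh/month")                    # ~61 kWh
    print(f"${kwh_per_month * price_per_kwh:.2f}/month")       # ~$8
    for total in (300, 400, 1000):                             # our usage vs. a US average
        print(f"  {kwh_per_month / total:.0%} of a {total} kWh/month bill")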

How could we possibly consume 85W in standby mode? I broke it down by circuit, by switching the circuit breakers on and off and studying the devices connected to each circuit:
  • 29W for the DSL modem + wireless router.
  • 11W for the gas heater
  • 8W for various stuff in the bedrooms (alarm clock, cell phone chargers, night-light, etc.)
  • 5W for the garage door remote
  • 4W for the PC
  • 4W for the microwave oven
  • 3W for the washing machine
  • 3W for the electric oven
  • 18W for other stuff I couldn't track down precisely
The modem + wireless router are an interesting case: while I could turn them off at night (from, say, midnight to 6am), they do need to be on during the day given the level of internet use at our place. So the 29W really needs to be pro-rated down to about 7W, since it's only standby power for about 6 hours a day. Another option is to get a combined router + modem that consumes less power by itself, but that's harder, since I'm picky about the routers I like. :)

The astonishing one is the gas heater, at 11W. I suspect this is because the heater uses an electric element to fire up the gas burner, and this element has to be always on. There isn't much I can do about that, and I don't really feel like messing with the heater since it's a large, expensive, and scary device.

The remaining ones are relatively small. For things like the microwave oven and the washer, I could get a power strip with an external switch that fully turns off the appliance, but the cost of the power strip would easily outweigh the electricity savings for at least a few years.

I should also probably track down the remaining 18W and see where it's wasted, but I have a feeling it's going to be small amounts here and there, not one significant consumer.

The conclusion? The best way to eliminate standby power is to do it at the source (that is, design the device to consume as little standby energy as possible). The second best way is to use timers or switched power strips and force devices off. This only makes sense for certain kinds of devices: indoor devices that don't suffer problems when turned off (the gas heater, for instance, doesn't qualify). For those devices, the ideal setup is to combine them all on one (or a few) power strips and force them off with a switch. If the devices are spread throughout the house (like the microwave oven in the kitchen and the washing machine in the garage), this is not really possible.

In the end there's little I can do about this. How frustrating. :(

Friday, September 11, 2009

How I chose the radio

Why did I choose this particular Philco model? The reasoning was not particularly scientific, but in retrospect it turned out to be a very good choice.

I'd spent some time browsing (and drooling over) various old radio galleries, to get a sense for what's available. Roughly speaking, here's my impression about the lay of the land:
  • Wood radios vs. plastic radios. As much as some plastic radios are beautiful pieces of Art-Deco inspired art, I just didn't like them as much as the wood radios. Some particular brands of plastic radios, those made of the Catalin brand of plastic material, are extremely sought-after and expensive -- as much as $2000 or more for a radio in good condition!
  • Cathedral vs. tombstone vs. console vs. tabletop. The console radios are quite large and unwieldy, so they were out of consideration fairly quickly. I was somewhat torn between the cathedral and tombstone designs -- I like them both -- but in the end I went with a cathedral, which I find more appealing and timeless. The tabletop radios, while very nice in their own right, didn't quite have the vintage look I was after.
  • Manufacturer: Philco vs. Zenith vs. Motorola vs. RCA vs. Sparton. A very large number of manufacturers produced radios in the "golden era", most of them gone today. In the end, I decided to go with a Philco because it was one of the best and largest radio producers of the era, it made many of its own radio parts (like RCA), and I quite liked its designs (the Philco 90, in particular, is considered by some "the" classic cathedral design).
  • Last but not least: complexity. Since this was the first radio I'd try to restore, it was important that the electronics weren't too complicated. I definitely wanted a vacuum-tube based radio (as opposed to transistor), and also wanted a radio that was made in the late 20's - early 30's (earlier than that may have meant too much compromise on audio quality, which would have been a problem as I'd actually like to use the radio around the house).
In the end, the Philco 80 turned out to be a surprisingly good choice:
  • It's a Philco wooden cathedral design. While by no means the most ornate, it still looks very good and embodies many of the design qualities I like.
  • It's very simple electronically. It only has 4 tubes, compared to 7+ tubes for many of the "fancier" models. It has modest energy consumption (46 watts), and it is fairly well documented and understood.
  • As it turns out, the Philco 80 Jr. was meant to be one of the cheapest cathedral radios of its time -- which explains the simple circuit and modest power consumption above. The radio was introduced at $18.75, whereas other radios would start at double that.
Depending on how this restoration goes, I may look into a tombstone design next, as there are some very good looking radios in that category as well.

An old project about an old radio

For a while now I have been thinking of getting an old radio and restoring it back to life. The reasons: I love the look of old radios, it would be a fun electronics project, and it's a good excuse to learn more about how radio broadcasting works.

I finally took the plunge a few weeks ago and bought an old Philco 80 Jr from eBay. The radio is in great physical condition, especially for a radio made in 1933:


The internals also looked good (the seller made it clear that the radio was not plugged in and may or may not work upon arrival):

Upon closer inspection, the radio had a number of issues:
  • It is missing two tubes: the power pentode (tube 42) and the full-wave rectifier (tube 80).
  • The cap connector on one of the oscillator tubes (the type 36) had come off.
  • It had no power plug, and there were two strange wires coming out of the back of the radio.
You can see the real state of the radio in this photo I took at my workbench (notice the empty tube sockets and the broken cap):


Before getting the radio, I'd done some high-level reading about how to restore an old radio, in particular Phil's excellent guide, so I knew roughly what to expect -- most importantly, not to plug it in until I could run a few tests.

One of the things I find amazing about such old radios (besides the fact that they're 80 years old) is the fact that this particular radio may well have played broadcasts from WW2, from the lunar landing, and other momentous times in history. I think there is something incredibly cool about that.

This post will be the first in a series that will document how I'm going to restore this old radio and what I learn along the way. Hope you enjoy reading it as much as I will enjoy writing about it!

Thursday, September 03, 2009

Lead paint

Lead is a heavy metal with lots of industrial and commercial uses, ranging anywhere from car batteries to shielding nuclear physicists from radiation. One particularly colorful and sad chapter in the history of this otherwise boring metal is the use of lead in gasoline (tetraethyl lead) to prevent engine knock. Bill Bryson writes about it eloquently in his book A Short History of Nearly Everything.

Another common use of lead is in household paint. The reason? It looks good: paint with lead in it is shiny and pleasing to the eye. It's also quite dangerous and poisonous. Lead is a neurotoxin: it binds to neurons and prevents the normal formation of synapses. This is particularly bad for young children, who apparently can retain up to 100% of the lead that enters their system (adults, as it turns out, are better at eliminating the bad stuff; only about 20% of the lead that enters an adult body stays, and the rest is eliminated).

The most common way for children to be exposed to lead is, you guessed it, household paint. Specifically, in older homes, lead paint can chip and fall on the floor, where a child can ingest it. One easy solution is to paint over the areas with lead paint, thereby "trapping" the bad stuff and preventing it from flaking or otherwise getting off the walls. In general, good housekeeping (cleaning, vacuuming, and maintaining surfaces, whether by washing or painting over) does a lot of good, cheaply.

San Francisco's housing stock is old: some of it was built before 1900, and a lot of it right after the earthquake of 1906. There are few modern houses, and even fewer built after lead paint was made illegal. Ironically, the more expensive and "fancy" a house, the more likely it is to have lead paint in it; some of the highest concentrations of lead paint in San Francisco are apparently in the fancy mansions of Pacific Heights. Besides paint, which is by far the most common place for lead, another relatively common place is the glazing on tiles, for example shower tiles. A good rule of thumb: if the paint or the tiles look nice and shiny, they're probably leaded.

It may be tempting to get very worried about lead paint and decide to "strip" it out. This is not only costly -- it involves sanding most surfaces that may contain lead paint -- but it also frees the stuff into the air, and it can become a true nightmare to get it all out.

How would you know if you have lead paint in your house? You can hire someone to test it, or you can even do some of it yourself.

One kind of test involves taking flakes of paint and sending them to a lab, where they are analyzed. To cover the house, the technician has to take a sample from each wall, or at least each room, since some rooms may have been painted with lead paint while others weren't. If this sounds like a pain, it is. You can also use a home lead-test kit in areas where the paint is exposed. The kit is basically a "brush" with a reactant in it: you brush over the paint, and if lead is present, it turns red. The test is controversial and has a non-trivial false-positive rate, but it can give you some idea.

Another kind of test uses an XRF (X-ray fluorescence) gun. The gun fires X-rays at the surface, and dense atoms -- like lead -- respond with a telltale fluorescence that the gun detects. The gun approach is far easier, since you can just take "readings" from all the surfaces you care about, and the results are instant.

As a side note, it is remarkable to what extent people have ignored common sense or made bad choices in the name of "looking good".

Repeater on the cheap

The wireless signal in our house has difficulty reaching a few spots. On top of that, the PowerBook's metal shell interferes with the (already weak) signal, and makes it impossible to connect.

One solution is to use a repeater to boost the signal. Linksys makes a Range Expander, which seems to do the right thing, but at $80 it costs more than the router itself!

DD-WRT, which I already use for QoS, can also turn a $50 router into a repeater (and more!)

The generic name for this is Linking Routers. At a high-level, there are 4 ways to link two (or more) routers together:
  • Client -- Router 1 is on the WAN, Router 2 extends the range of Router 1. The catch is that all clients can only connect to Router 2 via a wired connection. Additionally, Router 1 and Router 2 sit on different networks, so it is not possible to do things like "broadcast" between them.
  • Client Bridged -- same as Client, but the two routers sit on the same network.
  • Repeater -- same as Client, but the clients can connect to Router 2 via wired and wireless. The two routers sit on different networks.
  • Repeater Bridged -- same as Repeater, but the two routers sit on the same network.
The easiest and most convenient configuration is the Repeater. Basically, Router 2 acts as just another computer that connects to Router 1, but creates a separate network (with a separate SSID) in its area of influence.

One drawback (common to all these configurations) is that the wireless bandwidth is roughly cut in half, because of collisions between the two routers' broadcasts. This is apparently unavoidable. In our case, 802.11g is already fast enough for what we use it for that this doesn't create noticeable slowdowns (and SpeedTest confirms it).

The Client functionality has existed for a while (it even exists in my ancient v23 on my original router), but the Repeater functionality is new, only in v24, and (confusingly) only works right in some versions but not others. Read the documentation very carefully!

While I was researching repeaters, I also briefly looked into whether an 802.11n router might help -- they are supposed to be "faster" and have increased range. Unfortunately, Linksys's own N products get mixed-to-bad reviews: the range isn't much better, neither is the speed, and the installation process leaves much to be desired. Add to that the fact that, to use it, I'd have to get compatible 802.11n PCMCIA cards, and I'm not all that interested or excited about it.

DD-WRT continues to be an exceptional product, especially when combined with a solid piece of hardware like the WRT54GL.

Tuesday, September 30, 2008

Quake busting

California is earthquake country. From the iconic 1906 earthquake, to the daily reminders that we live on top of 3 major tectonic faults, earthquakes are a big part of the local consciousness. You would imagine that, after San Francisco was almost leveled in 1906, the building codes would have changed to take such large tremors into account and produce stronger houses. Unfortunately, that's not the case. It's true that the majority of buildings in San Francisco are wood-frame houses with at most 3 floors, which tend to be pretty flexible and withstand earthquakes reasonably well (although they can easily succumb to fire). Still, lots of older houses are vulnerable: they can shift and fall off the foundation, or the lower stories can crumble under the weight above them. This is especially true of soft-story homes, which are prevalent in the Sunset and Richmond districts. To reduce earthquake risk, it is possible to retrofit an older home to better resist a large earthquake. There is a lot of good evidence that such retrofitting can make a big difference. So what goes into a retrofit?
  1. Foundation work. Many older houses literally "sit" on the foundation with no additional reinforcement. When the ground shakes, it can "push" the house off the foundation. Even a small push, say a few inches, can have disastrous consequences: it can sever water, sewer, and gas pipes, start a fire, or worse. To mitigate this, the house can be bolted to the foundation, so that the ground and the house move as one.
  2. Cripple wall work. Most houses don't sit directly on the foundation, to prevent the wood from getting damaged or weakened by natural ground moisture and the like. Houses are either built on top of a narrow "crawl space", or, in the case of soft-story houses, the living quarters are built above the garage. In an earthquake, the heavy part of the house above carries a lot of inertia, and if the underside is bolted to the ground, the house above can shatter the walls underneath and fall. To mitigate this, the lower walls can be reinforced with plywood shear walls that strengthen them and essentially stiffen the house, preventing the upstairs from swinging wildly in a quake.
  3. Garage opening reinforcement. Even with bolts and shear walls, the garage door opening remains a major weak point, as it weakens one of the key structural walls of the house. This can be reinforced with a steel frame or a steel beam to give it the same strength as the other 3 walls.
Here's what it looks like (courtesy of seismicsafety.com):

For most houses, the retrofit work can be done just with the help of a skilled contractor who's qualified to install bolts and shear walls. For some houses, notably those on a steep hill or with soft-story designs, the help of an engineer is needed to design the retrofit and compute the appropriate material strengths, after which the contractor can install it.

ABAG has a wealth of information to help people assess the risk of an earthquake and its impact in their specific area. In particular, the maps of shake intensity and liquefaction risk are extremely useful. In San Francisco, having your house on top of a hill can dramatically reduce both shaking and liquefaction, though it does increase the chances of a landslide. Sadly, the Marina, with its gorgeous houses and amazing views, is built on top of landfill from the 1906 earthquake, so it's very vulnerable to an earthquake; it's no accident that in the relatively minor 1989 Loma Prieta earthquake, the Marina suffered the most damage (map courtesy of thefrontsteps.com).

Another way to mitigate earthquake risk is to get earthquake insurance. In California, this is of questionable utility: if a major earthquake were to hit, the CEA would probably run out of money, at which point people say that FEMA would have to step in and help the reconstruction. During that time, people would probably have to live in temporary housing, like the ready-made earthquake shacks of yesteryear. Still, having some earthquake insurance can provide a good cushion in the case of a major natural disaster.

I personally found that learning about earthquakes made me worry about them less and take the necessary steps to increase our safety. I realize there are no guarantees, but doing even little things can go a long way towards helping deal with this reality. And I certainly wouldn't trade living in San Francisco for anywhere else, earthquakes and all.

Here comes the Sun

Renewable energy is becoming increasingly more visible in our society. The recent oil and food price spikes, the impending opening of the Northwest Passage, and the coral bleaching in the oceans all point to the fact that we consume fossil fuels at unsustainable rates and are changing our environment for the worse. Switching to renewable energy makes both economic and moral sense.
Of the many ways to produce renewable energy, solar is a big focus these days. In the US, the federal government has a generous subsidy, which looks to be extended in the following years. In California, there is an important state subsidy, and a generous San Francisco subsidy on top of that. (Lest you wonder, even foggy San Francisco gets plenty of sun.)

In California, grid electricity is produced by PG&E, mostly using natural gas. The solar incentives aim to encourage private individuals and businesses to install solar panels and feed electricity back into the grid, thereby offsetting some of their consumption. If the solar installation produces more than the individual consumes, their PG&E bill can be negative (they get a check each month). In most cases, the solar panels offset some fraction of the consumption, typically the expensive kWh's; more on this below. Here's what this looks like (video credit Solar City):

One natural question at this point is: why feed the electricity back into the grid, instead of running your house directly on it? For one, the solar panels only work during the day. To have electricity at night, you would need to install a fairly large set of batteries to store excess energy. Batteries are very costly and often an environmental nightmare (containing acid or rare metals that are expensive to synthesize or extract). Second, solar panels' energy output varies considerably between seasons (in the northern hemisphere, the available sunlight is very different in winter vs. summer), and even between days (on a cold, stormy day with cloudy skies, the output is quite different than on a warm, sunny day). Third, most electrical appliances expect a steady supply (110V, with small error margins), which is difficult to maintain even from a good battery bank. The goal of solar is not necessarily to replace the grid entirely, but rather to offset enough of it to substantially reduce our pollution and dependence on fossil fuels.

Solar cells convert sunlight into electrical current. The conversion is pretty inefficient: around 20% of the sunlight gets transformed into electricity. Still, if direct sunlight delivers on average 120 W per square meter, then over an 8-hour sunny summer day a modest 1 square meter solar cell can recover almost 200 Wh (0.2 kWh) of electricity. Solar panels produce DC, but the grid operates on AC, so the output from the panels has to be converted to AC using an inverter. This exacts another efficiency penalty (around 20%), and the inverter has to be matched to the size of the solar array.

Solar panels are expensive (largely because they're not yet mass produced, so they can't leverage economies of scale). Absent generous subsidies, in order for them to make financial sense, they have to be sized as a function of household consumption. As of the time of this posting, PG&E uses a tiered price structure for electricity: the first 256 kWh are the cheapest, at 11c. If you consume more than 256 kWh, the price increases quickly, up to more than triple:
  • 11c/kWh - 0 -100% baseline (256 kWh)
  • 13c/kWh - 100-130% baseline
  • 22c/kWh - 130-200% baseline
  • 31c/kWh - 200-300% baseline
  • 35c/kWh - over 300% baseline
For a residence, it makes sense to look at a year's worth of electricity bills and figure out what is the consumption pattern. In a warm area like California, odds are you'll use lots of electricity in the summer (A/C) and less in the winter when it's cooler, but not cold enough to require heating. One solar strategy is to get a solar array big enough to offset only the expensive kWh in the summer (those at 30c or more). There is no magic formula here, each house is different, although in very broad terms a 2.5-3.5 kW solar array should do the trick for a lot of average-size homes. Before embarking on a solar project, it makes sense to first optimize your consumption using the cheapest tools: replace all incandescent bulbs with CFLs, configure computers and TVs to go into standby when not used, increase the temperature of the fridge and freezer, insulate the attic to keep cold air in, and so on. This can have a dramatic effect on your electrical consumption, as much as 30% reduction! At this point, take stock of your usage and size the solar array as a function of the new energy consumption numbers. Go solar!

Saturday, February 09, 2008

Gas vs. Electric

The two major sources of energy used in California homes are gas and electricity. In our home, for example, the stove uses gas, the water heater uses gas, the washer/dryer use electricity to spin and gas to heat, and the house heater uses an electric motor to push air over a metal tube heated with gas. It's no accident that California's major utility company is called PG&E: Pacific Gas & Electric.

I recently stumbled upon an interesting article about energy efficiency in home appliances. Among other things, the article recommends using an electric room heater instead of running the home gas heater. I was generally under the impression that "gas is better" because it's cheaper and pollutes less (gas burns cleaner, whereas electricity is generally produced in coal-burning power plants that are far dirtier). So I decided to do some research into the matter.

For the baseline, I looked at the January bills from 2008 and 2007 (January is the coldest month around here, when one would expect the bill to be the highest, and this past January was especially cold):
  • January 2007
    • Gas: 53 therms @ $1.13
    • Electric: 133 KWh @ $0.11
    • Total: $74
  • January 2008
    • Gas: 49 therms @ $1.14
    • Electric: 136 KWh @ $0.11
    • Total: $71
Based on our usage patterns, I would estimate that roughly half the gas we consume is for heating the air in the home. So what would it look like if we used an electric Vornado heater instead? Based on the Vornado's specifications, it uses between 750 and 1500 Watts, depending on the temperature setting. I measured the wattage, and we're between 600 and 1200 Watts, since we never set it at the max (it gets too hot). For the purposes of this simple calculation, I'll assume the average consumption is 1000 Watts = 1 KW. We use the heater for a maximum of 5 hours per night (5pm - 10pm), so in one month, that's 30 * 5h * 1KW = 150 KWh. With this in mind, our January 2008 bill would have looked like:
  • Gas: 25 therms @ $1.14
  • Electric: 286 KWh @ $0.11
  • Total: $56
That's a 22% reduction in cost! The heater would pay for itself in 3 months.

In terms of quality of life, we've started spending more time in one room, with the door closed, in order to keep the heat inside. The Vornado works best in such a closed environment, and it often heats up the room far more than the gas heater. It takes a bit longer to get the room warm, but once it's warm, it takes very little electricity to keep it that way. I spoke to some colleagues at work about this, and the general consensus is that if you can thermally insulate individual rooms in the house, it makes sense to heat them individually using electricity; otherwise a gas heater is more efficient and economical for the entire house.

What about the environmental impact? It turns out that in California, most electricity is also produced using natural gas, which is a reasonably clean way to do it. Some electric energy is lost in transmission, but it appears to be reasonably small (about 10% on average). The big upside, however, is that California is aggressively pursuing electricity generation from renewable sources: solar, wind, and so on, in which case electricity is definitely the way to go. You also have the option to offset the carbon used by your consumption, which is a nice bonus. In typical maverick fashion, San Francisco wants to become fully energy independent, and there are projects underway for that.

In my case, it seems that electricity makes more sense than gas. At the end of the day, however, the most important thing is to be aware, measure the impact, and think about what makes sense. More on that, however, in another post.
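For reference, here is the heater arithmetic from above as a minimal Python sketch (all figures are the ones assumed in this post):

    heater_kw = 1.0          # assumed average draw of the Vornado
    hours_per_night = 5
    days_per_month = 30
    electric_price = 0.11    # $/kWh, from the January bill

    heater_kwh = heater_kw * hours_per_night * days_per_month
    print(f"heater energy: {heater_kwh:.0f} kWh/month")                   # 150 kWh
    print(f"added to January's 136 kWh: {136 + heater_kwh:.0f} kWh")      # 286 kWh
    print(f"electric cost of the heater: ${heater_kwh * electric_price:.2f}/month")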

Monday, January 28, 2008

IPv6: here already?

With the phenomenal recent growth of the internet, the 32-bit IP address space (IPv4) is starting to run out. There are efforts to address this, like subdividing or repossessing large blocks already in use, or the ubiquitous use of NATs, but they're only slowing the inevitable. The IPv6 standard has been around since circa 1996, and it has recently started to get more visible traction: all major operating systems (Windows XP/Vista, Linux, and OSX) support IPv6 natively, there exist IPv6 networks that you can access via dedicated tunnels ("tunnel brokers"), and there are rumors that some companies use IPv6 internally. Some ISPs (like Sonic.net) even provide such dedicated IPv6 tunnels for free, in an effort to encourage experimentation. Although no major ISP has yet migrated to IPv6, I believe it's only a matter of time.

Discussing the in-depth technical aspects of IPv6 is much beyond the scope of this blog post, though I do encourage the adventurous reader to take a look at some of the resources linked above. Instead, this is more of a high-level overview for how to get IPv6 up and running on your home LAN. I recommend using the excellent DD-WRT firmware's IPv6 support, upon which this post is roughly based.

IPv4 addresses are 32 bits in size, which gives us a total of about 4 billion different addresses. The effective number of machines that can be reached in practice is smaller (since many addresses were given out in "blocks" to institutions that don't use the entire block), but it still gives an idea of the upper bound. In contrast, IPv6 addresses are 128 bits in size. This is an enormously large number, and it's hard to provide a good metaphor for how large it really is. IPv6 would allow more than a trillion addresses for each square centimeter of our planet, or over a trillion trillion addresses for each of the 6.5 billion people currently on it. Hopefully, this should be enough for the foreseeable future.
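For a rough sense of the scale involved, here's a quick back-of-the-envelope calculation in Python (my own arithmetic; the surface-area and population figures are approximations):

    ipv4_addresses = 2 ** 32
    ipv6_addresses = 2 ** 128

    earth_surface_cm2 = 5.1e18   # ~510 million square kilometers, in square centimeters
    people = 6.5e9               # the world population figure used above

    print(f"IPv4 total: {ipv4_addresses:,}")                                       # ~4.3 billion
    print(f"IPv6 per square centimeter: {ipv6_addresses / earth_surface_cm2:.1e}") # ~6.7e19
    print(f"IPv6 per person: {ipv6_addresses / people:.1e}")                       # ~5.2e28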

The internet at large is still IPv4, but you can configure your internal LAN (behind your NAT'ed router) to run over IPv6. All the machines on the LAN would talk amongst themselves using IPv6. However, the router is still connected to an IPv4 network (your ISP). Therefore, the LAN traffic must have a way to travel across IPv4 to its destination, regardless of whether the destination is an IPv4 or an IPv6 network. This is done in one of two major ways:

1. Use 6to4

This is a protocol which assigns each IPv4 address a special IPv6 equivalent (which, by convention, always starts with the prefix "2002:"). Your router would package all IPv6 packets, regardless of destination, into IPv4 equivalents, and send them to a relay gateway. This relay gateway is a machine that sits at the edge between an IPv4 and an IPv6 network, and looks at the destination address of incoming traffic:
  • If the address is an IPv6 address on the relay's IPv6 network, it routes it using IPv6 routing.
  • If the address is an IPv6 address on another IPv6 network, it packages it into an IPv4 packet, sends it over IPv4 to the relay gateway of the destination IPv6 network, which unpacks it, and routes it.
  • If the address is a 6to4 IPv6 address (corresponding to an IPv4 address), it unpacks it, sends the traffic over the IPv4 network, collects the response, and sends it back to your router.
Confused? Here's a diagram:

One risk with the 6to4 protocol is that the nearest relay gateway may be far away, which negatively impacts performance. If you're careful, you can select a 6to4 relay that's reasonably close to you, and you should be fine.
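For the curious, the 6to4 mapping itself is easy to compute: the IPv6 prefix is just "2002:" followed by the 32 bits of your public IPv4 address, which gives you a /48 to number your LAN with. Here is a minimal Python sketch (the example IPv4 address is made up):

    import ipaddress

    def sixto4_prefix(public_ipv4: str) -> ipaddress.IPv6Network:
        # 2002::/16, then the 32 bits of the IPv4 address, then 80 zero bits.
        v4 = int(ipaddress.IPv4Address(public_ipv4))
        return ipaddress.IPv6Network(((0x2002 << 112) | (v4 << 80), 48))

    print(sixto4_prefix("192.0.2.1"))   # -> 2002:c000:201::/48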

2. Use a tunnel broker

A tunnel broker is a dedicated link from your router's gateway to an IPv6 gateway. This makes life a lot simpler for your router: all it has to do is take IPv6 packets from the LAN, package them as IPv4, and push them into the tunnel. Provided that the tunnel isn't too slow (that is, too far away), the tunnel's end-point will do all the hard work of routing IPv6 and IPv4 packets correctly. Here is a diagram that describes the process visually (note that the Tunnel Server can either route directly over IPv6, or ask the Tunnel Broker to route over IPv4; your LAN uses only a single IPv6 tunnel to connect to the Tunnel Server).

Some tunnels require that your router have a static address, while others can handle dynamic addresses (via the AICCU utility). The two big IPv6 tunnel providers in the US are SixXS and Hurricane Electric.

If you succeed in setting up IPv6 on your LAN, you can connect to this site, and watch the turtle dance!

"All this work to see a silly turtle dance on the screen?" I hear you ask?

Although the IPv6 internet is significantly smaller than the IPv4 internet at this point, this might be a good exercise for you to learn how IPv6 works. All indications are that we will migrate to IPv6 before too long, so consider this a way to get your feet wet. There is no downside to running your LAN over IPv6: your operating system, most modern browsers (Firefox, IE, etc.), and network applications (Thunderbird, etc.) natively support it. The performance penalty for running over tunnels should not be prohibitive, and most DNS servers know to serve both IPv4 and IPv6 addresses for multi-homed servers, to make them accessible from both networks. IPv6 is, in some ways, much simpler and more elegant than IPv4, so for those curious to learn more, this is a great opportunity to look under the covers.