Tuesday, September 30, 2008

Quake busting

California is earthquake country. From the iconic 1906 earthquake to the daily reminders that we live on top of 3 major tectonic faults, earthquakes are a big part of the local consciousness. You would imagine that, after San Francisco was almost leveled in 1906, the building codes would have changed to account for such large tremors and require stronger houses. Unfortunately, that's not the case. It's true that a majority of buildings in San Francisco are wood-frame houses with at most 3 floors, which tend to be pretty flexible and withstand earthquakes reasonably well (although they can easily succumb to fire). Still, lots of older houses are vulnerable: they can shift and fall off the foundation, or the lower stories can crumble under the weight above them. This is especially true of soft-story homes, which are prevalent in the Sunset and Richmond districts. To reduce earthquake risk, it is possible to retrofit an older home to better resist a large earthquake, and there is a lot of good evidence that such retrofitting can make a big difference. So what goes into a retrofit?
  1. Foundation work. Many older houses literally "sit" on the foundation with no additional reinforcement. When the ground shakes, it can "push" the house off the foundation. Even a small push, say a few inches, can have disastrous consequences: it can sever water, sewer, and gas pipes, start a fire, or worse. To mitigate this, the house can be bolted to the foundation, so that the ground and the house move as one.
  2. Cripple wall work. Most houses don't sit directly on the foundation, to prevent the wood from getting damaged or weakened by natural ground moisture and the like. Houses are either built on top of a narrow "crawl space", or, in the case of soft-story houses, the living quarters are built above the garage. In an earthquake, the heavy part of the house above carries a lot of inertia, and if the underside is bolted to the ground, the house above can shatter the walls underneath and fall. To mitigate this, the lower walls can be reinforced with plywood shear walls that strengthen them and essentially stiffen the house, preventing the upstairs from swinging wildly in a quake.
  3. Garage opening reinforcement. Even with bolts and shear walls, the garage door opening remains a major weak point, as it weakens one of the key structural walls of the house. This can be reinforced with a steel frame or a steel beam to give it the same strength as the other 3 walls.
Here's what it looks like (courtesy of seismicsafety.com):

For most houses, the retrofit work can be done just with the help of a skilled contractor who's qualified to install bolts and shear walls. For some houses, notably those on a steep hill or with soft-story designs, the help of an engineer is needed to design the retrofit and compute the appropriate material strengths, after which the contractor can install it.

ABAG has a wealth of information to help people assess the risk of an earthquake and its impact in their specific area. In particular, the maps of shake intensity and liquefaction risk are extremely useful. In San Francisco, having your house on top of a hill can dramatically reduce both shaking and liquefaction, though it does increase the chances of a landslide. Sadly, the Marina, with its gorgeous houses and amazing views, is built on top of landfill from the 1906 earthquake, so it's very vulnerable to an earthquake; it's no accident that in the relatively minor 1989 Loma Prieta earthquake, the Marina suffered the most damage (map courtesy of thefrontsteps.com).

Another way to mitigate earthquake risk is to get earthquake insurance. In California, this is of questionable utility: if a major earthquake were to hit, the CEA would probably run out of money, at which point FEMA would presumably have to step in and help with the reconstruction. In the meantime, people would probably have to live in temporary housing, like the ready-made earthquake shacks of yesteryear. Still, having some earthquake insurance can provide a good cushion in the case of a major natural disaster.

I personally found that learning about earthquakes made me worry about them less and take the necessary steps to increase our safety. I realize there are no guarantees, but doing even little things can go a long way towards helping deal with this reality. And I certainly wouldn't trade living in San Francisco for anywhere else, earthquakes and all.

Here comes the Sun

Renewable energy is becoming increasingly visible in our society. The recent oil and food price spikes, the impending opening of the Northwest Passage, and the coral bleaching in the oceans all point to the fact that we consume fossil fuels at unsustainable rates and are changing our environment for the worse. Switching to renewable energy makes both economic and moral sense.
Of the many ways to produce renewable energy, solar is a big focus these days. In the US, the federal government has a generous subsidy, which looks to be extended in the coming years. In California, there is an important state subsidy, and a generous San Francisco subsidy on top of that. (Lest you wonder, even foggy San Francisco gets plenty of sun.) In California, grid electricity is provided by PG&E, mostly generated from natural gas. The solar incentives aim to encourage private individuals and businesses to install solar panels and feed electricity back into the grid, thereby offsetting some of their consumption. If the solar installation produces more than the individual consumes, their PG&E bill can be negative (they get a check each month). In most cases, though, the solar panels offset only some fraction of the consumption, typically the expensive kWh's; more on this below. Here's what this looks like (video credit Solar City):

One natural question at this point is: why feed the electricity back into the grid, instead of running your house directly on it? For one, the solar panels only work during the day. To have electricity at night, you would need to install a fairly large set of batteries to store excess energy. Batteries are very costly and often an environmental nightmare (containing acid or rare metals that are expensive to synthesize or extract). Second, solar panels' energy output varies considerably between seasons (in the northern hemisphere, the sun's intensity is very different in winter vs. summer), or even between days (on a cold, stormy day with cloudy skies, the output is quite different than on a warm, sunny day). Third, most electrical appliances expect a steady electrical supply (110 V, with small error margins), which is difficult to maintain even from a good battery bank. The goal of solar is not necessarily to replace the grid entirely, but rather to offset enough of it to substantially reduce our pollution and dependence on fossil fuels.

Solar cells convert sunlight into electrical current. The conversion is pretty inefficient: around 20% of the sunlight gets transformed into electricity. Direct sunlight delivers roughly 1,000 W per square meter at midday, so at 20% efficiency a modest 1 square meter solar cell produces about 200 W, or roughly 1.6 kWh over an 8-hour sunny summer day. Solar panels produce DC current, but the grid operates on AC, so the output from solar panels has to be converted to AC using an inverter. This exacts another efficiency penalty (around 20%), and the inverter has to be matched to the size of the solar array.

Solar panels are expensive (largely because they're not yet mass-produced, so they can't leverage economies of scale). Absent generous subsidies, for them to make financial sense they have to be sized as a function of household consumption. As of the time of this posting, PG&E uses a tiered price structure for electricity: the first 256 kWh (the "baseline") are the cheapest, at 11c. If you consume more than the baseline, the price increases quickly, up to more than triple:
  • 11c/kWh - 0-100% of baseline (256 kWh)
  • 13c/kWh - 100-130% of baseline
  • 22c/kWh - 130-200% of baseline
  • 31c/kWh - 200-300% of baseline
  • 35c/kWh - over 300% of baseline
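To make the tiers concrete, here's a minimal Python sketch of how such a tiered bill adds up. The rates and the 256 kWh baseline are the ones quoted above; your actual baseline varies by territory and season:

```python
# Sketch of PG&E's tiered pricing as described in this post.
TIERS = [  # (upper bound as a multiple of baseline, $/kWh)
    (1.0, 0.11),
    (1.3, 0.13),
    (2.0, 0.22),
    (3.0, 0.31),
    (float("inf"), 0.35),
]

def monthly_bill(usage_kwh: float, baseline_kwh: float = 256) -> float:
    """Compute the electric bill by filling each tier in order."""
    bill, prev_cap = 0.0, 0.0
    for multiple, rate in TIERS:
        cap = multiple * baseline_kwh
        in_tier = max(0.0, min(usage_kwh, cap) - prev_cap)
        bill += in_tier * rate
        prev_cap = cap
    return bill

print(f"${monthly_bill(256):.2f}")  # $28.16 -- all baseline
print(f"${monthly_bill(800):.2f}")  # the marginal kWh's cost ~3x the first ones
```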
For a residence, it makes sense to look at a year's worth of electricity bills and figure out your consumption pattern. In a warm area like California, odds are you'll use lots of electricity in the summer (A/C) and less in the winter, when it's cooler but not cold enough to require heating. One solar strategy is to get a solar array big enough to offset only the expensive kWh's in the summer (those at 30c or more). There is no magic formula here, since each house is different, although in very broad terms a 2.5-3.5 kW solar array should do the trick for a lot of average-size homes. Before embarking on a solar project, it makes sense to first optimize your consumption using the cheapest tools: replace all incandescent bulbs with CFLs, configure computers and TVs to go into standby when not in use, raise the temperature setting of the fridge and freezer slightly, insulate the attic to keep the conditioned air inside, and so on. This can have a dramatic effect on your electrical consumption, with reductions of as much as 30%! At this point, take stock of your usage and size the solar array as a function of the new consumption numbers. Go solar!
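As a quick back-of-the-envelope before you call an installer, here's how such sizing might look in Python. The sun-hour and system-efficiency figures below are my own illustrative assumptions, not installer quotes:

```python
# Back-of-the-envelope solar array sizing. Both constants are assumptions;
# adjust them to your own bills and local conditions.
SUN_HOURS_PER_DAY = 5      # usable full-sun hours, rough yearly average
SYSTEM_EFFICIENCY = 0.80   # inverter and wiring losses, per the post

def array_size_kw(monthly_kwh_to_offset: float) -> float:
    """DC array rating (kW) needed to generate a given monthly kWh."""
    daily_kwh = monthly_kwh_to_offset / 30
    return daily_kwh / (SUN_HOURS_PER_DAY * SYSTEM_EFFICIENCY)

# Example: offset the 350 kWh/month you'd otherwise buy at the 31-35c tiers.
print(f"{array_size_kw(350):.1f} kW")  # ~2.9 kW, in line with the 2.5-3.5 kW rule of thumb
```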

Saturday, February 09, 2008

Gas vs. Electric

The two major sources of energy used in California homes are gas and electricity. In our home, for example, the stove uses gas, the water heater uses gas, the washer/dryer use electricity to spin and gas to heat, and the house heater uses an electric motor to push air over a metal tube heated with gas. It's no accident that California's major utility company is called PG&E: Pacific Gas & Electric.

I recently stumbled upon an interesting article about energy efficiency in home appliances. Among other things, the article recommends using an electric room heater instead of running the home gas heater. I was generally under the impression that "gas is better" because it's cheaper and pollutes less (gas burns cleaner, whereas electricity is often produced in coal-burning power plants that are far dirtier). So I decided to do some research into the matter. For the baseline, I looked at the January bills from 2008 and 2007 (January is the coldest month around here, when one would expect the bill to be the highest, and this past January was especially cold):
  • January 2007
    • Gas: 53 therms @ $1.13
    • Electric: 133 kWh @ $0.11
    • Total: $74
  • January 2008
    • Gas: 49 therms @ $1.14
    • Electric: 136 kWh @ $0.11
    • Total: $71
Based on our usage patterns, I would estimate that roughly half the gas we consume goes to heating the air in the home. So what would it look like if we used an electric Vornado heater instead? Based on the Vornado's specifications, it uses between 750 and 1500 Watts, depending on the temperature setting. I measured the wattage, and we're between 600 and 1200 Watts, since we never set it at the max (it gets too hot). For the purposes of this simple calculation, I'll assume the average consumption is 1000 Watts = 1 kW. We use the heater for a maximum of 5 hours per night (5pm - 10pm), so in one month, that's 30 * 5h * 1 kW = 150 kWh. With this in mind, our January 2008 bill would have looked like:
  • Gas: 25 therms @ $1.14 = $28.50
  • Electric: 286 kWh @ $0.11 = $31.46
  • Total: $60
That's roughly a 15% reduction in cost ($71 down to about $60)! The heater would pay for itself within a few months. In terms of quality of life, we've started spending more time in one room, with the door closed, in order to keep the heat inside. The Vornado works best in such a closed environment, and it often heats the room far more than the gas heater does. It takes a bit longer to get the room warm, but once it's warm, the heater consumes very little electricity to keep it that way. I spoke to some colleagues at work about this, and the general consensus is that if you can thermally insulate individual rooms in the house, it makes sense to heat them individually using electricity; otherwise, a gas heater is more efficient and economical for the entire house.

What about the environmental impact? It turns out that in California, most electricity is also produced using natural gas, which is a reasonably clean way to do it. Some electric energy is lost in transmission, but the loss appears to be reasonably small (around 10% on average). The big upside, however, is that California is aggressively pursuing electricity generation from renewable sources: solar, wind, and so on, in which case electricity is definitely the way to go. You also have the option to offset the carbon footprint of your consumption, which is a nice bonus. In typical maverick fashion, San Francisco wants to become fully energy independent, and there are projects underway for that.

In my case, it seems that electricity makes more sense than gas. At the end of the day, however, the most important thing is to be aware, measure the impact, and think about what makes sense. More on that, however, in another post.
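For the curious, here's the per-unit-of-heat comparison behind that consensus, sketched in Python. The 80% furnace efficiency is my assumption (plausible for an older forced-air unit); a plug-in resistive heater converts essentially all of its electricity to heat:

```python
# Cost of a unit of delivered heat from gas vs. electricity,
# using the rates from my January 2008 bill.
KWH_PER_THERM = 29.3           # energy content of one therm of natural gas
FURNACE_EFFICIENCY = 0.80      # assumed; resistive electric heat is ~1.0

gas_cost_per_kwh_heat = 1.14 / (KWH_PER_THERM * FURNACE_EFFICIENCY)
electric_cost_per_kwh_heat = 0.11 / 1.00

print(f"gas:      ${gas_cost_per_kwh_heat:.3f} per kWh of heat")      # ~$0.049
print(f"electric: ${electric_cost_per_kwh_heat:.3f} per kWh of heat") # $0.110
```

Note that per unit of heat, gas is still the cheaper fuel; the electric heater only wins because it heats one small, closed room instead of the whole house.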

Monday, January 28, 2008

IPv6: here already?

With the phenomenal recent growth of the internet, the 32-bit IP address space (IPv4) is starting to run out. There are efforts to address this, like subdividing or repossessing large blocks already in use, or the ubiquitous use of NATs, but they're only delaying the inevitable. The IPv6 standard has been around since circa 1996, and it has recently started to gain visible traction: all major operating systems (Windows XP/Vista, Linux, and OS X) support IPv6 natively, there are IPv6 networks that you can access via dedicated tunnels ("tunnel brokers"), and there are rumors that some companies use IPv6 internally. Some ISPs (like Sonic.net) even provide such dedicated IPv6 tunnels for free, in an effort to encourage experimentation. Although no major ISP has yet migrated to IPv6, I believe it's only a matter of time.

Discussing the in-depth technical aspects of IPv6 is much beyond the scope of this blog post, though I do encourage the adventurous reader to take a look at some of the resources linked above. Instead, this is more of a high-level overview for how to get IPv6 up and running on your home LAN. I recommend using the excellent DD-WRT firmware's IPv6 support, upon which this post is roughly based.

IPv4 addresses are 32 bits wide, which gives us a total of about 4 billion distinct addresses. The effective number of machines that can be addressed in practice is smaller (since many addresses were given out in "blocks" to institutions that don't use the entire block), but it still gives an idea of the upper bound. In contrast, IPv6 addresses are 128 bits wide. This is an enormously large number, and it's hard to provide a good metaphor for how large it really is. IPv6 could allocate a trillion addresses for each square centimeter of our planet, or over a trillion trillion addresses for each of the 6.5 billion people currently on it. Hopefully, this should be enough for the foreseeable future.
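If you want to check that arithmetic yourself, it only takes a few lines of Python (and the actual numbers come out even larger than the metaphors above):

```python
# Sanity-checking the size of the IPv6 address space.
total = 2**128       # ~3.4e38 addresses

earth_cm2 = 5.1e18   # Earth's surface area in square centimeters
people = 6.5e9       # world population circa 2008

print(f"{total:.2e} addresses in total")
print(f"{total / earth_cm2:.2e} per square centimeter of Earth")  # ~6.7e19
print(f"{total / people:.2e} per person")                         # ~5.2e28
```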

The internet at large is still IPv4, but you can configure your internal LAN (behind your NAT'ed router) to run over IPv6. All the machines on the LAN would talk amongst themselves using IPv6. However, the router is still connected to an IPv4 network (your ISP), so the LAN traffic must have a way to travel across IPv4 to its destination, regardless of whether that destination is an IPv4 or an IPv6 network. This is done in one of two major ways:

1. Use 6to4

This is a protocol that assigns each IPv4 address a special IPv6 prefix (which, by convention, always starts with "2002:"). Your router encapsulates all outgoing IPv6 packets, regardless of destination, into IPv4 packets, and sends them to a relay gateway. This relay gateway is a machine that sits at the edge between an IPv4 and an IPv6 network, and looks at the destination address of incoming traffic:
  • If the address is an IPv6 address on the relay's IPv6 network, it routes it using IPv6 routing.
  • If the address is an IPv6 address on another IPv6 network, it packages it into an IPv4 packet, sends it over IPv4 to the relay gateway of the destination IPv6 network, which unpacks it, and routes it.
  • If the address is a 6to4 IPv6 address (corresponding to an IPv4 address), it unpacks it, sends the traffic over the IPv4 network, collects the response, and sends it back to your router.
Confused? Here's a diagram:

One risk with the 6to4 protocol is that the nearest relay gateway may be far away, which negatively impacts performance. If you're careful, you can select a 6to4 relay that's reasonably close to you, and you should be fine.
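The address mapping itself is mechanical: the 32 bits of your public IPv4 address are placed right after the 2002: prefix, yielding a /48 prefix for your whole site. Here's a quick Python illustration (the example address is just a documentation placeholder):

```python
import ipaddress

def sixto4_prefix(ipv4: str) -> ipaddress.IPv6Network:
    """Derive the 6to4 /48 prefix (2002::/16 + 32-bit IPv4) for a public IPv4 address."""
    v4 = int(ipaddress.IPv4Address(ipv4))
    prefix = (0x2002 << 112) | (v4 << 80)  # 16 prefix bits + 32 IPv4 bits = /48
    return ipaddress.IPv6Network((prefix, 48))

print(sixto4_prefix("192.0.2.1"))  # 2002:c000:201::/48
```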

2. Use a tunnel broker

A tunnel broker provides a dedicated link from your router's gateway to an IPv6 gateway. This makes life a lot simpler for your router: all it has to do is take IPv6 packets from the LAN, package them as IPv4, and push them into the tunnel. Provided that the tunnel isn't too slow (that is, too far away), the tunnel's endpoint will do all the hard work of routing IPv6 and IPv4 packets correctly. Here is a diagram that describes the process visually (note that the Tunnel Server can either directly route over IPv6, or ask the Tunnel Broker to route over IPv4; your LAN uses only a single IPv6 tunnel to connect to the Tunnel Server).

Some tunnels require that your router have a static address, while others can handle dynamic addresses (via the AICCU utility). The two big IPv6 tunnel providers in the US are SixXS and Hurricane Electric.

If you succeed in setting up IPv6 on your LAN, you can connect to this site, and watch the turtle dance!

"All this work to see a silly turtle dance on the screen?" I hear you ask?

Although the IPv6 internet is significantly smaller than the IPv4 one at this point, this is a good exercise for learning how IPv6 works. All indications are that we will migrate to IPv6 before too long, so consider this a way to get your feet wet. There is no downside to running your LAN over IPv6: your operating system, most modern browsers (Firefox, IE, etc.), and most network applications (Thunderbird, etc.) natively support it. The performance penalty of running over tunnels should not be prohibitive, and most DNS servers know to serve both IPv4 and IPv6 addresses for multi-homed servers, making them accessible from both networks. IPv6 is, in some ways, much simpler and more elegant than IPv4, so for those curious to learn more, this is a great opportunity to look under the covers.

Thursday, January 24, 2008

QuickCam Pro 9000: Skype in HD

Many friends and family members live abroad or just plain far away. In order to keep in touch, I frequently use Skype, and in the past few years or so, Skype Video. I was initially hooked by Skype's amazing voice quality (and the fact that it was free!), but Skype Video adds a whole new dimension to everything. Bandwidth speeds have gotten good enough (even across entire oceans and continents) that it's now feasible to have a full-screen Skype Video conference call and literally "hang out". For me personally, this must be one of the biggest and most profound changes that technology brought about in the past few years.

Up until recently, I've been using a Creative WebCam Live! for video calls. The camera does a pretty decent job, but it has two important shortcomings: the images are a bit darker than I'd like, and its field of view is narrow enough that it has difficulty fitting more than one person, or even capturing what I'm doing unless I sit pretty still. So I decided a few weeks ago to look for a webcam with a wide-angle lens. The two main contenders appear to be Creative Live! Cam Optia Pro and Logitech's QuickCam Pro 9000.

The reviews for the Optia are pretty mixed, and echo some of the same issues I'd had with my current Live! model. In contrast, the QuickCam not only has a wider field of view, but also an exceptional Zeiss lens and a very good reputation in on-line reviews. Both cameras are labeled as "HD-quality" (probably to capitalize on the HD buzz that's going around). Off the bat, I'd actually consider this a bad thing, since it puts considerably more strain on internet bandwidth, but I figured there might be a software setting to dial it down if the video conference starts to stutter. The QuickCam is a bit more expensive than the Optia, but with some luck (a timely Amazon rebate), it came out cheaper, so I went for it.

Upon installation, the camera behaves very well. The image quality is very crisp (Zeiss never disappoints), and the angle is nice and wide (it easily fits 2 or 3 people in the picture). But the most amazing feature is Logitech's "RightLight 2" technology, which makes images far brighter and easier on the eye. In almost all light conditions (natural, halogen, evening, etc.) the camera delivers stunning, well-balanced images automatically. If you've ever had problems with webcams in low-light conditions, you really owe it to yourself to try this camera: it "just works". The built-in microphone is a nice touch that removes one more wire and device from my desk, and the noise- and echo-canceling technology is very good as well.

When it comes to Skype, the camera's drivers auto-detect Skype and auto-configure it to use the webcam as its video and audio source (a nice touch). Unfortunately, the default software that comes on the CD is incredibly buggy: it crashes Skype reliably and is very hard to get right. After an hour of frustration, I found an updated version of the software on-line that is far more stable and works as intended with Skype. One remaining, mildly annoying tidbit is that right after you start a video call, the camera occasionally "resets" a few seconds in (which really means it skips about a second's worth of video), but after that it's solid.

Although the camera is HD, Skype doesn't miss a beat: the video calls are crisp and clear, and feel much more "high-definition" than with the previous webcam. I imagine that Skype automatically dials down the video quality to keep up with network traffic, but if the bandwidth is there, it will use it, and what a difference it makes!

All in all, I'm very happy with this camera, and I highly recommend it: it makes video calls feel that much more real and enjoyable.

Wednesday, January 23, 2008

QoS with DD-WRT

More and more of my life has been going digital: pictures, e-mail, music, and so on. A large chunk of this content is stored on my PC, so there is a very real risk of losing it all in a serious crash. To mitigate this, I recently decided to go with an on-line backup service. Besides convenience, this has the essential advantage that the data is stored in a different state, so if our house were to fall into the ocean, the data has a better chance of surviving.

A downside of on-line backup services is that they want to suck up all available bandwidth, slowing down the entire home network. There are some settings for throttling bandwidth in the backup client itself, but they're primitive and plain don't work that well. It would be a lot nicer if the router simply did the throttling for me, depending on what the other machines on the LAN are doing. This is what QoS is all about.

Some time ago I read an excellent article on hacking open-source routers, so I decided to buy a WRT54GL. It turns out that flashing this router with the DD-WRT firmware gives it excellent QoS facilities, far better than what the native firmware can do. Installing the DD-WRT firmware requires some care, but is not difficult. There's something very satisfying about rebooting the router and getting a detailed status page with CPU load averages, and being able to SSH into the router to poke around the file system.

Setting up QoS correctly turns out to be trickier than it seems. First, you have to estimate the uplink and downlink bandwidth that your ISP provides, and tell the router 85% of each. The underlying assumption is that your bandwidth is fairly constant, but the router has to be able to handle spikes, so the 85% gives it a cushion. Since I use DSL, my bandwidth is pretty stable, but for cable customers (where bandwidth is shared by all people in your neighborhood), I imagine that's less likely to be true, so your mileage may vary here.

After this, you have the option to boost the priority of certain traffic sources, and lower the priority of other sources. A traffic source can be:
  • A particular application, communicating on a particular port
  • One or more specific IPs (or IP masks) on the LAN
  • One or more specific MACs on the LAN
  • One or more specific ethernet ports on the router
I decided to go for the first option, which in my opinion is the most flexible given my network setup. My on-line backup client uses a well-known fixed port, so it was easy to tell the router to downgrade all traffic on that port to "Bulk" status. This means that if any other traffic source wants to use the network, the "Bulk" sources are throttled all the way down to zero. This by itself works pretty well, but we can do better by marking other traffic sources as "Standard" or "Express" in order to boost their priority. Natural candidates for boosting are HTTP, Skype, IMAP, etc.

At first sight, it might seem that HTTP traffic can simply be identified as all traffic on port 80. This isn't quite true, given that many websites serve static content off other ports, and it also doesn't cover HTTPS traffic. Fortunately, DD-WRT supports the L7 filter, which attempts to classify traffic by inspecting the packets themselves (for example, this is the L7 pattern that classifies HTTP traffic). This exacts a performance penalty, since every packet has to be inspected, but it's easy, reliable, and headache-free, so I gave it a shot.
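To give a flavor of what the L7 filter does, here's a toy version of content-based classification in Python. The regex below is a simplified stand-in written for illustration, not the actual l7-filter HTTP pattern:

```python
import re

# Simplified stand-in for an L7 HTTP pattern: match an HTTP response status
# line, or a request method followed by an HTTP version, anywhere in the payload.
HTTP_PATTERN = re.compile(
    rb"http/(0\.9|1\.0|1\.1) [1-5][0-9][0-9]|(get|post|head) [\x20-\x7e]* http/[01]\.[019]",
    re.IGNORECASE,
)

def classify(payload: bytes) -> str:
    """Classify a packet payload the way an L7 filter would: by content, not port."""
    return "http" if HTTP_PATTERN.search(payload) else "unknown"

print(classify(b"GET /index.html HTTP/1.1\r\nHost: example.com"))  # http
print(classify(b"\x16\x03\x01..."))                                # unknown (a TLS handshake)
```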

Once everything was configured and running, I was pleased to see that it works well: when the network was quiet, my backup software was running at close to full-speed. As soon as I started to use the network (for example, watch an on-line video), the router magically throttled the backup traffic down to almost zero, and my browsing was unimpaired. When I stopped browsing, the backup traffic came back to using almost all the bandwidth.

The only downside is that, between the conservative bandwidth estimate above and the performance toll of the L7 filter, QoS exacts a 15-20% penalty on the overall traffic coming in and out of the network. My DSL service currently provides about 2.5 Mbps down and 450 Kbps up. With QoS enabled, I'm seeing around 2.1 Mbps down and around 360 Kbps up. This is enough for my needs for now, but I will have to see how it holds up over time.