Wednesday, February 24, 2010

Vampire energy

Standby power (a.k.a. "vampire energy") is the power consumed by devices that are plugged in but not actually turned on. For example, a device with a remote control uses a trickle of power to listen for the remote's signal so that it can turn fully on. The amount of standby power is typically small: a modern TV consumes about 0.5W in standby, but can consume over 200W when fully turned on.


So why care about vampire energy? Because when you add it up across all the electrical devices in a household, it turns out to be a really big number. In the UK, for example, 2006 estimates put standby power at 8% of all electrical energy consumed. In the US, this would amount to almost 20 average-size power plants!

I was curious to see how much standby power we use at home. The public utility recently upgraded our power-meter to a digital smart-meter, so it's easy to just read out the wattage consumed at any point in time. I waited until everyone was asleep and everything in the house was turned off (but still in standby). I also made sure that the fridge did not have its compressor running at the time.

The number? 85W.

Wow. This is surprisingly high! At 24h/day, 30 days/month, this works out to about 61 kWh (around $8/month at today's energy prices). Given that we consume 300-400 kWh total per month on average, this is 15-20% of our total consumption! For an average US household, which consumes closer to 1000 kWh/month, this would be around 5%, which is in the same ballpark as the British estimate.
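
A quick sanity check on those numbers, as a small Python sketch (the ~13c/kWh rate is what the $8/month figure implies, not an official tariff):

    # Standby draw over a month, at the rate implied by the bill.
    standby_w = 85
    kwh_per_month = standby_w * 24 * 30 / 1000   # 61.2 kWh
    cost = kwh_per_month * 0.13                  # ~13 c/kWh, implied by $8/month
    print(f"{kwh_per_month:.1f} kWh/month, ~${cost:.0f}/month")  # 61.2 kWh, ~$8
    print(f"{kwh_per_month / 350:.0%} of a 350 kWh bill")        # ~17%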

How could we possibly consume 85W in standby mode? I broke it down by circuit, by switching the circuit breakers on and off and studying the devices connected to each circuit:
  • 29W for the DSL modem + wireless router
  • 11W for the gas heater
  • 8W for various stuff in the bedrooms (alarm clock, cell phone chargers, night-light, etc.)
  • 5W for the garage door remote
  • 4W for the PC
  • 4W for the microwave oven
  • 3W for the washing machine
  • 3W for the electric oven
  • 18W for other stuff I couldn't track down precisely
The modem + wireless router are an interesting case: while I could turn them off at night (from, say, midnight to 6am), they do need to be on during the day given the level of internet use at our place. So the 29W should really be pro-rated down to about 7W, since it's only true standby power for about 6 hours a day. Another option is to get a combination modem + router that consumes less power by itself, but that's harder since I'm picky about the routers I like. :)
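
The pro-rating arithmetic, as a quick sketch:

    # The router's 29 W only counts as standby during the ~6 h/night
    # it could realistically be switched off.
    router_w = 29
    standby_hours = 6
    print(f"{router_w * standby_hours / 24:.1f} W effective standby")  # 7.2 W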

The astonishing one is the gas heater, at 11W. I suspect this is because the heater uses an electric element to ignite the gas burner, and this element has to be always on. There isn't much I can do about that, and I don't really feel like messing with the heater since it is a large, expensive, and scary device.

The remaining ones are relatively small. For things like the microwave oven and the washer, I could get a power strip with an external switch that fully turns off the appliance, but the strip would take a few years at least to pay for itself in electricity savings.
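
For example, a rough payback estimate for the microwave alone (the $10 strip price is a guess on my part, and I'm reusing the ~13c/kWh rate from above):

    # Payback for a switched power strip on one 4 W appliance.
    strip_cost = 10.0
    saved_kwh = 4 * 24 * 30 / 1000               # 2.88 kWh/month
    savings = saved_kwh * 0.13                   # ~$0.37/month
    print(f"pays back in ~{strip_cost / savings:.0f} months")  # ~27 months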

I should also probably track down the remaining 18W and see where it's wasted, but I have a feeling it's going to be small amounts here and there, not one significant consumer.

The conclusion? The best way to eliminate standby power is to do it at the source (that is, design devices to consume as little standby power as possible). The second best way is to use timers or switched power strips to force devices off. This only makes sense for certain kinds of devices (indoor devices that don't suffer problems when turned off; the gas heater, for example, doesn't qualify). For those devices, the ideal setup is to combine them all on one (or a few) power strips and force them off with a switch. If the devices are spread throughout the house (like the microwave oven in the kitchen and the washing machine in the garage), this is not really possible.

In the end there's little I can do about this. How frustrating. :(

Friday, September 11, 2009

How I chose the radio

Why did I choose this particular Philco model? The reasoning was not particularly scientific, but in retrospect it turned out to be a very good choice.

I'd spent some time browsing (and drooling over) various old radio galleries, to get a sense of what's available. Roughly speaking, here's my impression of the lay of the land:
  • Wood radios vs. plastic radios. As much as some plastic radios are beautiful pieces of Art Deco-inspired art, I just didn't like them as much as the wood radios. Some plastic radios, those made of the Catalin brand of plastic, are extremely sought-after and expensive -- $2000 or more for a radio in good condition!
  • Cathedral vs. tombstone vs. console vs. tabletop. The console radios are quite large and unwieldy, so they were out of consideration fairly quickly. I was somewhat torn between cathedral and tombstone designs -- I like them both -- but in the end I went with a cathedral, since I find the design more appealing and timeless. The tabletop radios, while very nice in their own right, didn't quite have the vintage look I enjoy.

  • Manufacturer: Philco vs. Zenith vs. Motorola vs. RCA vs. Sparton. A very large number of manufacturers produced radios in the "golden era", most of them gone today. In the end, I decided to go with a Philco because it was one of the best and largest radio producers, it made many of its own radio parts (similar to RCA), and I quite like their designs (the Philco 90, in particular, is considered by some "the" classic cathedral design).
  • Last but not least: complexity. Since this was the first radio I'd try to restore, it was important that the electronics weren't too complicated. I definitely wanted a vacuum-tube radio (as opposed to transistor), and one made in the late '20s to early '30s (earlier than that might have meant too much compromise on audio quality, which would have been a problem since I'd actually like to use the radio around the house).
In the end, the Philco 80 turned out to be a surprisingly good choice:
  • It's a Philco wooden cathedral design. While by no means the most ornate, it still looks very good and embodies many of the design qualities I like.
  • It's very simple electronically. It only has 4 tubes, compared to 7+ tubes for many of the "fancier" models. It has modest energy consumption (46 watts), and it is fairly well documented and understood.
  • As it turns out, the Philco 80 Jr. was meant to be one of the cheapest cathedral radios of its time -- which explains the simplicity and modest power consumption above. The radio was introduced at $18.75, whereas other radios would start at double that.
Depending on how this restoration goes, I may look into a tombstone design next, as there are some very good looking radios in that category as well.

An old project about an old radio

For a while now I have been thinking of getting an old radio and restoring it back to life. The reasons: I love the look of old radios, it would be a fun electronics project, and it's a good excuse to learn more about how radio broadcasting works.

I finally took the plunge a few weeks ago and bought an old Philco 80 Jr from eBay. The radio is in great physical condition, especially for a radio made in 1933:


The internals also looked good (the seller made it clear that the radio was not plugged in and may or may not work upon arrival):

Upon closer inspection, the radio had a number of issues:
  • It was missing two tubes: the power pentode (tube 42) and the full-wave rectifier (tube 80).
  • The cap connector on one of the tubes (tube 36, the oscillator) had come off.
  • It had no power plug, and there were two strange wires coming out of the back of the radio.
You can see the real state of the radio in this photo I took at my workbench (notice the empty tube sockets and the broken cap):


Before getting the radio, I'd done some high-level reading about how to restore an old radio, in particular Phil's excellent guide, so I knew roughly what to expect -- most importantly, not to plug it in until I could run a few tests.

One of the things I find amazing about such old radios (besides the fact that they're almost 80 years old) is that this particular radio may well have played broadcasts from WW2, the lunar landing, and other momentous times in history. I think there is something incredibly cool about that.

This post will be the first in a series that will document how I'm going to restore this old radio and what I learn along the way. Hope you enjoy reading it as much as I will enjoy writing about it!

Thursday, September 03, 2009

Lead paint

Lead is a heavy metal with lots of industrial and commercial uses, ranging from car batteries to protecting nuclear physicists from radiation. One particularly colorful and sad chapter in the history of this otherwise boring metal is the use of lead in gasoline (tetraethyl lead) to prevent engine knock. Bill Bryson writes about it eloquently in his book A Short History of Nearly Everything.

Another common use of lead is in household paint. The reason? It looks good: paint with lead in it is shiny and pleasing to the eye. It's also quite dangerous. Lead is a neurotoxin: it binds to neurons and prevents the normal formation of synapses. This is particularly bad for young children, who apparently can retain nearly all of the lead that enters their system (adults, as it turns out, are better at eliminating the bad stuff; only about 20% of the lead that enters an adult body stays, and the rest is eliminated).

The most common way for children to be exposed to lead is, you guessed it, household paint. Specifically, in older homes, lead paint can chip and fall on the floor, where a child can ingest it. One easy solution is to paint over the areas with lead paint, thereby "trapping" the bad stuff and preventing it from flaking or otherwise getting off the walls. In general, good housekeeping (cleaning, vacuuming, and maintaining surfaces, whether by washing or painting over them) does a lot of good, cheaply.

San Francisco's housing stock is old: some of it was built before 1900, and a lot of it right after the earthquake of 1906. There are few modern houses, and even fewer built after lead paint was banned. Ironically, the more expensive and "fancy" a house, the more likely it is to have lead paint in it; some of the highest concentrations of lead paint in San Francisco are apparently in the fancy mansions of Pacific Heights. Besides paint, which is by far the most common place for lead, another relatively common place is the glazing on tiles, for example shower tiles. A good rule of thumb: if the paint or the tiles look nice and shiny, they're probably leaded.

It may be tempting to get very worried about lead paint and decide to "strip" it out. This is not only costly -- it involves sanding most surfaces that may contain lead paint -- but it also frees the stuff into the air, where it can become a true nightmare to get it all out.

How would you know if you have lead paint in your house? You can hire someone to test it, or you can even do some of it yourself.

One kind of test involves taking flakes of paint and sending them to a lab to be analyzed. To cover the house, the technician has to take a sample from each wall, or at least each room, since some rooms may have been painted with lead paint while others weren't. If this sounds like a pain, it is. You can also use a home lead-test kit in areas where the paint is exposed. The kit is basically a "brush" containing a reactant: you brush over the paint, and if lead is present, the tip turns red. The test is controversial and has a notable false-positive rate, but it can give you some idea.

Another kind of test uses an XRF (X-ray fluorescence) gun. The gun fires X-rays at the surface, and dense atoms like lead respond with a telltale signature. The gun approach is far easier, since you can just take "readings" from all the surfaces you care about, and the results are instant.

As a side note, it is remarkable to what extent people have ignored common sense or made bad choices in the name of "looking good".

Repeater on the cheap

The wireless signal in our house has difficulty reaching a few spots. On top of that, the PowerBook's metal shell interferes with the (already weak) signal, and makes it impossible to connect.

One solution is to use a repeater to boost the signal. Linksys makes a Range Expander, which seems to do the right thing, but at $80 it costs more than the router itself!

DD-WRT, which I already use for QoS, can also turn a $50 router into a repeater (and more!)

The generic name for this is Linking Routers. At a high level, there are 4 ways to link two (or more) routers together:
  • Client -- Router 1 is on the WAN, Router 2 extends the range of Router 1. The catch is that all clients can only connect to Router 2 via a wired connection. Additionally, Router 1 and Router 2 sit on different networks, so it is not possible to do things like "broadcast" between them.
  • Client Bridged -- same as Client, but the two routers sit on the same network.
  • Repeater -- same as Client, but the clients can connect to Router 2 via wired and wireless. The two routers sit on different networks.
  • Repeater Bridged -- same as Repeater, but the two routers sit on the same network.
The easiest and most convenient configuration is the Repeater. Basically, Router 2 acts as just another computer that connects to Router 1, but creates a separate network (with a separate SSID) in its area of influence.

One drawback (of all these configurations) is that wireless bandwidth is roughly cut in half, because of collisions between the two routers' wireless broadcasts. This is apparently unavoidable. In our case, the 802.11g protocol is fast enough for what we use it for that this doesn't create noticeable slowdowns (and SpeedTest confirms it).

The Client functionality has existed for a while (it even exists in my ancient v23 on my original router), but the Repeater functionality is new, only in v24, and (confusingly) only works right in some versions but not others. Read the documentation very carefully!

While I was researching repeaters, I also briefly looked into whether an 802.11n router might help -- they are supposed to be "faster" and have increased range. Unfortunately, Linksys' own N products get mixed-to-bad reviews -- the range is not much better, neither is the speed, and the installation process leaves much to be desired. Add to that the fact that, to use it, I'd have to get compatible 802.11n PCMCIA cards, and I'm not all that interested or excited about this.

DD-WRT continues to be an exceptional product, especially when combined with a solid piece of hardware like the WRT54GL.

Tuesday, September 30, 2008

Quake busting

California is earthquake country. From the iconic 1906 earthquake, to the daily reminders that we live on top of 3 major tectonic faults, earthquakes are a big part of the local consciousness. You would imagine that, after San Francisco was almost leveled in 1906, the building codes would have changed to account for such large tremors and produce stronger houses. Unfortunately, that's not the case. It's true that the majority of buildings in San Francisco are wood-frame houses of at most 3 floors, which tend to be pretty flexible and withstand earthquakes reasonably well (although they can easily succumb to fire). Still, lots of older houses are vulnerable: they can shift and fall off the foundation, or the lower stories can crumble under the weight above them. This is especially true of soft-story homes, which are prevalent in the Sunset and Richmond districts.

To reduce the risk, it is possible to retrofit an older home to better resist a large earthquake, and there is a lot of good evidence that such retrofitting can make a big difference. So what goes into a retrofit?
  1. Foundation work. Many older houses literally "sit" on the foundation with no additional reinforcement. When the ground shakes, it can "push" the house off the foundation. Even a small push, say a few inches, can have disastrous consequences: it can sever water, sewer, and gas pipes, start a fire, or worse. To mitigate this, the house can be bolted to the foundation, so that the ground and the house move as one.
  2. Cripple wall work. Most houses don't sit directly on the foundation, to prevent the wood from being damaged or weakened by natural ground moisture and the like. Houses are either built on top of a narrow "crawl space", or, in the case of soft-story houses, the living quarters are built above the garage. In an earthquake, the heavy part of the house above carries a lot of inertia, and if only the underside is bolted to the ground, the house above can shatter the walls underneath and fall. To mitigate this, the lower walls can be reinforced with plywood shear walls that strengthen them and essentially stiffen the house, preventing the upstairs from swinging wildly in a quake.
  3. Garage opening reinforcement. Even with bolts and shear walls, the garage door opening remains a major weak point, as it weakens one of the key structural walls of the house. This can be reinforced with a steel frame or a steel beam to give it the same strength as the other 3 walls.
Here's what it looks like (courtesy of seismicsafety.com):

For most houses, the retrofit work can be done just with the help of a skilled contractor, who's qualified to install bolts and shear walls. For some houses, notably those on a steep hill or with soft-story designs, an engineer is needed to design the retrofit and compute the appropriate material strengths, after which the contractor can install it.

ABAG has a wealth of information to help people assess the risk of an earthquake and its impact in their specific area. In particular, the maps of shake intensity and liquefaction risk are extremely useful. In San Francisco, having your house on top of a hill can dramatically reduce both shaking and liquefaction, though it does increase the chances of a landslide. Sadly, the Marina, with its gorgeous houses and amazing views, is built on top of landfill from the 1906 earthquake, so it's very vulnerable to an earthquake; it's no accident that in the relatively minor 1989 Loma Prieta earthquake, the Marina suffered the most damage (map courtesy of thefrontsteps.com).

Another way to mitigate earthquake risk is to get earthquake insurance. In California, this is of questionable utility: if a major earthquake were to hit, the CEA would probably run out of money, at which point FEMA would presumably have to step in and help with reconstruction. During that time, people would probably have to live in temporary housing, like the ready-made earthquake shacks of yesteryear. Still, having some earthquake insurance can provide a good cushion in the case of a major natural disaster.

I personally found that learning about earthquakes made me worry about them less and take the necessary steps to increase our safety. I realize there are no guarantees, but doing even little things can go a long way towards helping deal with this reality. And I certainly wouldn't trade living in San Francisco for anywhere else, earthquakes and all.

Here comes the Sun

Renewable energy is becoming increasingly visible in our society. The recent oil and food price spikes, the impending opening of the Northwest Passage, and the coral bleaching in the oceans all point to the fact that we consume fossil fuels at unsustainable rates and are changing our environment for the worse. Switching to renewable energy makes both economic and moral sense.
Of the many ways to produce renewable energy, solar is a big focus these days. In the US, the federal government has a generous subsidy, which looks to be extended in the following years. In California, there is an important state subsidy, and a generous San Francisco subsidy on top of that. (Lest you wonder, even foggy San Francisco gets plenty of sun.)

In California, grid electricity is produced by PG&E, mostly from natural gas. The solar incentives aim to encourage private individuals and businesses to install solar panels and feed electricity back into the grid, thereby offsetting some of their consumption. If the solar installation produces more than the individual consumes, their PG&E bill can be negative (they get a check each month). In most cases, the solar panels offset some fraction of the consumption, typically the expensive kWh's; more on this below. Here's what this looks like (video credit Solar City):

One natural question at this point is: why feed the electricity back into the grid, instead of running your house directly on it? For one, the solar panels only work during the day. To have electricity at night, you would need a fairly large set of batteries to store excess energy, and batteries are very costly and often an environmental nightmare (containing acid or rare metals that are expensive to synthesize or extract). Second, solar panel output varies considerably between seasons (in the northern hemisphere, the sun's intensity is very different in winter vs. summer), and even between days (on a cold, stormy day with cloudy skies, the output is quite different than on a warm, sunny day). Third, most electrical appliances expect a steady supply (110V, with small error margins), which is difficult to maintain even from a good battery bank. The goal of solar is not necessarily to replace the grid entirely, but rather to offset enough of it to substantially reduce our pollution and dependence on fossil fuels.

Solar cells convert sunlight into electrical current. The conversion is pretty inefficient: around 20% of the sunlight gets transformed into electricity. Direct sunlight delivers roughly 1,000 W per square meter at the surface, so a modest 1 square meter solar cell produces about 200 W; over an 8-hour sunny summer day, that's around 1.6 kWh of electricity. Solar panels produce DC current, but the grid operates on AC, so the panels' output has to be converted to AC using an inverter. This exacts another efficiency penalty (around 20%), and the inverter has to be matched to the size of the solar array.

Solar panels are expensive (largely because they're not yet mass produced, so they can't leverage economies of scale). Absent generous subsidies, for them to make financial sense, they have to be sized as a function of household consumption. As of the time of this posting, PG&E uses a tiered price structure for electricity: the first 256 kWh are the cheapest, at 11c. If you consume more than 256 kWh, the price increases quickly, to more than triple:
  • 11c/kWh - 0-100% of baseline (up to 256 kWh)
  • 13c/kWh - 100-130% of baseline
  • 22c/kWh - 130-200% of baseline
  • 31c/kWh - 200-300% of baseline
  • 35c/kWh - over 300% of baseline
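
To make the tiers concrete, here's a small bill calculator in Python (a sketch, assuming the 256 kWh baseline and the rates above):

    # Compute a monthly tiered bill from the rates listed above.
    TIERS = [(1.0, 0.11), (1.3, 0.13), (2.0, 0.22), (3.0, 0.31), (float("inf"), 0.35)]
    BASELINE = 256  # kWh

    def bill(kwh):
        total, prev_cap = 0.0, 0.0
        for multiple, rate in TIERS:
            cap = multiple * BASELINE
            used = min(kwh, cap) - prev_cap
            if used <= 0:
                break
            total += used * rate
            prev_cap = cap
        return total

    print(f"${bill(256):.2f}")   # $28.16 -- all baseline
    print(f"${bill(700):.2f}")   # $135.85 -- a heavy summer month
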
For a residence, it makes sense to look at a year's worth of electricity bills and figure out the consumption pattern. In a warm area like California, odds are you use lots of electricity in the summer (A/C) and less in the winter, when it's cooler but not cold enough to require heating. One solar strategy is to get an array just big enough to offset the expensive kWh in the summer (those at 30c or more). There is no magic formula here, each house is different, although in very broad terms a 2.5-3.5 kW solar array should do the trick for a lot of average-size homes.

Before embarking on a solar project, it makes sense to first optimize your consumption using the cheapest tools: replace all incandescent bulbs with CFLs, configure computers and TVs to go into standby when not used, increase the temperature of the fridge and freezer, insulate the attic to keep the conditioned air in, and so on. This can have a dramatic effect on your electrical consumption, as much as a 30% reduction! At that point, take stock of your usage and size the solar array as a function of the new consumption numbers. Go solar!
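
As a rough check on that 2.5-3.5 kW figure (a sketch; the 5 peak-sun hours per day and 80% system efficiency are assumptions of mine, not measurements):

    # Ballpark monthly output of a residential array.
    array_kw = 3.0
    sun_hours_per_day = 5      # assumed peak-sun hours
    system_efficiency = 0.8    # assumed inverter + wiring losses
    monthly_kwh = array_kw * sun_hours_per_day * 30 * system_efficiency
    print(f"~{monthly_kwh:.0f} kWh/month")  # ~360 kWh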

Saturday, February 09, 2008

Gas vs. Electric

The two major sources of energy used in California homes are gas and electricity. In our home, for example, the stove uses gas, the water heater uses gas, the washer/dryer use electricity to spin and gas to heat, and the house heater uses an electric motor to push air over a metal tube heated with gas. It's no accident that California's major utility company is called PG&E: Pacific Gas & Electric.

I recently stumbled upon an interesting article about energy efficiency in home appliances. Among other things, the article recommends using an electric room heater instead of running the home gas heater. I was generally under the impression that "gas is better" because it's cheaper and pollutes less (gas burns cleaner, whereas electricity is generally produced in coal-burning power plants that are far dirtier). So I decided to do some research into the matter. For the baseline, I looked at the January bills from 2008 and 2007 (January is the coldest month around here, when one would expect the bill to be the highest, and this past January was especially cold):
  • January 2007
    • Gas: 53 therms @ $1.13
    • Electric: 133 kWh @ $0.11
    • Total: $74
  • January 2008
    • Gas: 49 therms @ $1.14
    • Electric: 136 kWh @ $0.11
    • Total: $71
Based on our usage patterns, I would estimate that roughly half the gas we consume goes to heating the air in the home. So what would it look like if we used an electric Vornado heater instead? Based on the Vornado's specifications, it uses between 750 and 1500 Watts, depending on the temperature setting. I measured the wattage, and we're between 600 and 1200 Watts, since we never set it at the max (it gets too hot). For the purposes of this simple calculation, I'll assume an average consumption of 1000 Watts = 1 kW. We use the heater for at most 5 hours per night (5pm - 10pm), so in one month, that's 30 * 5h * 1kW = 150 kWh. With this in mind, our January 2008 bill would have looked like:
  • Gas: 25 therms @ $1.14
  • Electric: 286 kWh @ $0.11
  • Total: $60
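
Redoing the arithmetic in Python from the numbers above, as a sanity check:

    # January 2008: actual bill vs. hypothetical electric-heat bill.
    actual = 49 * 1.14 + 136 * 0.11                  # ~$71
    electric_heat = 25 * 1.14 + (136 + 150) * 0.11   # ~$60
    print(f"${actual:.0f} -> ${electric_heat:.0f}, "
          f"{1 - electric_heat / actual:.0%} saved")  # $71 -> $60, 15% saved
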
That's roughly a 15% reduction in cost, so the heater would pay for itself in a matter of months. In terms of quality of life, we've started spending more time in one room, with the door closed, in order to keep the heat inside. The Vornado works best in such a closed environment, and it often heats the room more than the gas heater did. It takes a bit longer to get the room warm, but once it's warm, it takes very little electricity to keep it that way.

I spoke to some colleagues at work about this, and the general consensus is that if you can thermally insulate individual rooms in the house, it makes sense to heat them individually with electricity; otherwise a gas heater is more efficient and economical for the whole house.

What about the environmental impact? It turns out that in California, most electricity is also produced from natural gas, which is a reasonably clean way to do it. Some electric energy is lost in transmission, but that appears to be reasonably small (around 10% on average). The big upside, however, is that California is aggressively pursuing electricity generation from renewables: solar, wind, and so on, in which case electricity is definitely the way to go. You also have the option to offset the carbon used by your consumption, which is a nice bonus. In typical maverick fashion, San Francisco wants to become fully energy independent, and there are projects underway for that.

In my case, it seems that electricity makes more sense than gas. At the end of the day, however, the most important thing is to be aware, measure the impact, and think about what makes sense. More on that, however, in another post.

Monday, January 28, 2008

IPv6: here already?

With the phenomenal recent growth of the internet, the 32-bit IP address space (IPv4) is starting to run out. There are efforts to address this, like subdividing or reclaiming large blocks already in use, or the ubiquitous use of NATs, but they're only slowing the inevitable. The IPv6 standard has been around since circa 1996, and it has recently started to gain visible traction: all major operating systems (Windows XP/Vista, Linux, and OS X) support IPv6 natively, there are IPv6 networks you can access via dedicated tunnels ("tunnel brokers"), and there are rumors that some companies use IPv6 internally. Some ISPs (like Sonic.net) even provide such dedicated IPv6 tunnels for free, to encourage experimentation. Although no major ISP has yet migrated to IPv6, I believe it's only a matter of time.

Discussing the in-depth technical aspects of IPv6 is well beyond the scope of this blog post, though I do encourage the adventurous reader to take a look at some of the resources linked above. Instead, this is more of a high-level overview of how to get IPv6 up and running on your home LAN. I recommend using the excellent DD-WRT firmware's IPv6 support, on which this post is roughly based.

IPv4 addresses are 32 bits in size, which gives a total of about 4 billion distinct addresses. The effective number of machines that can be addressed in practice is smaller (since many addresses were given out in "blocks" to institutions that don't use the entire block), but it still gives an idea of the upper bound. In contrast, IPv6 addresses are 128 bits in size. This is an enormously large number, and it's hard to provide a good metaphor for how large it really is: IPv6 could allocate tens of billions of billions of addresses for each square centimeter of our planet, or over a trillion trillion addresses for each of the 6.5 billion people currently on it. Hopefully, this should be enough for the foreseeable future.
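
Here's the back-of-the-envelope arithmetic (Earth's surface area of ~510 million square kilometers is the only number not from this post):

    # How big is 2^128, really?
    total = 2 ** 128                               # ~3.4e38 addresses
    people = 6_500_000_000
    earth_cm2 = 510_000_000 * 10**6 * 10**4        # km^2 -> m^2 -> cm^2
    print(f"per person: {total / people:.1e}")            # ~5.2e+28
    print(f"per cm^2 of Earth: {total / earth_cm2:.1e}")  # ~6.7e+19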

The internet at large is still IPv4, but you can configure your internal LAN (behind your NAT'ed router) to run over IPv6. All the machines on the LAN talk amongst themselves using IPv6. However, the router is still connected to an IPv4 network (your ISP), so the LAN traffic must have a way to travel across IPv4 to its destination, regardless of whether the destination is an IPv4 or an IPv6 network. This is done in one of two major ways:

1. Use 6to4

This is a protocol which assigns each IPv4 address a special IPv6 equivalent (which, by convention, always starts with the prefix "2002:"). Your router packages all IPv6 packets, regardless of destination, into IPv4 equivalents and sends them to a relay gateway. The relay gateway is a machine that sits at the edge between an IPv4 and an IPv6 network, and looks at the destination address of incoming traffic:
  • If the address is an IPv6 address on the relay's IPv6 network, it routes it using IPv6 routing.
  • If the address is an IPv6 address on another IPv6 network, it packages it into an IPv4 packet, sends it over IPv4 to the relay gateway of the destination IPv6 network, which unpacks it, and routes it.
  • If the address is a 6to4 IPv6 address (corresponding to an IPv4 address), it unpacks it, sends the traffic over the IPv4 network, collects the response, and sends it back to your router.
Confused? Here's a diagram:

One risk with 6to4 is that the nearest relay gateway may be far away, which negatively impacts performance. If you're careful to select a 6to4 relay that's reasonably close to you, you should be fine.
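
To make the "2002:" address mapping concrete, here's a small Python sketch (the example IPv4 address is made up):

    import ipaddress

    # Map an IPv4 address to its 6to4 /48 prefix: "2002:" followed by
    # the 32 IPv4 bits, with 80 bits left over for subnets and hosts.
    def sixtofour_prefix(ipv4):
        v4 = int(ipaddress.IPv4Address(ipv4))
        return ipaddress.IPv6Network(((0x2002 << 112) | (v4 << 80), 48))

    print(sixtofour_prefix("203.0.113.5"))    # 2002:cb00:7105::/48

    # The standard library can also extract the embedded IPv4 address.
    print(ipaddress.IPv6Address("2002:cb00:7105::1").sixtofour)  # 203.0.113.5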

2. Use a tunnel broker

A tunnel broker provides a dedicated link from your router to an IPv6 gateway. This makes life a lot simpler for your router: all it has to do is take IPv6 packets from the LAN, package them as IPv4, and push them into the tunnel. Provided that the tunnel isn't too slow (that is, too far away), the tunnel's end-point does all the hard work of routing IPv6 and IPv4 packets correctly. Here is a diagram that describes the process visually (note that the Tunnel Server can either route directly over IPv6, or ask the Tunnel Broker to route over IPv4; your LAN uses only a single IPv6 tunnel to connect to the Tunnel Server).

Some tunnels require that your router have a static address, while others can handle dynamic addresses (via the AICCU utility). The two big IPv6 tunnel providers in the US are SixXS and Hurricane Electric.

If you succeed in setting up IPv6 on your LAN, you can connect to this site, and watch the turtle dance!

"All this work to see a silly turtle dance on the screen?" I hear you ask?

Although the IPv6 internet is significantly smaller than the IPv4 one at this point, this can be a good exercise to learn how IPv6 works. All indications are that we will migrate to IPv6 before too long, so consider this a way to get your feet wet. There is little downside to running your LAN over IPv6: your operating system, most modern browsers (Firefox, IE, etc.), and network applications (Thunderbird, etc.) natively support it. The performance penalty of running through tunnels should not be prohibitive, and most DNS servers know to serve both IPv4 and IPv6 addresses for multi-homed servers, to make them accessible from both networks. IPv6 is, in some ways, much simpler and more elegant than IPv4, so for those curious to learn more, this is a great opportunity to look under the covers.

Thursday, January 24, 2008

QuickCam Pro 9000: Skype in HD

Many friends and family members live abroad or just plain far away. In order to keep in touch, I frequently use Skype, and in the past few years or so, Skype Video. I was initially hooked by Skype's amazing voice quality (and the fact that it was free!), but Skype Video adds a whole new dimension to everything. Bandwidth speeds have gotten good enough (even across entire oceans and continents) that it's now feasible to have a full-screen Skype Video conference call and literally "hang out". For me personally, this must be one of the biggest and most profound changes that technology brought about in the past few years.

Up until recently, I'd been using a Creative WebCam Live! for video calls. The camera does a pretty decent job, but it has two important shortcomings: the images are a bit darker than I'd like, and its field of view is narrow enough that it has difficulty fitting more than one person, or even capturing what I'm doing unless I sit pretty still. So a few weeks ago I decided to look for a webcam with a wide-angle lens. The two main contenders appear to be Creative's Live! Cam Optia Pro and Logitech's QuickCam Pro 9000.

The reviews for the Optia are pretty mixed, and echo some of the same issues I'd had with the current Live! model. In contrast, the QuickCam not only has a wider field of view, but it also has an exceptional Zeiss lens and a very good reputation in on-line reviews. Both cameras are labeled as "HD-quality" (probably to capitalize on the HD buzz going around). Off the bat, I'd actually consider this a bad thing, since it puts considerably more strain on internet bandwidth, but I figured there might be a software setting to dial it down if the video conference starts to stutter. The QuickCam is a bit more expensive than the Optia, but with some luck (a timely Amazon rebate), it came out cheaper, so I went for it.

Upon installation, the camera behaves very well. The image quality is very crisp (Zeiss never disappoints), and the angle is nice and wide (it easily fits 2 or 3 people in the picture). But the most amazing feature is Logitech's "RightLight 2" technology, which makes images far brighter and easier on the eye. In almost all light conditions (natural, halogen, evening, etc.) the camera delivers stunning, well-balanced images automatically. If you've ever had problems with webcams in low-light conditions, you really owe it to yourself to try this camera: it "just works". The built-in microphone is a nice touch, removing one more wire and device from my desk, and the noise- and echo-canceling technology is very good as well.

When it comes to Skype, the camera's drivers auto-detect Skype and auto-configure it to use the webcam as its video and audio source (a nice touch). Unfortunately, the default software that comes on the CD is incredibly buggy: it crashes Skype reliably and is very hard to get right. After an hour of frustration, I found an updated version of the software on-line that is far more stable and works as intended with Skype. One remaining, mildly annoying tidbit is that right after you start a video call, the camera occasionally "resets" a few seconds in (which really means it skips about a second's worth of video), but after that it's solid.

Although the camera is HD, Skype doesn't miss a beat: the video calls are crisp and clear, and feel far more "high-definition" than with the previous webcam. I imagine Skype automatically dials down the video quality to keep up with network traffic, but if the bandwidth is there, it will use it, and what a difference it makes!

All in all, I'm very happy with this camera, and I highly recommend it: it will make video calls feel that much more real and enjoyable.

Wednesday, January 23, 2008

QoS with DD-WRT

More and more of my life has been going digital: pictures, e-mail, music, etc. A large chunk of this content is stored on my PC, so there is a very real risk of losing it all in a serious crash. To mitigate this, I recently decided to go with an on-line backup service. Besides convenience, this has the essential advantage that the backup is located in a different state, so if our house were to fall into the ocean, the data has a better chance of surviving.

A downside of on-line backup services is that they want to suck up all available bandwidth, slowing down the entire home network. There are some settings for throttling bandwidth in the backup client itself, but they're primitive and plain don't work that well. It'd be a lot nicer if the router simply did the throttling for me, depending on what the other machines on the LAN are doing. This is what QoS is all about.

Some time ago I read an excellent article on hacking open-source routers, so I decided to buy a WRT54GL. It turns out that flashing this router with the DD-WRT firmware gives it excellent QoS facilities, far better than what the native firmware can do. Installing the DD-WRT firmware requires some care, but is not difficult. There's something very satisfying about rebooting the router and getting a detailed status page with CPU load averages, and being able to SSH into the router to poke around the file system.

Setting up QoS correctly turns out to be trickier than it seems. First, you have to estimate the uplink and downlink bandwidth that your ISP provides, and tell the router about 85% of each. The underlying assumption is that your bandwidth is fairly constant, but the router has to be able to handle spikes, so the 85% gives it a cushion. Since I use DSL, my bandwidth is pretty stable; for cable customers (where bandwidth is shared with your neighborhood), I imagine that's less likely to be true, so your mileage may vary.
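
For example, using my line speeds from the measurements at the end of this post, the 85% cushion works out like this (a sketch; DD-WRT takes these values in kbit/s):

    # 85% of measured line speed, as a QoS cushion.
    down_kbps, up_kbps = 2500, 450
    print(f"downlink: {int(down_kbps * 0.85)} kbps")  # 2125 kbps
    print(f"uplink: {int(up_kbps * 0.85)} kbps")      # 382 kbps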

After this, you have the option to boost the priority of certain traffic sources, and lower the priority of other sources. A traffic source can be:
  • A particular application, communicating on a particular port
  • One or more specific IPs (or IP masks) on the LAN
  • One or more specific MACs on the LAN
  • One or more specific ethernet ports on the router
I decided to go for the first option, which in my opinion is the most flexible given my network setup. My on-line backup client uses a well-known fixed port, so it was easy to tell the router to downgrade all traffic on that port to "Bulk" status. This means that if any other traffic source wants to use the network, the "Bulk" sources are throttled all the way down to zero. This by itself works pretty well, but we can do better by marking other traffic sources as "Standard" or "Express" in order to boost their priority. Natural candidates for boosting are HTTP, Skype, IMAP, etc.

At first sight, it might seem that HTTP traffic can simply be identified as all traffic on port 80. This isn't quite true, given that many websites serve static content off other ports, and it also doesn't cover HTTPS traffic. Fortunately, DD-WRT supports the L7 filter, which attempts to classify traffic by inspecting the packets themselves (for example, this is the L7 pattern that classifies HTTP traffic). This incurs a performance penalty, since all packets have to be inspected, but it is easy, reliable, and headache-free, so I gave it a shot.
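
To illustrate the idea (with a simplified stand-in pattern, not the actual l7-filter regex, which is more permissive), such a classifier boils down to matching the first bytes of a connection's payload:

    import re

    # Toy L7-style classifier: match a pattern against the start of
    # a connection's payload. Real l7-filter patterns are broader.
    HTTP_RE = re.compile(rb"^(GET|POST|HEAD|PUT) \S+ HTTP/1\.[01]")

    def classify(payload):
        return "http" if HTTP_RE.match(payload) else "unknown"

    print(classify(b"GET /index.html HTTP/1.1\r\nHost: example.com"))  # http
    print(classify(b"\x16\x03\x01 TLS handshake bytes"))               # unknown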

Once everything was configured and running, I was pleased to see that it works well: when the network was quiet, my backup software was running at close to full-speed. As soon as I started to use the network (for example, watch an on-line video), the router magically throttled the backup traffic down to almost zero, and my browsing was unimpaired. When I stopped browsing, the backup traffic came back to using almost all the bandwidth.

The only downside is that, between the estimated maximum bandwidth above and the performance toll of the L7 filter, this exacts around a 15-20% penalty on the overall traffic coming in and out of the network. My DSL service is rated at about 2.5 Mbps down and 450 Kbps up; with QoS enabled, I'm seeing around 2.1 Mbps down and 360 Kbps up. This is enough for my needs for now, but I will have to see how it holds up over time.

Sunday, February 04, 2007

Amped up!

This weekend I assembled a very cool Tripath amplifier. These amps come as a kit (a printed circuit board and some components) that you have to solder together. The end result is something pretty tiny (about the size of an iPod), but with amazingly big sound. More specifically, you can drive a pair of 25W speakers off 8 AA batteries and easily overload them! It's pretty amazing to watch. The sound is crystal clear, and it's hard to believe it can come out of something that small.

My friend and I each assembled one such amp. He wants to use his to put a stereo on his bike: he's built a cardboard enclosure that houses two speakers and sits in the frame of the bike. The sound that comes out of that cardboard enclosure is just amazing; I think he can easily wake up the neighborhood. I have more mundane plans for my amp: I just want to drive two speakers in the house, for music and projecting movies. For that purpose, I also built a small enclosure for the amp, basically a cheap plastic box with some carefully drilled holes. It's got a distinct home-brew feel to it, and I just think it's great.

I used to love building stuff like this when I was little, and even after all these years, I still think it's all that.

Tuesday, December 05, 2006

Jules Verne translations

It's been a while since I posted here! And what better way to get back than with a post about one of my childhood heroes, Jules Verne.

I've recently been re-reading a biography of Jules Verne, and I was inspired to look for some English translations of his books for my library. Although I've read and still have most of his books, they are in Romanian, and, if I like a book, I sometimes enjoy reading it in different translations.

Anyway, as it turns out, good English translations of Jules Verne are quite difficult to find! The popular English editions of his books have rather egregious and systematic problems: entire chapters missing (the original "20,000 Leagues..." has 44 chapters in French, while most English translations have 37), character name changes, and even deliberate changes to characters' personalities! For example, Ned Land, one of the main characters in "20,000 Leagues...", is a rather unapologetic socialist at heart and makes that quite known throughout the book, but in the English translation, this trait is entirely absent! As you might imagine, it survived in the Romanian version, since it didn't offend any political sensibilities there at the time.

It is widely believed that these poor translations are one of the main reasons why Verne's books are considered children's literature here in the US, while in Europe they are read by children and adults alike, and interpreted and discussed as such.

Fortunately, there are good editions of the two books I wanted ("20,000 Leagues..." and "The Mysterious Island") in unabridged, careful English translations. The former is published by the US Naval Institute, with all the original illustrations and copious footnotes explaining historical and literary details about the book. The latter is a true labor of love: it was translated by an engineer over 14 years(!), and also contains lots of additional material, footnotes, and commentaries. Both complete editions, amazingly, were published after 2000 (over 100 years after the first English translations appeared in the US).

The books are available pretty cheaply used on Amazon (I just bought the first for $5), and as you might imagine, I am very curious to re-read them and see what I might have missed even in the Romanian versions!

The extraordinary Jules Verne Collection came in handy, with a page about various English translations and their pros and cons. I also learned that Jules Verne's centennial was last year; I wish I'd known and followed it more closely.