All posts by rbross

Enable Wifi on Wandboard Quad Ubuntu 14.04

There are no clear instructions for enabling the Broadcom wifi chip in the Wandboard Quad. What follows is an aggregation of information stemming from the research I did to get the chip working on my board.

What I’ve learned

The Wandboard Quad uses the 4330 chip, unlike the other Wandboards that use the 4329 chip. I found some instructions for Debian and Ubuntu on the Wandboard, but between the different chips and firmware, it was difficult to piece together.

You have to download the nvram.txt file for the Broadcom chip separately. Why it is not included in the images, I don’t know.

Ubuntu 14.04 disk image

I have a Wandboard Quad purchased in October 2014. I am running a community-built Ubuntu 14.04 image rather than the official one. The bottom line is that it was far more stable than the image on the official Wandboard site.

Enabling the Broadcom Wifi chip

Once I cobbled together the info, it was actually a piece of cake. You have to download the nvram.txt for the 4330, and then make symbolic links to the proper files so they are found on boot. You also have to create a config file (wpa_supplicant.conf) with your SSID and password:

wpa_passphrase myssid mypassphrase | sudo tee /etc/wpa_supplicant.conf
cd /lib/firmware/brcm
sudo wget <URL of bcm4330_nvram.txt>
sudo ln -s bcm4330_nvram.txt brcmfmac-sdio.txt
sudo ln -s brcmfmac4330-sdio.bin brcmfmac-sdio.bin
sudo reboot

Configure your Wifi interface
Now just go to /etc/network, open your favorite editor, and edit the “interfaces” file:

sudo nano /etc/network/interfaces

Then configure the wlan0 interface like normal:

# wireless network interface
auto wlan0
iface wlan0 inet dhcp
    wpa-conf /etc/wpa_supplicant.conf

You can test by entering:

sudo ifup wlan0
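If the interface comes up, a few quick checks will confirm it (these assume the interface is named wlan0, as above):

```shell
ip addr show wlan0          # should show an inet address assigned by your DHCP server
iwconfig wlan0              # should show your ESSID and the access point's MAC
ping -c 3 -I wlan0 8.8.8.8  # confirms traffic actually flows over the wifi link
```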

Move your Linux installation to a new Solid State Drive – even a smaller one

Solid State Drives (SSD) are all the rage these days – and for good reason. They’re fast, silent, and have no mechanical parts to go wrong (although firmware bugs can and do bite people).
They’re also horribly expensive. For this reason many people who take the plunge buy drives that are smaller than the mechanical drives that are being replaced. This can be a problem.  If you’re moving to a larger drive, Clonezilla or other imaging programs do a fine job, but moving to a smaller drive can be daunting. If you do a Google search there are articles all over the Web that will give you bits and pieces of advice. The following is a procedure that I know will work, because I just moved my installation from a 300GB hard drive to a 240GB Intel 520 SSD.

First things first – preparation
First of all, make sure that all of your data will fit on your new drive. No article in existence can explain how to fit 400GB of data onto a 200GB drive. So check out how much space you are using and start housecleaning. Remember that you want to have some free working space as well.

Second, back up your drive. I said BACK UP YOUR DRIVE! Better yet, download a copy of Clonezilla and image it. Then you can recover from even the worst screwup by restoring the image. Clonezilla, learn it, love it, live it.
Finally, your /etc/fstab file should be using UUIDs instead of device ids to identify drives. Open /etc/fstab, and if you have entries that look like this:

/dev/sda2 / ext4 defaults,errors=remount-ro 0 1

instead of this:

UUID=ec9de201-c1f1-44d1-b398-5977188d4632 / ext4 defaults,errors=remount-ro 0 1

Then start Googling until you can boot with UUIDs. I won't cover the difference between device designations and UUIDs here, except to say that the device names and partition numbers on the new drive may not match the old ones, which would leave fstab pointing at the wrong devices. Once you can boot with UUIDs, we can assign the new drive's partitions the same UUIDs the old drive had, ensuring that fstab will not have to be edited.
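For example, a device-based fstab entry can be rewritten into the UUID form once you know the UUID. The UUID below is made up; on a real system you would read yours from `sudo blkid`:

```shell
# blkid prints lines like:
#   /dev/sda2: UUID="ec9de201-c1f1-44d1-b398-5977188d4632" TYPE="ext4"
uuid="ec9de201-c1f1-44d1-b398-5977188d4632"

# Rewrite the device-based fstab entry into the UUID form:
echo "/dev/sda2 / ext4 defaults,errors=remount-ro 0 1" \
  | sed "s|^/dev/sda2 |UUID=$uuid |"
# Prints: UUID=ec9de201-c1f1-44d1-b398-5977188d4632 / ext4 defaults,errors=remount-ro 0 1
```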

Create and copy your partitions
For this part you will need a Linux Live CD or USB flash drive. Hopefully, you already have one handy in case of emergency. Boot from the live CD or USB flash drive. Attach the old and the new drive to the system.
Ok, here we go.
To copy partitions:
1. Partition the new drive. If you have “/boot” and “/” partitions on the old drive, create them on the new drive.
2. Format the partitions with the same file systems as the old drive partitions.
3. Boot from a live CD or USB flash drive
4. Open a terminal
5. Mount the partitions from both the old and the new drives (use “sudo blkid” to see the devices and partitions).
6. For each partition, run:
sudo cp -afv source_mount_point/. destination_mount_point
So for example:

sudo cp -afv /mnt/old_drive_boot/. /mnt/new_drive_boot

7. Run “sudo blkid” and make note of the UUIDs of the old drive’s partitions (open a text editor and paste from the terminal).
8. Unmount the old drive partitions (“sudo umount MOUNT_POINT”)
9. For each partition, set the UUID to match the one on the corresponding old partition:

sudo tune2fs -U UUID /dev/sdXX

10. Turn off your machine, remove the old drive and boot from the Live CD/USB again.
11. Follow the next section to reinstall GRUB2 on the new drive.
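Put together, steps 4 through 9 look roughly like this. The device names (/dev/sda2 for the old root, /dev/sdb2 for the new) and mount points are hypothetical; substitute your own from `sudo blkid`:

```shell
# Hypothetical devices: old root on /dev/sda2, new root on /dev/sdb2.
sudo mkdir -p /mnt/old_root /mnt/new_root
sudo mount /dev/sda2 /mnt/old_root
sudo mount /dev/sdb2 /mnt/new_root

# -a preserves permissions, ownership, timestamps, and symlinks;
# the trailing /. copies the directory's contents, not the directory itself.
sudo cp -afv /mnt/old_root/. /mnt/new_root

# Note the old partition's UUID, then stamp it onto the new one.
old_uuid=$(sudo blkid -s UUID -o value /dev/sda2)
sudo umount /mnt/old_root
sudo tune2fs -U "$old_uuid" /dev/sdb2
```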

Install GRUB2 to your new drive
1. In Terminal type

sudo fdisk -l    # or run "parted" and type "list" if you are using GPT instead of MBR

2. Mount the / partition drive

sudo mount /dev/sdXX /mnt

(for example ‘sudo mount /dev/sda11 /mnt’; don’t miss the spaces.)
3. Only if you have a separate boot partition:

sudo mount /dev/sdYY /mnt/boot

4. Mount the virtual filesystems:

sudo mount --bind /dev /mnt/dev
sudo mount --bind /proc /mnt/proc
sudo mount --bind /sys /mnt/sys

5. To ensure that only the grub utilities from the LiveCD get executed, mount the /usr directory:

sudo mount --bind /usr/ /mnt/usr

6. Ok, now we can chroot onto the new drive.

sudo chroot /mnt

7. Ensure that there’s a /boot/grub/grub.cfg file.
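If the file is missing, you can generate it from inside the chroot. This assumes a Debian/Ubuntu-style system where the `update-grub` wrapper (a thin wrapper around `grub-mkconfig -o /boot/grub/grub.cfg`) is available:

```shell
# Still inside the chroot from the previous step:
[ -f /boot/grub/grub.cfg ] || update-grub
```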


8. Now reinstall Grub

grub-install /dev/sdX

(e.g. grub-install /dev/sda; do not specify the partition number.)
9. Verify the install

grub-install --recheck /dev/sdX

10. Exit chroot : CTRL-D on keyboard (or “exit”)
11. Unmount virtual filesystems:

sudo umount /mnt/dev
sudo umount /mnt/proc
sudo umount /mnt/sys

12. If you mounted a separate /boot partition:

sudo umount /mnt/boot

13. Unmount the LiveCD’s /usr directory:

sudo umount /mnt/usr

14. Unmount last device:

sudo umount /mnt

15. Reboot.

sudo reboot

16. After you successfully reboot, tweak fstab and other files for your new SSD (for example, add the noatime mount option and enable TRIM support).
Enjoy your new SSD!

How I Used a Pellet Stove and a Smart Fan to Eliminate Heating Oil Costs

Home heating costs in northern New England with fuel oil
In part 1 of this series we will look into the problem of energy costs in cold climates (in this case New Hampshire) and how a pellet stove can be used to reduce or eliminate that cost.

When most people around the country think of New Hampshire, the first thing they usually think is “cold and snow”. And from October until May that’s a pretty accurate picture. It gets really cold – and really expensive.
Over half of the homes in New Hampshire, including ours, are heated with oil. My house is a bit over 3,000 sq ft. We are fully insulated and I have programmable thermostats in 4 zones. They are programmed with the following profiles:
Weekdays, 5am – 7am, 69 degF
Weekdays, 7am – 4pm, 64 degF
Weekdays, 4pm – 9:30pm, 69 degF
Weekdays, 9:30pm – 5am, 64 degF
Weekends, 6am – 10:30pm, 69 degF
Weekends, 10:30pm – 6am, 64 degF
Of course when the temperature outside is about 25 degF or less, the floor tends to be quite a bit cooler than the chest high height where the thermostats are mounted. This necessitates sweaters and long sleeves. Not horrible, and we’re sure not wasting energy. I also have a wood stove in the cellar that heats the floor. Although a bit of a pain, it greatly reduces our oil usage on frigid days.
In an effort to save more, we replaced our 18-year-old furnace in 2007. The old one burned efficiently but was “stupid”: a simple thermostat kept the water at a high set temperature for our forced hot-water heat. At a cost of around $8,000, we installed a Weil-McLain “smart” furnace. It has a control panel that sets the water temperature based on a combination of outside temperature and heating demand. We paid extra for an option that maintains a separate hot-water tank and heater for tap water. It cut our oil use by 30%. But it couldn’t control the price of oil. The result? Our 2010-2011 winter season oil bills totaled $3,373.49, or around $550/month from late fall to early spring. Here’s a sampling of our oil costs for comparable periods in prior years:
2004-5 $2,285.47
2005-6 $3,861.05 (oil price spike)
2006-7 $2,920.91 (new “smart” furnace installed mid season)
2010-11 $3,373.49 (oil price spike).
As you can see, regardless of all the energy-saving steps we have taken, we are still at the mercy of the volatile oil market. At the time that I write this, residential home heating oil costs around $3.50/gal. We average about 200-300 gal/month during the winter season, so I am looking at heating bills of $700-$1000/month during the 2011-12 season, or $4,000-$5,000 in total. I simply can’t afford that.
A chart of oil prices from 2000-2010 tells the story; if it extended to 2011, you would see another spike, up to about 350 cents/gal.
I looked into every alternative energy technology I could think of: wind, solar, geothermal. They were all very expensive, achieved at best modest gains, were high maintenance, and had payback periods of 15-30 years. As Dr. Evil would say: Riiiiiiight!
Enter the pellet stove
True innovations come in surprising forms. At first glance, you may think that a modern pellet stove is the result of the availability of super-cheap microcontrollers. They allow the stove to increase and decrease the fire based on a combination of air flow and pellet feed rate. They also control a fan that blows through heating tubes to blast hot air into the room. Cool. But if you think about it, the true genius was the person who decided to take wood waste and transform it into a medium that could be conveniently delivered, doesn’t require chopping and splitting, and allows the homeowner to load a hopper every other day. Try doing that with logs!
So after calling a number of vendors, I purchased a Regency Greenfire GFI55 insert for the fireplace in my living room from All Basics Stove Shop in Merrimack, NH. Short story; I love it. This stove is “smart” and runs great with a thermostat attached.
I connected a Lux TX500E, the same type of programmable thermostat that I use for my oil heat. This allows the pellet stove to fire up early in the morning and blast away for an hour, and then to lower itself to a comfortable temperature.
Pellets are about $200-$250/ton, which is fifty 40 lb bags. A bag will last from 24-48 hours depending on how much the stove runs.
Go to most homes where a pellet stove is installed and you will see one of two common situations. Either the owners are trying to heat much of the house, so the room with the stove is 80+ degF and you feel like taking a swim, or the room with the pellet stove is the only comfortable place in the house.
And that points to the pellet stove’s biggest shortcoming as a whole-house heating system: airflow.
Using a pellet stove as a home’s primary heat source
What I really wanted was a way to distribute the pellet stove’s intense heat throughout the house. Depending on your home’s floor plan, strategic placement of the stove is essential. You want to install it in a room that is close to the center of the house. As it happens, my house has a perfect floor plan for a pellet stove. It is a traditional New England home of a type known as a “hip roof colonial”. Basically it is a two-story cube with a central stairwell. I installed the pellet stove in the living room and then put a pedestal fan (so the blades are 4 feet high) in the doorway to blow the hot air into the middle of the house. The air rises to the second story and spreads through most of the house.
The fan is a Lasko 18″ 3-speed remote-controlled model, around $38 at Home Depot.
So the concept works, but now the fan either has to run 24×7 or you have to constantly control it with the remote control. I wanted “set it and forget it”.
Fully automated, the simple way
An easy way to automate a manual fan is to simply plug it into an outlet thermostat, like the Lux WIN100.
Warning: if you decide to use this method, do not buy a remote-controlled fan! When a remote-controlled fan loses and regains power, it won’t come back on until the power button is pressed, which makes it useless with an outlet thermostat that works by cutting and restoring power to the plug. Get a simple mechanical-switch fan instead.
Ok, so now when the room heats up your outlet thermostat turns on the fan. This works quite well and is far more efficient than either heating the room to tropical temperatures or leaving a noisy fan running 24×7.
At this point we’ve achieved “efficient”, but what if we want super-efficient, fully automatic, and Web control? Geek alert! Read on.
The pellet stove has a microcontroller and a thermostat, why not the fan?
As mentioned above, I bought a 3 speed, remote controlled fan. It’s not like I had a choice. I went to 2 Home Depots, a Lowes, and a Wal-Mart and it was the only pedestal fan left at any of those stores.
Since I couldn’t use the outlet thermostat on a remote controlled fan, it looked like I was stuck. Nope. Being an inveterate tinkerer, about a year ago I asked my wife and daughter to buy me a microcontroller and network card to play with. For those of you who have never heard of “Arduino”, it is an open source, super cheap microcontroller with lots of variations and modules. I got a clone called a “Seeeduino” that costs $22. Arduinos have digital inputs and outputs as well as analog inputs. They can communicate with the world in a variety of ways. For more information, check out the project page.
My goal for this project was:

  • Turn the fan on and off when the room exceeded or fell below a set temperature.
  • Increase and decrease the fan speed in steps as the room temperature rises and falls to disperse the heat throughout the house.
  • Be able to override the controller without dealing with wall plugs.

The result is a tiny box that controls the fan via its remote control codes.
It has an embedded Web server and any household member can bring up the page and change the settings.
It can learn the fan remote control codes by simply pointing the remote at the controller, pressing a button, and then clicking either “Learn Power” or “Learn Speed”. It has three temperature settings which correspond to fan speeds of Low, Medium, and High; when the temperature falls below the “Low” setting, it turns the fan off.
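The controller itself runs Arduino code (more on that in a future article), but the threshold logic it implements is simple enough to sketch as a shell function. The temperature values here are hypothetical; this is just an illustration of the mapping, not the actual firmware:

```shell
# Map a room temperature (degF) to a fan state, given three thresholds.
# The thresholds correspond to the controller's three Web-page settings;
# below the "Low" setting, the fan is switched off.
fan_state() {
  temp=$1; low=$2; med=$3; high=$4
  if [ "$temp" -ge "$high" ]; then
    echo "high"
  elif [ "$temp" -ge "$med" ]; then
    echo "medium"
  elif [ "$temp" -ge "$low" ]; then
    echo "low"
  else
    echo "off"
  fi
}

fan_state 80 72 75 78   # -> high
fan_state 73 72 75 78   # -> low
fan_state 70 72 75 78   # -> off
```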
The result is super efficient. The pellet stove thermostat is set to 77 degF early in the morning when the house is at its coldest. The fan slowly ramps up as the temperature in the living room increases and blasts hot air into the rest of the house. The thermostat then settles down to 68 at 7AM, which keeps the rest of the house at about 65. On weekends it is set for 74 during the day, which keeps the rest of the house at a comfy 70. In addition, my pellet use has been cut by around 30% as the fan works in perfect harmony with the stove to achieve peak efficiency.
I estimate that I will go through a ton of pellets (50 bags) an average of every 70 days throughout the heating season. That totals 3 tons, possibly a bit more. I buy clean burning pellets which are a little more expensive at $250/ton, so my cost will be between $750-$1000 for the entire season. The oil heat has never fired up since I completed this system. We keep the oil heat thermostats at 62 so they will kick on if we are away from home for an extended period of time or if the stove were to have a problem.
The pellet stove was $3500 installed. It will pay for itself in one heating season (with some pretty nice nights out to spare). Mission accomplished.
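A quick back-of-the-envelope check of that payback claim, using the figures above:

```latex
\text{seasonal savings} \approx \underbrace{\$4000\text{--}\$5000}_{\text{projected oil}} - \underbrace{\$750\text{--}\$1000}_{\text{pellets}} \approx \$3000\text{--}\$4250
\qquad
\text{payback} \approx \frac{\$3500 \text{ (installed cost)}}{\$3000\text{--}\$4250 \text{ per season}} \approx 1 \text{ heating season}
```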
In future articles, I’ll describe the hardware components, how I built the controller, and also discuss the software that I wrote to run it.

How the NCAA and the BCS Ruined New Year's Day

You can read any number of articles criticizing the BCS bowl system, almost all of them decrying that there isn’t a playoff system to decide a national champion. I completely agree, but that is not what this post is about. It is about what I really, really hate about the NCAA and the BCS system. In fact, I hate it for so many reasons that it is hard to pick the worst offense. But let’s start by discussing their second worst offense; ruining New Year’s Day.
The current bowl system uses a computer to decide who the top two teams are, and then arranges a “national title” bowl game on a weekday night almost a week after New Year’s Day, with a sprinkling of minor bowl games almost every night for the entire first week. If you are an undefeated, NCAA Division I football team and you aren’t picked, you’re out of the running (sorry Boise State, sorry TCU).
For you youngsters out there (< 35 years old), let me relate how New Year's Day used to unfold around 20+ years ago. There was no BCS bowl system to pick a mythical national champion and there was obviously no playoff. Most minor bowl games, which involved teams without a chance of being considered number 1, had already been played. What were left were the big dogs: the Rose Bowl, the Orange Bowl, the Sugar Bowl, the Cotton Bowl, and later the Fiesta Bowl. These games had match-ups determined by conference champions and schedule. The bowls that included independents (such as, at that time, national powers Miami, Florida State, Penn State, and Notre Dame) had to attract top contenders with a match-up that provided the best chance for a win to result in a consensus national title. So if there were 2 undefeated schools, and they were not playing each other in a bowl, each would try to arrange a bowl match-up with another top contender so they could try to dominate and win the "national championship argument" over the first week of the year.
Later on, the NCAA moved one or two of the biggest bowl games to January 2nd in an attempt to simulate a true national title contest. This was a precursor to the BCS and it diluted the New Year's Day games a bit, but there was still an outside chance that those games could have an impact on the national title. Was this an imperfect system? Of course it was. The only valid system would pit the top 4 teams (or 8, etc.) in playoff games so the title would be decided on the field. But it had one huge advantage over the present system: the New Year's Day bowl games meant something. You could look down the list of match-ups and get excited about hunkering down in front of the tube, drinking Bloody Marys to lessen the effect of the previous night's revelry, and watching some great football with your friends. Then over the following week you could argue about who was number 1 while waiting for the final polls to be published.
Contrast that with today, January 1, 2010. Here is the list of New Year's Day match-ups:
Outback Bowl - Northwestern vs Auburn
Gator Bowl - West Virginia vs Florida St
Capital One Bowl - Penn State vs LSU
Rose Bowl - Ohio St vs Oregon
Sugar Bowl - Cincinnati vs Florida
Wow! Can you contain your excitement? I'm not saying that there aren't a few good games here; there are. But since there is zero impact on the national championship picture, do you really care who wins these games (unless you are an alumnus of one of these schools)? I didn't think so.
These games are followed by some more meaningless bowls on January 2nd, 4th, 5th, and 6th, with the "National Championship" Rose Bowl game on Thursday night, January 7th, at 8:00 PM EST. Oh boy, a week after the holiday season, on a work night and after 3 straight weeks of football, I get to see a national championship game that doesn't conclusively decide anything! If I can stay awake.
What sort of idiots decided to implement a system that devalues every game leading up to an "ultimate" game that is no better at deciding a true national champion than the previous system? Simple: idiots whose primary goal has nothing to do with college football, sports, or athletics. Their one and only goal is to maximize revenue, and all sports fans are the losers for it.
Now that I have discussed the NCAA's second worst offense, you may wonder what their worst crime is. This one applies to all big-time NCAA sports. It's that these revenue hogs are the same bunch of hypocrites that won't allow a kid from the projects to accept a plane ticket for his parents to watch a game, a part-time job, or a small monthly stipend because they say that college athletics would be "corrupted". Meanwhile, the coaches and the NCAA are making millions and billions of dollars, respectively, marketing these same kids to garner TV contracts, sportswear endorsements, advertising revenue, and merchandising deals.
And what do the kids get if they don't make it to the pros after devoting untold hours each week to practice and travel for their entire college career while simultaneously trying to maintain their grades (assuming that they attend one of the very few schools that even care about their schoolwork)? A trip right back to the projects. How can you not hate these guys?

Recession, Stimulus, and Keynesian Economics

A Short History

As we are all too well aware, the economy is now in the deepest recession since the Great Depression. Much has been written about irresponsible bankers, irresponsible borrowers, speculators, investors, regulation, deregulation, and ineffective government. But that is not the focus of today’s article. Today, we find ourselves in the midst of the greatest binge in government borrowing and spending in the history of civilization. One may or may not agree with our government’s actions, but it is fitting to examine the stated economic rationale behind the policies.

Prior to the mid 1930s, most economists believed that free markets were self balancing and would emerge from recessions if left to their own devices. They knew that capitalist economies were a balance between savings and investment. If there was a large expansion in savings, then there would be a large supply of money available. The law of supply and demand mandates that any commodity in great supply (in this case money) will become less expensive. In the case of capital, this is manifested by lower interest rates. As interest rates fall, it becomes less expensive for both consumers and businesses to borrow and invest. Consumers buy CDs, stocks and bonds (because we are talking about savings, not consumption, for the moment ignore purchases of consumable goods such as cars, TVs, and the like). Businesses find it cheaper to borrow money to expand manufacturing capability, invest in research and new product development, expand marketing, or move into other product lines and geographies. As investment ramps up, capital (savings) are absorbed and put to productive use, resulting in economic growth. In the shorter term, as capital is sopped up, there is a reduction in the money supply. Interest rates once again begin to climb, bringing the entire system back into balance.

At least that was the classical theory. But during the Great Depression, economists were stumped. It is generally agreed that the Federal Reserve contributed to the onset of the crisis by raising interest rates in the late 1920s in an effort to stem stock speculation, but that is a side issue. The great stock market crash occurred in 1929 and the economy was in a downward spiral.

Keynes provides an explanation

The depression went on for years. Why didn’t automatic mechanisms in the free market bring the economy back into balance? In 1936, John Maynard Keynes believed that he knew the answer, which he published in his masterpiece “The General Theory of Employment, Interest, and Money”. In it, Keynes argued that the basic problem of the Depression (or any deep, lasting recession) was that there was a lack of investment on the part of business in spite of low interest rates. If there is a general malaise, businesses surely are not going to risk taking on debt to expand into a future where there is uncertain demand for their products. Such a course is far too risky. And here we come to the crux of Keynesianism: Keynes’ solution was that the only recourse remaining was for government to step into the breach and spur investment by borrowing and spending. Government spending would guarantee (some) businesses economic activity, which would provide a market for other industries that serve those businesses, and so on. This would halt the downward slide and reverse the course of the economy. As business recovered, the government could withdraw and allow private enterprise to return to normal.

It should be noted that “The General Theory” was published in 1936, 3 years into Franklin Roosevelt’s first term. Under Roosevelt, government spending had already increased 50% by 1936 as compared to 1929 ($15B vs. $10B). Private investment did increase somewhat, but the unemployment rate fell only to 17% from 25%, and in spite of government expenditures it would rise once again (to 19% by 1939). This was hardly a vindication of Keynesianism. In his 1953 work, “The Worldly Philosophers”, Robert Heilbroner provides the most cogent explanation of this ineffectiveness, one which is eerily prescient of the current policy debate:

Neither Keynes nor the government spenders had taken into account that the beneficiaries of the new medicine might consider it worse than the disease. Government spending was meant as a helping hand for business. It was interpreted by business as a threatening gesture.
Nor is this surprising. The New Deal had swept in on a wave of anti-business sentiment; values and standards that had become virtually sacrosanct were suddenly held up to skeptical scrutiny and criticism. The whole conception of “business rights,” “property rights,” and “the role of government” was rudely shaken; within a few years business was asked to forget its traditions of unquestioned preeminence and to adopt a new philosophy of cooperation with labor unions, acceptance of new rules and regulations, reform of many of its practices. Little wonder that it regarded the government in Washington as inimical, biased, and downright radical. And no wonder, in such an atmosphere, that its eagerness to undertake large-scale investment was dampened by the uneasiness it felt in this unfamiliar climate.

Hence every effort of the government to undertake a program of sufficient magnitude to mop up all the unemployed–probably a program at least twice as large as it did in fact undertake–was assailed as further evidence of Socialist design. And at the same time, the halfway measures the government did employ were just enough to frighten business away from undertaking a full-scale effort by itself. It was a situation not unlike that found in medicine; the medicine cured the patient of one illness, only to weaken him with its side effects. Government spending never truly cured the economy–not because it was economically unsound, but because it was ideologically upsetting.

Note that during World War II the federal budget peaked at $103B, fully 10 times the 1929 amount. This did result in full employment, but at the cost of rampant inflation, as would be expected when the government indulges in the wholesale expansion of the monetary base.

Keynes misunderstood

Many modern politicians invoke Keynes in the name of government expansion, but the fact was that Keynes was a great admirer of Edmund Burke. He believed that government activity in the economy should be targeted and temporary, should focus on stimulus and investment, and should be withdrawn as soon as the free market was once again healthy.

In a letter to the New York Times in 1934, Keynes wrote: “I see the problem of recovery in the following light: How soon will normal business enterprise come to the rescue? On what scale, by which expedients, and for how long is abnormal government expenditure advisable in the meantime?” [emphasis added]

Are current policies “Keynesian?”

Governments around the world, from China, to the European Union, to the United States, are passing “stimulus” bills. The idea is to spark economic activity in an effort to get business to once again invest. Given what we have learned, an effective stimulus should have the following attributes:

    • It should be large enough to have an effect. The 2007 Gross Domestic Product of the US economy was $14T. An $800B stimulus package is 5.7% of GDP. The 2007 federal budget was $2.8T. As explained above, in World War II, the U.S. government spent 10x the 1929 budget.
    • It should be immediate. If the government is going to borrow huge amounts of money to stimulate the economy, it needs to get that money into the system as quickly as possible. One way to do so is to fund projects that are already in the pipeline. The money should not be spent on programs that do not spur investment or spark economic activity in the private sector.
    • It should encourage private investment. No matter how much the government spends, if the private sector is not confident about the future, they will not invest. Therefore, the program should endeavor to make private investment as attractive as possible. Lower capital gains taxes encourage companies and individuals to take on more risk. Lower individual tax rates immediately provide an infusion of capital into the system, as well as incentivizing individuals to take more risk. If federal income tax, social security, medicare, state income tax, and property taxes add up to a tax rate of 65%, one can hardly expect an individual to risk their savings or livelihood in an effort to better their economic situation. They will be more reluctant to work harder for a bonus, more reluctant to join a start-up, more reluctant to relocate. In short, if you lower the rewards, then you have depressed the risk-taking activities that are the beating heart of a free market economy. Counter-productive in the best of times, policies that depress the investment climate are potentially catastrophic in the midst of a recession.

A word about the monetarists

Typically, one hears that the economic debate is between Keynesians and monetarists. Policy makers, rightly or wrongly, tend to invoke Keynes when arguing for more government involvement in the economy. Other policy makers invoke monetarists, principally Milton Friedman, to argue for a more laissez-faire approach to the free market.

What is monetarism? At its core, it is the belief that government can best tune the economy and prevent economic bubbles and recessions by controlling the supply of money and balancing the budget. By what mechanism? Primarily a central bank’s (for example the Federal Reserve’s) control of interest rates, as well as its sale (or withdrawal) of government bonds. As espoused by Milton Friedman, government should concentrate primarily on keeping prices stable. If there is too much money in the system, the result is inflation. Too little and there could be a lack of investment, causing a recession, and in severe cases a deflationary spiral. (Some ask why falling prices are a problem. Ask yourself what the result would be if businesses were incapable of making a profit.)

Ben Bernanke, the current Chairman of the Federal Reserve, is generally thought to be non-ideological in his views of Keynesianism and monetarism. In his writings and actions, he seems to be a pragmatist, willing to use whatever tools are at the disposal of government to forestall a crisis or alleviate one.

Who is right?

In my (admittedly) uneducated opinion, neither school of economic thought is fully correct or incorrect. From a non-ideological viewpoint, we don’t live in a world with a pure free market economy, free from all regulation and government interference. Nor do we live in a world with economies fully controlled in minute detail by government (unless you are one of the unfortunates residing in countries like Cuba or North Korea).

Was it a lack of regulation that caused the housing bust, as some claim? Were banks running wild? Did Alan Greenspan lower interest rates too much in the wake of the Internet bust and 9/11 (monetarism) in an effort to forestall a severe recession, thus contributing to the housing bubble?

What of government interfering in the housing market via the Community Reinvestment Act and the quasi-governmental entities, Fannie Mae and Freddie Mac? Most of us remember a time when a 20% down payment and a high credit rating were required to qualify for a mortgage. Was it deregulation of the banks that loosened lending standards, or was it that the CRA mandated that 50% of bank lending “meet the needs of the entire community”? (Note that this threshold was raised from 42% in 1999 by the Clinton administration.) At the same time, Fannie Mae and Freddie Mac were mandated to meet housing goals set by the Department of Housing and Urban Development. As such, they bought and securitized trillions of dollars in sub-prime mortgages. One can hardly declare the failure of a “free market” that requires lenders to loan money to those who would otherwise be denied as poor credit risks, backstopped by GSEs (Government Sponsored Enterprises) holding trillions in risky mortgages; $6 trillion total, fully half of all mortgages written in the United States.

Modern economic systems are complex. Government regulation and intrusion only make them more so. Pure monetarism or Keynesianism is nearly impossible in such an environment. The best that we, as citizens, can do is to be watchful that government actors are invoking neither Milton Friedman nor John Maynard Keynes as a smokescreen in the pursuit of non-economic goals.

  • Does a “Keynesian” policy meet the test as summarized above? Will it be timely, targeted, temporary, and large enough to have an impact?
  • Keep an eye on incentives, as they are what drive a market economy. Will a proposed regulation throw sand in the gears of commerce at a time when we need as much economic activity as possible? Will a tax policy or law encourage investment by both business and individuals, or suppress it? Will it encourage risk taking and innovation or reduce the rewards of success to the point that investors aren’t willing to fund a venture and individuals are unwilling to go out on an economic limb?
  • How much of a policy is economic and how much is social engineering? Is a policy designed to get the economy growing, or to change our society?

One last note: whether one agrees or disagrees with a particular social policy, it is extremely dangerous to add more uncertainty to a market economy that is already rife with fear. That is simply bad policy, whether it originates on the left or the right.

Bought and Sold

About a week and a half ago, it was announced that the company that I work for was being purchased by a large multi-billion dollar firm.
Turnabout is fair play, I guess. I have been on both sides of acquisitions in my career, since 1997 exclusively in the role of the acquirer. What’s interesting is how your perspective shifts depending on your specific situation. Following is a subset of the infinite number of personal and business situations, and a take on what the acquired employees might be feeling in each:
• Acquired company is in financial trouble; acquirer is known as a fast moving, exciting place to work: I still have a job! And a future!
• Acquired company is not in financial trouble; acquirer is known as a fast moving, exciting place to work: This may not be so bad. Maybe there will be even more upside.
• Acquired company may or may not be in financial trouble, acquirer has the same initials as the state of California: Woe be to all ye who venture forth. For I have brought the gates of Hades to the corporeal realm!
• Acquired company is in financial trouble, acquirer is a staid but respectable player in the industry: This may not be so bad. Let’s see how this plays out.
• Acquired company is not in financial trouble, acquirer is a staid but respectable player in the industry: This is an endgame? Anticlimactic. Guess I’ll take a “wait and see” attitude.
• Acquired company may or may not be in financial trouble, acquirer is the largest software company on the planet: I don’t want to live in Redmond! Who the hell can afford a house in Redmond! What about my kids, my family, my friends, my life! MY CODE!!!
You get the idea. There is an entire spectrum of emotions that one could experience depending on their specific situation. So given my situation, here are mine.
I’ve worked at my current place of employment for 6 years. When I came on board, we were a going concern, but by no means was our future assured. We had about 150 employees and we partnered with whichever industry players we could get to sign an agreement. It was kind of like being a cat migrating with a herd of elephants. One wrong move and you’re road kill. But damn, what a great feeling! It was exciting. People knew each other and everyone pulled together. We had what in military circles is known as “esprit de corps”. And you know what? We still do. Even though we now have over 1000 employees. I don’t know everyone in the company anymore, but it’s amazing how many people I get to interact with. And our start-up culture is now so ingrained that people who are too political or who avoid accountability are quickly discovered and marginalized. That’s not nearly as brutal as it sounds. Our culture is to have a great work/life balance and just about everyone buys into that; but we’re not the type of place where you can just show up in the morning, do nothing, and nobody will notice. If you truly take pride in your work, and you have some modicum of self-motivation, our company is a great place to work.
Now it’s 2007. We’re certainly not an industry titan, but we definitely have an impact; we make a difference. We have a respectable market share in certain segments, so other players have to pay attention to us. And we’re still growing. We’ve been successful, with a great management team that has deftly navigated some pretty tricky waters. We’ve avoided those elephants and even danced with a few of them. So I guess I knew that sooner or later a larger player would pull the trigger. I even thought I was ready if it happened. But the thing is; I had never been emotionally attached to a company before. It’s harder than I thought.
During the workday, I am moving forward. Work occupies my mind and nothing much has changed. I honestly doubt that much will change in the day-to-day life of the employees. But it is still the end of an era, and the greatest experience of my 28-year career.
If in the years to come my work is 1/2 as fulfilling as it has been in the last six, I will consider myself exceptionally lucky.

The Importance of People

I was out to dinner with some colleagues last week (if you are a regular reader, “out to dinner” has become a frequent phrase in this blog) and we began to exchange anecdotes about our careers. The attendees were software teams from two different companies with a range of experience from intermediate (6 to 8 years experience) to senior (27 years of experience; me).
Anyone familiar with high-technology culture knows that cool technology is so important to engineers that they will pass up higher pay (within reason) to work on interesting projects. We love talking about all the cool stuff that we’ve done. When it was my turn, I dug back into the early 80s to describe the wireframe 3D rotation software that I worked on, a contact lens expert system in 1982, automated climate and lighting control of my house in 1986, photographic image display software written in hand-tuned Assembler in 1990, compression algorithms in 1991, neural networks in the 90s, etc., etc. In the process, my career trajectory has gone from a 3-person company, to my own business, to a series of startups, to a Fortune 500 software company, and finally to my current position with a company that began as a startup but now has over 1000 employees.
In the course of conversation, it occurred to me that my perspective has changed. I no longer feel that my career accomplishments are so strictly defined by technical achievements or shipping products. Engineering milestones age rapidly. It’s hard for today’s engineers to relate to the challenge of writing a 3D rotating wireframe model on a 20 MHz 16 bit processor when they are accustomed to playing virtual reality games on desktop computers with supercomputer chips on the graphics card.
At this point in my career, the accomplishments that really stand out to me are those related to the people I’ve worked with: entry-level engineers whom I have helped to learn proper software development processes, developers who have become successful managers or Software Architects, and line managers who have become executives. Identifying potential talent and mentoring and working with those people over the years, one hardly notices that the former novice now has an opinion of his own; more often than not an opinion that has become as educated and well thought out as yours. It’s not always fun to be bested in a debate, but there have been many times that I have suppressed a small smile of satisfaction when a former “student” one-ups his “teacher”. That’s validation that both of you have grown.
So my perspective has changed. The “cool” engineering accomplishments that I worked on in the past will eventually be forgotten. But the people who have achieved and accomplished success due to their own hard work, with just a little bit of push and guidance from me? Those are “accomplishments” that we share, that will be passed down to their future colleagues, and that will live on.

Software Engineers and Musicians

There is almost no difference between a preening, tattooed rock star and a Star Trek worshipping, Cheetos-eating computer geek.

Ok, maybe I’m overstating that just a tad. I’m an executive at a software company. I started out as a software engineer and I still consider myself a software engineer (of course, as soon as one moves into a management role, former colleagues snicker behind your back when you continue to refer to yourself as a “software engineer”). I have worked in the software industry for 27 years. With all the seismic changes that have occurred over that time, from green screen terminals, to PCs, to networks, to desktop supercomputers, to the Internet and mobile computing, one fact has remained constant: an inordinate percentage of software engineers are also amateur or semi-professional musicians.

I have personally observed this phenomenon through the years. I can’t count the number of times that I have been out to dinner with new acquaintances who work in the software industry. The talk turns to music, and inevitably a majority of the group actively plays an instrument. One colleague of mine recently quoted a study (I have no idea if this is true) which showed that 85% of IT workers were also musicians.

The question is: why should this be so? Believe me, my friends and I have batted around a number of theories. There are some obvious similarities; the stereotypical rock star exists on Jack Daniels and cocaine, the software engineer on Twinkies and Jolt cola. Rock stars rebel against bureaucracy and the “suits” at the big companies (or at least they used to, when the music was more important than commercial success). Software engineers do the same (that’s why so many good ones opt to work at small companies and open source projects). Rock stars have their band. Software engineers have their team. Both choose jobs that don’t require a dress code (just turn out good tunes or good code) and have more flexible hours (or just more hours). Rock stars work towards the CD release, software engineers the product release. Just as many software engineers are amateur musicians, many musicians are amateur computer geeks.

I have a theory that I think neatly explains the significant overlap in interest and skill between these two groups. Like most good theories, this one is forehead-slappingly obvious once you hear it. In a nutshell, software engineers and musicians do virtually the same thing. Allow me to explain. Most people who do not work in the high-tech industry assume that software engineering is mathematical in nature. Ask any teenager why he or she is uninterested in programming and I’ll bet you that most will say “I’m not good at math”. There is a perception that programming is a mathematically based calculation, probably because the parallel idea persists that computers are nothing more than giant, blindingly fast calculators. But the truth is that programming is not a mathematical exercise; it is a creative exercise. Programmers are given a language syntax, programming interfaces, and a toolset which together form a structured framework. Within that framework they can be inventive, creative, and imaginative. Truly great programmers create elegant, almost beautiful solutions to complex problems. Peer recognition of their creativity and expertise is a large part of their reward. Sound familiar? Musicians are also given a “language syntax” and a set of rules that form a structured framework. Within that very structured framework they are also called upon to be inventive, creative, and imaginative. A software engineer hones the same skills that are required of the musician. If he or she also happens to have a good ear, why not pick up an instrument and leverage those skills in a different and equally interesting realm? All that is required is learning another language and a different structure.

I have no idea how one would go about attempting to prove whether my theory is correct, but I will admit to having a hidden agenda. My hope is that the next time the reader meets a software engineer, the first thought that comes to mind will not be “computer geek”. It will be “artist”.