Tuesday, September 28, 2010

The Bright and Dark Sides of the Smart Grid

As I was catching up on email that accumulated during my travels last week, I ran across two items highlighting the contrast between the shining potential of the emerging "smart grid" for energy and its darker, more dangerous side. In his keynote address at the first annual GridWise Global Forum, IBM's CEO Samuel J. Palmisano described the vision and opportunity of a closely interconnected, highly efficient global energy system, while the unfolding story of the Stuxnet computer worm infecting the control system of Iran's Bushehr nuclear reactor and other facilities serves as a chilling reminder of the vulnerabilities that will likely accompany this revolution. That doesn't justify clinging to our un-networked past, but it certainly strikes a cautionary note, as Mr. Palmisano himself pointed out.

Last week was a big one for energy conferences. I skipped both the GridWise session in D.C. and MIT's annual Emerging Technologies Conference to attend IHS Herold's Pacesetters Energy Conference. I'll share some of my insights and observations from the latter in the weeks ahead, but for now I recommend Mr. Palmisano's GridWise speech as a good overview of the characteristics of a truly smart energy system and the ways in which our present energy sources and distribution networks fall short of constituting a well-functioning system. Without being Pollyannish, it presents a more positive outlook than the widely-quoted comment from the same conference by GE's CEO Jeffrey Immelt, concerning US energy policy, "It's just stupid what we have here today."

Yet while most of Mr. Palmisano's remarks focused on the inevitability and benefits of knitting together and adding intelligence to regional, national and global energy infrastructure and markets (and other key systems), he also had this to say about the security concerns this would create: "And when it comes to security--the technology is robust, but as the world's infrastructure becomes networked and interconnected, the exposures multiply exponentially. How vulnerable is the world's essential infrastructure? Are our electric grids only going to be as secure as a website? If we don't come together to forge a new policy framework that protects the individual's privacy and the community's and nation's security, people may say 'stop.' And they should."

I don't know if he had intrusions like Stuxnet in mind. I also suspect it's going to take a lot more than a new policy framework to prevent governments, organized crime and random hackers from exploiting the inevitable security gaps in the smart grid and other smart infrastructure to cause mayhem. Having treated the nefarious work of spammers and hackers as more of a costly annoyance than an intolerable violation--when all that was at stake was personal and financial data--will we be as cavalier about similar risks to our physical safety and security? Or does the rise of the smart grid portend a great "hacker crackdown", to use the title of a classic book on the hacker subculture?

Much of the speculation about the sophisticated Stuxnet exploit, which apparently targets a particular kind of industrial process control hardware and software, suggests that it took the capabilities of a government to develop. However, if we've learned anything from the last couple of decades of computing advances, it's that anything a government can do today in this field a smart, motivated individual can do in the not-so-distant future, particularly once he knows it can be done; it's an irresistible challenge for some. Ubiquitous computer viruses today, endemic power plant and refinery viruses tomorrow?

I agree with Mr. Palmisano concerning both the inevitability of smart grid development and the risk that security concerns or actual events could halt this shift in its tracks. As we call on the grid--or more accurately, grids--to handle rapidly increasing numbers of distributed power sources, particularly intermittent ones like wind and solar power, and as customers demand more choices and more control over their energy usage, the old un-smart grid will soon cease to be up to the task. However, we're already seeing pushback against smart meters in some areas, without any concrete evidence of misuse of consumer data or other harm. Imagine what this would look like if the smart grid had to reboot as often as your PC or periodically became infected with malware that shut down parts of it for hours or days. I know that the companies and agencies involved in the smart grid are working hard on solutions to these challenges; however, I'm just as sure that it's going to require a completely different approach than the one we've employed for dealing with computer security, with operating system and anti-virus providers seemingly always one step behind the bad guys.

Friday, September 24, 2010

Is Gasoline Too Cheap?

It's an article of faith among many observers of the oil industry that gasoline is too cheap in the US. Environmentalists and economists point to various externalities that aren't included in the price consumers pay, while carmakers and alternative energy developers need a (much) higher price to make advanced vehicle technologies and substitute fuels competitive without subsidies. When someone asks, "Too low compared to what?" the response usually draws a comparison to prices in Europe and elsewhere. Yet while perusing a clever historical price comparison tool on the Energy Information Administration's website, I was struck by how high today's gas prices are, when adjusted for inflation, compared to those that prevailed for most of my life--other than during energy crises. That's surely a factor in current weak US gasoline demand, which has been running slightly below last year's, and a full 3% less than the record levels of 2007.

In the course of searching for a standard table of historical gasoline prices, I recently ran across a handy new feature (or merely one I hadn't seen before) of the EIA's Short-Term Energy Outlook report. It allows the public to compare the nominal and real prices for crude oil, gasoline and other fuels, and electricity, over a flexible interval adjusted with a slider control. The first thing I noticed was that although today's price for West Texas Intermediate crude oil of $76 per barrel seems pretty low compared to its $145/bbl high in July 2008, it's actually higher than the inflation-adjusted price for most of the period from 1973-2006, with the exception of the aftermath of the second 1970s oil shock. Now, the Consumer Price Index might not be the most appropriate measure of inflation for crude oil, as I've described in some detail before, but it is perfectly reasonable to apply it to gasoline prices.

On that basis, this week's national average of $2.72 per gallon--$0.17/gal. more than one year ago--is higher than the $0.53/gal. average (equivalent to $2.32/gal. today) for 1974, following the Arab Oil Embargo that helped trigger a severe global recession. It's higher than the $1.17/gal. average ($2.37/gal.) for 1985, before a flood of new production from the North Slope, North Sea and elsewhere broke OPEC's pricing power for more than a decade. It's even higher than the $1.35/gal. ($2.20/gal.) that we paid in the final lead-up to the first Gulf War in late 1990. In fact, it's almost a full dollar higher than the $1.75/gal. inflation-adjusted average for 1986-2005.
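The CPI adjustment behind these comparisons is simple enough to sketch in a few lines of Python. The ratio used in the example below isn't an official CPI series; it's just the multiplier implied by the post's own 1974 figures ($0.53 nominal, roughly $2.32 in today's dollars):

```python
def to_todays_dollars(nominal_price, cpi_then, cpi_now):
    """Convert a historical nominal price into current dollars via the CPI ratio."""
    return nominal_price * (cpi_now / cpi_then)

# The 1974 figures quoted above imply a price-level ratio of about 2.32 / 0.53 ≈ 4.38
implied_ratio = 2.32 / 0.53
print(round(to_todays_dollars(0.53, 1.0, implied_ratio), 2))  # → 2.32
```

The same function works with any base and current CPI index values; only their ratio matters.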

When gas prices dipped below $2.00/gal. in late 2008 and early 2009, that provided a significant stimulus to an economy suffering from the combination of a recession and financial crisis. At today's level, however, not only are gas prices not stimulating the economy, but they must be a significant drag on it. Our consumer psychology may be anchored for the time being on $4 as the gauge of what constitutes a high gas price, but compared to the prices that were in effect when our current patterns of mobility and employment were set and the vast majority of the US vehicle fleet was purchased, $2.70 seems more than sufficiently high to inflict economic pain.

Don't get me wrong. I understand full well that a realistic assessment of the cost of greenhouse gas emissions would add at least another $0.10-0.20/gal. to gas prices, and that the current level of US motor fuel taxes is inadequate to pay for the proper maintenance of our highway infrastructure, let alone all the other transportation priorities we'd like to pursue. Other economies have adjusted to much higher gas prices, though these do not prevent the European Union from being a larger net oil importer, in aggregate, than the US is. If OPEC can keep crude oil above $70/bbl when global demand is slack, it's anyone's guess how high it will go when the global economy is actually growing strongly again. Yet while higher gas prices may well be in our future for many reasons, we should recognize that today's prices remain at oil-crisis levels, and the view of them as "too low" is very much in the eye of the beholder.

Monday, September 20, 2010

Green Jobs and a Renewable Electricity Standard

With a full-blown climate bill off the table at least for this year, a coalition of renewable energy project developers, trade associations, investors and technology providers is apparently rallying around a simpler energy bill that has been on the back burner since last year. The group's arguments in favor of the bill's key provision of a national "renewable electricity standard" (RES) focus on "green jobs" and concerns that China is passing us by. There is merit in both of these themes, though not to the extent that the RES Alliance for Jobs suggests. A national RES, which would require utilities to source a growing proportion of electricity supplies from renewable energy, might help project developers and investors, but its impact on cleantech manufacturing--which is at the heart of the employment and competitiveness issues--looks much less direct.

When I examined a draft of the American Clean Energy Leadership Act of 2009 (ACELA) last summer, I found it preferable to the bloated Waxman-Markey bill that the House had just passed. ACELA was more genuinely bi-partisan, coming out of the Senate Energy and Natural Resources Committee chaired by Senator Bingaman (D-NM), and it took a more realistic view of the continuing importance of a broad spectrum of energy resources, not just renewables. However, along with various other energy measures over the last year, it was overshadowed by other legislative priorities and the Macondo oil leak. In the event that this bill gets another chance before the new Congress takes office next January, we should understand what an RES can and can't do.

The industry is doubtless correct that expected demand for renewable energy hardware plays a key role when companies decide whether to begin or expand production of such gear in the US. An RES would likely boost demand for this equipment. However, as the reports earlier this year and late last year concerning the share of the Treasury's renewable energy grants awarded to non-US firms and projects using non-US equipment made clear, this is a necessary but not sufficient condition for a healthy US cleantech manufacturing sector. Absent a concerted effort to make US manufacturing more globally competitive, much of the benefit--including many of the associated "green jobs"--would accrue to other countries. An RES, like other measures emphasizing deployment, rather than manufacturing, would be a weak domestic job creator, at best. And that's without factoring in the job losses in energy-intensive sectors resulting from the higher electricity prices that a national RES would impose, as the costs of compliance get passed on to ratepayers.

Another reason that an RES, at least at the modest levels contemplated under the ACELA bill, probably wouldn't spark a huge wave of renewable energy development and manufacturing is that most of the states with significant wind, solar, or geothermal resources already have their own RPS mandates in place, generally at levels well above those that would be set by the national RES. So at least one likely short-term result would be inflation in the prices of the renewable energy credits (RECs) that make these standards work, as the states without much reliable wind or sunlight, or easily accessible hydrothermal reservoirs, scrambled to cover their new quotas from out-of-state providers. This would be a real boon for REC traders, at the expense of regulated utilities and their ratepayers.

As for being beaten by China in clean energy technology, this is a serious concern, particularly in light of recent reports that some Chinese support for cleantech might violate World Trade Organization rules. At the same time, we must factor in the reality that China has become the "workshop of the world", not just for cleantech hardware but for many other products, as well. If we wonder why China makes 40% of the world's photovoltaic cells and 30% of the world's wind turbines, the answer has a lot in common with why they also make so many of the world's laptops, PCs and flat-screen TVs. It also helps that China has one of the world's fastest-growing electricity markets, creating enormous internal demand for a wide variety of generating technologies, including renewables. Since US electricity demand has been growing much more slowly, and has recently declined, renewables here can't simply capture their fair share of growth; they must squeeze something else out--typically something that produces electricity more reliably or cheaply.

As I noted in last Monday's posting, renewable energy still needs help to compete with conventional energy. Finding the right means of providing that help is complicated by the aftermath of the financial crisis and recession. A national RES comes off second-best to a lot of other approaches that don't seem very feasible right now, and in any case it won't solve our unemployment and global competitiveness challenges. Perhaps it's the least-bad comprehensive support we can realistically provide renewables today, if we are truly convinced we must provide extra assistance beyond the federal tax credits and state-level policies already in place--and which its implementation would render somewhat redundant. That's a far cry from the expansive claims being made for it by a number of entities that stand to gain if it were enacted.

Friday, September 17, 2010

Fuel Economy Gold Standard

I've long been fascinated by the X-Prize approach of providing substantial (but not astronomical) incentives for key breakthroughs: private sub-orbital flight, lunar landers, and most relevantly for this blog, the Progressive Automotive X-Prize for a 100 mile-per-gallon car, which began in 2006. The latter competition has been won by a trio of extremely efficient cars with very different architectures, power sources, and appearances. Splitting a $10 million prize probably won't even come close to reimbursing these teams for the cost of developing their cars, but the associated visibility should lead to some valuable opportunities. In a larger sense, the competition has served another useful purpose. In addition to furthering the technology for continuing to improve the efficiency of mainstream automobiles, it provides a gold standard reference against which to gauge the lavish claims of fuel economy we've already begun to hear from the makers of various plug-in electric vehicles.

One of the main aspects that impressed me about the Automotive X-Prize was the determination of its founders to avoid the superficial approach of merely counting how many gallons of liquid fuel each competing car burned, in favor of a comprehensive energy consumption metric, MPGe, or miles per gallon equivalent, which is based on the gasoline-equivalent energy used, regardless of source or form. MPGe gave the X-Prize judges a fair and unbiased means of comparing cars running on gasoline, ethanol, hydrogen, electricity, or any other energy carrier, onboard or offboard.
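As a rough sketch of how an energy-equivalence metric like MPGe works, the electricity an EV consumes can be converted into gasoline-equivalent gallons using the EPA's 33.7 kWh-per-gallon energy-content figure. (The X-Prize's exact methodology differed in its details; this is just the basic idea.)

```python
KWH_PER_GALLON = 33.7  # EPA's gasoline energy-equivalence figure

def mpge(miles, kwh_used):
    """Miles per gasoline-gallon-equivalent for an electric vehicle."""
    gallons_equivalent = kwh_used / KWH_PER_GALLON
    return miles / gallons_equivalent

# e.g., a hypothetical EV covering 100 miles on 30 kWh
print(round(mpge(100, 30)))  # → 112
```

The same conversion can be applied to hydrogen or any other energy carrier, given its energy content relative to a gallon of gasoline.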

Now, as long as our primary concern is reducing our dependence on imported oil, a simple view of gallons of gasoline consumed isn't all bad. Displacing gasoline with electricity or hydrogen produced from domestic energy sources provides important benefits for energy security and our balance of trade, even if it doesn't save much actual energy in the process. One of the main arguments for vehicle electrification is that we can generate electricity in many different ways, but we can only produce gasoline or effective liquid-fuel substitutes for it in a few ways. However, in the long run, total energy consumption matters, particularly because of its strong linkage with emissions. Running a Nissan Leaf or GM Volt on electricity generated from coal--as would be the case in large swaths of the country--certainly saves oil, but it doesn't do very much for the atmosphere or climate. That's where MPGe comes in, and that's why I was pleased that the EPA and Department of Transportation have proposed something similar in their new fuel economy stickers for cars.

So when you see an ad for a new plug-in car that claims to get effectively 100, 200, or even 300 miles per gallon, you should take a careful look at it, both in terms of MPGe and the physical characteristics of the car in question. This is what a real 100 mpg 4-passenger car looks like: the 830 lb., one-cylinder Very Light Car of the Edison2 team--from Virginia, I might add. Or consider the 187 MPGe Wave II two-seater plug-in battery electric car from Li-ion Motors. Cars like this show what it takes to deliver that kind of efficiency on a comprehensive basis. If you're buying a plug-in production model in the next year or two, and it looks more like a normal passenger car than these do, with room for four or more passengers and equipped with all the usual accessories we've grown accustomed to, then you should recognize that while it might burn little or no fuel from petroleum, that's not the whole story.

Wednesday, September 15, 2010

Avoiding Commoditization

It's no secret that it's tough to make money in a commodity business. However, that's precisely what most renewable energy companies are attempting to do, by going after some of the biggest, most commoditized markets of all, in fuel and electricity. An article in MIT's Technology Review about a New Zealand biofuels start-up illustrates one path around this trap, by producing specialty products, though there are others, including various branding strategies. Choosing the right strategy may depend as much on where an industry is in its lifecycle as on the actual output of the company's technology.

It wasn't so long ago that even the enormous US gasoline market retained significant non-commodity attributes. Fuel marketers successfully differentiated themselves on service and perceptions of quality, and those that were best at this were able to command an extra penny or two at the pump--a huge uplift in profit margin when gas was under a dollar a gallon. One of the classics in this line was an old Shell marketing campaign focused on "Super Shell with Platformate", implying a unique formulation that delivered more power and more mileage. Only when I started working in the industry did I learn that essentially all gasoline contained Platformate, which was just a brand name for a common gasoline component that had been catalytically reformed to turn straight hydrocarbon molecules into higher-octane ring compounds. Today, with the basic formulation of gasoline set not by refiners but by federal and state environmental agencies, fuel marketers have attempted to differentiate themselves on the basis of infusing branded additives. I'm not sure how successful that has been, outside the niche market of motorists driving high-end vehicles with expensive, high-performance engines.

The New Zealand biofuels company in today's article, LanzaTech, is apparently using tailored bacteria to convert carbon monoxide from steel mill and power plant flue gas into ethanol and useful chemicals. The process is doubly interesting, because although carbon monoxide is not of great concern as a greenhouse gas, it is a major local pollutant, and it can slow the decay of other greenhouse gases and contribute to atmospheric ozone. Producing ethanol puts LanzaTech into direct competition with the rapidly growing output of crop-based biofuels producers, including highly efficient production from sugar cane in the tropics. But by targeting the production of petrochemical intermediates (used to make other chemicals) like butanediol, they can access higher-margin, less-commoditized markets. A facility wouldn't have to produce much of these products, along with the ethanol, to boost its profitability by enough to make a difference.

This strategy could also be beneficial for companies using costly processes for converting cellulosic biomass into fuel, particularly as they scale up from laboratory and demonstration scale, because scale is a key limitation on specialty products. One of the main reasons these products are typically worth a lot more than commodity fuels is that they are produced and used in relatively small quantities, and their markets aren't large enough to attract the biggest competitors. The first industrial-scale gas-to-liquids (GTL) facilities made good margins selling waxes and other specialty products along with the high-quality diesel they produced, but as GTL becomes more mainstream, with plants like Oryx GTL and the giant Pearl GTL in Qatar, that option becomes less valuable.

Biofuel start-ups already face enough technology challenges without focusing all their efforts on making ethanol that can't be differentiated from the output of corn ethanol plants that have been at this game a lot longer and have mastered not only the production process, but also the intricacies of managing their large supply chains--a huge, under-appreciated challenge for cellulosic biofuels. Focusing initially on specialty products, to the degree their technologies allow, would give them time to scale up and work out the kinks, before tackling a market in which cost is everything.

Monday, September 13, 2010

Post-Stimulus Transition for Renewable Energy

One of the largest uncertainties affecting the US renewable energy sector is how it will make the transition from the special subsidies provided under last year's stimulus bill (American Recovery and Reinvestment Act of 2009) back to the "normal" incentives available prior to the financial crisis and recession. The key element of this concerns the Treasury renewable energy grant program, which has stood in for the "tax equity" market that stalled around the time Lehman Brothers went under. Eligibility for the grants expires at the end of this year, and companies that have benefited from them are calling for an extension into 2011 or beyond. That looks like a long shot at this point. However, another proposal not specifically aimed at renewable energy could provide exactly the sort of transition support the industry requires, while also beginning the necessary task of treating this sector more like others.

As of the Treasury Department's most recent update, renewable energy projects have received a total of $5.2 billion under the "1603" grant program, with more than 85% going to large-scale wind farms. Solar electric and thermal projects received $330 million, or about 6%, trailed by geothermal, biomass power, and small-scale wind. With the financial markets that developers had previously relied on to exchange future tax credits for current cash in disarray last year, the 1603 grants were a crucial stop-gap. However, with electricity demand still lagging and renewables facing strong competition from cheap natural gas, the US wind industry has gone into a slump that might deepen further, once developers' new projects are no longer eligible for up-front cash grants, forcing them to wait for tax credits that accrue as power is generated.

Several proposals to extend the 1603 grants are floating around Congress, including one from Senator Cantwell (D-WA), but the mid-term elections are looming and the mood in the country is turning away from direct economic stimulus, so an extension is far from a sure thing. Nor does the argument that the grants are deficit-neutral, because they merely accelerate payments, entirely wash. Once the incentive for wind power reverts to the Production Tax Credit (PTC) or substitute Investment Tax Credit (ITC) on 1/1/11, companies would again need substantial taxable earnings to claim it, and not all would qualify. That's one of the main reasons that cash up front was such a powerful incentive for developers. It's also never been clear how the tax equity market was expected to revive fully as long as firms could get cash from the Treasury instead, without any transaction fees beyond filling out the paperwork. Whatever we do about the expiring stimulus grants, we need to get this market on a trajectory back to normal.

The best solution for bridging this transition might involve a measure that doesn't seem to have been aimed at the renewable energy sector at all. Last week President Obama proposed allowing businesses to expense 100% of capital investments in 2011. This would kick in just as eligibility for the 1603 grants ends, and at the 35% tax rate that most corporations are subject to, it could actually be worth more than the 30% renewable energy ITC upon which the grants were based. That would help compensate for the difference between receiving these funds up front and waiting to file a tax return. The new benefit would also be calculated on the amount invested, like the grants, rather than the quantity of power produced, as under the PTC.
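The comparison above can be sketched in a few lines, following the post's simple arithmetic: full expensing is worth roughly the corporate tax rate times the investment, versus a credit of 30% of the investment under the ITC. (This deliberately ignores depreciation timing and other tax subtleties; the project size is hypothetical.)

```python
def expensing_value(capex, tax_rate=0.35):
    """Rough value of expensing 100% of an investment: the tax avoided on that income."""
    return capex * tax_rate

def itc_value(capex, credit_rate=0.30):
    """Value of the 30% renewable energy Investment Tax Credit."""
    return capex * credit_rate

capex = 100e6  # a hypothetical $100 million wind project
# Expensing at a 35% tax rate edges out the 30% ITC by about $5 million here
print(expensing_value(capex) - itc_value(capex))
```

On this simplified basis, expensing is worth about five percentage points of capex more than the ITC for a fully taxable corporation.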

There's an additional advantage to this approach, which would put the decision for capital investment and allocation entirely back in the hands of corporate managers and boards--who are accountable for their results--rather than government bureaucrats with little experience at running a business or gauging which projects make sense and which don't. And if it means that companies that can't wait until they file taxes to collect the benefit must convince a banker or other investor of the merits of the project, that's an extra layer of market discipline that might winnow out some projects now, but would help ensure that those that survive are more viable.

In the long run, renewable energy must stand on its own feet, without incentives that are orders of magnitude larger, per unit of energy produced, than those for conventional energy. Most renewable electricity technologies aren't ready to make that leap, but forcing them to rely on the same 100% investment expensing that other businesses would be given next year (if enacted into law) looks like a good first step, instead of extending a stimulus program that must end sooner or later.

Friday, September 10, 2010

Climate-Proofing Infrastructure

Even in an election year, it's hard to make infrastructure repair sound glamorous. Perhaps that helps explain why the latest annual report card on the condition of US infrastructure from the American Society of Civil Engineers was so dismal, a "D" overall. In any given year, there are usually more exciting things to spend our money on, until we realize we haven't spent enough on these necessary props for our civilization for decades. The president's latest proposal to improve roads, rails and runways could help, though it faces skepticism from those who thought such fixes were already covered by last year's federal stimulus package. Perhaps what's missing is a green angle, and I don't mean that cynically.

If any aspects of infrastructure have acquired a hint of glamour lately, they're the ones that deal with making energy more sustainable or reducing emissions. The "smart grid" comes to mind, along with renewable power generation. As I was reading a recent New York Times op-ed concerning whether this year's bizarre weather is attributable to global warming--it's not, but it could be a taste of things to come--it occurred to me that climate-proofing our roads, power lines, train tracks, sewers, and other basic infrastructure could be at least as important as much more controversial policies addressing whether and how to reduce greenhouse gas emissions. In fact, whether climate change is caused in whole, in part, or not at all by humanity, we must still deal with its consequences. And even if all greenhouse gas emissions ended tomorrow--an impossibility--the climate is predicted to continue warming for a long time. That makes adapting our infrastructure to withstand climate change a suitably green endeavor.

However we explain this year's odd weather, including massive floods, heat waves and the fires in Russia--which incidentally contributed to a spike in US ethanol prices by driving up corn prices--scientists expect our future climate to include more such events. A few years ago, "adaptation" was taboo to some environmentalists, signaling defeatism. They bet everything on "mitigation"--reducing emissions. Yet since mitigation may not happen soon enough, or on a large enough scale globally, to make a difference, nothing we do can avert the need for adaptation to a world of less benign weather. In that respect any jobs created by a concerted effort to shore up our infrastructure to cope with more frequent weather events would be just as green as those associated with building and installing wind turbines and solar panels.

What might this entail? Well, most of the detail is outside my area of expertise, but if a bridge needs to be replaced, perhaps the new one should be designed to provide more clearance between the river and the roadway, with higher floodwaters in mind. Similarly, should highways be built (or rebuilt) with better drainage where flooding is a growing risk, or using concrete or asphalt formulated to withstand more extreme heat and cold? And having spent more than a decade living in regions subject to high winds and ice storms, putting utility lines underground makes lots of sense even without climate change, and it could become indispensable with it. In some respects this merely boils down to widening the routine assumptions that engineers make concerning the conditions that a piece of infrastructure must withstand during its lifetime, in order to cope with more uncertainty.

All of this costs money and competes with other priorities. The more resilient (and expensive) we make each project, the fewer of them we're going to do, unless we make upgrading our infrastructure--and not just the semi-glamorous parts--a much higher priority than it has been. That would require a different mindset, and not just with regard to the risks of climate change. Nor are the political rewards likely to be quick, because if anything, it involves the antithesis of the "shovel-ready" projects the stimulus targeted, since much will need to be rethought first. That wouldn't have deterred the generations of Americans that built the systems that must now be replaced; it shouldn't deter us, either, particularly if we recognize the connection to what many see as the greatest challenge of our century.

Wednesday, September 08, 2010

Diesel Hybrids Arrive

Regular readers know I'm a fan of diesel cars, having test-driven some terrific models at recent car shows, as well as renting them on past trips to Europe. For drivers who travel mainly highway miles, the fuel economy benefits of dieselization can approach those of hybrids at a much lower initial cost premium. However, at least in the US, combining the two technologies to achieve even greater fuel savings has been cost-prohibitive, while in Europe, where fuel prices are much higher, interest in hybrids languished until fairly recently. Now, two auto makers have announced they will take that step and launch European hybrid-diesel models next year, with impressive fuel economy and emissions results.

Carmakers have known about the efficiency potential of diesel hybrids for a long time. This was the architecture chosen by the Clinton-era Partnership for a New Generation of Vehicles, a US government/industry consortium pursuing the goal of an 80 mpg car. As both Mercedes and Peugeot have determined, there is no technical barrier to building such a car, and the two models announced, although falling somewhat short of the old PNGV goal, are expected to deliver the equivalent of 62 mpg and 57 mpg. That would be respectable for small hybrid sedans competing with the Prius, but it's remarkable for a small crossover SUV and an E-Class sedan, respectively. And in addition to fuel efficiency, Peugeot claims its diesel hybrid will emit just under 100 g/km of CO2, roughly matching the lifecycle emissions of an EV recharged on average US grid electricity. I'm also intrigued by the potential for highly-efficient four-wheel drive it creates.

The problem with this configuration, and a primary reason it has faced resistance in the US, results from the combination of relatively low US fuel prices and the diminishing returns to increasing fuel economy. Converting a gasoline model to either a hybrid or a diesel captures the largest, most valuable increment of fuel savings, leaving less fuel--and less money--to be saved by the other technology. As the article in Technology Review notes, achieving an attractive return on the pairing of powertrain technologies requires fuel prices much higher than the roughly $3 per gallon we pay here. So it shouldn't surprise anyone that the first place we'd see this configuration is in Europe, where diesel cars are already well-established--to a much higher degree than hybrids are here. With the average retail price in the EU currently around $6.06/gal. for gasoline (with a range of $5.00-7.11/gal.) and $5.53/gal. for diesel, the incremental savings for going from 40 mpg to 60 mpg still amount to over $500 per year, compared to less than $300 in the US.
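The incremental-savings arithmetic above can be sketched quickly. The annual mileage and the US diesel price here are illustrative assumptions (the post cites "roughly $3 per gallon" for the US); only the EU diesel price comes directly from the text.

```python
# Back-of-the-envelope check of the savings from going from 40 mpg to 60 mpg.
ANNUAL_MILES = 12_000   # assumed typical annual mileage
EU_DIESEL = 5.53        # $/gal, EU average cited in the text
US_DIESEL = 2.90        # $/gal, illustrative US price (~$3/gal cited)

def annual_fuel_savings(mpg_from, mpg_to, price_per_gal, miles=ANNUAL_MILES):
    """Dollar savings per year from a fuel economy improvement at a given price."""
    gallons_saved = miles / mpg_from - miles / mpg_to
    return gallons_saved * price_per_gal

eu_savings = annual_fuel_savings(40, 60, EU_DIESEL)  # just over $500/yr
us_savings = annual_fuel_savings(40, 60, US_DIESEL)  # under $300/yr
```

Note how the diminishing returns work: the same 20 mpg improvement saves only 100 gallons a year here, versus 150 gallons for going from 20 mpg to 30 mpg.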

The New York Times recently quoted research from the University of Michigan suggesting that cars could achieve 74 mpg by 2035 without drawing energy from the electric grid. With the US new car fleet struggling to reach 35 mpg within a few years, that sounds fanciful, until you see real cars like the Mercedes and Peugeot diesel hybrids. However, unless fuel prices end up rising significantly between now and then--which many expect but is far from certain--the biggest obstacle won't be technology, but justifying the cost, as the performance of baseline vehicles continues to move down the curve.

Friday, September 03, 2010

Oil Rig Rorschach Test

Yesterday's fire on Mariner Energy's Vermilion 380 production platform in the Gulf of Mexico thankfully resulted in neither loss of life nor another big oil spill. However, the timing of this event seems likely to complicate the debate over the drilling moratorium that has been in place since the Deepwater Horizon accident, and that the government had been showing signs of relaxing or ending early. Based on the reactions so far, this latest accident also provides a Rorschach test on attitudes concerning offshore oil. Those convinced that the risks of offshore drilling outweigh its benefits are citing it as further evidence, while supporters of drilling are likelier to see it as proof that accidents offshore needn't be catastrophic. In reality, the two situations were so different that I'm not sure how much light one sheds on the other.

Although information on Vermilion 380 has been somewhat sketchy, we know from statements by the company and Coast Guard that unlike Deepwater Horizon, which was a floating deepwater drilling vessel, this facility is a fixed production platform in relatively shallow water, tapping a smallish oil and gas field with proved reserves of 33.2 billion cubic feet of gas equivalent--the equivalent of about 5.7 million barrels of oil, with more than half of that consisting of natural gas. The platform had recently undergone restoration work after having been damaged by Hurricane Ike in 2008. It was not engaged in exploration or any other kind of drilling, but instead producing oil and gas from previously-drilled wells. A company press release indicated that production in August averaged 9.4 million cubic feet per day of gas and 1,400 bbl/day of oil and condensate. This is orders of magnitude smaller than the Macondo field and its blown-out exploration well. In its particulars, Vermilion is more typical of the thousands of oil & gas platforms in the Gulf than the big, complex drilling rigs like Deepwater Horizon that we've been focused on since April.
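For readers unfamiliar with oil-equivalent units, the figures above can be checked with the conventional conversion of roughly 5,800 cubic feet of natural gas per barrel of oil equivalent (a common industry convention, assumed here):

```python
# Converting the Vermilion 380 reserve and production figures to oil-equivalent
# barrels, using the conventional ~5,800 cf of gas per barrel of oil equivalent.
CF_PER_BOE = 5_800

reserves_cfe = 33.2e9                                # proved reserves, cf equivalent
reserves_mmboe = reserves_cfe / CF_PER_BOE / 1e6     # about 5.7 million boe

# August production: 9.4 MMcf/d of gas plus 1,400 bbl/d of oil and condensate
gas_boe_per_day = 9.4e6 / CF_PER_BOE                 # about 1,600 boe/d from gas
oil_bbl_per_day = 1_400
gas_share = gas_boe_per_day / (gas_boe_per_day + oil_bbl_per_day)
```

The gas share works out to just over half of the field's output on an energy basis, consistent with "more than half of that consisting of natural gas."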

Under the circumstances, another distinction between Vermilion and Deepwater Horizon is even more important than the ones above. While every accident is one too many, the outcome of yesterday's was precisely what the designers of such facilities work hard to enable and offshore oil & gas workers undergo intensive training to be able to execute: The wells were apparently secured, the crew evacuated safely, and damage was limited to the surface hardware.

This accident will be investigated, and I'm sure its lessons will find their way into the ongoing reassessment of offshore oil & gas practices and regulations. But without jumping to conclusions about its causes, yesterday's incident provides no proof at all for the argument that every offshore oil & gas well is a potential Macondo-style blowout, and every facility a potential Deepwater Horizon calamity waiting to happen.

Meanwhile, as my US readers head off for Labor Day weekend I suggest reading Technology Review's assessment of the energy aspects of the US economic stimulus, about which I had originally planned to write today. It raises important questions concerning the impact and effectiveness of the stimulus, including on employment, as well as the sustainability of efforts begun with its impetus. Expect to hear a lot more about this later this year, as eligibility for the Treasury renewable energy grants and other stimulus programs draws to a close, and recipients and their advocates call for temporary or permanent extensions.

Wednesday, September 01, 2010

Grading Cars on the Curve

By now you may have seen some prototypes of the new-car fuel economy stickers on which the EPA is seeking public comment. The versions that prominently display letter grades for overall fuel economy performance are certainly eye-catching, rising above the potentially confusing mix of numbers and graphics in the body of the sticker. Yet although the current stickers are clearly inadequate to illuminate the choices and consequences associated with buying vehicles powered by an increasingly diverse array of fuels, devising a similarly simple summary page may be beyond the skills of even the cleverest engineers and graphic designers. And in the hyper-connected world in which we now live, the necessity of presenting all this information in one place deserves at least as much thought as the proposed new stickers themselves.


I don't envy the EPA its assigned task of coming up with a useful replacement for the venerable fuel economy stickers that adorn the windows of all new cars at every dealership in America. It should be obvious that the current stickers, displaying city, highway and assumed average fuel economy--a subject for discussion in itself--along with estimated annual fuel expenditures, are not up to the task of informing consumers faced with a choice of vehicles running on gasoline, diesel, natural gas, ethanol, electricity, or a mix of several of these. Whether you consider it necessary to attempt to do so probably depends as much on your personal philosophy as on the inherent complexity of the situation, so for the purposes of this posting, I will accept it as a given and focus on evaluating the range of sticker options furnished by the EPA.

Start with the prototype shown above, for a plug-in hybrid car (PHEV) capable of running on both gasoline and electricity. A larger image of this sticker is available on page 4 of the PDF file on EPA's site. The two features that stand out in this design are the letter grade, which compares the fuel economy of the stickered vehicle to all others on a scale running from A+ to D (further conveyed in a green to amber color range) and the estimated 5-year fuel cost savings compared to the average new car. I've seen comments elsewhere suggesting that the graded comparison should be based only on comparable vehicles, rather than all cars, and there's some merit to that. There might be a few folks out there looking to replace a Chevy Suburban with a Nissan Leaf, but I'll bet they're in the minority. At the same time, the goal of the program is to reduce fuel consumption and emissions, and people are generally smart enough to figure out that if the vehicles that meet their needs only range from B- to a D, the B- choice will probably cost less to run and be better for the environment.

Of course that raises questions about whether the letter-grade system dumbs down the whole process and diverts attention from details that actually matter a great deal, including the assumptions underlying the system, many of which are displayed in the fine print. One of the biggest of those is that only emissions from the tailpipe count. That sounds like a technicality, but when the result is that vehicles powered directly by electricity are guaranteed to get most of the A's in the class, it has serious consequences. For example, it would skew the comparison between an only-moderately efficient PHEV and an extremely efficient conventional hybrid (non-plug-in). The latter might produce fewer lifecycle GHG emissions than a plug-in running on the electricity mix of the applicable regional grid, but because its emissions mainly come out of the tailpipe, it would be at a disadvantage. This kind of comparison is only one aspect of the emerging transportation energy market for which any static, national-level representation such as a sticker plastered on a car window seems likely to be wrong more than right.

The new stickers also introduce several new concepts to motorists, including the MPGe, or miles per gallon equivalent--a worthy evolution of mpg. This provides a handy way to compare the energy content of different fuels, including electricity, to the standard energy content of a gallon of petroleum gasoline, approximately 115,000 BTUs. The problem is that in the case of electricity, the stated conversion rate of 33.7 kWh per gallon-equivalent ("eGallons" in the stickers' parlance) grossly understates the energy required to produce most of the kWhs on the grid. It's only accurate for the 31% of our national electricity mix attributable to nuclear, hydro or other renewables. In the case of electricity from natural gas turbines, it can understate the fuel requirement by much more than half--and thus overstate electricity-based fuel efficiency by more than double. In other words, an accurate comparison of the equivalent fuel economy of an electric vehicle requires more information about the source of electricity for each consumer than any sticker could conceivably collect. I'd prefer to see a more conservative conversion rate for electricity, such as 14.4 kWh/gallon (based on a typical gas turbine heat rate), but at a minimum the CO2 grams per mile figure on all such stickers should be asterisked along the lines of, "Your emissions will vary depending on your local electricity source."
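The two conversion rates above follow directly from the energy figures in the paragraph. The 8,000 BTU/kWh heat rate is an assumed value for a modern gas-fired plant, chosen to illustrate where a figure like 14.4 kWh/gallon comes from:

```python
# Where the EPA's 33.7 kWh "eGallon" comes from, versus a fuel-basis figure.
BTU_PER_GALLON_GASOLINE = 115_000
BTU_PER_KWH = 3_412                 # pure energy content of 1 kWh, no losses

# EPA conversion: treats every delivered kWh as pure energy content
epa_kwh_per_gallon = BTU_PER_GALLON_GASOLINE / BTU_PER_KWH          # about 33.7

# Fuel-basis conversion: counts the gas actually burned per kWh generated,
# using an assumed ~8,000 BTU/kWh heat rate for a gas-fired power plant
GAS_HEAT_RATE = 8_000
fuel_basis_kwh_per_gallon = BTU_PER_GALLON_GASOLINE / GAS_HEAT_RATE  # about 14.4
```

The gap between the two figures--better than a factor of two--is exactly the overstatement of electricity-based fuel efficiency described above.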

Another new concept incorporated in the sticker for PHEVs is the notion of "Blended Electric + Gas" fuel economy. Unfortunately, while it provides a basis of comparison among similar vehicles, its accuracy is limited to the precise combination of electric and non-electric miles that went into the EPA's calculation, accentuated by the kWh/eGallon problem described above. Drive mostly electric miles and just a few on gasoline, and you'll get a result like what I saw when I drove the Chevrolet Volt last winter. I much prefer the alternative sticker shown on page 12 of the PDF, in which consumers are given the fuel economy on gas only and electricity only and allowed to work out the likely result for their specific circumstances. I also like the range graphic on this PHEV sticker and the electric vehicle sticker on page 11, allowing an easy comparison between those two vehicle types.

Yet while the EPA is clearly working hard to provide consumers with more information about the performance of vehicles that can use combinations of electricity and liquid fuels, I find it inexplicable that their proposed sticker (page 15) for flexible fuel vehicles (FFVs) would display the fuel economy only for gasoline, rather than for both gasoline and E85 ethanol, since the latter tends to be a quarter to a third less, based on the EPA's own results for actual FFVs.

The last issue I want to raise relates to the assumptions underlying the annual fuel costs and savings shown on all the stickers. When the first stickers were introduced in the 1970s, there was no easy way to convey to consumers up-to-date information on current and expected future fuel prices. That's certainly no longer true, and posting cost estimates relying on the assumption that we all pay the same price for gasoline and electricity and will do so for the life of a car makes little sense. Why not omit this information and replace it with a link to an interactive website that, with the input of just a zip code, could determine local fuel and electricity prices and calculate future savings based on those and the latest forecasts from the Department of Energy?
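The calculator proposed above could be quite simple. This is a minimal sketch: the zip-code price lookup is entirely hypothetical (stubbed with made-up prices), and the 15,000 miles per year and 22 mpg baseline are assumptions standing in for whatever conventions the EPA would adopt.

```python
# Sketch of a zip-code-based fuel cost calculator. The lookup table is a stub;
# a real service would query local fuel and electricity prices plus DOE forecasts.
def local_fuel_price(zip_code: str) -> float:
    """Hypothetical lookup; returns $/gallon for the given zip code."""
    return {"85001": 2.95, "10001": 3.25}.get(zip_code, 3.00)

def five_year_fuel_cost(mpg: float, zip_code: str,
                        miles_per_year: int = 15_000) -> float:
    """Fuel cost over 5 years at local prices (annual mileage is an assumption)."""
    return 5 * miles_per_year / mpg * local_fuel_price(zip_code)

# Savings for a 50 mpg hybrid vs. an assumed ~22 mpg average new car
savings = five_year_fuel_cost(22, "85001") - five_year_fuel_cost(50, "85001")
```

The point is that the same vehicle yields meaningfully different dollar figures in different zip codes--which is precisely why a static national number on a sticker misleads.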

While I commend the EPA for its effort to make alternative fuel vehicle characteristics more understandable and for making these proposed stickers public now, I believe the agency is attempting to over-simplify a truly complex set of parameters and relationships, at the risk of inadvertently misleading a significant number of purchasers. Consumers would get more reliable value from stickers that provided them with just a few clear metrics, plus access to the information needed to work out how the vehicles among which they are choosing would be likely to perform in their circumstances of where and how they drive. And unless the basis of the letter grades can be expanded to include lifecycle emissions, rather than just those from the tailpipe, they should be jettisoned as fundamentally flawed. It will be very interesting to see what emerges from the next 60 days of public comment, and I encourage my readers to put in their two cents' worth.

Monday, August 30, 2010

Germany's Nuclear Bridge

Since I've been taking potshots at German energy policy recently, I was pleased to see that it appears the country's government is nearing a reasonable compromise concerning nuclear power, which accounts for 22% of the electricity generated in Germany. The Financial Times reported yesterday that the CDU/FDP coalition is likely to propose extending the life of the country's reactors by 12-15 years, in order to give renewable energy sources more time to ramp up. Yet while the extension makes enormous sense from the perspective of emissions and energy security, I'm puzzled by the plan's implicit assumption that nuclear power is valuable only as a bridge to more renewable energy, rather than as a key part of any future, low-emission energy mix.

In 2007 Germany's 17 reactors generated 140 billion kWh of electricity. By comparison, all renewable sources amounted to just over 100 billion kWh, with only 3 billion of that coming from the country's highly-subsidized solar photovoltaic (PV) installations. All of these reactors will reach the limits of their currently-allowed 30-year service lifetimes by 2020, when they are required by existing law to be shut down, and all have provisional shutdown dates within the next few years. The problem is that the incremental growth in renewable electricity required to replace all of these plants does not seem feasible within that timeframe, despite its impressive expansion so far.

Replacing just the net output of those reactors would require total renewable generation to expand by roughly 150%, though much of that expansion would by necessity depend on a much smaller fraction of the renewable power base. Wind currently supplies 6.5% of generation and continues to grow steadily. PV capacity has more than doubled since 2007, from 4,000 MW to 9,800 MW last year, though that still results in a contribution of only around 1% of generation, partly due to scale and partly to Germany's low solar insolation. Wind and solar output would have to quadruple to fill the kWhs supplied by nuclear power, plus their current part of the mix. This challenge is compounded by the problems of intermittency and low output vs. nameplate capacity of both of these sources. In 2007 the calculated capacity factor for Germany's wind turbines was just 21%, while PV was under 10%. So not only would these sources have to expand by a multiple of the capacity lost from idled nuclear reactors, but much of the incremental output would have to be stored, in order to time-shift it to match demand--combined with time-shifting demand to match the variable and cyclical output from these sources. Power from other renewable sources such as biomass, waste and hydro is much more compatible with normal demand patterns, but more difficult to expand quickly and overcome resource limitations.
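The capacity-factor figures above follow from a simple identity: actual generation divided by what the nameplate capacity would produce running flat-out all year. Using the 2007 German PV figures cited earlier (roughly 3 billion kWh from roughly 4,000 MW installed, the latter taken from the text's capacity history):

```python
# Capacity factor = actual generation / (nameplate capacity x hours in a year)
HOURS_PER_YEAR = 8_760

def capacity_factor(generation_mwh: float, nameplate_mw: float) -> float:
    return generation_mwh / (nameplate_mw * HOURS_PER_YEAR)

# German PV, 2007: ~3 billion kWh (3 million MWh) from ~4,000 MW installed
pv_cf = capacity_factor(3_000_000, 4_000)   # under 10%, as stated above

# Replacing 140 TWh of nuclear on top of ~100 TWh of existing renewables
expansion_needed = 140 / 100                # a ~1.4x increase in renewable output
```

A capacity factor under 10% means that every megawatt of retired nuclear capacity (which typically runs at 90%-plus) must be replaced by roughly ten megawatts of German PV--before even considering storage to match output to demand.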

Most Germans are intensely practical. That German practicality is in my genes and upbringing, part of which was spent in Germany. I speak the language and know the people fairly well, yet it remains a mystery to me that Germans would choose to pit these two complementary categories of electricity generation against each other, rather than aligning them cooperatively to replace high-carbon coal and natural gas that is largely imported from Russia--hardly the world's most reliable supplier. The answer appears to reside in coalition politics (in both major groupings) and green ideology, the price of which seems likely to rise sharply. German households already pay more than twice as much for electricity as US households, while German industry pays about 250% what its US counterparts pay, and I can only guess at the comparison to Chinese energy costs. Taking large, fully-depreciated baseload power sources out of the national mix will only amplify those disparities. I'd be very surprised if Germany didn't choose a course that hews back towards practicality in the long run.

Friday, August 27, 2010

The Pitfalls of Feed-In Tariffs

I recently ran across a story indicating that regulators in Arizona are considering implementing a feed-in tariff (FIT) for solar power in that state. This is somewhat ironic, coming as it does amidst a wave of hotly-debated reductions in European solar FITs, in response to the burden they've imposed on electricity customers and the unintended consequences they've created. With Germany, Spain, and now apparently France all slashing their FITs, it's worth taking a look at how these policies differ from the US federal and typical state incentives for solar power, and why they might not be the best choice for promoting solar power here, particularly in places with solar resources as inherently attractive as Arizona's.

As I've noted before, a FIT is effectively a tax, although imposed by utilities on ratepayers rather than by governments on taxpayers. It guarantees developers of renewable energy projects--usually for solar power--a predictable price for their output and thus a predetermined potential return on their projects, barring other project risks. Because these rates are normally fixed for long intervals, and only adjusted after much consultation and debate, they don't make allowance for the kind of significant cost reductions they're often intended to stimulate in the technologies to which they apply.

The price of solar photovoltaic (PV) modules has fallen sharply in the last two years, partly due to the classic experience-curve effects that the industry likes to tout, but also because of events such as the recession and alleviation of a global bottleneck in the production of polysilicon, the basic feedstock for most silicon-based solar cells. But module costs have also come down for another reason more directly related to the generous FITs that have been in place in Germany, Spain, France and elsewhere. They were so generous, in fact, that they attracted new entrants from low-cost manufacturing centers like China that were able to undercut local suppliers significantly and gain market share. In other words, instead of just helping to grow local solar industries--a clear example of industrial policy--high FITs can also spur new imports from foreign competitors with potentially sustainable cost advantages over domestic manufacturers.

In this regard, at least, the prevailing US federal policy of providing a substantial investment tax credit, or more recently the option of taking that ITC as an up-front cash grant, has important advantages. Because it is calculated based on the cost of each project, it automatically adjusts downward as technology and project prices fall--as we are frequently told they will continue to do for PV. Most of the state solar incentives I've seen take a similar form, providing consumers and businesses tax relief based on the cost of the solar systems they install, or cash rebates that decline rapidly based on cumulative capacity. Again, these are self-correcting, compared to Europe's FITs. That's beneficial for taxpayers, but also for the domestic solar industry, by forcing it to remain competitive.

Because the global solar industry has grown to a level of scale and sophistication such that it can quickly shift a large number of projects to the countries with the most attractive policies--as for example when developers decamped from Spain to France once the former's solar capacity threshold was reached in 2008--the cost of a FIT policy can mount quickly and unexpectedly. According to the Financial Times, solar incentives last year accounted for half the €6 billion annual tab for Spanish renewable energy subsidies, even though total solar capacity in Spain at year-end was just 18% that of wind, according to the Renewables 2010 Global Status Report of REN21. That's a lot to pay for installations that collectively displace the equivalent of just one medium-sized coal-fired power plant. States such as Arizona that are considering feed-in tariffs should think carefully, not just about the laudable goal of promoting solar power, but about the accompanying financial burden they're imposing on ratepayers, as well as the potential for unintended consequences.
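The "one medium-sized coal plant" comparison can be sanity-checked. The capacity and capacity-factor figures below are illustrative assumptions on my part (year-end 2009 Spanish PV capacity of roughly 3,400 MW and an 18% solar capacity factor), not figures from the REN21 report:

```python
# Rough comparison of Spain's entire PV fleet to one medium-sized coal unit.
# All numbers below are illustrative assumptions for the sanity check.
HOURS_PER_YEAR = 8_760

spain_pv_mw = 3_400     # assumed year-end 2009 Spanish PV capacity
pv_cf = 0.18            # assumed Spanish solar capacity factor
solar_twh = spain_pv_mw * pv_cf * HOURS_PER_YEAR / 1e6     # ~5.4 TWh/yr

coal_plant_mw = 800     # a medium-sized coal unit
coal_cf = 0.77          # assumed baseload utilization
coal_twh = coal_plant_mw * coal_cf * HOURS_PER_YEAR / 1e6  # ~5.4 TWh/yr
```

Under these assumptions the whole Spanish solar fleet generates about as much electricity per year as a single 800 MW coal plant--for roughly €3 billion a year in subsidies.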

Thursday, August 26, 2010

Looking Back to Look Ahead

Last week the Energy Information Administration of the US Department of Energy released its Annual Energy Review for 2009. Although it doesn't offer predictions concerning the energy transition that was the subject of last Wednesday's posting, it does include a wealth of charts and graphs visualizing the remarkable energy shifts that have already occurred in the last several decades. Understanding these could help calibrate our expectations concerning the pace of the hoped-for clean energy revolution, while shedding light on characteristics that could move some technologies into the market faster than others. For energy the past isn't necessarily prologue, but it's certainly relevant.

Start with the US primary energy overview for the last 60 years, which shows the steady growth of our energy consumption, interrupted only by two sets of events: the oil shocks of the 1970s and the recent financial crisis and recession (accompanied by a demand-driven oil shock.) Since the early '70s much of that growth was fueled by imported energy, led by oil. This is the part of the story we know best, because its impact on energy security has kept us focused on it for my entire adult life, no matter how ineffective our responses have seemed at times. However, other aspects of our energy situation reflect big, but less obvious changes over that interval, particularly with regard to the production of electricity, the supply and uses of natural gas, and the growth of nuclear power.

We've recently heard a lot about the significance of shale gas, which for many parts of the country could bring the sources of our natural gas much closer to where it's used. Yet this is only the latest aspect of a broader shift that has turned gas from a mainly Gulf Coast and mid-continent resource into a truly national one. In 1970 Texas, Louisiana and Oklahoma accounted for more than 80% of US gas production, while last year they supplied well under half. In the intervening period, production outside these three states more than tripled. At the same time, the ways we use gas have also been transformed. Gas for electricity generation has outstripped residential gas consumption and is about to eclipse industrial gas demand, which has fallen steadily since the mid-'90s, due to volatile prices and the offshoring of manufacturing. The marriage of gas to electricity was driven by a major technology change, in the form of aero-derivative gas turbines for power generation. A chart I could only find in the report's Energy Perspectives section and have reproduced below indicates how much more natural gas-fired capacity has been added in the US in the last 20 years than all other generation technologies combined. Natural gas was more expensive than coal for that entire period, yet no other technology could match its combination of low capital cost, infrastructure efficiency, low emissions, and capability to deliver power when and where needed. Can renewables succeed without matching at least a majority of those attributes?


The report puts the recent upsurge of biofuels, wind, solar and geothermal power into the context of a larger renewable energy sector that still meets just 8% of our total energy needs, mainly from mature sources such as hydroelectricity and wood. I can't help wondering whether the development of the US nuclear power sector holds any relevant analogies for the new renewables. Nuclear grew from nothing to 8% of US primary energy and 20% of electricity generation between the mid-'60s and 2000, and in the process helped displace most oil from power generation. Essentially all our current nuclear capacity was built in two waves that rose quickly, peaked in the mid-'70s and again in the mid-'80s, and then subsided to little more than capacity optimization since then. Renewables and nuclear could not be more different, other than sharing a low emissions profile, but the former face enough real-world constraints--including concerns about the environment in its broadest sense--that a scenario in which they, too, stall well short of their full potential isn't so hard to imagine. When you consider a rise as steep as that exhibited by ethanol, or the asymptotic growth of photovoltaic module shipments, it's hard to look at these graphs and not wonder what the rest of the curve will look like: continued rapid growth, plateau (and at what level?), or decay.

I found numerous other charts, graphs and tables offering insights into topics as diverse as the population of alternative fuel vehicles and their energy consumption, the breakdown of electricity consumption in commercial buildings, and the steady drop in energy consumption for space-heating by households, particularly from oil--despite a 35% increase in US population--offset by a near-doubling of household electricity consumption within a generation. And I can't close without mentioning the positive trends in the energy intensity of the US economy--a steady decline for 40 years in BTUs per dollar of GDP--and more recently in per-capita energy consumption. We've accomplished that without a full-court press on energy efficiency, beyond what was incentivized by volatile market prices. What could we accomplish on this front if we put our minds to it?

Tuesday, August 24, 2010

FutureGen Switches Tracks

The standard knock on carbon capture and sequestration (CCS) is that it hasn't been tested and proven on an industrial scale. That's really only true in the narrow sense in which you start with coal, produce electricity, and then collect and bury the CO2 that comes out the stack--which I imagine is what CCS evokes for most people who have even heard of the technology. Some years back, the US government set out to close that gap by building a large-scale test facility to demonstrate the coal-to-CCS cycle, with help from a consortium of industry partners. The program was called FutureGen. It died in 2008 after reported cost overruns but was revived in a different format last year. Now the reoriented effort has spawned a new project at a different location--though still in Illinois--to replace the ill-fated Mattoon project. Its basic concept differs significantly from the original FutureGen, and in ways that might improve the odds that coal could continue to contribute a substantial share of the US energy mix for many decades.

The CO2 produced by power plants is much harder to capture and dispose of than the traditional pollutants we associate with them, not least because it is the primary chemical result of the combustion of hydrocarbons, along with water vapor, rather than a byproduct resulting from a fuel impurity or imperfect combustion. That requires dealing with emissions that exceed the mass of fuel being consumed, rather than an order of magnitude or two smaller. And when fossil fuels are burned in air, the CO2 produced must be separated from all that nitrogen, which is the largest constituent of flue gas, before it can be sequestered. All this is expensive, in both energy and financial terms. The original FutureGen was designed to finesse this problem by converting coal into a hydrogen-rich gas that could be burned efficiently in a combined-cycle gas turbine--the integrated gasification combined cycle, or IGCC, configuration--producing emissions consisting mainly of water vapor, plus a sequestration-ready CO2 stream from the hydrogen-production process. Unfortunately, the hardware necessary to do that isn't cheap, either.
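The claim that CO2 emissions exceed the mass of the fuel burned follows from basic combustion stoichiometry: each 12 grams of carbon combines with 32 grams of atmospheric oxygen to form 44 grams of CO2. The coal carbon content below is an assumed typical value for illustration:

```python
# Why captured CO2 outweighs the fuel burned: C + O2 -> CO2, 12 g -> 44 g.
CO2_PER_CARBON = 44.0 / 12.0            # ~3.67 kg CO2 per kg of carbon burned

# Illustrative coal at ~70% carbon by mass (assumed typical bituminous value)
coal_carbon_fraction = 0.70
co2_per_tonne_coal = CO2_PER_CARBON * coal_carbon_fraction  # ~2.6 t CO2/t coal
```

So a plant capturing its CO2 must handle a mass flow more than twice that of the coal delivered to it, which is a large part of why CCS is so costly in energy and money.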

FutureGen 2.0, as announced, would take a different tack. It aims to convert an existing power plant owned by Ameren Corporation into an "oxy-coal" plant, in which pure oxygen replaces air in the boiler for combustion, resulting in flue gas consisting mainly of CO2. This approach has pluses and minuses, compared to IGCC. It requires a bigger air separation plant to support full combustion, but it eliminates all the hardware associated with hydrogen. That should entail somewhat lower capital costs, but not necessarily lower operating costs, particularly when you consider that the efficiency of IGCC exceeds that of most existing US coal power plants, though not necessarily supercritical or ultra-supercritical pulverized coal plants. (I couldn't tell how much the basic power block of Unit 4 of Ameren's Meredosia, IL plant, which formerly burned fuel oil, will be modified.) As in FutureGen 1.0, the resulting compressed CO2 would then be pipelined to a disposal site elsewhere in the state.

Although it would take some doing to convince me that oxy-coal with CCS is a better technology than IGCC with CCS, the revised approach to FutureGen looks like a good call on the part of the government. That's because the context in which FutureGen is being pursued has altered significantly since it was first devised. Instead of a scenario of continuing to build many new coal-fired power plants every year to meet steadily-growing electricity demand, the future--at least in the US--looks quite different. An article in yesterday's Washington Post pointed out that a number of new coal plants are still under development, but the rate of new construction has slowed dramatically, due to regulatory pressures, weaker electricity demand, competition from cheaper natural gas, and the growth of renewables. If we want to have an impact on the emissions from the US coal-fired power plant fleet--which accounts for 31% of total US emissions and 91% of the emissions from the electricity sector--then our best strategy probably doesn't involve building hundreds of gleaming new IGCC plants, but rather retrofitting hundreds of existing units built with older technology, for which conversion to IGCC would likely be cost-prohibitive. If FutureGen 2.0 succeeds--technically, if not economically--it would validate that retro-fitting potential.

The world hasn't stood still while the Department of Energy wrestled with all the political and technical challenges that FutureGen faced. The original siting competition between Texas and Illinois looked like a textbook case of logrolling, and FutureGen 1.0 exhibited the hallmarks of a classic government boondoggle. Meanwhile, commercial projects such as Duke Energy's Edwardsport IGCC (without CCS, but in effect CCS-ready) and the Good Spring IGCC project of Future Fuels LLC have emerged and appear to be making progress. The latter is based on technology from the Thermal Power Research Institute of China, which is a good bet to beat all of these projects to the punch with its GreenGen power plant in Tianjin. If FutureGen 2.0 is going to matter, it must be built smartly, quickly and cost-effectively. Yet technical success still won't guarantee that this technology will be taken up and deployed widely. In a market economy, rather than a centrally-planned one, it's hard to see any of this going beyond a demonstration plant or two without a substantial price on CO2 emissions to offset the inherently higher costs of generating power this way.

Friday, August 20, 2010

Oil Plumes and the Fate of the Spill

I'm as reluctant to insert myself into the debate over what happened to all the oil that leaked from BP's Macondo well between April 22 and July 15--when the second cap stopped the flow--as I was concerning the earlier controversy regarding flow-rate estimates. At the same time, I find the coverage of this story lacking in crucial details that could help us to understand how much of the oil evaporated into the warm air of the Gulf or degraded naturally, how much was collected, and how much potentially remains in the sea. The assessment issued by the National Oceanic and Atmospheric Administration (NOAA) on August 4, 2010 has been disputed by some scientists, and reports of lingering oil plumes add to the public's apprehension that the pieces don't quite add up. But although I don't have nearly enough information to conclude which group is closer to being right, I feel much more confident in pointing out where their arguments seem weak.

Let's begin with the estimate of the total quantity of oil leaked into the Gulf, which lately seems to have become cast in stone at 4.9 million barrels (205.8 million gallons). This is the crucial starting point for any analysis of how much of it remains in the Gulf. This figure appears to be based on the estimate by the Flow Rate Technical Group of an average rate of around 58,000 bbl/day for the 85 days that the well was leaking. NOAA indicates an uncertainty for this figure of +/- 10%, but with all due respect to the scientists who worked on it, that seems excessively precise for something that was never measured directly.
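The arithmetic behind that headline figure is simple enough to check. Here's a quick sketch; the 42-gallons-per-barrel conversion is standard, and everything else comes from the numbers above:

```python
BBL_TO_GAL = 42         # US gallons per barrel of crude oil

rate_bbl_day = 58_000   # Flow Rate Technical Group's average estimate
days = 85               # duration of the leak
uncertainty = 0.10      # NOAA's stated +/- 10%

total_bbl = rate_bbl_day * days
low = total_bbl * (1 - uncertainty)
high = total_bbl * (1 + uncertainty)

print(f"Total: {total_bbl:,} bbl ({total_bbl * BBL_TO_GAL / 1e6:.1f} million gallons)")
print(f"+/- 10% band: {low:,.0f} to {high:,.0f} bbl")
# → Total: 4,930,000 bbl (207.1 million gallons)
# → +/- 10% band: 4,437,000 to 5,423,000 bbl
```

The small discrepancy with the 205.8-million-gallon figure is just rounding: 4.9 million bbl times 42 gives 205.8 million gallons, while the unrounded 58,000 bbl/day times 85 days gives slightly more. Note, too, that a +/- 10% band spans nearly a million barrels.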

There are only two ways I know of to measure such a flow, as distinct from estimating it. The most accurate involves gathering all the oil flowing during a given interval--say, a day--and gauging the tanks into which it flowed at the beginning and end of the interval. From a quick review of the transcripts of BP's technical briefings, it appears that the largest quantity of oil that was actually collected in a 24-hour period equated to a flow rate of about 24,000 bbl/day, though this represented only a portion of the total flow, with the remainder continuing to leak into the sea due to containment limitations. So we know the rate must have been higher than that figure, but not how much higher. The other way to measure oil flow is with a flow meter. It's a pity that BP's "Lower Marine Riser Package", the second cap and valve assembly installed on the well, didn't include this capability. I don't even know if it would have been feasible, given the pressures and high flows of oil and natural gas involved.

In the absence of direct flow measurements, the Flow Rate Technical Group had to rely on sophisticated techniques for calculating the flow, based on the observed velocity of the fluid leaving the well and a complex set of assumptions--grounded in a limited amount of actual data--concerning the gas:oil ratio of the fluid, the rapid expansion of the gas coming out of solution within the space over which the velocity was determined, as well as the changing pressure and temperature within this regime. Tricky stuff, particularly considering how much of the observed flow was attributable to gas, rather than oil, as I noted in May. I'd also note that since the estimated 58,000 bbl/day flow rate is at the top of the range of flow rates observed from other oil wells in the history of the industry, it's quite possible that the range of uncertainty for the total amount leaked is not only wider than +/- 10%, but also non-symmetrical, with more downside than upside. I'm sure we will hear much more about this in the future, not least because the size of the fine BP would ultimately pay for the leak depends on it. That's not the concern of the moment, however.
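To see why such estimates are so sensitive to the underlying assumptions, here is a hypothetical sketch of the velocity-times-area approach. Every input below is an illustrative assumption of mine, not an actual figure from the Flow Rate Technical Group:

```python
import math

# Illustrative assumptions only -- not the FRTG's actual inputs
velocity_ft_s = 1.5        # assumed average exit velocity from video analysis
riser_diameter_in = 19.5   # assumed inside diameter of the riser
oil_fraction = 0.4         # assumed oil share of the escaping fluid, by volume
FT3_PER_BBL = 5.615        # cubic feet per barrel (standard conversion)

area_ft2 = math.pi * (riser_diameter_in / 12 / 2) ** 2
total_flow_ft3_day = velocity_ft_s * area_ft2 * 86_400   # 86,400 seconds/day
oil_bbl_day = total_flow_ft3_day * oil_fraction / FT3_PER_BBL

print(f"Implied oil flow: {oil_bbl_day:,.0f} bbl/day")   # roughly 19,000 here
```

Halve the assumed oil fraction and the answer halves with it, which is why the gas:oil ratio of the escaping fluid matters so much to the final number.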

The pie chart in NOAA's report indicating the breakdown of the different fates of the oil that leaked has gotten a lot of scrutiny. Some reports have interpreted it as indicating that only a quarter of the oil remains in the marine environment. I wouldn't read it that way. Instead, I'd see three distinct categories for the oil's current status. The first and least ambiguous concerns the oil physically collected directly from the well, skimmed from the surface, or burned off, constituting an estimated--and only partly measured--25% of the uncertain total discussed above. This oil is clearly no longer in the water. The next category is oil that is likely no longer in the water, and that is the portion of the "Evaporated or Dissolved" segment that evaporated. If the oil had all reached the surface, I wouldn't be at all surprised if most of that segment were attributable to evaporation; this was, after all, light, sweet oil with a high proportion of volatile fractions. The problem is that we don't know how much of the oil that leaked a mile down made it to the surface. The portion that didn't, which in NOAA's parlance was dissolved, naturally dispersed or chemically dispersed--potentially up to 49% of their total estimate--could still be in the water column, along with the 26% "Residual"--less the unknown portion actually broken down by bacteria and other processes. And it's some of this remaining oil that makes up the plumes we've been hearing about.
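Grouping NOAA's published oil-budget segments this way makes the bookkeeping explicit. The sub-category percentages below are the ones reported in the August 4 assessment:

```python
# NOAA's August 4 oil budget, as percent of the estimated 4.9 million bbl
removed = {"direct recovery from wellhead": 17, "burned": 5, "skimmed": 3}
ambiguous = {"evaporated or dissolved": 25}   # evaporated share unknown
dispersed = {"naturally dispersed": 16, "chemically dispersed": 8}
residual = 26

clearly_out = sum(removed.values())
possibly_in = sum(ambiguous.values()) + sum(dispersed.values()) + residual

print(f"Clearly out of the water: {clearly_out}%")
print(f"Possibly still in the water (upper bound): {possibly_in}%")
# → Clearly out of the water: 25%
# → Possibly still in the water (upper bound): 75%
```

The "up to 49%" figure above is the ambiguous evaporated-or-dissolved segment plus the two dispersed segments; the true in-water share is lower by however much actually evaporated or has biodegraded.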

The undersea oil plume currently in the news was found in June by scientists from the Woods Hole Oceanographic Institution. They describe it as being at least 22 miles long, 1.2 miles wide, and 650 ft. high. The total volume of the plume, assuming it filled that entire rectangular solid, would be about 3.6 trillion gallons. However, the critical data point that I didn't see reported in any of the newspaper accounts I read was the concentration of oil in that water. According to the report on the Woods Hole site, the concentration of specific oil-derived molecules ("BTEX") is "in excess of 50 micrograms per liter". Adjusting for the density of the chemicals in question, that means that they found oil-related concentrations of approximately 57 parts per billion by volume. So by my math, the total volume of these chemicals within the plume is on the order of 200,000 gallons, or under 5,000 bbl. Unless these chemicals are only the tip of the iceberg in terms of oil derivatives in the plume--and Woods Hole hints that there is more--then we're talking about less than 0.1% of the 4.9 million barrels estimated to have leaked into the Gulf. In other words, while a plume like this might be potentially serious for aquatic life, it's not clear how much doubt its existence casts on NOAA's analysis of where all the oil went.
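Here's my math, for anyone who wants to follow along. The ~0.87 g/mL density for BTEX is my assumption, and the plume is idealized as a completely filled rectangular solid:

```python
MILE_FT = 5_280
GAL_PER_FT3 = 7.48
GAL_PER_BBL = 42

# Plume dimensions reported by Woods Hole
volume_ft3 = (22 * MILE_FT) * (1.2 * MILE_FT) * 650
volume_gal = volume_ft3 * GAL_PER_FT3        # ~3.6 trillion gallons

# 50 micrograms of BTEX per liter of seawater, converted to a volume
# fraction using an assumed BTEX density of ~0.87 g/mL (870 g/L)
btex_vol_fraction = 50e-6 / 870              # ~57 parts per billion

btex_gal = volume_gal * btex_vol_fraction
btex_bbl = btex_gal / GAL_PER_BBL

print(f"Plume volume: {volume_gal / 1e12:.1f} trillion gallons")
print(f"BTEX content: ~{btex_gal:,.0f} gallons (~{btex_bbl:,.0f} bbl)")
print(f"Share of the 4.9 million bbl leaked: {btex_bbl / 4.9e6:.2%}")
```

The result lands right at the figures quoted above: roughly 200,000 gallons, just under 5,000 bbl, or about 0.1% of the estimated total leak.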

I will be very interested in seeing further refinements of all these estimates in the weeks and months ahead. Perhaps the media will even include more of the details crucial for putting them into perspective.

Wednesday, August 18, 2010

Scaling the Energy Transition

The August 13 issue of Science, the journal of the American Association for the Advancement of Science (AAAS), devotes a special section to "Scaling Up Alternative Energy". Most of the section, including some nifty comparative infographics, can be accessed free of charge until August 27, requiring only a free site registration. I encourage you to read it while it's available. The articles cover topics such as the prospects for cellulosic ethanol and the challenges of siting renewable energy projects. Another, entitled "Do We Have the Energy for the Next Transition?", particularly caught my attention. I've been focused on this issue from the inception of this blog in 2004 and long before that. This is an issue that's not about to go away or be solved overnight, no matter how much wishfulness we apply to it.

I know I've been beating this drum for a long time, but here's a clear and concise explanation from the top science journal in the country on why the transition to alternative energy won't--and can't--be quick, cheap or easy, as well as why it's necessary to pursue in spite of these limitations. The low energy and power density, intermittency, and uneven geographic distribution of renewables aren't just talking points; they're genuine technical problems that must be overcome. The author compares the transition that's now underway to previous energy transitions and finds fundamental reasons why such shifts take a long time, and why the transition to renewables can't be as quick as many would like. He quotes one expert as saying, "They don't offer new services; they just cost more."

That's a crucial point for anyone who sees this energy transition driven not just by concerns about energy security and greenhouse gas emissions, but by notions of clean energy as the next big wealth-creating global trend, akin to the computer revolution. A kilowatt-hour or BTU does the same work, regardless of its source, so unless it can be produced for significantly less than from conventional sources, greener energy offers no productivity gains of the kind that have fueled the global infotech transformation. As the article notes, using current technologies it is likely to reduce productivity, at least in the energy sector, unless the emphasis is on cost-effective energy efficiency.

And while it's certainly true that the current price of conventional energy omits a number of important externalities, including those relating to climate change, monetizing them by increasing the price of energy will not improve productivity in the sense of creating new wealth; it will merely transfer wealth from one sector to another. We may still have to do that, but we shouldn't harbor illusions about the ultimate source of the earnings this will create for green energy companies and entrepreneurs, until someone comes up with an energy source that is truly better/faster/cheaper than what it's replacing (without subsidies).

Although the article doesn't dismiss the potential of renewables to supply a much larger proportion of our energy needs, it suggests that the greatest near-term potential lies in reducing energy consumption, which would simultaneously stretch out our conventional energy resources, reduce their impact, increase the leverage of the renewables we have, and provide more time to improve them. It also points to a transition that looks more like a gradual shift in our energy mix than a sudden displacement of one set of sources by another. That doesn't sound nearly as radical or glamorous as what some pundits have suggested is possible, but it still provides renewable energy businesses with the enviable prospect of making steady inroads into a vast market, the potential of which they couldn't exhaust for decades, as long as they've got a proposition that makes economic sense in light of current and anticipated regulations and incentives.

Monday, August 16, 2010

China's Leverage on Renewable Energy Increases

Last month's announcement that China was cutting its export quota for rare earth elements by 72% for the second half of 2010 didn't seem to attract wide attention, but now that the other half of its strategy has been revealed, that might change. Today's Wall St. Journal reported overtures from Chinese officials to firms interested in accessing these materials, which are critical for the production of some components of renewable energy technology and advanced vehicles. The apparent deal: invest in rare earth processing in China to obtain access, with the output from new facilities incorporated into products for the rapidly-growing internal market or export. Not only would this practice compound the difficulties faced by US and other foreign renewable energy firms seeking to market their products in China, it could also make it much more expensive to produce them outside the People's Republic.

For some time I've been intrigued by growing concerns about access to rare earths and scarce metals. These include the true "rare earths" from the periodic table of the elements, as well as other scarce elements such as indium, gallium and tellurium. Their uses include solar panels, wind turbines, hybrid car motors and batteries, and other "clean energy" devices, along with many non-energy applications. As the Journal noted, China accounts for over 90% of global production of the rare earths and is among the top producers of the other scarce materials. And although China doesn't have a natural monopoly on them, it currently enjoys an effective one, as plans to resume or ramp up production in North America, Australia, South Africa and elsewhere will require both time and significant capital.

This development poses an unwelcome challenge to a variety of renewable energy firms. At a minimum, it could significantly raise their production costs, just as they are trying to move down the experience curve in order better to compete with conventional energy--including newly-abundant natural gas--and at the same time that governments around the world are being forced to cut back on subsidies, due to fiscal imbalances and the weak economy. Any company that depends on a stable, let alone expanding, supply of these ingredients must either be looking seriously at relocating production to China or making potentially fundamental changes in its technology to switch to more abundant raw materials. Green jobs, perhaps, but where?

China's efforts to capture higher returns and more of the value-added for these scarce materials shouldn't surprise anyone; it's basic economics. OPEC tried this strategy in the 1980s, when it built export refineries in the Middle East and bought existing ones elsewhere. This didn't work out very well, because it contributed to a persistent glut of global refining capacity that, with the exception of a few standout years, generally benefited consumers more than producers. China could experience something similar in rare earths, once new, non-Chinese sources are brought online--assuming they are. Mining and processing such deposits entails large capital costs that, once invested, can set up a classic boom-and-bust commodity cycle. Unfortunately, the prospect of a future rare earth glut will be of little comfort to makers of wind turbines, advanced car batteries, and thin-film solar cells for the next several years, at least.

Thursday, August 12, 2010

By Executive Order

I recently ran across a mention in the New York Times of a new study suggesting a variety of energy and climate measures the administration could undertake on its own, without requiring new legislation passed by Congress. I've been thinking about this during some long stretches of driving this week. At first glance, the group's ideas merit consideration, and they might indeed be sufficient to meet the near-term emissions reduction goals the US endorsed at last year's Copenhagen climate conference. However, as tempting as such an approach might be in a year of legislative gridlock on energy, its pitfalls probably outweigh its benefits.

I haven't had time to scrutinize the report of the Presidential Climate Action Project item by item, since I'm on vacation. It caught my eye mainly because of the involvement of former Senator and presidential candidate Gary Hart. So my reactions don't really constitute analysis, but are more along the lines of ruminations on a first impression that I might examine in more depth later.

At the very least, the idea that the administration could take major steps--beyond what it has already done--to reduce emissions and shift our economy away from its reliance on fossil fuels represents a potentially significant new scenario for the energy/climate environment, particularly if the mid-term elections reduce or eliminate the current Democratic majority in both houses of Congress. It could provide a new policy twist that many of the companies and organizations that have invested so much time in working with Congress on these matters haven't incorporated in their planning.

The problem with such an approach arises from the same source as its appeal: the lack of a sufficient bi-partisan consensus in Congress to enact these changes legislatively. Without a consensus spanning both parties and all factions, any action the President takes on his own could be reversed within a few years. We're not going to lick climate change or our energy problems in the span of any one administration; these problems look much more like the Cold War and require a similarly enduring bi-partisan coalition to deal with them. Major energy policy swings every 4 or 8 years would make this approach much more costly and much less effective, because of the planning and investment horizons involved. The evidence of that is already on display, as this administration reverses many of the energy policies of its predecessors.

Such an outcome is even likelier if these policies become overly identified with a president whose popularity has been waning and who is by no means assured of a second term, barring an unexpectedly robust revival of the US economy. Congress might be even less popular at the moment, but it remains the venue in which a long-term, bi-partisan energy and climate strategy must be hammered out. If a comprehensive energy bill with limits on carbon isn't possible today, important elements of a least common denominator approach to energy security and lower emissions could likely still be enacted. That could include more effort on energy efficiency and a low-carbon electricity standard encompassing both nuclear power and the currently favored list of renewables. Future administrations and congresses could build on these steps later. A modest compromise along these lines wouldn't please everyone, but it seems preferable to an approach that depends on one party controlling the White House in perpetuity.