Monday, August 30, 2010

Germany's Nuclear Bridge

Since I've been taking potshots at German energy policy recently, I was pleased to see that it appears the country's government is nearing a reasonable compromise concerning nuclear power, which accounts for 22% of the electricity generated in Germany. The Financial Times reported yesterday that the CDU/FDP coalition is likely to propose extending the life of the country's reactors by 12-15 years, in order to give renewable energy sources more time to ramp up. Yet while the extension makes enormous sense from the perspective of emissions and energy security, I'm puzzled by the plan's implicit assumption that nuclear power is valuable only as a bridge to more renewable energy, rather than as a key part of any future, low-emission energy mix.

In 2007 Germany's 17 reactors generated 140 billion kWh of electricity. By comparison, all renewable sources amounted to just over 100 billion kWh, with only 3 billion of that coming from the country's highly-subsidized solar photovoltaic (PV) installations. All of these reactors will reach the limits of their currently-allowed 30-year service lifetimes by 2020, when existing law requires them to be shut down, and all have provisional shutdown dates within the next few years. The problem is that the incremental growth in renewable electricity required to replace all of these plants does not seem feasible within that timeframe, despite the impressive expansion of renewables so far.

Replacing just the net output of those reactors would require total renewable generation to expand by roughly 140%, and much of that expansion would by necessity fall on a much smaller fraction of the renewable power base. Wind currently supplies 6.5% of generation and continues to grow steadily. PV capacity has more than doubled since 2007, from 4,000 MW to 9,800 MW last year, though that still yields only around 1% of generation, partly due to scale and partly to Germany's low solar insolation. Wind and solar output would have to roughly quadruple to replace the kWhs supplied by nuclear power while still covering their current part of the mix. This challenge is compounded by the intermittency of both sources and their low output relative to nameplate capacity. In 2007 the calculated capacity factor for Germany's wind turbines was just 21%, while PV's was under 10%. So not only would these sources have to expand by a multiple of the capacity lost from idled nuclear reactors, but much of the incremental output would have to be stored, in order to time-shift it to match demand--combined with time-shifting demand to match the variable and cyclical output from these sources. Power from other renewable sources such as biomass, waste and hydro is much more compatible with normal demand patterns, but harder to expand quickly because of resource limitations.
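
For readers who like to check such arithmetic, here's a quick back-of-envelope version in Python. The total-generation figure of roughly 600 billion kWh and the 9% PV capacity factor are my assumptions for scaling; everything else comes from the figures above.

    # Back-of-envelope check of the German generation arithmetic above.
    # Figures are approximate; total generation of ~600 TWh is assumed.

    nuclear_twh = 140        # 2007 nuclear output, billion kWh (TWh)
    renewables_twh = 100     # total renewable output, just over 100 TWh
    total_twh = 600          # assumed total German generation

    # Expansion of all renewables needed to absorb the nuclear share:
    print(f"Renewables must grow ~{nuclear_twh / renewables_twh:.0%}")   # ~140%

    # Wind (6.5% of generation) plus solar (~1%) would have to quadruple:
    wind_solar_twh = total_twh * (0.065 + 0.01)                          # ~45 TWh
    print(f"Wind + solar must expand ~{(nuclear_twh + wind_solar_twh) / wind_solar_twh:.1f}x")

    # Capacity factor = annual output / (nameplate capacity x 8,760 hours).
    # Check: 4,000 MW of PV (2007) at an assumed ~9% capacity factor:
    print(f"PV at 9% CF: ~{4000 * 8760 * 0.09 / 1e6:.1f} TWh")           # ~3 billion kWh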

Most Germans are intensely practical. That German practicality is in my genes and upbringing, part of which was spent in Germany. I speak the language and know the people fairly well, yet it remains a mystery to me that Germans would choose to pit these two complementary categories of electricity generation against each other, rather than aligning them cooperatively to replace high-carbon coal and natural gas that is largely imported from Russia--hardly the world's most reliable supplier. The answer appears to reside in coalition politics (in both major groupings) and green ideology, the price of which seems likely to rise sharply. German households already pay more than twice as much for electricity as US households, while German industry pays about 250% of what its US counterparts pay, and I can only guess at the comparison to Chinese energy costs. Taking large, fully-depreciated baseload power sources out of the national mix will only amplify those disparities. I'd be very surprised if Germany didn't choose a course that hews back towards practicality in the long run.

Friday, August 27, 2010

The Pitfalls of Feed-In Tariffs

I recently ran across a story indicating that regulators in Arizona are considering implementing a feed-in tariff (FIT) for solar power in that state. This is somewhat ironic, coming as it does amidst a wave of hotly-debated reductions in European solar FITs, in response to the burden they've imposed on electricity customers and the unintended consequences they've created. With Germany, Spain, and now apparently France all slashing their FITs, it's worth taking a look at how these policies differ from the US federal and typical state incentives for solar power, and why they might not be the best choice for promoting solar power here, particularly in places with solar resources as inherently attractive as Arizona's.

As I've noted before, a FIT is effectively a tax, although imposed by utilities on ratepayers rather than by governments on taxpayers. It guarantees developers of renewable energy projects--usually solar power projects--a predictable price for their output and thus a predetermined potential return, barring other project risks. Because these rates are normally fixed for long intervals, and adjusted only after much consultation and debate, they make no allowance for the kind of significant cost reductions they're often intended to stimulate in the technologies to which they apply.

The price of solar photovoltaic (PV) modules has fallen sharply in the last two years, partly due to the classic experience-curve effects that the industry likes to tout, but also because of events such as the recession and alleviation of a global bottleneck in the production of polysilicon, the basic feedstock for most silicon-based solar cells. But module costs have also come down for another reason more directly related to the generous FITs that have been in place in Germany, Spain, France and elsewhere. They were so generous, in fact, that they attracted new entrants from low-cost manufacturing centers like China that were able to undercut local suppliers significantly and gain market share. In other words, instead of just helping to grow local solar industries--a clear example of industrial policy--high FITs can also spur new imports from foreign competitors with potentially sustainable cost advantages over domestic manufacturers.

In this regard, at least, the prevailing US federal policy of providing a substantial investment tax credit, or more recently the option of taking that ITC as an up-front cash grant, has important advantages. Because it is calculated based on the cost of each project, it automatically adjusts downward as technology and project prices fall--as we are frequently told they will continue to do for PV. Most of the state solar incentives I've seen take a similar form, providing consumers and businesses tax relief based on the cost of the solar systems they install, or cash rebates that decline rapidly based on cumulative capacity. Again, these are self-correcting, compared to Europe's FITs. That's beneficial for taxpayers, but also for the domestic solar industry, by forcing it to remain competitive.

The global solar industry has grown to a level of scale and sophistication at which it can quickly shift a large number of projects to the countries with the most attractive policies--as when developers decamped from Spain to France once Spain's solar capacity threshold was reached in 2008. As a result, the cost of a FIT policy can mount quickly and unexpectedly. According to the Financial Times, solar incentives last year accounted for half the €6 billion annual tab for Spanish renewable energy subsidies, even though total solar capacity in Spain at year-end was just 18% of wind's, according to the Renewables 2010 Global Status Report of REN21. That's a lot to pay for installations that collectively displace the equivalent of just one medium-sized coal-fired power plant. States such as Arizona that are considering feed-in tariffs should think carefully, not just about the laudable goal of promoting solar power, but about the financial burden they would impose on ratepayers, as well as the potential for unintended consequences.
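
A rough calculation shows why that comparison is so unflattering. The ~19 GW wind capacity figure and both capacity factors below are my assumptions, so treat the results as indicative rather than precise:

    # Rough sense-check of the Spanish solar subsidy burden described above.
    # The 19 GW wind figure and both capacity factors are assumptions.

    solar_subsidy = 3e9                         # euros: half of the 6 billion annual tab
    solar_gw = 0.18 * 19                        # solar at 18% of ~19 GW of wind: ~3.4 GW
    solar_twh = solar_gw * 8760 * 0.18 / 1000   # assumed 18% capacity factor

    # Coal capacity producing the same energy at an assumed 85% capacity factor:
    coal_mw = solar_twh * 1e6 / (8760 * 0.85)

    print(f"Solar output: ~{solar_twh:.1f} TWh/yr")                     # ~5.4 TWh
    print(f"Equivalent coal plant: ~{coal_mw:.0f} MW")                  # one mid-sized unit
    print(f"Implied subsidy: ~{solar_subsidy / (solar_twh * 1e9):.2f} EUR/kWh")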

Thursday, August 26, 2010

Looking Back to Look Ahead

Last week the Energy Information Administration of the US Department of Energy released its Annual Energy Review for 2009. Although it doesn't offer predictions concerning the energy transition that was the subject of last Wednesday's posting, it does include a wealth of charts and graphs visualizing the remarkable energy shifts that have already occurred in the last several decades. Understanding these could help calibrate our expectations concerning the pace of the hoped-for clean energy revolution, while shedding light on characteristics that could move some technologies into the market faster than others. For energy the past isn't necessarily prologue, but it's certainly relevant.

Start with the US primary energy overview for the last 60 years, which shows the steady growth of our energy consumption, interrupted only by two sets of events: the oil shocks of the 1970s and the recent financial crisis and recession (accompanied by a demand-driven oil shock.) Since the early '70s much of that growth was fueled by imported energy, led by oil. This is the part of the story we know best, because its impact on energy security has kept us focused on it for my entire adult life, no matter how ineffective our responses have seemed at times. However, other aspects of our energy situation reflect big, but less obvious changes over that interval, particularly with regard to the production of electricity, the supply and uses of natural gas, and the growth of nuclear power.

We've recently heard a lot about the significance of shale gas, which for many parts of the country could bring the sources of our natural gas much closer to where it's used. Yet this is only the latest aspect of a broader shift that has turned gas from a mainly Gulf Coast and mid-continent resource into a truly national one. In 1970 Texas, Louisiana and Oklahoma accounted for more than 80% of US gas production, while last year they supplied well under half. In the intervening period, production outside these three states more than tripled. At the same time, the ways we use gas have also been transformed. Gas for electricity generation has outstripped residential gas consumption and is about to eclipse industrial gas demand, which has fallen steadily since the mid-'90s, due to volatile prices and the offshoring of manufacturing. The marriage of gas to electricity was driven by a major technology change, in the form of aero-derivative gas turbines for power generation. A chart I could only find in the report's Energy Perspectives section and have reproduced below indicates how much more natural gas-fired capacity has been added in the US in the last 20 years than all other generation technologies combined. Natural gas was more expensive than coal for that entire period, yet no other technology could match its combination of low capital cost, infrastructure efficiency, low emissions, and capability to deliver power when and where needed. Can renewables succeed without matching at least a majority of those attributes?

[Chart from the report's Energy Perspectives section: US electric generating capacity additions over the last 20 years, with natural gas far exceeding all other technologies combined.]

The report puts the recent upsurge of biofuels, wind, solar and geothermal power into the context of a larger renewable energy sector that still meets just 8% of our total energy needs, mainly from mature sources such as hydroelectricity and wood. I can't help wondering whether the development of the US nuclear power sector holds any relevant analogies for the new renewables. Nuclear grew from nothing to 8% of US primary energy and 20% of electricity generation between the mid-'60s and 2000, and in the process helped displace most oil from power generation. Essentially all our current nuclear capacity was built in two waves that rose quickly, peaked in the mid-'70s and again in the mid-'80s, and then subsided to little more than capacity optimization since then. Renewables and nuclear could not be more different, other than sharing a low emissions profile, but the former face enough real-world constraints--including concerns about the environment in its broadest sense--that a scenario in which they, too, stall well short of their full potential isn't so hard to imagine. When you consider a rise as steep as that exhibited by ethanol, or the near-exponential growth of photovoltaic module shipments, it's hard to look at these graphs and not wonder what the rest of the curve will look like: continued rapid growth, plateau (and at what level?), or decay.

I found numerous other charts, graphs and tables offering insights into topics as diverse as the population of alternative fuel vehicles and their energy consumption, the breakdown of electricity consumption in commercial buildings, and the steady drop in energy consumption for space-heating by households, particularly from oil--despite a 35% increase in US population--offset by a near-doubling of household electricity consumption within a generation. And I can't close without mentioning the positive trends in the energy intensity of the US economy--a steady decline for 40 years in BTUs per dollar of GDP--and more recently in per-capita energy consumption. We've accomplished that without a full-court press on energy efficiency, beyond what was incentivized by volatile market prices. What could we accomplish on this front if we put our minds to it?

Tuesday, August 24, 2010

FutureGen Switches Tracks

The standard knock on carbon capture and sequestration (CCS) is that it hasn't been tested and proven on an industrial scale. That's really only true in the narrow sense in which you start with coal, produce electricity, and then collect and bury the CO2 that comes out the stack--which I imagine is what CCS evokes for most people who have even heard of the technology. Some years back, the US government set out to close that gap by building a large-scale test facility to demonstrate the coal-to-CCS cycle, with help from a consortium of industry partners. The program was called FutureGen. It died in 2008 after reported cost overruns but was revived in a different format last year. Now the reoriented effort has spawned a new project at a different location--though still in Illinois--to replace the ill-fated Mattoon project. Its basic concept differs significantly from the original FutureGen, and in ways that might improve the odds that coal could continue to contribute a substantial share of the US energy mix for many decades.

The CO2 produced by power plants is much harder to capture and dispose of than the traditional pollutants we associate with them, not least because it is the primary chemical result of the combustion of hydrocarbons, along with water vapor, rather than a byproduct resulting from a fuel impurity or imperfect combustion. That requires dealing with emissions that exceed the mass of fuel being consumed, rather than an order of magnitude or two smaller. And when fossil fuels are burned in air, the CO2 produced must be separated from all that nitrogen, which is the largest constituent of flue gas, before it can be sequestered. All this is expensive, in both energy and financial terms. The original FutureGen was designed to finesse this problem by converting coal into a hydrogen-rich gas that could be burned efficiently in a combined-cycle gas turbine--an arrangement known as integrated gasification combined cycle (IGCC)--producing emissions consisting mainly of water vapor, plus a sequestration-ready CO2 stream from the hydrogen-production process. Unfortunately, the hardware necessary to do that isn't cheap, either.

FutureGen 2.0, as announced, would take a different tack. It aims to convert an existing power plant owned by Ameren Corporation into an "oxy-coal" plant, in which pure oxygen replaces air in the boiler for combustion, resulting in flue gas consisting mainly of CO2. This approach has pluses and minuses, compared to IGCC. It requires a bigger air separation plant to support full combustion, but it eliminates all the hardware associated with gasification and hydrogen production. That should entail somewhat lower capital costs, but not necessarily lower operating costs, particularly when you consider that the efficiency of IGCC exceeds that of most existing US coal power plants, though not necessarily supercritical or ultra-supercritical pulverized coal plants. (I couldn't tell how much the basic power block of Unit 4 of Ameren's Meredosia, IL plant, which formerly burned fuel oil, will be modified.) As in FutureGen 1.0, the resulting compressed CO2 would then be pipelined to a disposal site elsewhere in the state.

Although it would take some doing to convince me that oxy-coal with CCS is a better technology than IGCC with CCS, the revised approach to FutureGen looks like a good call on the part of the government. That's because the context in which FutureGen is being pursued has altered significantly since it was first devised. Instead of a scenario of continuing to build many new coal-fired power plants every year to meet steadily-growing electricity demand, the future--at least in the US--looks quite different. An article in yesterday's Washington Post pointed out that a number of new coal plants are still under development, but the rate of new construction has slowed dramatically, due to regulatory pressures, weaker electricity demand, competition from cheaper natural gas, and the growth of renewables. If we want to have an impact on the emissions from the US coal-fired power plant fleet--which accounts for 31% of total US emissions and 91% of the emissions from the electricity sector--then our best strategy probably doesn't involve building hundreds of gleaming new IGCC plants, but rather retrofitting hundreds of existing units built with older technology, for which conversion to IGCC would likely be cost-prohibitive. If FutureGen 2.0 succeeds--technically, if not economically--it would validate that retrofitting potential.

The world hasn't stood still while the Department of Energy wrestled with all the political and technical challenges that FutureGen faced. The original siting competition between Texas and Illinois looked like a textbook case of logrolling, and FutureGen 1.0 exhibited the hallmarks of a classic government boondoggle. Meanwhile, commercial projects such as Duke Energy's Edwardsport IGCC (without CCS, but in effect CCS-ready) and the Good Spring IGCC project of Future Fuels LLC have emerged and appear to be making progress. The latter is based on technology from the Thermal Power Research Institute of China, which is a good bet to beat all of these projects to the punch with its GreenGen power plant in Tianjin. If FutureGen 2.0 is going to matter, it must be built smartly, quickly and cost-effectively. Yet technical success still won't guarantee that this technology will be taken up and deployed widely. In a market economy, rather than a centrally-planned one, it's hard to see any of this going beyond a demonstration plant or two without a substantial price on CO2 emissions to offset the inherently higher costs of generating power this way.

Friday, August 20, 2010

Oil Plumes and the Fate of the Spill

I'm as reluctant to insert myself into the debate over what happened to all the oil that leaked from BP's Macondo well between April 22 and July 15--when the second cap stopped the flow--as I was concerning the earlier controversy regarding flow-rate estimates. At the same time, I find the coverage of this story lacking in crucial details that could help us to understand how much of the oil evaporated into the warm air of the Gulf or degraded naturally, how much was collected, and how much potentially remains in the sea. The assessment issued by the National Oceanic and Atmospheric Administration (NOAA) on August 4, 2010 has been disputed by some scientists, and reports of lingering oil plumes add to the public's apprehension that the pieces don't quite add up. But although I don't have nearly enough information to conclude which group is closer to being right, I feel much more confident in pointing out where their arguments seem weak.

Let's begin with the estimate of the total quantity of oil leaked into the Gulf, which lately seems to have become cast in stone at 4.9 million barrels (205.8 million gallons.) This is the crucial starting point for any analysis of how much of it remains in the Gulf. This figure appears to be based on the estimate by the Flow Rate Technical Group of an average rate of around 58,000 bbl/day for the 85 days that the well was leaking. NOAA indicates an uncertainty for this figure of +/- 10%, but with all due respect to the scientists who worked on it, that seems excessively precise for something that was never measured directly.
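
The arithmetic behind that headline number is at least easy to verify:

    # The arithmetic behind the 4.9-million-barrel estimate:
    rate_bpd, days = 58_000, 85    # average flow estimate x days of leakage
    total_bbl = rate_bpd * days
    print(f"{total_bbl:,} bbl = {total_bbl * 42 / 1e6:.0f} million gallons")
    # ~4.93 million bbl (the published 205.8 million gallons reflects rounding
    # to 4.9 million bbl); the stated +/-10% spans ~4.4-5.4 million bbl.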

There are only two ways I know of to measure such a flow, as distinct from estimating it. The most accurate involves gathering all the oil flowing during a given interval--say, a day--and gauging the tanks into which it flowed at the beginning and end of the interval. From a quick review of the transcripts of BP's technical briefings, it appears that the largest quantity of oil that was actually collected in a 24-hour period equated to a flow rate of about 24,000 bbl/day, though this represented only a portion of the total flow, with the remainder continuing to leak into the sea due to containment limitations. So we know the rate must have been higher than that figure, but not how much higher. The other way to measure oil flow is with a flow meter. It's a pity that BP's "Lower Marine Riser Package", the second cap and valve assembly installed on the well, didn't include this capability. I don't even know if it would have been feasible, given the pressures and high flows of oil and natural gas involved.

In the absence of direct flow measurements, the Flow Rate Technical Group had to rely on sophisticated techniques for calculating the flow, based on the observed velocity of the fluid leaving the well and a complex set of assumptions--grounded in a limited amount of actual data--concerning the gas:oil ratio of the fluid, the rapid expansion of the gas coming out of solution within the space over which the velocity was determined, as well as the changing pressure and temperature within this regime. Tricky stuff, particularly considering how much of the observed flow was attributable to gas, rather than oil, as I noted in May. I'd also note that since the estimated 58,000 bbl/day flow rate is at the top of the range of flow rates observed from other oil wells in the history of the industry, it's quite possible that the range of uncertainty for the total amount leaked is not only wider than +/- 10%, but also non-symmetrical, with more downside than upside. I'm sure we will hear much more about this in the future, not least because the size of the fine BP would ultimately pay for the leak depends on it. That's not the concern of the moment, however.

The pie chart in NOAA's report indicating the breakdown of the different fates of the oil that leaked has gotten a lot of scrutiny. Some reports have interpreted it as indicating that only a quarter of the oil remains in the marine environment. I wouldn't read it that way. Instead, I'd see three distinct categories for the oil's current status. The first and least ambiguous concerns the oil physically collected directly from the well, skimmed from the surface, or burned off, constituting an estimated--and only partly measured--25% of the uncertain total discussed above. This oil is clearly no longer in the water. The next category is oil that is likely no longer in the water: the portion of the "Evaporated or Dissolved" segment that evaporated. If all the oil had reached the surface, I wouldn't be at all surprised if most of that segment could be attributed to evaporation; this was, after all, light, sweet oil with a high proportion of volatile fractions. The problem is that we don't know how much of the oil that leaked a mile down made it to the surface. The portion that didn't, which in NOAA's parlance was dissolved, naturally dispersed or chemically dispersed--potentially up to 49% of their total estimate--could still be in the water column, along with the 26% "Residual"--less the unknown portion actually broken down by bacteria and other processes. And it's some of this remaining oil that makes up the plumes we've been hearing about.
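
Applying NOAA's percentages to the 4.9-million-barrel estimate puts rough barrel counts on those three categories; this is simply the pie chart's arithmetic restated:

    # NOAA's fate-of-the-oil categories applied to the 4.9-million-bbl estimate:
    total_bbl = 4.9e6
    categories = {
        "recovered, skimmed, or burned": 0.25,        # clearly out of the water
        "evaporated, dissolved, or dispersed": 0.49,  # air/water split unknown
        "residual": 0.26,                             # surface oil, sheen, tar balls
    }
    for name, share in categories.items():
        print(f"{name}: ~{share * total_bbl / 1e6:.2f} million bbl")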

The undersea oil plume currently in the news was found in June by scientists from the Woods Hole Oceanographic Institution. They describe it as being at least 22 miles long, 1.2 miles wide, and 650 ft. high. The total volume of the plume, assuming it filled that entire rectangular solid, would be about 3.6 trillion gallons. However, the critical data point that I didn't see reported in any of the newspaper accounts I read was the concentration of oil in that water. According to the report on the Woods Hole site, the concentration of specific oil-derived molecules ("BTEX") is "in excess of 50 micrograms per liter". Adjusting for the density of the chemicals in question, that means that they found oil-related concentrations of approximately 57 parts per billion by volume. So by my math, the total volume of these chemicals within the plume is on the order of 200,000 gallons, or under 5,000 bbl. Unless these chemicals are only the tip of the iceberg in terms of oil derivatives in the plume--and Woods Hole hints that there is more--then we're talking about less than 0.1% of the 4.9 million barrels estimated to have leaked into the Gulf. In other words, while a plume like this might be potentially serious for aquatic life, it's not clear how much doubt its existence casts on NOAA's analysis of where all the oil went.
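
For those who want to follow the math, here it is step by step; the BTEX density of roughly 0.87 kg/L is my assumption for converting mass concentration to volume concentration:

    # Plume arithmetic from the Woods Hole dimensions and BTEX concentration.
    GAL_PER_CUFT = 7.48    # US gallons per cubic foot

    plume_gal = (22 * 5280) * (1.2 * 5280) * 650 * GAL_PER_CUFT
    print(f"Plume volume: ~{plume_gal / 1e12:.1f} trillion gallons")

    # 50 micrograms/L in ~1 kg/L seawater is ~50 ppb by mass; dividing by an
    # assumed BTEX density of ~0.87 kg/L gives ~57 ppb by volume.
    btex_ppb_vol = 50 / 0.87
    btex_gal = plume_gal * btex_ppb_vol * 1e-9
    print(f"BTEX content: ~{btex_gal:,.0f} gallons (~{btex_gal / 42:,.0f} bbl)")
    print(f"Share of 4.9 million bbl leaked: ~{btex_gal / 42 / 4.9e6:.2%}")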

I will be very interested in seeing further refinements of all these estimates in the weeks and months ahead. Perhaps the media will even include more of the details crucial for putting it into perspective.

Wednesday, August 18, 2010

Scaling the Energy Transition

The August 13 issue of Science, the journal of the American Association for the Advancement of Science (AAAS), devotes a special section to "Scaling Up Alternative Energy". Most of the section, including some nifty comparative infographics, can be accessed free of charge until August 27, requiring only a free site registration. I encourage you to read it while it's available. The articles cover topics such as the prospects for cellulosic ethanol and the challenges of siting renewable energy projects. Another, entitled "Do We Have the Energy for the Next Transition?", particularly caught my attention. I've been focused on this issue from the inception of this blog in 2004 and long before that. This is an issue that's not about to go away or be solved overnight, no matter how much wishfulness we apply to it.

I know I've been beating this drum for a long time, but here's a clear and concise explanation from the top science journal in the country on why the transition to alternative energy won't--and can't--be quick, cheap or easy, as well as why it's necessary to pursue in spite of these limitations. The low energy and power density, intermittency, and uneven geographic distribution of renewables aren't just talking points; they're genuine technical problems that must be overcome. The author compares the transition that's now underway to previous energy transitions and finds fundamental reasons why such shifts take a long time, and why the transition to renewables can't be as quick as many would like. He quotes one expert as saying, "They don't offer new services; they just cost more."

That's a crucial point for anyone who sees this energy transition driven not just by concerns about energy security and greenhouse gas emissions, but by notions of clean energy as the next big wealth-creating global trend, akin to the computer revolution. A kilowatt-hour or BTU does the same work, regardless of its source, so unless it can be produced for significantly less than from conventional sources, greener energy offers no productivity gains of the kind that have fueled the global infotech transformation. As the article notes, with current technologies the transition is likely to reduce productivity, at least in the energy sector, unless it comes in the form of cost-effective energy efficiency.

And while it's certainly true that the current price of conventional energy omits a number of important externalities, including those relating to climate change, monetizing them by increasing the price of energy will not improve productivity in the sense of creating new wealth; it will merely transfer wealth from one sector to another. We may still have to do that, but we shouldn't harbor illusions about the ultimate source of the earnings this will create for green energy companies and entrepreneurs, until someone comes up with an energy source that is truly better/faster/cheaper than what it's replacing (without subsidies.)

Although the article doesn't dismiss the potential of renewables to supply a much larger proportion of our energy needs, it suggests that the greatest near-term potential lies in reducing energy consumption, which would simultaneously stretch out our conventional energy resources, reduce their impact, increase the leverage of the renewables we have, and provide more time to improve them. It also points to a transition that looks more like a gradual shift in our energy mix than a sudden displacement of one set of sources by another. That doesn't sound nearly as radical or glamorous as what some pundits have suggested is possible, but it still provides renewable energy businesses with the enviable prospect of making steady inroads into a vast market, the potential of which they couldn't exhaust for decades, as long as they've got a proposition that makes economic sense in light of current and anticipated regulations and incentives.

Monday, August 16, 2010

China's Leverage on Renewable Energy Increases

Last month's announcement that China was cutting its export quota for rare earth elements by 72% for the second half of 2010 didn't seem to attract wide attention, but now that the other half of its strategy has been revealed, that might change. Today's Wall St. Journal reported overtures from Chinese officials to firms interested in accessing these materials, which are critical for the production of some components of renewable energy technology and advanced vehicles. The apparent deal: invest in rare earth processing in China to obtain access, with the output from new facilities incorporated into products for the rapidly-growing internal market or export. Not only would this practice compound the difficulties faced by US and other foreign renewable energy firms seeking to market their products in China, it could also make it much more expensive to produce them outside the People's Republic.

For some time I've been intrigued by growing concerns about access to rare earths and scarce metals. These include the true "rare earths" from the periodic table of the elements, as well as other scarce elements such as indium, gallium and tellurium. Their uses include solar panels, wind turbines, hybrid car motors and batteries, and other "clean energy" devices, along with many non-energy applications. As the Journal noted, China accounts for over 90% of global production of the rare earths and is among the top producers of the other scarce materials. And although China doesn't have a natural monopoly on them, it currently enjoys an effective one, as plans to resume or ramp up production in North America, Australia, South Africa and elsewhere will require both time and significant capital.

This development poses an unwelcome challenge to a variety of renewable energy firms. At a minimum, it could significantly raise their production costs, just as they are trying to move down the experience curve in order better to compete with conventional energy--including newly-abundant natural gas--and at the same time that governments around the world are being forced to cut back on subsidies, due to fiscal imbalances and the weak economy. Any company that depends on a stable, let alone expanding supply of these ingredients must either be looking seriously at relocating production to China or making potentially fundamental changes in their technology to switch to more abundant raw materials. Green jobs, perhaps, but where?

China's efforts to capture higher returns and more of the value-added for these scarce materials shouldn't surprise anyone; it's basic economics. OPEC tried this strategy in the 1980s, when it built export refineries in the Middle East and bought existing ones elsewhere. This didn't work out very well, because it contributed to a persistent glut of global refining capacity that, with the exception of a few standout years, generally benefited consumers more than producers. China could experience something similar in rare earths, once new, non-Chinese sources are brought online--assuming they are. Mining and processing such deposits entails large capital costs that, once invested, can set up a classic boom-and-bust commodity cycle. Unfortunately, the prospect of a future rare earth glut will be of little comfort to makers of wind turbines, advanced car batteries, and thin-film solar cells for the next several years, at least.

Thursday, August 12, 2010

By Executive Order

I recently ran across a mention in the New York Times of a new study suggesting a variety of energy and climate measures the administration could undertake on its own, without requiring new legislation passed by Congress. I've been thinking about this during some long stretches of driving this week. At first glance, the group's ideas merit consideration, and they might indeed be sufficient to meet the near-term emissions reduction goals the US endorsed at last year's Copenhagen climate conference. However, as tempting as such an approach might be in a year of legislative gridlock on energy, its pitfalls probably outweigh its benefits.

I haven't had time to scrutinize the report of the Presidential Climate Action Project item by item, since I'm on vacation. It caught my eye mainly because of the involvement of former Senator and presidential candidate Gary Hart. So my reactions don't really constitute analysis, but are more along the line of ruminations on a first impression that I might examine in more depth later.

At the very least, the idea that the administration could take major steps--beyond what it has already done--to reduce emissions and shift our economy away from its reliance on fossil fuels represents a potentially significant new scenario for the energy/climate environment, particularly if the mid-term elections reduce or eliminate the current Democratic majority in both houses of Congress. It could provide a new policy twist that many of the companies and organizations that have invested so much time in working with Congress on these matters haven't incorporated in their planning.

The problem with such an approach arises from the same source as its appeal: the lack of a sufficient bi-partisan consensus in Congress to enact these changes legislatively. Without a consensus spanning both parties and all factions, any action the President takes on his own could be reversed within a few years. We're not going to lick climate change or our energy problems in the span of any one administration; these problems look much more like the Cold War and require a similarly enduring bi-partisan coalition to deal with them. Major energy policy swings every 4 or 8 years would make this approach much more costly and much less effective, because of the planning and investment horizons involved. The evidence of that is already on display, as this administration reverses many of the energy policies of its predecessors.

Such an outcome is even likelier if these policies become overly identified with a president whose popularity has been waning and who is by no means assured of a second term, barring an unexpectedly robust revival of the US economy. Congress might be even less popular at the moment, but it remains the venue in which a long-term, bi-partisan energy and climate strategy must be hammered out. If a comprehensive energy bill with limits on carbon isn't possible today, important elements of a least common denominator approach to energy security and lower emissions could likely still be enacted. That could include more effort on energy efficiency and a low-carbon electricity standard encompassing both nuclear power and the currently favored list of renewables. Future administrations and congresses could build on these steps later. A modest compromise along these lines wouldn't please everyone, but it seems preferable to an approach that depends on one party controlling the White House in perpetuity.

Monday, August 09, 2010

Corn Nation

Driving across Ohio, Indiana, Illinois and Wisconsin, it's impossible not to be struck by the dominance of corn cultivation in this part of the Midwest. Those "amber waves of grain" in the song look more like amber waves of corn tassels, here. My family makes this drive every few years, and my unscientific impression is that we see more and larger corn fields every time. Given my focus on energy, I couldn't resist spending a few minutes using the hotel Wi-Fi to check on my hunch that most of this has been driven by the explosion of ethanol production in the last decade--up an average of 22% per year since 2000.

Based on data from the USDA, corn production has grown substantially in the last decade, though by a much smaller annual percentage than ethanol output, and with the peaks and valleys you'd expect for a crop affected by the variability of the weather. The total acres planted in corn have risen, though at an even slower rate, so that steady yield improvements have contributed as much as acreage to the growing corn supply. It might seem like an exaggeration to suggest that ethanol is the main factor behind all this growth, yet when you examine the trends in corn disposition, it would be hard to avoid that conclusion, as the chart below shows.

[Chart: US corn disposition by use since 1990, showing ethanol's share of the crop rising from about 10% to over a third.]

Since 2000, ethanol has gone from consuming about 10% of the annual corn crop to taking just over a third of a much larger crop. Over that same interval, the other main corn demand sectors have remained relatively constant. The modest drop in corn used for livestock feed probably correlates with the increased output of distillers dried grains byproduct from ethanol plants. For all the controversy about high-fructose corn syrup in our food and beverages, the quantity of corn involved in sweeteners hasn't changed much since the mid-'90s, and the US exports roughly as much corn annually today as it did before the current ethanol boom began. So it really does seem to come down to ethanol, and that has important implications for farmers and ethanol producers, because the practical limits of the expansion of corn ethanol are now in sight.

Corn ethanol output has doubled three times in the last decade, but it is most unlikely to double again. Even if the industry can convince the EPA to forestall the looming "blend wall" by raising the proportion of ethanol allowed in gasoline, ethanol's favored status under the federal Renewable Fuels Standard ends at 15 billion gallons per year of output, beyond which it will find it very hard to compete with biofuels from other mandated sources, including sugar cane and--whenever it becomes commercial--cellulosic ethanol. The impending debate over the extension of the current ethanol incentives signals big changes in the trends that have provided such a boost for US agriculture.
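
Both the growth rate and the approaching ceiling are easy to quantify. The conversion factor of roughly 2.8 gallons of ethanol per bushel and the 13-billion-bushel crop are round figures I'm assuming for illustration:

    # Compound growth: 22%/yr for a decade vs. "doubled three times" (8x):
    print(f"1.22^10 = {1.22 ** 10:.1f}x")    # ~7.3x, close to three doublings

    # Corn required at the 15-billion-gallon RFS cap, assuming ~2.8 gallons
    # of ethanol per bushel and a ~13-billion-bushel crop:
    bushels = 15e9 / 2.8
    print(f"~{bushels / 1e9:.1f} billion bushels = "
          f"{bushels / 13e9:.0%} of a 13-billion-bushel crop")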

Tuesday, August 03, 2010

Electric Vehicle Choices Expand

One of my basic assumptions about our energy future is that most automobiles will eventually be electrified. That's based on extensive scenario work done with my former colleagues at Texaco, Inc. in the late 1990s. Nothing I've seen since then has changed my view on that. However, vehicle electrification is not necessarily synonymous with "electric vehicle" (EV) in its common usage to connote a car powered only by electricity stored in batteries. It's a much broader category, covering all three electrification options now slated to be available to consumers by year-end: hybrids, plug-in hybrids, and "pure" EVs. It also encompasses fuel cell vehicles, though these have yet to move beyond the test-market stage. The characteristics of the three current varieties of electrified vehicles differ in important ways that will affect both their impact on our energy consumption and their success as consumer products.

With hybrids already well established and plug-in and EV models intended for the mass market about to go on sale, this is no longer just a theoretical comparison. Shortly, consumers will be assessing these cars against each other, as well as against more conventional choices, including clean diesels and ordinary gasoline-powered cars, which are becoming more energy-efficient all the time, as noted for the new-model Ford Explorer SUV. Only part of that comparison will hinge on how their drivetrains are energized. In order to achieve mass-market success, they must compete on the whole array of product attributes, since for many people cars are much more than simple transportation.

Start with hybrids, which are sometimes referred to as conventional hybrids, or even "non-plug-in hybrids", to distinguish them from other types. According to the June Hybrid Cars Dashboard at hybridcars.com, 26 hybrid models represented 2.3% of the cars sold in the US in the first half of 2010. That's down from about 2.8% last year. The Prius by itself accounted for half those sales, and it's still the archetypal hybrid for comparison purposes. Like other hybrids of this type it gets all its energy from the gasoline that's put in its tank, and it uses this fuel more efficiently than non-hybrid cars by recovering and recycling part of the energy otherwise lost through braking, and by avoiding idling. (The latter feature is pretty much all that some "mild" or stop/start hybrids do.) The EPA rates the 2010 Prius at 51 mpg city/48 mpg highway. The base model Prius has a sticker price of $22,800, and as far as I can tell it is no longer eligible for any federal purchaser tax credits.

The Chevrolet Volt is based on a different hybrid design, as a plug-in hybrid (PHEV) or more accurately a range-extended electric vehicle (REEV). It also represents a different car philosophy, presumably aimed at a different segment of the market than the Prius. This kind of hybrid gets its primary power from an external electricity source, stored in a battery pack that gives it a range of roughly 40 miles without using gasoline. At that point, and before the battery's charge is fully depleted, the car's onboard generator--a four-cylinder gasoline engine--kicks in to recharge the battery, which continues to send power to the electric motor. Actual fuel economy thus depends on how often and how far one drives with the generator running. I believe the EPA is still grappling with an appropriate methodology to represent this fairly. Of course even when driven only on battery power, it still consumes energy, and in most parts of the US that means that some fossil fuel will be burned somewhere to power it, most likely natural gas.
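
A simple sketch shows just how strongly the resulting fuel economy depends on trip length. The 38-mpg figure for generator-mode driving is purely my assumption, since official numbers weren't available when I wrote this:

    # Illustrative gasoline economy for a range-extended EV like the Volt.
    # The 38-mpg generator-mode figure is an assumption, not an official number.

    ELECTRIC_RANGE = 40.0    # miles on battery alone, per the text
    GENERATOR_MPG = 38.0     # assumed mpg once the gasoline generator runs

    def trip_mpg(miles: float) -> float:
        """Miles per gallon of gasoline for one trip starting fully charged."""
        gas_miles = max(0.0, miles - ELECTRIC_RANGE)
        if gas_miles == 0:
            return float("inf")    # no gasoline burned at all
        return miles / (gas_miles / GENERATOR_MPG)

    for miles in (30, 50, 80, 300):
        print(f"{miles:>3}-mile trip: {trip_mpg(miles):.0f} mpg")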

GM just announced the base sticker price for the Volt, and at $41,000 before tax credits this should make it pretty clear that GM had someone other than Prius buyers in mind. Having driven a pre-production Volt this winter, I'd see it competing more with the Lexus HS250 hybrid, which starts at $34,650, and with non-hybrid entry-level luxury cars like the Acura TSX ($29,310 MSRP but more like $32,410 similarly equipped.) If it lives up to its potential, the Volt could significantly broaden the appeal of hybrids in general, while also saving a lot of gasoline for its owners. Whether it will also save them money is much harder to assess, because the calculation hinges on the specifics of where and how the car would be used.

Nissan's new Leaf is a bolder, if technologically less-complex step than the Volt, because it relies entirely on grid power stored in a 24 kWh battery pack, with no back-up other than a cable and plug--or a tow-truck. At $32,780 before tax credits the stakes are also somewhat less daunting for buyers willing to risk a bit of range anxiety and some adjustments in their lifestyles. I'm not the only one who sees the Leaf aimed squarely at the green consumers who have formed the core of Prius buyers. That's important for several reasons. It reduces the substantial product launch risks for Nissan, which already has thousands of prospective buyers on its waiting list. However, if the Leaf cannibalizes existing hybrid sales, rather than dramatically broadening the electrified vehicle market, then its impact on US oil consumption and the economics behind those tax credits will look a lot less valuable to policy makers.
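
On paper, at least, the Leaf's energy economics look compelling. In the rough cost-per-mile comparison below, the electricity and gasoline prices, the 50-mpg hybrid benchmark, and the assumption that the full 24 kWh pack is usable over 100 miles are all mine:

    # Rough energy cost per mile: Leaf vs. a 50-mpg hybrid. Prices and the
    # usable-battery assumption are illustrative, not official figures.

    LEAF_KWH, LEAF_RANGE = 24.0, 100.0    # battery size and good-day range
    ELEC_PRICE = 0.11                     # assumed $/kWh, ~US residential average
    GAS_PRICE, HYBRID_MPG = 2.75, 50.0    # assumed $/gal and hybrid economy

    leaf_cpm = LEAF_KWH / LEAF_RANGE * ELEC_PRICE * 100    # cents per mile
    hybrid_cpm = GAS_PRICE / HYBRID_MPG * 100

    print(f"Leaf:   ~{leaf_cpm:.1f} cents/mile ({LEAF_KWH / LEAF_RANGE:.2f} kWh/mi)")
    print(f"Hybrid: ~{hybrid_cpm:.1f} cents/mile")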

My skepticism about the Leaf goes a lot further than Nissan's incredible claims concerning its equivalent miles per gallon. In the basic architecture of the Leaf I see many of the same issues that caused the launch of GM's ground-breaking EV-1 electric car to fail. Attitudes towards oil and the environment have changed significantly in the last decade, and the government is pushing recharging infrastructure much harder and with much more financial support than when the EV-1 was launched. The Leaf also benefits from not being the only plug-in vehicle coming to market, though it requires its plug, whereas the Volt merely works better with one. Fundamentally, however, I just don't know if enough Americans are ready for a car that can only go about 100 miles on a good day, and potentially a lot less than that when conditions aren't ideal. That's particularly important when we recognize that at the price points for both the Leaf and the Volt, their realistic market isn't first-time buyers in their early twenties, for whom basic assumptions about range and refueling times might not be so ingrained. Taking advantage of the entire $7,500 federal tax credit would require an adjusted gross income of at least $55,000 for single taxpayers ($74,000 for married couples), based on last year's tax tables--and probably even higher when taking into consideration itemized deductions, dependents, and other factors. In my view, likely buyers for both cars would be solidly middle-to-upper-middle class.
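
Here's how I arrived at that threshold. The brackets, standard deduction, and personal exemption below are from the 2009 IRS tables for a single filer; the calculation assumes the credit is limited by ordinary tax liability and ignores other credits and income types:

    # Income needed to absorb a full $7,500 non-refundable credit, using
    # 2009 single-filer brackets, the standard deduction, and one exemption.

    def tax_2009_single(taxable: float) -> float:
        """2009 federal income tax for a single filer (first four brackets)."""
        brackets = [(8350, 0.10), (33950, 0.15), (82250, 0.25), (171550, 0.28)]
        tax, lower = 0.0, 0.0
        for upper, rate in brackets:
            tax += rate * max(0.0, min(taxable, upper) - lower)
            lower = upper
        return tax

    # Find the taxable income where liability first reaches $7,500:
    taxable = 0
    while tax_2009_single(taxable) < 7500:
        taxable += 50

    std_deduction, exemption = 5700, 3650
    print(f"Taxable income needed: ~${taxable:,}")                       # ~$45,250
    print(f"Implied AGI: ~${taxable + std_deduction + exemption:,}")     # ~$55,000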

Rather than making expansive predictions based on guesses about how well these new cars will do with real consumers, I will be watching the start of this grand experiment with great interest. If the Leaf catches on as well as Nissan hopes, then the trickle of other EV launches that are expected to follow could turn into a tidal wave of automotive innovation. If the Volt does better than the Leaf, despite its higher price, that could signal that consumers still value the comfort of knowing they can pull into a gas station and refuel in three minutes--rather than several hours--more than they value their independence from oil. And if both do well without eroding the sales of conventional hybrids, then that would bode well for a much more efficient vehicle fleet in the years ahead, relying on a much wider mix of energy sources than today's.