Thursday, July 26, 2012

How Secure Are Green Jobs?

It's been an article of faith among advocates of "green jobs" from expanding renewable energy deployment that wind and solar installation jobs are secure because they can't be sent offshore, even as the manufacturing of wind turbines and solar equipment increasingly shifts to Asia.  A story in MIT's Technology Review casts doubt on that assumption, for reasons that have much to do with recent reductions in the cost of solar photovoltaic (PV) cells, modules and panels.  Green jobs, which in any case shouldn't be viewed as the main selling point of renewable energy, turn out to be much like other jobs in facing competition from automation, as well as from globalization.

Why would it suddenly make sense to consider installing utility-scale solar panels using the robots highlighted in the article?  PV module costs have declined dramatically in the last two years.  As I've noted in other postings, this trend reflects the expected experience curve effects--such goods become cheaper as you produce more of them--but also the fierce competition resulting from enormous over-building of global PV manufacturing capacity as countries competed with each other to offer generous subsidies for this industry.  One consequence of these PV hardware price declines is to increase the share of "non-module" costs in the total installed cost of solar panels. Because the power produced by PV is still more expensive than conventional energy in most markets, that tends to shift the focus of innovation toward ways to reduce the costs of the mounting hardware, inverters, and labor used to put these arrays in place.

The article makes it clear that only certain parts of the solar installation trade are currently threatened by robotic installation.  Robots apparently aren't suited to rooftop and small ground installations, yet.  However, with politicians busily blurring the distinctions between outsourcing and offshoring, while neglecting the ongoing transformation of work by automation, computing and telecommunications, it's worth recalling that energy remains a capital-intensive commodity business.  Keeping costs down is crucial for both energy providers and their customers, and thus for the entire economy they energize.  When labor is involved in producing energy, its productivity must be very high, or it naturally becomes a target of innovation and process reengineering.  That needn't mean low wages, but it does imply fewer workers working smarter, with more automation.

The energy industry offers excellent opportunities in many sectors, especially those that are growing rapidly because of new technology or the removal of artificial constraints.  Yet we shouldn't fool ourselves that these jobs are any more protected or permanent than any others, especially in segments that aren't yet cost-competitive.

Friday, July 20, 2012

Food vs. Fuel and the Midwest Drought

It was bound to happen.  As long as US corn output continued to climb year after year, the federal mandate to blend steadily increasing quantities of ethanol into gasoline could be accommodated without creating a shortage of this staple grain.  Unfortunately, crops are subject to all sorts of uncertainties, including the severe drought conditions that the middle of the country is experiencing this year.  Estimates for this year's corn crop have been revised downward, and corn prices have already broken through $8 per bushel, up from less than $6 a month ago, with consequences for the livestock, processed food and ethanol industries, as well as for export markets.  As soaring feed grain prices begin to translate into higher grocery prices for meat, poultry, dairy and other goods, will consumers demand relief from the EPA, which has the authority to curtail ethanol volumes?  The current betting appears to be that the administration will stand fast on the mandate, but anything can happen in an election year. 

Ethanol now accounts for at least 10% of US gasoline blending, by volume.  To meet that demand, ethanol producers will require around 5 billion bushels of corn.  In recent years, the ethanol industry's expanding corn demand was met by a combination of increasing yields and planting more acres in corn.  However, corn yields per acre are dropping sharply this year, potentially pushing output below last year's 12.4 billion bushels, if conditions don't improve soon.  That's in contrast to earlier expectations that this year's corn crop would exceed last year's by 20%.  This isn't the first time that the food vs. fuel trade-off inherent in crop-based biofuels has become an issue, but it might be the first time that both the demand for corn for ethanol is so high and the need for that ethanol in the gasoline blending pool is arguably so low.  In this context, food vs. fuel quickly boils down to a debate over the tangible benefits of corn-based ethanol as a fuel.  There's growing evidence that those benefits have been oversold, despite industry claims.
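The arithmetic behind that 5-billion-bushel figure is easy to check.  A minimal sketch, in which the gasoline volume and corn-to-ethanol yield are round-number assumptions rather than figures from this post:

```python
# Back-of-envelope check of corn demand for a 10% ethanol blend.
# All inputs are approximate, assumed round numbers.
us_gasoline_gal = 134e9    # annual US gasoline consumption, gallons (assumption)
ethanol_share = 0.10       # 10% ethanol blend, by volume
gal_per_bushel = 2.8       # typical corn-to-ethanol yield (assumption)

ethanol_gal = us_gasoline_gal * ethanol_share
corn_bushels = ethanol_gal / gal_per_bushel

print(f"Ethanol needed: {ethanol_gal / 1e9:.1f} billion gallons")
print(f"Corn needed:    {corn_bushels / 1e9:.1f} billion bushels")
```

Under those assumptions the blend requires roughly 4.8 billion bushels, close to the 5 billion cited above, and on a drought-reduced crop of around 12 billion bushels that demand would claim about 40% of output.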

Start with the widely touted study from Iowa State University indicating that ethanol saved consumers $1.09 per gallon at the gas pump in 2011 and $0.89/gal. in 2010.  I read both the original study and its updated version when they came out.  It seemed obvious to me that the authors' grasp of gasoline markets and oil refining was inadequate, but I lacked the time necessary to dig through their math to uncover the source of their exaggerated results.  Fortunately, a pair of researchers from MIT and my alma mater, U.C. Davis, have now done that work and concluded that the Iowa State paper's findings--and the claims based on them--depended on a "spurious correlation": the relationships they saw were coincidental.

In contrast to the Iowa State studies, the MIT/Davis paper is very readable, and I recommend it to you.  In addition to debunking the statistics, the authors point out the key flaws in their counterparts' logic.  Foremost among these is that in order to have a large influence on gasoline prices, ethanol would have to have had a large impact on crude oil prices, which are the largest determinant of gas prices, by far.  From 2005 to 2011, US ethanol production expanded by 10 billion gallons per year, the energy equivalent of 350,000 barrels per day of oil, or 0.4% of 2011 global oil supply. I've argued many times that the oil market responds disproportionately to modest changes in supply and demand, but the idea that a few hundred thousand barrels per day could translate into the equivalent of $45/bbl exceeds the wildest dreams of any trader I ever met.  The MIT/Davis paper concludes with the authors summarizing the likely impact of ethanol on gasoline prices as "near zero and statistically insignificant."
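The 350,000 barrel-per-day equivalence above is straightforward to reproduce.  A sketch, using typical published heating values and an assumed round number for 2011 world oil supply:

```python
# Convert 10 billion gal/yr of ethanol into crude-oil-equivalent barrels per day.
ethanol_gal_per_year = 10e9
btu_per_gal_ethanol = 76_000   # approx. lower heating value of ethanol
btu_per_bbl_crude = 5.8e6      # approx. energy content of a barrel of crude
global_supply_bpd = 88e6       # approx. 2011 world oil supply, bbl/day (assumption)

oil_equiv_bpd = ethanol_gal_per_year * btu_per_gal_ethanol / 365 / btu_per_bbl_crude
share = oil_equiv_bpd / global_supply_bpd

print(f"Oil-equivalent: {oil_equiv_bpd:,.0f} bbl/day")
print(f"Share of global supply: {share:.1%}")
```

Note that by volume alone, 10 billion gallons per year is about 650,000 bbl/day; ethanol's lower energy content per gallon is what shrinks the oil-equivalent figure to roughly 350,000 bbl/day, or 0.4% of global supply.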

However, if ethanol hasn't done much to hold down gas prices, could paring back the ethanol mandate to relieve the pressure on corn prices cause a big spike in gasoline prices by shrinking US ethanol production?  That's where the analysis in a paper presented to members of Congress yesterday comes in.  Dr. Elam's report suggests that rather than displacing imported crude oil, the main effect of increasing US ethanol use in fuels has been to divert domestic gasoline production into exports, while US crude imports have fallen based on a combination of lower demand (from the recession) and improved product yields per barrel of crude oil refined.  Even if you are inclined to be skeptical of these findings because the study was supported by poultry interests, data from the US Energy Information Administration and elsewhere show that US refineries are not fully utilizing their capacity, are exporting significant volumes of gasoline, and have a wider array of domestic and imported crude oils at their disposal than they did just a few years ago. In short, we're in a far better position to forgo a few billion gallons of ethanol this year than we would have been in 2008, the last time food vs. fuel concerns spiked along with gas prices. 

Corn growers have experienced droughts before, and in the past the price of corn sorted out who needed it most.  However, the market can't prioritize fairly among the competing calls on a drought-diminished corn crop when the single largest segment of demand is locked in place by a federal mandate. This represents a massive distortion that only the government can rectify. I'm sympathetic to the ethanol industry's dilemma.  After all, the federal government virtually begged them to overbuild capacity, but it couldn't guarantee they would earn a profit, even when it was providing a $0.45/gal. subsidy for their customers, who are required by law to use their main product.  However, the economic and environmental benefits of ethanol are too modest to shield this industry while forcing all other corn users to absorb the likely shortfall in corn supply.  The most sensible remedy would be to unshackle ethanol demand temporarily, by waiving at least a portion of the ethanol mandate for 2012-13.

Wednesday, July 18, 2012

Should the US Become An Oil Exporter, Again?

Last week I missed attending a fascinating panel on the growth of US oil production, hosted by the New America Foundation in Washington, D.C. Fortunately, I was able to catch most of the live webcast, which is still available for replay. Much of the discussion focused on the potential of new "tight oil" production techniques, similar to those used to extract shale gas, to help usher in a new period of relative oil abundance.  If this comes to pass, among other things it could challenge long-established views about exporting US oil.  The politics of oil exports look absolutely dire at the moment, but the economic and logistical benefits--not just for oil companies but to the nation--are such that we shouldn't dismiss the possibility lightly.

Two hours was not enough time to do justice to all the ramifications of resurgent US oil production, and I know from following the Twitter feed for the event that some in the web audience were frustrated by the limited attention given to the climate implications of these developments.  However, if you'd like an overview of the possible economic and geopolitical impact of the US becoming more self-sufficient in petroleum for at least the next decade or two, this stellar panel was highly informative and worth your time.  Much of the discussion focused on tight oil, liquid hydrocarbons trapped in rocks that can't be economically tapped by conventional drilling, but that have proved susceptible to combinations of horizontal drilling and hydraulic fracturing similar to those that have unleashed the current shale gas boom. Although the full potential of this resource hasn't been reflected in the latest forecasts from the Energy Information Administration (EIA) of the US Department of Energy, the results from the Bakken shale in the Dakotas and the Eagle Ford shale in Texas are instructive.  Together these two fields now produce around 750,000 barrels per day, or 12% of current US crude oil output, up from just a trickle a few years ago.  They also hold billions, and possibly tens of billions of barrels of recoverable resources.

I was a little surprised that the first panelist to mention the possibility of exporting some of this oil--with appropriate caveats--was Adam Sieminski, the newly confirmed EIA Administrator. After all, current US law restricts the export of most US crude oil production, with special exceptions for some oil from Alaska, California, and near the Canadian border.  In practice, crude exports from those fields have declined to very low levels.  Despite that, and even after significant reductions in imports since the onset of the recession, the US is still a major net oil importer.  If that's the case, and if US refineries can benefit from the increasing domestic output, why would we even consider exporting any of this new oil?

Unfortunately, the answer doesn't reduce to a neat soundbite; it depends on two key factors that require a bit of explanation.  The first issue is the quality of the oil coming out of these tight oil plays, which at least so far has been very high. Oil from different fields varies as much as fingerprints, even when we consider only a few characteristics of concern to refiners, and these differences strongly influence the market values of the various grades of oil.  Light crudes refine easily into valuable products like gasoline, diesel and jet fuel, while heavier crudes require more processing, using more expensive hardware, and often yield large quantities of low-value products like petroleum coke, even after intensive refining. There's also sulfur content--the sweet to sour spectrum that overlays the light/heavy distinctions--as well as other impurities.  Eagle Ford crude is light and sweet, as is the North Dakota Sweet crude produced from the Bakken. These crudes compare favorably with West Texas Intermediate (WTI), Brent and other premium crude streams.

The second, related factor involves the complexity of US oil refineries and the crude diet they've evolved to run. As production of high-quality crudes in the continental US declined over the last four decades, many refiners invested billions of dollars to enable their facilities to run some of the heaviest, most sour crudes from around the world, because these were more readily available and usually significantly cheaper than the light sweet crudes.  This trend was particularly evident on the West Coast and Gulf Coast. The addition of complex processing hardware like hydrocrackers, delayed or fluid cokers, and residuum fluid catalytic crackers has given these refineries tremendous flexibility, but it also increased their operating costs and made it harder for them to go back to a diet of much lighter crudes.  As a result, while many of them could handle significant quantities of light crude from the tight oil fields, this would be less than optimal, resulting in economic penalties and perhaps eroding the advantages that have recently enabled Gulf Coast refiners to capitalize on export markets for their products. Those penalties would translate into discounts for the tight oil grades, compared to similar international crudes, much like the large gap in value we currently see for WTI compared to Brent, though for different reasons as discussed previously.

At current production levels, the mismatch of quality and capabilities isn't as big a problem as the lack of infrastructure for transporting these crudes to market.  That has resulted in discounts so large that it makes sense for private equity firm Carlyle to plan to ship large quantities of Bakken crude by rail from North Dakota to the Philadelphia refinery they've just acquired from Sunoco.  However, if tight oil output grows in line with forecasts such as those in a recent analysis from Citibank, domestic sweet crude refiners will have more than enough supply and the excess must either be sold to heavy crude refineries at a discount or left in the ground.  That's where exports come in. 

The last time exporting domestic crude became a big issue was in the late 1980s, when output from Alaska's North Slope (ANS) field reached peak levels of roughly 2 million barrels per day, far more than west coast refineries could absorb. I was trading crude on the West Coast at the time, and I observed first-hand the effects of the export restrictions that had been put in place when the Trans Alaska Pipeline was originally approved.  Those restrictions didn't just depress the price of ANS crude; they also depressed the price of the California crudes with which ANS competed, and made both types less attractive to produce. West coast consumers benefited from a few years of lower gasoline prices than they would have otherwise paid, but the net result was less industry investment and probably higher oil imports in the long run.  By the time ANS exports were finally approved in 1996, the field was already in decline and the biggest opportunity had been missed. 

The advantages of allowing a portion of these new tight-oil streams to be exported would derive from the difference between the global market premium for crude of this quality and the typical discount paid for the lower-quality crudes that Gulf Coast refiners would continue to import in order to optimize their product yields and costs.  A difference of just $5 per barrel across a million barrels per day of exports would translate into a nearly $2 billion per year improvement in the US trade balance.  The benefits might also include higher tax revenues and royalties if exports supported higher production.  The biggest drawback I see is that in the event of a global supply disruption, some domestic crude would be committed to non-US buyers, reducing our emergency cushion.  However, that problem might be circumvented by requiring exporters to include provisions in their contracts allowing them to suspend deliveries whenever the US government released oil from the Strategic Petroleum Reserve, or a similar contingency.
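That trade-balance figure is simple arithmetic; the $5/bbl spread and the 1 million bbl/day export volume are the illustrative assumptions stated above:

```python
# Annual trade-balance impact of exporting light crude at a $5/bbl quality premium.
premium_per_bbl = 5.0      # assumed $/bbl spread vs. the heavy crudes still imported
export_volume_bpd = 1e6    # assumed export volume, bbl/day

annual_benefit = premium_per_bbl * export_volume_bpd * 365
print(f"Annual trade-balance improvement: ${annual_benefit / 1e9:.2f} billion")
```

That works out to about $1.8 billion per year, "nearly $2 billion" before counting any second-order effects on taxes, royalties or production.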

Perhaps the best summary of the benefits that US oil exports could provide was given by President Clinton, when he authorized exports from the Alaskan North Slope: "Permitting this oil to move freely in international commerce will contribute to economic growth, reduce dependence on imported oil and create new jobs for American workers."  It's probably premature to provide a similar exemption for tight oil now, but it's certainly not too soon to start the national debate that should precede such a decision.

Wednesday, July 11, 2012

The 2013 US Energy Agenda

It's tempting to focus mainly on the energy issues that have come up in the context of the presidential campaign, such as the Keystone XL pipeline, tax breaks for energy companies, and whether and how to regulate hydraulic fracturing, a.k.a "fracking".  Yet whoever is inaugurated next January, and however he resolves these issues, he will also face a much wider array of energy concerns, including some that are outgrowths of current policies or have emerged after a long gestation.  Though not intended as an exhaustive list, here are a few such issues that merit close attention from the next president's energy team.

They should begin by taking a fresh and objective look at the overall US energy posture and devising a clear and concise way to describe it to the public.  Big changes have taken place, with many of the issues that preoccupied us for the last decade or longer having become less relevant or out of date.  Topping that list is the sense of energy scarcity that has burdened us since the oil crises of the 1970s and early 1980s.  There's a realistic possibility that the combination of "tight oil" and the gas liquids production from shale gas could push domestic US petroleum/liquids production back above its early '70s peak of around 11 million barrels per day. At the same time, our net oil imports are declining, due in large part to the weak economy.  However, as the share of fuel-efficient vehicles in our car fleet increases, it's reasonable to think that we've already seen the peak of US demand for petroleum fuels, even after the economy returns to healthy growth.  The net result might fall short of energy independence, but it will put us in a much better position than our largest economic rivals in terms of real energy security. 

Then there's shale gas.  Not only has it reversed a worrisome decline in US natural gas production that prompted numerous projects to import liquefied natural gas (LNG), but it has upended our assumptions about future prices and emissions in the electric power sector, while completing the divorce of oil and electricity that began in the 1980s.  Now we're talking seriously about exporting natural gas. When you combine all these changes with biofuels that are contributing roughly a million barrels per day to US supply (in volumetric, though not BTU-equivalent terms) the need to revisit some of our most basic assumptions about energy looks compelling. 

Energy scarcity isn't the only paradigm that needs to be rethought.  The current administration apparently took office with a view that was prevalent in the environmental community and among some in energy circles, that the solutions to climate change and energy security were effectively synonymous and synergistic.  That view predates the shale/tight oil revolution and was founded on the notion that renewable energy and efficiency were the only serious answers to both concerns.  That linkage was always oversimplified, because it ignored the trade-offs inherent in the shortcomings of every energy technology available.  And now, thanks to unexpected technological developments, we face an explicit choice between energy abundance based on hydrocarbons and a lower-emissions future based on renewables and electric vehicles that won't reach the required scale for decades, despite promising early signs. The transition from the former to the latter looks likely to be long and largely unpredictable, and it won't be cheap. 

The next administration also faces a set of practical issues, along with the big-picture reframing described above. Two of these issues involve urgent tasks.  The first is the growing need for a thorough evaluation of the recent and current approach to incentivizing renewable energy technologies and projects.  Since early 2009 we've spent tens of billions of dollars on a constellation of federal grants, tax credits, and loan guarantees to stimulate the growth of a domestic renewable and advanced energy industry and the deployment of its products. There's a lot of new hardware on the ground, but the sustainability of this industry looks uncertain. Although only a fraction of the companies that received federal support have failed, the tally has grown large enough--with the addition of Abound Solar last week--that it's no longer acceptable merely to shrug off these losses as par for the course.  We need some hard-nosed, detail-oriented outsiders to conduct a comprehensive post-expenditure review and extract the major lessons learned.  That should be an absolute prerequisite before anyone contemplates renewing or expanding any of these programs, including the Pentagon's $210 million "green fleet" program.

Another urgent clean-up task is the reform of the federal Renewable Fuel Standard (RFS).  This 2007 mandate was premised on the imminent arrival of cellulosic biofuel technologies that have turned out to be much harder than expected to transfer from demonstration to commercial scale.  That has resulted in drastic annual revisions to the cellulosic biofuel targets of the mandate, but even these lower targets have not been achieved.  Instead, the EPA imposes penalties on refiners and gasoline blenders for failing to blend non-existent volumes, with consumers ultimately absorbing the higher costs at the pump.  The attractive vision of abundant renewable fuels has thus turned into a bureaucratic game.  And while corn ethanol supplies 10% of gasoline and consumes nearly 40% of the US corn crop, it cannot more than double to meet the entire 36 billion gallon per year RFS target for 2022, nor should we wish it to.  Instead, the RFS must be updated to reflect reality, and the associated biofuel-credit trading system should be restructured to squeeze out the fraud that is infecting it, instead of leaving refiners and blenders--and again ultimately consumers--to pick up a tab estimated at $200 million.

These items don't constitute an entire energy agenda by themselves, but together with a few higher-profile proposals from among those that both campaigns will announce and debate during the next four months, they could fill out a worthy first-hundred-days' energy plan for 2013.

Thursday, July 05, 2012

A Sign of Sanity in Solar Manufacturing

I've been writing for some time about the chronic overcapacity in global solar manufacturing and the consolidation this is likely to produce.  Now here's a sign that at least one company realizes how bad the situation is.  GE is apparently delaying the construction of its previously announced Aurora, Colorado, thin-film solar panel factory, and "taking this opportunity to re-look at our solar strategy."  I couldn't find a GE press release to back this up, but it's been reported by RECharge and confirmed by Forbes.  It's easy to read too much into a single event, but I think this looks significant, particularly in the wake of Monday's Chapter 7 bankruptcy filing by Abound Solar, incidentally another recipient of a sizable federal renewable energy loan guarantee.

If this information is correct, GE is backing away--for at least 18 months--from building a 400 MW thin-film photovoltaic (PV) solar line in Colorado.  That suggests that they have concluded that even a brand new facility using the latest technology and large enough to compete on scale with thin-film leader First Solar wouldn't be able to earn an attractive margin in this market.  And as a global competitor, GE would presumably regard the new US tariffs on China-based PV manufacturers as insufficient to resolve a global PV capacity glut in which excess capacity appears to be roughly as large as demand itself, despite the latter's continued rapid growth.

In the last year I've seen numerous articles and blog posts attributing the recent PV price declines to the predicted scale-related effects that have long anchored the industry's central narrative: If we build and deploy enough PV, the cost will fall to the point at which it will be competitive with conventional electricity generation.  That may still be true in the long run, but few of these advocates seem to have understood that the industry was getting ahead of its own narrative--that a big slice of the recent price declines was the result of intense competition among producers who over-expanded and whose margins have contracted sharply or turned negative in the process.  That's a good reason for GE to hit the pause button and focus on improving its technology in the lab, rather than the fab, while other, less well-capitalized firms struggle to survive long enough to participate in the expected growth surge when solar reaches "grid parity" on a sustainable basis.

PV is an important energy technology with a bright future, but its present doesn't look so great.  It's not unusual for manufacturing industries to experience boom-bust cycles, though in my experience those are more common in commodities like chemicals and fuels.  However, it is distinctly unusual for governments to contribute so much to the inflation of the boom part of the cycle through a wide array of incentives, loan guarantees and loans to manufacturers and with subsidies--in some cases extravagantly generous ones--to the industry's customers.  Such interference may have been necessary to jump-start PV supply and demand, but it will almost certainly make for a harder and messier landing for companies, investors and employees, and in cases like that of Abound Solar for taxpayers.  

Friday, June 29, 2012

Could Oil's Surge Sink Renewable Energy?

A new forecast of global oil production by the end of the decade attracted a fair amount of attention this week.  The study, from Harvard's Kennedy School of Government, indicates that oil production could expand by about 20% by 2020 from current levels.  The Wall St. Journal's Heard on the Street column cited this in support of the view that the influence of "peak oil" on the market has itself peaked and fallen into decline.  I was particularly intrigued by a scenario suggested in MIT's Technology Review that this wave of new oil supplies could trigger an oil price collapse similar to the one in the mid-1980s that helped roll back the renewable energy programs that were started during the oil crises of the 1970s.  That's possible, though I'm not sure this should be the biggest worry that manufacturers of wind turbines and solar panels have today.

The Harvard forecast is based on a detailed, risked country-by-country assessment of production potential, with the bulk of the projected net increase in capacity from today's level of around 93 million barrels per day (MBD) to just over 110 MBD coming from four countries: Iraq, the US, Canada and Brazil. However, the study's lead author, former Eni executive Leonardo Maugeri, sees broad capacity growth in nearly all of today's producing countries, except for Iran, Mexico, Norway and the UK.  Although this view of oil's trajectory is diametrically opposed to the one promoted by advocates of the peak oil viewpoint, it is accompanied by the customary caveats about political and other risks, along with new concerns about environmental push-back.  The latter point is particularly important, since much of the expansion is based on what Mr. Maugeri refers to as the "de-conventionalization of oil supplies", based on the expansion of unconventional output from heavy oil, oil sands, Brazil's "pre-salt" oil, and the "tight oil" that has reversed the US production decline.

Although this de-conventionalization trend is very real, it's one thing to envision a shift to an environment in which oil supplies could accommodate, rather than constrain, global economic growth; it's another to see these new supplies bringing about an oil price collapse.  It's helpful in this regard to consider the three previous oil-price collapses that we've experienced in the last several decades.  The mid-1980s collapse is the one that Kevin Bullis of Technology Review seems to have latched onto, because much like today's expansion of unconventional oil, the wave of new non-OPEC production that broke OPEC's hold on the market was the direct result of the sharp oil price increases of the previous decade, after allowing for inherent development time lags. The analogy to this period looks even more interesting if the new Administrator of the Energy Information Administration is correct in speculating that the US government might be willing to allow exports of light sweet crude from the Bakken, Eagle Ford and other shale plays, to enable Gulf Coast refineries to continue to run the imported heavy crudes for which they have been optimized at great expense.  That could dramatically alter the dynamics of the global oil market.

However, I see two significant differences in the circumstances of the 1980s price collapse, compared to today. First, oil consumption was then dominated by a small number of industrialized countries, the economies of which were still much more reliant on oil for economic growth than they are today. Second, these economies were already emerging from the major recession of the late-1970s and early '80s--a downturn in which the 1970s' energy price spikes played a leading role.  For example, US GDP grew at an annual rate of 7.2% in 1984, the year before oil prices began their slide from the high $20s to mid-teens per barrel.  So when new supplies from the North Slope and North Sea came onstream, the market was ready and eager to use them.  Lower, relatively stable oil prices persisted for more than a decade.

Current global economic conditions have much more in common with either the late-1990s Asian Economic Crisis or the combined recession and financial crisis from which we're still emerging.  Each of these situations included a short-lived global oil price collapse that ended when OPEC constrained output and the economy moved past the point of sharpest contraction.  The late-90s oil price collapse looks especially relevant for today, because increased production contributed to it.

A new factor that would tend to make any oil-price slump due to unconventional oil self-limiting is its relatively high cost.  Mr. Maugeri makes it clear that his output forecast depends on prices remaining generally above $70/bbl, and that any drop below $50-60/bbl would result in curtailed investment and slower expansion.  The picture that this paints for me is one in which new oil supplies would be there if we need them to meet growing demand but not otherwise.  That should narrow the implications of such an expansion for renewable energy.

As Mr. Bullis reminds his readers, the connection between oil and renewable energy is much more tenuous than many of the latter's proponents imagine.  The US gets less than 1% of its electricity supply from burning oil, so technologies like wind and solar power simply have no bearing on oil consumption, and vice versa.  That is less true outside the US, but the trends there are also moving in this direction.  So other than for biofuels, a steep drop in oil prices for any reason would have little impact on the rationale for renewables, except perhaps psychologically.  The two factors on which renewable energy investors and manufacturers should stay focused are the economy and the price of natural gas, against which renewables actually do compete and have generally been losing the battle recently.

Time will tell whether the Harvard oil production forecast turns out to be more accurate than other, more pessimistic views.  Yet while a drop in oil prices due to expanding supply wouldn't do any good for renewables, the single biggest risk the latter face is the same one that would be likeliest to trigger a major oil price collapse: not surging unconventional oil output, the impact of which OPEC will strive hard to manage, but a return to the kind of weak economy and frozen credit that we should all be able to recall vividly.  If anything, the consequences for renewables from that risk look much bigger today than a couple of years ago, because of the global overcapacity in wind turbine and solar panel manufacturing that built up as the industry responded to policy-induced irrational exuberance in several key markets.

Wednesday, June 27, 2012

Does All-of-the-Above Energy Include Long Shots?

An article in Tuesday's Washington Post described the current funding woes of US research into nuclear fusion, focused on anticipated budget and job cuts at the Princeton Plasma Physics Laboratory, MIT and several other sites.  Aside from the general challenge of funding all of the Department of Energy's programs at a time of huge federal deficits and ballooning debt, it appears that domestic fusion research is being cut mainly to meet our commitments to the International Thermonuclear Experimental Reactor (ITER) being built in France.  The article goes on to suggest that fusion has been excluded from the list of "all-of-the-above" energy technologies that the administration has embraced.  That raises questions that would merit attention at any time but seem particularly relevant in an election year.

Before discussing its proper priority in US federal energy research and planning, it's important to recognize, as the article does, that fusion is very much a long-shot bet.  We know that nuclear fusion works, because it's the process that powers our sun and all the stars.  However, that doesn't guarantee that we can successfully harness it safely here on earth for our own purposes.  I've heard plenty of energy experts who think that the only fusion reactor we need is the one 93 million miles away, which remains the ultimate source of nearly all the BTUs and kilowatt-hours of energy we use, except for those from nuclear (fission) power plants and geothermal energy. 

Unfortunately, the challenges of harnessing the sun's energy bounty in real time, rather than via the geologically slow processes that produced fossil fuels or the faster but still ponderous growing cycles of biofuels, are distinctly non-trivial.  Hence the debate about whether and how to overcome the intermittency and cyclicality of wind and solar power: through optimized geographic dispersal, clever use of Smart Grid technology, or energy storage, which requires its own breakthroughs to become an economical enabler of wind or solar. A working fusion reactor would provide an end-run around all those problems and fit neatly into our current centralized power grid, with what are expected to be negligible emissions and little long-term waste.  Who wouldn't want that?

Of course fusion power isn't easy, either; it's the definition of difficult.  Scientists around the world have been chasing it for at least five decades.  I recall eagerly reading about its potential when I was in my early teens.  Then, it was seen to be 30-40 years from becoming commercial, and that's still a reasonable estimate, despite significant progress in the intervening decades.  I admit I don't follow fusion research nearly as closely as I used to, in all its permutations of  stellarators, tokamaks, laser bombardment chambers and other competing designs, all pursuing the elusive goal of "net energy"--getting more energy back than you must put into achieving the temperatures and pressures necessary to fuse the chosen hydrogen isotopes.

So where does a high-risk, high-reward investment like fusion fit into the concept of all-of-the-above energy that now dominates the energy debate on both sides of the political aisle, and in the trade-offs that must accompany any serious energy strategy or plan for the US?  After all, "all of the above" is an attempt to recognize the widely differing states of readiness of our various energy options, the time lags inherent in replacing one set of sources with another, and the need to continue to supply and consume fossil fuels during our (long) transition away from them.  While I've never seen an official list of what's in and what's out, my own sense of all of the above is that it's composed of technologies that are either commercial today or that have left the laboratory but still require improvement and scaling up to become commercial.  In contrast, fusion hasn't left the lab and it's not clear when or if it will, at least on a timescale that's meaningful either for energy security or climate change mitigation. No one can tell us when the first fusion power plant could be plugged into the grid, and every attempt at predicting that has slipped, badly. 

Fusion wasn't mentioned once in the Secretary of Energy's remarks to Congress concerning the fiscal 2013 Energy Department Budget, and it was only shown as a line item in his latest budget presentation.  Yet I can't think of any other new technology that's customarily included in all of the above that has even a fraction of fusion's potential for delivering clean energy in large, centralized increments comparable to today's coal or nuclear power plants.  We could spend all day arguing whether that's as desirable now (or in the future) as it was just a few years ago, but from my perspective it contributes to the option value of fusion.  No one would suggest fusion as a practical near-term alternative, but with the prospect of a shale-gas bridge for the next several decades, it might be an important part of what we could be bridging towards.

Overall, the DOE has budgeted just under $400 million for fusion R&D in fiscal 2013, out of a total budget request of $27 billion.  That's not insignificant, and devoting 1.5% of the federal energy budget to fusion might be about the right proportion for such a long-term endeavor that is decades from deployment, relative to funding for medium-term efforts like advanced fission reactors and near-term R&D on renewables and efficiency.  The problem is that DOE is cutting deeply into US fusion capabilities, not just at Princeton but also at Lawrence Berkeley Laboratory, Livermore, Los Alamos and Sandia, in order to boost US funding for ITER from $105 million to $150 million next year. Only the fusion budgets for Oak Ridge Laboratory, which is managing the US role in ITER, and for the D.C. HQ grew.

I'm certainly not against international cooperation in science, which has become increasingly important as the costs of "big science" projects expand.  However, even if ITER represented the very best chance to take fusion to the next level on its long path to deployment, the long-term implications of these cuts for US fusion science capabilities look significant.  As with the space program, once the highly trained and experienced fusion workforce and teams are laid off and broken up, it becomes enormously difficult to reconstitute them, if needed.  This is particularly true of those with advanced degrees in fields that have declined in popularity at US universities, or for which the majority of current graduates are non-US students who will return to their countries of origin in search of better opportunities.  I wouldn't support keeping these programs going just to provide guaranteed employment for physicists, but we had better be sure that we won't need them later.  I am skeptical that we can be sufficiently certain today of the likely deployment pathways for fusion to be able to make such an irreversible decision with confidence.

I understand that in times like these we must make tough choices; that's the essence of budgeting.  I'm also sympathetic to those who might think that fusion researchers have had ample time and support to deliver the goods, already.  Yet I can't help being struck by the contradiction of a DOE budget in which US R&D for such a long-term, high-potential technology is cut, at the same time that Secretary Chu and the President are pushing hard for multi-billion dollar commitments to extend the Production Tax Credit for renewable energy and reinstate the expired 1603 renewable energy cash grant program, a substantial portion of the past benefits from which went to non-US manufacturers and project developers. The total 2013 budget cuts for the US fusion labs are equivalent to the tax credits for a single 90 MW wind farm, which would contribute less than 0.01% of annual US power generation.  Although we clearly can't fund every R&D idea to the extent researchers might wish, I believe it is a mistake to funnel so much money--about 40% of which must be borrowed--into perpetual support for the deployment of relatively low-impact and essentially mature technologies like onshore wind, when the same dollars would go much farther on R&D.
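A quick back-of-the-envelope sketch of the wind farm comparison above may be useful.  The roughly $22/MWh, ten-year Production Tax Credit is the actual 2012 figure, but the 33% capacity factor and 4,000 TWh of annual US generation in the sketch below are round-number assumptions for illustration, not figures from the budget documents:

```python
# Back-of-the-envelope check of the 90 MW wind farm comparison.
# The $22/MWh PTC is the 2012 value; the capacity factor and total
# US generation figures are illustrative assumptions.
PTC_PER_MWH = 22.0          # $/MWh, paid for 10 years
CAPACITY_MW = 90
CAPACITY_FACTOR = 0.33      # assumed for a typical onshore wind site
HOURS_PER_YEAR = 8760
US_GENERATION_TWH = 4000    # assumed annual US generation, circa 2012

annual_mwh = CAPACITY_MW * CAPACITY_FACTOR * HOURS_PER_YEAR
ten_year_credits = annual_mwh * PTC_PER_MWH * 10
share_of_generation = annual_mwh / (US_GENERATION_TWH * 1e6)

print(f"Annual output: {annual_mwh:,.0f} MWh")
print(f"Ten-year tax credits: ${ten_year_credits / 1e6:.0f} million")
print(f"Share of US generation: {share_of_generation:.4%}")
```

On these assumptions, a single 90 MW wind farm earns on the order of $55-60 million in credits over ten years while supplying well under 0.01% of US electricity, consistent with the comparison in the text.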

Wednesday, June 20, 2012

Does Energy-Related Drilling Trigger Earthquakes?

Last week the National Research Council published a comprehensive study of the seismic hazards and risks of a variety of energy-related drilling activities.  Despite widely publicized reports of drilling-related quakes in Ohio and Arkansas, the report concluded that such events are very rare, compared to both the total number of wells drilled and to naturally occurring earthquakes.  Nor are the technologies with the highest rates of induced seismicity necessarily the ones that come first to mind.  Rather than ignoring these risks because of their rarity, the committee of university and industry experts that produced the report recommended the development of new protocols for monitoring and managing these risks, as well as further research into the potential for induced seismicity from emerging technologies like carbon capture and storage (CCS).

The study encompassed four categories of energy-related drilling: oil & gas exploration and production, geothermal energy, liquid disposal wells, and CCS.  Within oil & gas, the committee looked at conventional production and "enhanced recovery", along with hydraulic fracturing or "fracking"; the latter two techniques involve pumping water or some other fluid into a reservoir to stimulate production.  For geothermal, they considered conventional geothermal, both liquid- and vapor-dominated reservoirs, and "enhanced" or engineered geothermal systems, which pump fluid into hot, dry rock to extract useful heat.  They found recorded seismic events in all categories and sub-categories, though again the numbers are small, particularly for quakes large enough to cause damage: fewer than 160 recorded events globally over magnitude 2.0 within a period of about 30 years, from a well population in the millions and against a natural annual background of 1.4 million small earthquakes of magnitude 2.0 or greater and more than 14,000 larger quakes of 4.0 or greater.

In assessing the incidence of seismic events attributed to or suspected to have been caused by energy activities, the committee set a threshold for what they called "felt seismic events".  This is crucial, because all of these technologies routinely cause minuscule events--"microseisms"--that can be detected by a seismometer in close proximity, but would go unnoticed by anyone standing on the surface.  Magnitude 2.0 seems to be the lowest-level event likely to be felt by an observer in the vicinity, while an event of 4.0 would be accompanied by more shaking over a larger area, and thus felt by many more people.  Having grown up in earthquake country, I can attest to this.  Anything below about 4.0 would often be mistaken for a train or large truck passing by, while most damage was due to quakes of 5.0 or greater.  For comparison, last year's quake in Mineral, VA that affected the Washington Monument and National Cathedral registered 5.8. Only about a dozen of the induced seismic events included in the study were larger than that.
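The gulf between these magnitudes is easier to appreciate in energy terms.  As a rough illustration, the standard Gutenberg-Richter energy relation (log10 E ≈ 1.5M + 4.8, with E in joules--a textbook formula, not something from the NRC report) implies that each whole unit of magnitude corresponds to about a 32-fold jump in radiated energy:

```python
# Approximate radiated seismic energy from the Gutenberg-Richter
# relation: log10(E) = 1.5 * M + 4.8, with E in joules.
def seismic_energy_joules(magnitude: float) -> float:
    return 10 ** (1.5 * magnitude + 4.8)

for m in (2.0, 4.0, 5.8):
    print(f"M{m}: {seismic_energy_joules(m):.2e} J")

# Each unit of magnitude is a factor of 10**1.5 (about 32) in energy,
# so a M4.0 event releases 1000 times the energy of a M2.0.
ratio = seismic_energy_joules(4.0) / seismic_energy_joules(2.0)
```

By this measure, a barely felt 2.0 releases one-thousandth the energy of a 4.0, and the 5.8 Mineral, VA quake was roughly 500 times more energetic than a 4.0.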

It's important to note that the mechanisms by which various energy-related drilling and injection processes trigger felt seismic events are fairly well understood.  Scientists and engineers have known since the 1920s that human activities can trigger quakes, and the geosciences have advanced enormously since then.  The main contributing factors identified in the report were the effect of fluid injection on increasing the pressure in the pores of subsurface rocks near faults, along with the "net fluid balance", which they defined as the "total balance of fluid introduced into or removed from the subsurface."  As a result of these factors, drilling approaches in which the net fluid balance isn't materially altered, such as in waterflood enhanced oil recovery, or for which the changes are short-lived, as in hydraulic fracturing, tend to have very low rates of inducing felt seismic events.   In particular, the study found only one documented felt seismic event, of magnitude 2.8, attributable to shale fracking, out of 35,000 fracked shale gas wells. 

By contrast, liquid disposal wells, which steadily increase subsurface pore pressure over time, along with several types of geothermal production, exhibit somewhat higher rates of felt seismic events, though these are still relatively rare and generally minor in impact.  At least theoretically, CCS seems to have a somewhat higher potential for causing seismic events, although this has apparently not been manifested in the substantial number of wells injecting CO2 for enhanced oil recovery--cited in the report as 13,000 as of 2007 and many more today.  Surprisingly, the largest quakes attributed to human activities were associated with conventional oil production, including a couple of magnitude-6+ quakes in California and one measuring 7.3 in Uzbekistan.

One of the most interesting findings in the report was that there is no single government agency in the US with jurisdiction over induced seismic events associated with energy production.  Responsibility--and capabilities--appear to straddle the Environmental Protection Agency, US Geological Survey, Forest Service and Bureau of Land Management, along with various state agencies.  The committee proposed the development of new coordination mechanisms to address these events, as distinct from the ad hoc cooperation that has taken place to date.

I'm not sure what policy makers--the report was commissioned by the Chairman of the Senate Energy and Natural Resources Committee--and the public will make of these findings.  At least from a statistical perspective the technologies assessed here look safe in terms of their seismic risks, and it would be hard to justify sweeping new regulations on the basis of this report.  (I don't know how practical the "traffic light" monitoring system the authors propose would be.)  On the other hand, with the exception of a few people in naturally quake-prone areas--including one neighbor back in California who thinks they are "fun"--earthquakes are fear-inducing, in both anticipation and experience.  Arriving at a consensus on how low a risk of felt seismic events is acceptable might not be easy, especially where natural earthquakes are rare.  Although the public's appetite for reassurance seems to be fairly low these days, it's clear that the National Research Council, an arm of the private, non-profit National Academies chartered by Congress during the Lincoln administration, sees no reason to panic about the seismic hazards and risks entailed in energy-related drilling.

Friday, June 15, 2012

Politics and The Global Cleantech Shakeout

For all the enthusiastic comparisons of the cleantech sector to infotech or microelectronics that we've encountered in the last decade, one rarely employed analogy is turning out to be more apt than the rest: Cleantech seems just as capable as dot-coms and chip makers of undergoing an industry shakeout and consolidation at the same time it experiences growth rates that most other industries would envy.  US and European solar firms continue to fall by the wayside, and this week saw the sale by the world's leading wind turbine manufacturer, Vestas, of one of its Danish plants to a China-based competitor.  Because the cleantech industry has been driven mainly by policy rather than market forces, and has thus been deeply intertwined with politics, the global shakeout now underway will continue to have political repercussions.  Should Europe's monetary problems unleash a new financial crisis, then both the cleantech shakeout and its political fallout could expand.

The strained comparisons this week between the failures of Solyndra and Konarka, a much smaller solar panel maker, likely won't be the last example of this that we'll see this year.  I can understand the temptation to link these two situations, but the contrast should have dissuaded anyone from raising the issue: on one side, an award-winning company that took more than eight years to go bankrupt in an economic and competitive environment vastly different from the one in which it was launched; on the other, a business that was already doomed on the day its half-billion dollar federal loan was inked.  The analogy looks even worse when you realize that Solyndra was only able to undertake the massive expansion that drove it into bankruptcy as a result of serious deficiencies in the DOE's due diligence process, which failed to spot the crashing price of polysilicon, whose earlier spike had underpinned Solyndra's business model.

Past shakeouts have left other industries in excellent shape, despite the pain they entailed.  Numerous US automakers went out of business during the Great Depression, which was also a period of great innovation that set up the survivors to become a pillar of the US economy for the next half-century.  It's premature to write the epitaph of US cleantech, which could yet emerge much stronger.  At the same time, have we ever experienced such a shakeout in an industry so dominated by government subsidies and industrial policy, against the backdrop of globalized competition with similarly supported industries in Europe and Asia?  The ultimate outcome looks highly uncertain.

In the long run, the administration's investments in cleantech will either look farsighted and courageous or tragically mistaken, rooted in a "green jobs" fallacy that emerged as an expedient Plan B after successive failures to legislate a price on CO2 and other greenhouse gas emissions.  Of course this year's election won't take place with the benefit of history's verdict.  Its energy aspects are likely to be dominated by the behavior of oil and gasoline prices and a potential string of further high-profile cleantech bankruptcies, if the economy remains weak.  (The list of DOE loan guarantee recipients doesn't lack for candidates.) Is it due to defects in our system or merely human nature that such events seem destined to overshadow the positive energy visions that both sides will present to voters?

Wednesday, June 13, 2012

The Summer Oil Slump

Instead of US consumers facing $5 gasoline this summer, as some analysts had predicted, we now find prices slipping well below $4 per gallon as oil prices respond to weakening demand, a stronger dollar, and steady supply growth.  Yet as welcome as this is, it's largely the result of a mountain of bad news: Not only does financial turmoil threaten the very existence of the European Monetary Union and its currency, the Euro, but economic growth in the large emerging economies is also slowing, at least partly in response to the weakness in the developed countries that constitute their primary export markets.  The engine of global growth for the next year or two just isn't obvious.  That's the backdrop for this week's OPEC meeting in Vienna.

Before we become too enthusiastic about the prospect of a period of cheaper oil, we should first put "cheap" in context.  Even ignoring West Texas Intermediate (WTI), the doldrums of which I've discussed at length, the world's most representative current crude oil price, for UK Brent, has fallen below $100 per barrel on a sustained basis for the first time since the beginning of the Arab Spring in 2011.  Yet even if it fell another $10/bbl, to about where WTI is currently trading, it would still exceed its annual average for every year save 2008 and 2011.  So while oil might be less of a drag on the economy at $90/bbl than at $120, that's still short of the kind of drop that would be necessary to provide a substantial positive stimulus, particularly when much of the decline reflects buyers around the world tightening their belts.

The US is in a somewhat better position, thanks to surging production of "tight oil" in North Dakota and onshore Texas. This has more than made up for the inevitable slide in output from the deepwater Gulf of Mexico, two years after Deepwater Horizon and the ensuing drilling moratorium. With much of the new production trapped on the wrong side of some temporary pipeline bottlenecks, parts of the country are benefiting from oil prices that are $10-15/bbl below world prices, although short-term gains are a poor reason to perpetuate those bottlenecks, rather than resolving them and allowing North American production to reach its full potential.

Then there's the issue of speculation, which some politicians blamed for the recent spike in oil prices.  To whatever extent that was true--and I remain skeptical that the impact was nearly as large as claimed--we could be about to see what happens when the dominant direction of speculation flips from "long" to "short"--bullish to bearish--as noted in today's Wall St. Journal.  Since the main effect of speculation is to increase volatility, we could see oil prices temporarily drop even further than today's weak fundamentals would suggest they should.

All of this will be on the minds of the OPEC ministers meeting in Vienna Thursday, along with the usual dynamics between OPEC's price doves and hawks.  The pressures on the latter have intensified as Iran copes with tighter sanctions on its exports and Venezuela's ailing caudillo faces a serious election challenge.  OPEC meetings are rarely as dramatic as last June's session, but the global context ensures a keenly interested audience for this one.  Given the impact of gas prices on US voters, both presidential campaigns should be watching events in Vienna as closely as any traders.  $3.00 per gallon by November isn't beyond the realm of possibility.  It would only require a sustained dip below $80/bbl.
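The crude-to-pump arithmetic behind that last sentence can be sketched simply.  A barrel holds 42 gallons, so each $1/bbl change in crude is worth about 2.4 cents per gallon; the $1.10/gal figure below for combined refining, distribution, and taxes is an illustrative assumption, roughly in line with 2012 US averages rather than an official number:

```python
# Rough translation from crude oil prices to US pump prices.
# The non-crude costs (refining, distribution, taxes) are an
# assumed lump sum, not an official figure.
GALLONS_PER_BARREL = 42
OTHER_COSTS_PER_GALLON = 1.10   # assumed refining + distribution + taxes

def pump_price(crude_per_bbl: float) -> float:
    """Estimated retail gasoline price in $/gallon."""
    return crude_per_bbl / GALLONS_PER_BARREL + OTHER_COSTS_PER_GALLON

print(f"${pump_price(80):.2f}/gal at $80/bbl crude")   # roughly $3.00
```

Under these assumptions, crude sustained below $80/bbl does indeed put gasoline in the neighborhood of $3.00 per gallon.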

Thursday, June 07, 2012

Five Stars for Robert Rapier's "Power Plays"

It's a pleasure to have the opportunity to recommend a new book by a fellow energy blogger, especially when the blogger in question has the kind of deep, hands-on industry experience that makes Robert Rapier's work so authoritative.  Robert has been communicating about a variety of energy-related topics for years, first at his own "R-Squared Energy" site, where I encountered him in about 2006, and lately at Consumer Energy Report and The Energy Collective. You should not assume from the book's title, "Power Plays: Energy Options in the Age of Peak Oil", or the image on its cover that it is just another in a long line of recent bestsellers proclaiming an imminent and permanent global oil crisis.  Robert's description of the risks of peak oil is nuanced and balanced, as is his assessment of the many other timely subjects included in the book.  The chapter on "Investing in Cleantech" is worth the price of the entire book for would-be inventors and investors, as well as for those setting or administering government renewable energy policies and programs.

In some respects this is the hardest kind of book for me to review.  It covers much of the same territory as my own writing, drawing on similar educational and career experiences, so I'm hardly representative of its intended or ideal audience.  It is also very close to the book that I've long been tempted to write, myself, after well over a thousand blog posts on the same set of topics and issues.  With those caveats, I enjoyed reading "Power Plays", mainly because despite superficial similarities, our perspectives are still different enough that I found it thought-provoking.  I even picked up a few new facts.  And I should make it very clear that although the book certainly reflects the large body of writing Robert has produced over the last half-dozen years or so, it does not read like a collection of recycled blog posts.  It is also as up-to-date as any project like this could be, including assessments of the Keystone XL pipeline controversy, the Fukushima nuclear disaster, and other recent events.

"Power Plays" is structured as an overview of the complex set of energy sources and applications in use today, including their intimate connection to domestic and geopolitics.  (The book includes a sobering, non-partisan analysis of the efforts of eight US presidents to promote energy independence.)  It is also based on an explicit point of view about the need to reduce our dependence on fossil fuels and to attempt to mitigate human influence on climate change, while being exceptionally realistic about our available options and likely success.  Robert has definite ideas on energy policies that would be useful, particularly in guiding our long transition away from oil.  I don't agree with all of them, but they're well-reasoned and well-articulated. 

The book is also very sound on the facts.  I didn't spot any notable errors, with the possible exception of a brief explanation of why hybrid cars are more efficient than conventional cars--in my understanding this derives from optimizing how the engine is operated and from recapturing energy otherwise lost in braking, rather than from inherent differences in energy conversion efficiency between electric and combustion motors.  Otherwise, aside from the natural differences of interpretation one would expect, Robert delivers 250 pages of straight talk about energy.

One word of warning along those lines: If you come to this book as a firm and uncritical advocate of any particular energy technology to the exclusion of most others, you should prepare either to have your feathers ruffled or find yourself questioning some of your beliefs.  That is particularly true for renewable energy and biofuels, which constitute Robert's current main focus as Chief Technology Officer of a forestry and renewable energy company.  On the other hand, if you'd like to learn more about why fuels like corn ethanol are less-than-ideal substitutes for oil, and why cellulosic biofuel is more challenging to produce and scale up than the promoters of many start-up companies would like you to think, this is a great place to start.  And in addition to the obligatory assessment of vehicle electrification and electric trains, his chapter on oil-free transportation features a serious discussion of bicycling and walking, something it might never have occurred to me to include.  All of this is handled with rigor, ample references, and a leavening of tables and graphs that shouldn't overwhelm those who are more comfortable with words than numbers or data.

I highly recommend "Power Plays" to my readers.  It is available in print and e-book formats from Barnes & Noble and Amazon, where it has garnered exclusively five-star ratings at this point.  I intend to post my own five-star review there when time permits.

Monday, June 04, 2012

Does A Golden Age of Gas Depend on Golden Rules for Gas?

Last Friday I was in Washington, DC for the presentation of the International Energy Agency's latest report on natural gas, "Golden Rules for a Golden Age of Gas."  It is a follow-up to last year's IEA scenario describing the enormous gas potential now being unlocked by new combinations of technology. According to IEA's chief economist, Fatih Birol, who was the lead speaker at the event at the Carnegie Endowment for International Peace, the new report addresses the key uncertainties in delivering on that promise, including the potential of new gas supplies to "fracture established balances in the world energy system."  In IEA's view the resources and technologies are in place, but environmental and social challenges represent serious potential roadblocks; overcoming those obstacles calls for a new set of principles along the lines of the ones included in the report.  Fundamentally, as Dr. Birol put it, the industry must focus on its "social license to operate" if it is to develop the massive global resources of shale and other unconventional gas to the extent now being envisioned.  I believe many in the industry would agree that that license can't be taken for granted.

The report spells out seven principles that IEA sees as prerequisites for securing the necessary concurrence from governments and publics.  While several of them merely enunciate common sense, others will likely be controversial on one side or the other--if not in theory then in their implementation.  IEA's description of these principles can be found in the report's executive summary. I would paraphrase them as:

1. Operate with transparency
2. Choose appropriate sites
3. Contain potential contaminants
4. Be vigilant with water!
5. Control emissions
6. Recognize scale
7. Regulate carefully

None of these is likely to startle my regular readers, since I've been writing about shale gas extraction and its potential economic, environmental and geopolitical consequences for several years.  The aspect of these principles that got my attention during Friday's presentation concerned IEA's admonition to "Be ready to think big."  Dr. Birol cited statistics indicating that there are currently around 100,000 unconventional wells in the US today--a figure that might include unconventional oil wells.  Supplying the levels of shale gas forecasted by IEA and other agencies would require on the order of one million wells.  That compares to a total US well population of roughly a half-million.  Drilling on that scale requires that we get it right, because if we don't, even small consequences could compound.  However, Dr. Birol also made it very clear that in the view of the IEA, the industry is entirely capable of getting it right.

The findings of this report--at least the high-level findings--have been widely embraced across the environmental and business spectrum.  Supporters include the American Petroleum Institute and the Investor Environmental Health Network, while EPA Assistant Administrator Gina McCarthy, who was also on Friday's panel, seemed to place her agency's regulatory approach to shale gas in the context of IEA's principles.  The harshest criticism I've seen so far is that while the report acknowledges the industry's work on best practices, it fails to recognize that much of this is already standard practice, at least in the US.  Along those lines, API and the American Natural Gas Alliance (ANGA) are jointly issuing a new report today on methane emissions from hydraulically fractured ("fracked") gas wells.

IEA's report and the early reactions to it clearly illustrate that despite the many thousands of unconventional gas wells that have already been drilled, and the dramatic impact of shale gas on both natural gas prices and gas-dependent industries, we are still in the early days of a possible global energy revolution.  The extent of that revolution hasn't yet been determined, and it will be shaped as much by the reactions of numerous stakeholders as by the investment plans of producers. Whether you see that as a good or bad thing, it's an indisputable feature of the world in which we now live.  However, I don't think it's appropriate to view IEA's seven principles exclusively as a set of rules to be imposed on a reluctant industry; they're as much about getting the rest of society comfortable with an energy resource that could provide enormous economic and environmental benefits, globally, particularly with regard to greenhouse gas emissions.  Although Dr. Birol emphasized that unleashing all this shale gas won't be sufficient to solve the climate problem, he also demonstrated that without it, our chances of reining in emissions look even worse, because the main trade-off globally is not gas vs. renewables, but gas vs. coal.  Getting this right is crucial for many reasons, and the IEA's report looks like a helpful contribution to the dialogue that must take place.

Wednesday, May 23, 2012

Can the US Military Afford More Biofuels?

Last week the US House of Representatives passed the fiscal 2013 National Defense Authorization Act by a wide, bi-partisan margin. It included two controversial provisions relating to energy that will presumably be debated when the Senate Armed Services Committee takes up the bill this week.  Section 313 would exempt the Department of Defense from a provision of the Energy Independence and Security Act of 2007 (EISA) barring the government from purchasing alternative fuels with higher emissions than conventional fossil fuels, while Section 314 would prohibit the purchase of any alternative fuel that costs more than the conventional fuel it would replace, except for testing and certification purposes.  If enacted, the bill would require drastic revisions to the current alternative energy strategies of the US military branches. 

It would be easier to attribute these provisions to partisan maneuvering, if our economic and fiscal circumstances hadn't changed so dramatically subsequent to the passage of EISA in 2007.  Although I don't dismiss the influence of election-year politics in such matters, we are now in the third full year of a recovery so weak that many Americans still think we're in a recession, and we face deficits and a ticking debt bomb that forced a reluctant Congress to agree to deep spending cuts starting next January.  Nearly $500 billion of those cuts are targeted at military spending.  Moreover, our perspective on US energy security has been altered by the emergence of shale gas and so-called "tight oil", and by our recent shift from net importer to net exporter of petroleum products--though certainly not of crude oil.  While it remains desirable for the US military to diversify its energy sources, the value of that diversification has arguably fallen.  Meanwhile, the biofuels industry, despite tremendous growth and advances, has been unable thus far to compete with petroleum-based fuels without either large subsidies or strict mandates, even with a global price of oil that has remained consistently above $100 per barrel since January 2011. 

Last year I had a couple of opportunities to question Defense Department officials about their alternative energy strategies, as part of an Army/Air Force energy forum and a subsequent Air Force media briefing at the Pentagon.  Although I was impressed by the changing military culture concerning energy and the methodical way they were approaching the introduction of new fuels, I was concerned that at some point the services' procurement of higher-cost renewable fuels would conflict with their other priorities, including the need to replace equipment worn out in Iraq and Afghanistan and to field the next generation of aircraft and naval vessels.  What I thought I heard very clearly from the Air Force Deputy Assistant Secretary for energy was that his service was not going into the fuel-production business, and would only buy renewable fuels--other than for certification with their fleet--if they were competitive with conventional fuels. That approach seems very different from the one embodied in the Navy's "Great Green Fleet" initiative.

The rationale behind the military's adoption of alternative fuels rests on many complex issues, including the vulnerability of military supply chains and budgets to potential disruptions in oil supplies and price spikes, consistency with the government's imposition of renewable energy mandates on the private sector, and the desirability of reducing the environmental footprint of the military's global activities.  There's also the human dimension of personnel put at risk delivering fuel to front-line units, although it's not clear how biofuels would alleviate that risk unless they were produced in forward locations. In any case, however, all these concerns must be reconciled with a realistic response to budget constraints. That looks extremely challenging, and it shouldn't be divorced from deeper questions about the evolving drivers for biofuels or other alternative fuels for the US military.

Consider the question of supply disruptions, for example.  US oil production looks set to continue increasing and oil imports to keep falling, while we now enjoy a refining surplus that is supporting new product exports.  We also have a Strategic Petroleum Reserve that could replace up to half of our net crude oil imports for up to 5 months, or a smaller disruption for much longer.  As a result of these factors, it's become more difficult to envision a scenario in which an oil market event affected the military's access to fuels in a manner that the present renewable energy industry could alleviate.  And with the cost of most alternatives still above even today's elevated prices for oil and its products, the investment required to develop an alternative fuel industry capable of making a meaningful dent in the military's needs under such a scenario would be very substantial.  Should the military make that investment, should someone else, or should it be left to the market?  And that doesn't begin to address the issues related to the non-renewable alternative fuels that would be enabled by Section 313, including synthetic fuels derived from natural gas or coal, though these would still be subject to the restriction that they must be price-competitive with conventional fuels. 
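The Strategic Petroleum Reserve claim above is easy to sanity-check with round numbers. The inventory and import figures below are my own 2012-era approximations, not numbers from official filings:

```python
# Sketch of the Strategic Petroleum Reserve coverage arithmetic.
# Both inputs are round 2012-era assumptions, not official figures.
SPR_BARRELS = 700e6          # approximate SPR inventory, barrels
NET_CRUDE_IMPORTS = 8.5e6    # approximate US net crude imports, barrels/day

half_imports = NET_CRUDE_IMPORTS / 2          # replace half of net imports
days_of_cover = SPR_BARRELS / half_imports
months_of_cover = days_of_cover / 30

print(f"Replacing half of net crude imports: {days_of_cover:.0f} days "
      f"(~{months_of_cover:.1f} months)")
```

On those assumptions the reserve covers half of net imports for roughly five months, consistent with the figure cited above; a smaller disruption would stretch the coverage proportionally longer.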

I suspect that the House bill will not be the last word on this subject, though I also imagine that in the new world of "sequestered" budgets and the fiscal challenges that lie ahead, the US military may need to rethink what can be achieved in this area without sacrificing readiness and combat capabilities. It's also important to note that the 2013 Defense Authorization Act's provisions on alternative fuels shouldn't affect the services' efforts to integrate renewable electricity generation, which looks like a real boon for some forward-deployed applications.


Friday, May 18, 2012

E15's Problems Are Symptomatic of A Failing Biofuels Policy

A new report on automobile engine durability casts further doubt on the compatibility of mid-level ethanol blends such as E15 (15% ethanol, 85% gasoline) with the existing US light-duty vehicle fleet. The report was issued this week by the Coordinating Research Council (CRC) under the auspices of API, Global Automakers, and the Alliance of Automobile Manufacturers.  It found that at least some of the vehicles included in EPA's certification of E15 for use in cars manufactured since 2001 experienced excessive valve wear and other mechanical problems over the course of a simulated engine lifetime.  Together with previous research highlighting the risks of E15 for gas station pumps, the report's findings raise serious questions about the federal government's current ethanol policy and who will ultimately bear its hidden costs.

I've written extensively about the EPA's approval of E15 for use in vehicles and the underlying rationale for increasing the ethanol content of most gasoline beyond the 10% limit (E10) for which most cars on the road today were designed.  At current volumes, domestically produced corn-based ethanol accounts for roughly 10% of all US gasoline and displaces the energy equivalent of 600,000 barrels per day of imported petroleum products.  However, without increasing the amount of ethanol blended into each gallon of gasoline, and in the absence of a miraculous transformation in the public's minuscule appetite for E85 (the 85% ethanol blend sold for flexible fuel vehicles) the US ethanol strategy has hit its natural limit.  Since US gasoline consumption, which prior to the recession routinely grew at 1-2% per year, has stalled at a level comparable to what we used ten years ago, the enthusiasm of the US ethanol industry for E15 to expand its market is entirely understandable.
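The displacement figure above follows from simple blending arithmetic. In the sketch below, the gasoline demand figure and the ethanol-to-gasoline energy ratio are my assumptions for illustration:

```python
# Rough check of the 600,000 bbl/day displacement figure.
# Inputs are round assumptions: ~9 million bbl/day of US gasoline demand,
# and ethanol carrying about two-thirds the energy of gasoline.
GASOLINE_BPD = 9.0e6          # US gasoline consumption, barrels/day
ETHANOL_SHARE = 0.10          # E10 blend: 10% ethanol by volume
ETHANOL_ENERGY_RATIO = 0.67   # ethanol energy content relative to gasoline

ethanol_bpd = GASOLINE_BPD * ETHANOL_SHARE
displaced_bpd = ethanol_bpd * ETHANOL_ENERGY_RATIO

print(f"Ethanol blended: {ethanol_bpd:,.0f} bbl/day")
print(f"Gasoline-equivalent displaced: ~{displaced_bpd:,.0f} bbl/day")
```

With those inputs the energy-equivalent displacement comes out right around 600,000 barrels per day, which is why the 10%-by-volume blend wall matters so much to the industry's growth prospects.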

The CRC's results have been criticized by both the ethanol industry and the Department of Energy.  Although I don't have the background to judge CRC's report assumption by assumption and result by result, it does appear that many of the criticisms raised by the DOE were addressed in the body of the report, including the choice of ethanol-free gasoline as the reference fuel.  As for complaints that the auto and oil producers have a vested interest in making E15 look bad, ethanol producers are at least as conflicted.  Moreover, without impugning the integrity of the fine folks at the DOE, the federal government also has a significant conflict of interest in this matter: The administration and its cabinet agencies are stewards of a 2007 national biofuels policy that now depends on the adoption of mid-level ethanol blends like E15 if it is to have any chance of reaching its goal of 36 billion ethanol-equivalent gallons per year by 2022, from around 15 billion gallons per year today.  The apparent damage to some engines running on E15 under test conditions similar to those used by the car manufacturers for their own product testing highlights risks that must be addressed before consumers should be asked to put this fuel into their cars.

In addition to concerns about the safety of this fuel for the mechanical integrity of the tens of millions of vehicles for which the EPA has approved it, including both of my family's vehicles, E15 still faces substantial practical obstacles to its widespread distribution--obstacles that will likely require significant new federal funding to overcome. My industry contacts tell me that gasoline retailers considering selling E15 must install brand new gas pumps, because the nationally recognized testing laboratories like UL won't certify existing product dispensers for use with E15.  Anyone who ignores this requirement faces serious liabilities and could end up in violation of local fire codes.  So not only would retailers selling E15 instead of E10 be excluding a large portion of their existing market--at a minimum all pre-2001 cars--but they would have to make significant investments to do so.  Such investments are unlikely to be repaid by higher prices for E15 than for E10, because if anything, E15 should sell for a discount to E10.  Its nearly 2% lower energy content than E10 would translate into a requirement for roughly one extra fill-up a year for the average driver, or a penalty of about $37 per year at current prices.
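The "one extra fill-up" and "$37 per year" figures above can be reproduced from blend energy contents. The driving distance, fuel economy, and pump price below are my own plausible 2012-era assumptions:

```python
# Back-of-the-envelope check of the E15 mileage penalty vs. E10.
# Blend energy contents derive from an assumed ethanol-to-gasoline
# energy ratio of 0.67; driving and price inputs are 2012-era guesses.
ETHANOL_ENERGY_RATIO = 0.67

def blend_energy(ethanol_frac):
    """Energy content of an ethanol blend relative to pure gasoline."""
    return (1 - ethanol_frac) + ethanol_frac * ETHANOL_ENERGY_RATIO

e10, e15 = blend_energy(0.10), blend_energy(0.15)
penalty = 1 - e15 / e10           # fractional energy loss moving E10 -> E15

MILES_PER_YEAR = 12_000
MPG_ON_E10 = 22
PRICE_PER_GALLON = 3.90

base_gallons = MILES_PER_YEAR / MPG_ON_E10
extra_gallons = base_gallons * penalty / (1 - penalty)

print(f"E15 energy penalty vs. E10: {penalty:.1%}")
print(f"Extra fuel per year: ~{extra_gallons:.0f} gallons "
      f"(~${extra_gallons * PRICE_PER_GALLON:.0f})")
```

The penalty works out to just under 2%, and for an average driver that's on the order of nine or ten extra gallons a year--roughly one modest fill-up, costing in the neighborhood of $37 at prevailing prices.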

There's an obvious solution for the risks that the administration is asking motorists to take on with E15 fuel.  Since neither vehicle manufacturers nor fuel retailers are prepared to accept the liability for excessive engine wear or fuel system damage from using E15 instead of E10 or purer gasoline--a position to which Congress is considering granting statutory protection--the federal government should step up to this role.  The DOE and EPA claim E15 is safe for cars.  In the private sector, such claims would have to be backed up by warranties, explicit or implied.  Why should this situation be any different?  The President should therefore instruct DOE and EPA to carve out a portion of their annual budgets--after cuts--to fund a new federal warranty program for vehicles damaged by E15.  If these agencies are unwilling to stand behind their assessment of E15, then perhaps this fuel is not as ready for prime time as they suggest.  In any case, foisting this liability on consumers would represent a hidden and likely regressive new tax.

Wednesday, May 16, 2012

Are Chesapeake's Problems A Red Flag For Shale Gas?

Chesapeake Energy has been in the news a lot lately, concerning both the significant challenges it faces in financing its ambitious development program, and its high-profile CEO, who was recently forced to relinquish his role as Chairman.  The company's stock is trading at half its value of a year ago and less than a fourth of its 2008 peak.  Chesapeake is the second-largest producer of natural gas in the US after ExxonMobil, and probably the company most associated with the shale gas revolution, yet it is struggling.  I wouldn't be surprised if skeptics regarded the firm's travails as a warning that the transformative potential of shale gas for the US has been oversold.  However, in evaluating that concern it's important to distinguish among the physical resource, the economics of the industry, and the unusual business model of this one company.  Investors and policy makers may not share the same perspective on these issues.

Start with this key fact: If Chesapeake is in trouble, it isn't because the gas resource isn't there or can't be exploited profitably at a reasonable gas price.  We'll come back to that.  Fundamentally, Chesapeake is on the wrong side of a historic divergence between US crude oil and natural gas prices, and it is playing catch-up to get on the right side of that spread.  This is the downside of natural gas that is trading for the equivalent of $15 per barrel when the global crude oil price--and the price of the most valuable US crude--is still over $100 per barrel.  That relationship is great for US energy consumers but lousy for US gas producers.  It's particularly difficult for Chesapeake, because the company has embarked on a strategy of reducing its focus on natural gas and increasing its production of liquids--crude oil and natural gas byproducts like pentane, butane and propane.  That shift requires significant new investments at a time when the cash flow from its core gas properties has fallen off, despite steadily increasing output.  It also doesn't help that Chesapeake began this strategic shift later than competitors like EOG Resources.
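The "$15 per barrel" comparison above is just an energy-equivalence conversion. A barrel of crude contains roughly 5.8 million Btu, so a gas price quoted in $/MMBtu can be restated in oil terms; the ~$2.50/MMBtu spot price in the sketch is my assumption for spring 2012:

```python
# Converting a natural gas price into oil-equivalent dollars per barrel.
# 5.8 MMBtu/bbl is the standard crude oil energy equivalence; the
# $2.50/MMBtu gas price is an assumed spring-2012 spot level.
MMBTU_PER_BARREL = 5.8

def gas_price_as_oil(gas_usd_per_mmbtu):
    """Express a Henry Hub-style gas price in $/barrel of oil equivalent."""
    return gas_usd_per_mmbtu * MMBTU_PER_BARREL

print(f"$2.50/MMBtu gas = ${gas_price_as_oil(2.50):.1f}/bbl oil-equivalent")
print(f"A $20/bbl-equivalent forward price implies "
      f"${20 / MMBTU_PER_BARREL:.2f}/MMBtu gas")
```

At $2.50/MMBtu, gas sells for about $14.50 per oil-equivalent barrel--roughly the $15 figure above, and a small fraction of $100-plus crude, which is the spread driving the whole industry toward liquids.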

Chesapeake's problems are further complicated by its business model, which has focused on finding and proving resources and then selling them down to fund the next big project.  Consider transactions such as last year's sales of one-third interests in Chesapeake's Eagle Ford and Niobrara shale holdings to China-based CNOOC for around $1.6 B plus a substantial share of development costs. This isn't your father's gas company.  But with natural gas prices so low--even a year out they're still only equivalent to $20/bbl--its existing portfolio of gas assets is worth less to potential buyers.  Other companies such as BHP that have bought shale assets in the last couple of years are reportedly considering write-downs.  As a result of these market conditions, Chesapeake is apparently looking at a variety of new funding opportunities, including selling interests in oil-bearing properties and pre-selling production streams. 

I don't have an opinion on whether they'll navigate these rapids successfully or not. I'm not in the business of providing investment advice to my readers, nor do I have any financial interest in Chesapeake.  My interest here is in the implications of Chesapeake's dilemma for the larger US energy situation.  The current low natural gas prices are clearly putting a lot of stress on independent producers, including the majority that have much simpler business models than Chesapeake's, based on acquiring leases, producing gas, and generating revenues that exceed their costs.  If low prices persist, then a lot of producers could be in deep trouble, and some of them won't make it.  That's the risk they took.  However, the more salient question is what all that means for US natural gas production, which last year broke the production record set in 1973, when we produced 60% more crude oil than today.  Could an industry shakeout lead to lower US natural gas production?

One indicator is that drilling for gas has already slowed significantly.  According to Energy Information Administration statistics, the number of exploration and development wells targeting gas is down sharply, while those pursuing more valuable oil are up.  Sometime soon that switch should be reflected in output trends, though because of the higher productivity of shale gas wells actual gas production may not decline before demand picks up.  In its latest investor presentation Chesapeake forecast producing nearly as much gas in 2013 as last year, despite its big shift to liquids.

As odd as it may sound, the future trajectory of US gas production likely depends on a race between potential supply and potential demand, both of which are enormous.  The oil & gas industry hasn't seen the likes of this since the 1980s, and perhaps not since the early 20th century. Today's shale gas output is just scratching the surface of a new resource conservatively estimated at 482 trillion cubic feet (TCF), or 96 times 2010's shale gas output of 5 TCF. Meanwhile, gas is already eating coal's lunch in the utility sector, reducing the latter's share of electricity generation by about six percentage points in the first two months of this year, compared to 2011.  The opportunity in transportation is even larger, though clearly more difficult to exploit, because of infrastructure and fleet investments.  Chesapeake seems to understand this, given the money it's investing in market development, on both advocacy and specific projects to turn gas into transportation fuels or use it directly in cars and trucks.
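To put that 482 TCF estimate in perspective, a quick calculation helps; the annual US consumption figure below (~24 TCF) is my assumption, not a number from the post:

```python
# The scale of the estimated shale gas resource, in simple terms.
# The ~24 TCF/year US consumption figure is an assumed 2012-era value.
SHALE_RESOURCE_TCF = 482       # conservative resource estimate
SHALE_OUTPUT_2010_TCF = 5      # 2010 US shale gas production
US_ANNUAL_DEMAND_TCF = 24      # assumed total US gas consumption per year

multiple = SHALE_RESOURCE_TCF / SHALE_OUTPUT_2010_TCF
years_of_demand = SHALE_RESOURCE_TCF / US_ANNUAL_DEMAND_TCF

print(f"Resource is ~{multiple:.0f}x 2010 shale gas output")
print(f"Enough to cover ~{years_of_demand:.0f} years of total US gas demand")
```

On those assumptions the shale resource alone could supply two decades of total US gas demand, which is why both producers and demand-side players see room to invest.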

If natural gas could be stored and shipped as easily as oil, the current oversupply wouldn't be a problem, nor would domestic producers be forced to pace their expansions to match the growth of demand or exports.  However, gas producers have always had to develop major new resources in tandem with markets, with that discipline often enforced by price, as we are seeing today. Time will tell whether Chesapeake expanded too rapidly, or like so many others merely missed the warning signs of the recession and financial crisis that hobbled demand growth at just the wrong time for them.  Yet no matter how their story turns out, the fate of US shale gas is not dependent on the fortunes of a single company, and there will be plenty of larger, better-capitalized players waiting to snap up the assets of any first-generation shale gas producers that don't make it.  That's because no matter how challenging today's low prices are for producers, the prices that prevailed just a year or two ago were adequate to support the growth of both production and new demand.