Tuesday, July 30, 2013

A First-Hand View of Fracking in Pennsylvania's Marcellus Shale

  • Producing shale gas requires drilling numerous wells, often in regions that have seen little or no oil & gas activity for decades. Addressing environmental and other community concerns is crucial to acceptance.
  • On a recent visit to a shale operation in Lycoming County, PA, I was impressed by the rigorous attention I saw to safety, spill prevention, soil conservation, and other impacts. 
It’s easy to talk about the shale gas revolution in the abstract and forget that it is the cumulative result of thousands of operations in locations across the country. It combines the technological marvel of precisely planned and executed drilling more than a mile below ground with the efforts of teams of skilled workers on the surface, and affects the surrounding community in many ways. Last month I had my first opportunity to visit one of these sites, near Williamsport in north-central Pennsylvania. I also saw several nearby sites in different stages of development. Although I was consistently impressed, I also tried to observe with the concerns of shale gas critics in mind.

[Photos: Anadarko well pad near Williamsport]

The Anadarko Petroleum well site, or “pad,” that I toured is located in Cogan House Township in rural Lycoming County, atop the Marcellus shale formation. API arranged this site visit for bloggers and other media and paid for accommodations in Williamsport. Anadarko provided experts from its local engineering and public affairs staffs and hosted a dinner with members of the community the evening before the site tour. No one paid me to write about the visit, nor was there any expectation that I would report anything other than my candid, objective opinion of what I saw.

I’m no stranger to industrial sites or oil fields. I’ve also invested countless hours researching and discussing shale drilling and hydraulic fracturing. When it comes to complex technical subjects like this, however, no amount of reading or YouTube videos can substitute for seeing the real thing and being able to talk to the people actually doing the job about how it all works.

One example of that is safety. Safety plans, targets and slogans are important, but the message carries more weight when the site engineer looks you in the eye and says emphatically in his own words, “The most important thing is that everyone goes home at night,” and then proceeds to explain the stop-work rules, the “red zones” that have to be clear of workers when the fracking pumps are running, and other aspects of onsite safety. We were constantly reminded to watch where we stepped and to make sure we had multiple points of contact with the ground whenever we looked at something or photographed it.

Concern for environmental impacts was similarly thorough. I consider surface spills a much bigger potential risk to groundwater than fracturing a layer of shale thousands of feet below any aquifer. The first thing I noticed at the site, all five wells of which had already been drilled and prepared for fracturing, was the floor. The entire pad was covered with a three-layer mat of black felt, HDPE plastic and fabric, to isolate any spills from the ground. The pad was also surrounded by a berm to contain any spills, which would promptly be vacuumed up by a waiting truck. They even vacuum up rainwater. Yet the real key to spill control is prevention, which in Anadarko’s case is reinforced by its “Eyes On” program. This requires an extra observer any time a liquid other than fresh water is being handled or transferred. Soil conservation efforts looked similarly scrupulous.

Another issue I asked about was noise. I couldn’t gauge it for myself, because aside from trucks delivering supplies the site was shut down during our visit. It’s not prudent to have untrained people wandering around when 30,000 HP of truck-mounted pumps are running, injecting fluids down a well at nearly 10,000 psi. When I inquired, I was told that the pumps themselves were loud, requiring ear protection nearby but not at the perimeter of the site. How far the sound carries beyond the site is a function of terrain, foliage and weather conditions.

Then there were the fluids themselves. An Anadarko engineer described the company’s approach to the five wells at this site as minimal and “green”. The fracking fluid was a simple “slickwater frac”. The main ingredients consisted of around 4 million gallons of water per well--much of it filtered and recycled from nearby gas wells--and 4-6 million pounds of sand, to prop open the fractures created by high-pressure water. The formula also includes a little hydrochloric acid for downhole cleanup, and two other ingredients: a low dose of “biocide” to prevent corrosion from bacterial growth in the well, and a friction reducer, without which significantly higher fracking pressure would be required. The details of the chemicals used at the pad will be available on the public disclosure site www.FracFocus.org once the wells are complete.

I also inquired about methane emissions during well completion. Some critics claim--incorrectly, per independent analysis--that such emissions, along with other leakage, negate the climate benefits of shale gas. Although I was told Anadarko wasn’t specifically employing “green completion” techniques at this site, it was taking steps to minimize emissions, starting with having the gas gathering pipeline ready to go. As each well is completed, it’s hooked up to production so no methane escapes. That maximizes revenue. The site also had a temporary flare to burn off any excess gas from operations before the well could be connected to the pipeline. That sounds wasteful but is environmentally preferable to venting gas.

Of course, for all the precautions and evident best practices, there’s no disguising that while it is being prepared, drilled, fractured and completed, each drilling site is a compact industrial operation and hub of activity. Numerous trucks carry water, sand, chemicals and equipment back and forth. Anadarko has improved over 200 miles of country roads to handle this traffic, while minimizing freshwater haulage by the use of water pipelines connecting its sites.

The consolation for the neighbors is that the entire process runs its course like any construction project. A few weeks or months of intense activity are followed by years of unobtrusive operation, during which gas flows into pipelines and royalties into the community. The employment and other local economic benefits this creates are significant, especially for communities that have lost many of the industries that sustained their economies in the past.

[Photo: Anadarko well]

Several of the participants at the dinner the evening before drew comparisons to wind turbines, which are much taller than the drilling rigs used for gas wells, and remain on-site for decades. That got me thinking about relative energy contributions. At Anadarko's estimated lifetime gas production (EUR) of 8 billion cubic feet, each well could generate more than 1.1 billion kilowatt-hours (kWh) of electricity in an efficient gas-fired power plant. By comparison, a 1.5 MW wind turbine would normally generate less than 80 million kWh over 20 years. So when brought online, the five wells on the one pad I visited will together produce energy equivalent to a wind farm of more than 70 turbines.
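For readers who want to check that comparison, here is the arithmetic as a rough sketch. The gas heat content, power plant heat rate, and 30% wind capacity factor are my assumed values, not figures from Anadarko:

```python
# Rough check of the gas-well vs. wind-turbine energy comparison.
# Assumed values (not from the post): gas heat content, plant heat
# rate, and a 30% wind capacity factor.
BTU_PER_CF = 1020          # heat content of pipeline-quality gas, Btu/cubic foot
HEAT_RATE = 7000           # efficient combined-cycle plant, Btu per kWh

eur_cf = 8e9               # estimated ultimate recovery per well (EUR), cubic feet
kwh_per_well = eur_cf * BTU_PER_CF / HEAT_RATE     # ≈ 1.17 billion kWh

turbine_kw = 1500          # 1.5 MW wind turbine
capacity_factor = 0.30
years = 20
kwh_per_turbine = turbine_kw * 8760 * capacity_factor * years   # ≈ 79 million kWh

equivalent_turbines = 5 * kwh_per_well / kwh_per_turbine        # ≈ 74 turbines
```

With those assumptions the five-well pad works out to the lifetime output of roughly 74 turbines, consistent with the "more than 70" in the text.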

Even if the ultimate recoveries from these wells turned out to be closer to the lower figures that third parties have estimated from the limited data available to them, rather than those used in Anadarko's calculations, the result would still be a very substantial energy yield for such a small site.

I came away from the tour with a strong impression of a well-trained and experienced team, focused on doing the job right--safely and with minimal impacts--because this is where they and their families live; the landowners from whom they lease their sites are their neighbors. And for all the truly impressive technology deployed, what really counts is the people using it. I can understand skepticism about the balance of risks and benefits from shale gas development--this is a skeptical age--but nothing I observed in Williamsport would validate such concerns. Instead, I saw a well-tuned operation that is a microcosm of the biggest US energy revolution of the last 40 years.

A different version of this posting was previously published on Energy Trends Insider.

Tuesday, July 23, 2013

Could Natural Gas Fuel a Trucking Revolution?

  • Natural gas occupies a tiny niche in transportation energy, dwarfed by oil. Conditions are now right for that disparity to begin to change.
  • Heavy-duty trucking looks like the logical beachhead for gas, with higher usage intensity and more manageable infrastructure needs than light-duty vehicles.
The International Energy Agency (IEA) released its latest Medium-Term Gas Market Report in St. Petersburg, Russia last month.  Although the IEA sees the growth of gas in the power sector slowing, they also cite its emergence as "a significant transportation fuel."  What really caught my eye was their projection that gas over the next five years would have "a bigger impact on oil demand than biofuels and electric cars combined," in light of the US shale gas revolution and tougher pollution rules in China.

That's quite an assertion, considering oil's longstanding dominance in transportation energy.  As I noted in March, Italy, Pakistan and several other countries already have well-established demand for compressed natural gas (CNG) for passenger cars.  Despite these hot spots, only 3% of gas globally is currently used in transportation, based on analysis from Citigroup.  The IEA is forecasting that transportation growth will consume 10% of the projected global gas production increase of roughly 20 trillion cubic feet (TCF) per year by 2018.  That's 2 TCF per year of additional natural gas demand in the transport sector, equivalent to 1 million barrels per day of diesel fuel.
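The diesel equivalence is straightforward to verify. The gas heat content and diesel energy content below are standard approximations I've assumed, not figures from the IEA report:

```python
# Convert 2 TCF/year of transport-sector gas demand into
# diesel-equivalent barrels per day.
# Assumed conversion factors: ~1,020 Btu per cubic foot of gas,
# ~5.77 million Btu per barrel of diesel.
gas_tcf_per_year = 2.0
btu_per_year = gas_tcf_per_year * 1e12 * 1020
DIESEL_BTU_PER_BBL = 5.77e6
diesel_bpd = btu_per_year / DIESEL_BTU_PER_BBL / 365   # ≈ 0.97 million bbl/day
```

On an energy basis that lands just under 1 million barrels per day, matching the figure in the text.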

I'd be more skeptical about that figure if I hadn't seen a presentation from Dr. Michael Gallagher of Westport Innovations at the Energy Information Administration's annual energy conference in Washington, DC last Monday.  Westport specializes in natural gas engine technology for heavy-duty trucks and played a major role in implementing the LNG vision of the ports of Los Angeles and Long Beach, CA a few years ago. 

Dr. Gallagher made a strong case for gas in heavy-duty trucking, starting with the low cost of US natural gas compared to oil and its products. Initial growth rates in several segments look encouraging, including transit buses and new trash trucks, for which natural gas now has around half the market.  Growth in China has apparently been even faster, with LNG vehicles increasing at over 100% per year (from a small base) and natural gas refueling stations growing at 33% per year since 2003.

In the US, trucking companies can save $1-2 per diesel-equivalent gallon in fuel costs, while new heavy-duty trucks equipped with natural-gas-compatible engines and fuel tanks cost $50,000-75,000 more than conventional diesel trucks. A successful transition to gas for trucking will require a combination of fuel availability, including retail infrastructure, along with high utilization to defray those up-front costs.

Gas supply looks ample for the purpose. The IEA's forecast includes an increase in US dry natural gas production (gas with the liquids removed) from 24 TCF last year to 28 TCF by 2018.  That's an increase of a little more than 11 billion cubic feet per day (BCFD).  Based on the latest assessment from the US Energy Information Administration, US gas resources equate to 87 years of production at that higher rate.
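For those keeping score, the daily-rate conversion works like this (the rounded 24 and 28 TCF inputs give just under 11 BCFD; the unrounded EIA data presumably account for the small difference):

```python
# Convert the forecast production increase from TCF/year to BCF/day.
increase_tcf_per_year = 28 - 24
bcfd = increase_tcf_per_year * 1e12 / 365 / 1e9        # ≈ 11 BCFD

# The 87-year reserve-life figure at 28 TCF/year implies a
# resource base of roughly:
implied_resource_tcf = 87 * 28                         # ≈ 2,440 TCF
```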

From my perspective achieving this scenario depends less on the availability of the gas than on the ability of new transport-sector users to compete with other segments that are equally eager to use more gas.  In the last 4 years gas demand for power generation has grown by 6.8 BCFD, mainly at the expense of coal, and there are many who would like to see that trend continue.  The IEA report also cited US LNG export projects totaling more than 5 BCFD that already have either Department of Energy approval or signed contracts.  New gas supplies won’t wait around for transportation demand to emerge.

The biggest advantage that gas’s new transportation customers have is the value they stand to gain, compared to other gas users.  US LNG projects are selling into increasingly competitive global markets paying up to three times the US wellhead price of gas of around $4 per million BTUs (MMBTU).  However, exporters must cover the cost of liquefaction and shipping, so their netback over wellhead prices might not be that large. Meanwhile gas's encroachment on coal in the utility sector has been driven mainly by its low price.  As the IEA notes, we've already seen this trend slow and reverse somewhat as US natural gas prices recovered from last year's lows. 

Against that, the US retail price of diesel fuel through mid-June of this year has averaged $3.96 per gallon, equivalent to $31/MMBTU.  With retail LNG currently available at a small but growing number of locations for under $3 per gallon, the incentive for truckers to switch fuels looks substantial.  And with a typical heavy-duty truck burning more than 10,000 gallons per year of fuel, each gas conversion is equivalent to the consumption of several dozen automobiles.
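The per-MMBTU conversion for diesel depends on its energy content, which I've assumed here at roughly 128,500 Btu per gallon (a standard figure, not one cited in the post):

```python
# Convert the average retail diesel price to $/MMBTU for comparison
# with natural gas. Assumed diesel energy content: ~128,500 Btu/gal.
diesel_usd_per_gal = 3.96
DIESEL_BTU_PER_GAL = 128_500
usd_per_mmbtu = diesel_usd_per_gal / (DIESEL_BTU_PER_GAL / 1e6)   # ≈ $31/MMBTU
```

That's nearly eight times the ~$4/MMBTU wellhead price of gas, which is the gap the retail LNG price of under $3 per gallon exploits.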

Fuel transitions take time.  One of Dr. Gallagher’s charts showed that it took more than 40 years for diesel to displace gasoline from heavy-duty trucks in the mid-20th century.  A lot could happen along the way to a multi-decade shift from diesel to LNG and CNG, including new cellulosic biofuels or battery breakthroughs.  For now, though, gas looks like a strong contender to provide a cleaner, cheaper fuel with sufficient energy density to be practical for long-distance trucking.  This is a trend worth watching.

A slightly different version of this posting was previously published on the website of Pacific Energy Development Corporation.

Tuesday, July 16, 2013

Comparing Driving Costs of EVs and Conventional Cars

  • A new Department of Energy website helps consumers compare the energy costs of EVs to non-plug-in cars by converting kilowatt-hours into "eGallons".
  • How valid this proxy is depends heavily on assumptions about the cars being compared to EVs.  If hybrids set the bar, then DOE's eGallon prices are significantly understated.
I’ve been looking through a new website developed by the US Department of Energy (DOE) to assist consumers in comparing the energy costs of driving an electric vehicle (EV), relative to posted gasoline prices in their state. I heard about this site at the US Energy Information Administration’s (EIA) annual energy conference in Washington, DC last month. It sounded like a handy tool for both current EV owners and those considering buying one, but I couldn’t help thinking about it in the context of a presentation I saw at the same conference on the cost effectiveness of federal tax credits for EV purchases. A key question in both instances concerns just what kind of car is being replaced by that new EV.

The website uses simple math, together with the EIA’s continuously updated data on gasoline and electricity prices around the country, to come up with a national and state-by-state price for an “eGallon”. This imaginary construct is essentially the quantity of electricity that would take a typical EV as far as a gallon of gasoline would take the average new conventional car. As the text points out, it’s hard for consumers to calculate this for themselves. They see gasoline prices everywhere they drive but must dig through their utility bills to find their electricity price--not always obvious--and then might not know how to compare the two.

The site’s documentation indicates the eGallon calculation is based on the average energy usage of five specific EVs, including the Chevrolet Volt, Nissan Leaf, and Ford Focus EV, along with the 2012 EPA fleet average fuel economy for what EPA defines as small and mid-size cars. The result is side-by-side postings of the US average gasoline and eGallon prices, plus a drop-down menu to replicate that for each state. The site also includes the chart below, comparing these two prices over the last decade.

[Chart: US average eGallon price vs. retail gasoline price over the last decade]

Two facts become immediately apparent. First, electricity is generally a cheaper fuel for cars than retail gasoline. That’s true for a variety of reasons, including the higher end-use efficiency of electric motors compared to internal combustion engines and the lower cost of most of the fuels used to generate electricity in the US. For example, the natural gas burned in power plants sold for the equivalent of $20.40 per barrel last year, while the global benchmark for oil averaged nearly $112/bbl. It also appears to be less volatile, at least at the level of national averages.

However, just as there’s no single gasoline price for the whole country, neither is there a single electricity price. Even the state averages used by the DOE to calculate eGallon prices mask a bewildering variety of regional electricity price tariffs and tiers. So your cost to recharge an EV might not just vary by location, but by time of year, time of day, and the specific rate plan that applies to you.

My main concern about the site derives from something much simpler: the big central assumption that EVs compete with the average cars sold in America last year. According to the eGallon site, the average small-to-medium US car in 2012 got 28.2 miles per gallon (mpg) in combined city and highway driving. Using that figure, and with residential US electricity prices averaging 11.6 ¢/kilowatt-hour (kWh) in March 2013, the national eGallon price for March would have been $1.14/gal., compared to $3.71/gal. for unleaded regular gasoline. But what if we assumed that the cars most often compared to a new EV were not average cars, but other efficient cars, as logic and my intuition suggest? If we substituted the fuel economy data for a conventional Ford Focus or Toyota Prius hybrid, the eGallon price would jump to $1.26 or $2.03, respectively.
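The eGallon arithmetic reduces to a one-line formula. The EV consumption figure below (~0.348 kWh per mile) is my back-calculated assumption from DOE's published numbers, not an official constant:

```python
def egallon_price(mpg, ev_kwh_per_mile=0.348, elec_usd_per_kwh=0.116):
    """Electricity cost to drive an EV as far as one gallon of gasoline
    takes a comparison car that gets `mpg` miles per gallon."""
    return mpg * ev_kwh_per_mile * elec_usd_per_kwh

# Average 2012 small/mid-size car (28.2 mpg): ≈ $1.14 per eGallon
average_car = egallon_price(28.2)
# A hybrid at about 50 mpg: ≈ $2.02 per eGallon
hybrid = egallon_price(50)
```

The formula makes the site's sensitivity obvious: the eGallon price scales linearly with the fuel economy of whatever car you assume the EV is replacing.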

In some respects this result is fairly obvious. If you were already contemplating buying a hybrid, an EV won’t save you as much as if you were thinking of buying a conventional mid-size sedan. However, this distinction is important enough that the DOE should consider refining its eGallon calculator. EVs are much like wind and solar installations that cost more than conventional alternatives, but are expected to produce over their lifetimes economic or environmental benefits that offset those higher costs. The attractiveness of that big up-front investment is directly proportional to those benefits. I don’t have the data that would clarify the actual comparisons EV buyers are making, but someone must, perhaps including DOE. And it turns out that this isn’t just important for calculations like eGallon, but also for assessing the cost-effectiveness of federal EV policy.

That brings me to the Congressional Budget Office’s analysis of federal EV tax credits last fall. The report merits a posting of its own, but one nugget I gleaned from the presentation at the EIA Conference was that the CBO found that the current federal credit of up to $7,500 per car was still insufficient to make most EVs cost-competitive on a full-life basis with conventional cars. Yet despite this, the effective cost to taxpayers of each gallon of gasoline saved by a Leaf-type EV was well over $6 when compared to conventional cars getting average fuel economy, and over $10 vs. high fuel-economy compact cars. That’s assuming they save any gas at all, because of the way the Corporate Average Fuel Economy rules have been structured. Implied costs for greenhouse gas emissions avoidance were even more startling, at over $400/ton of CO2 in most cases.

The desirability of a tool like “eGallon” is rooted in the convoluted way we talk about transportation fuel economy and energy costs in this country. Miles per gallon is itself a poor metric, compared to something like gallons per 100 miles, or even miles per dollar. That's because it obscures the high value of modest improvements in high-consumption vehicles, while exaggerating the value of shifting from very efficient to ultra-efficient cars. It’s also more useful for policy makers than consumers, who are ultimately concerned about outcomes in dollars per mile or dollars per trip.

Recognizing the impracticality of training 300 million consumers to think about this subject differently, eGallon might prove useful, but only as long as it is grounded in the best information we have about the vehicle choices that potential EV buyers are actually considering. Since current EV incentives apparently provide a poor return to taxpayers, an overly simplistic tool that drives consumers too far in that direction might be worse than not having such a tool at all.

A different version of this posting was previously published on Energy Trends Insider.

Thursday, July 11, 2013

Global Shale Oil and Gas Estimates Expand

  • The Department of Energy's revised shale resource estimates shed new light on the global extent of shale gas and especially shale oil potential.
  • While in the US shale gas preceded large-scale shale oil development, other countries may find fewer obstacles for the latter, and an eager market.
Recently revised estimates of global shale oil and gas resources from the Energy Information Administration (EIA) of the US Department of Energy represent a significant increase over the EIA's 2011 estimates.  Technically recoverable shale oil (tight oil) grew more than tenfold, due to the inclusion of formations outside the US, while estimated global shale gas resources rose by 10%. With these revisions, shale formations now constitute 10% of global crude oil resources and nearly a third of global natural gas resources, although the actual impact of these resources on production and markets is still likely to vary greatly from region to region and country to country.

This year's report reflects a greater focus on tight oil, incorporating insights from the significant development of US tight oil resources that has occurred since the previous report was published. Tight oil development is largely responsible for the 19% increase in US crude oil production from 2010 to 2012.  The smaller adjustment to shale gas is the net result of downward revisions for some countries assessed in 2011, such as Poland and Norway, together with the inclusion of resources in additional shale formations and countries, including Russia, Indonesia and Thailand.

The EIA and the consulting firm that prepared the report were careful to differentiate the technically recoverable resources (TRRs) identified in this data from the more restrictive categories of economically recoverable resources and proved reserves. In other words, these figures represent the quantities of oil and gas that could be recovered if prices justified development and infrastructure was available to carry them to market, not the amounts that producers currently plan to develop.  At the same time, these estimates constitute only a small fraction--as little as 5-25%--of the oil and gas thought to be present in the assessed shale deposits.  Further improvements in technology could substantially increase future TRRs.

It's interesting to note that although the US leads the world in production of both tight oil and shale gas, it ranks second and fourth, respectively, in global resources of these fuels.  The report also indicates that estimated US tight oil resources of 58 billion barrels (bbl) are more than double current proved oil reserves, which represent just under 7 years of current production.  That's significant, because a sizable fraction of the 139 billion bbls of US conventional unproved TRR--non-shale crude oil not currently included in proved reserves--sits in onshore and offshore areas currently off-limits to drilling. So shale provides a pathway for US oil production to sustain higher output than in the recent past, without having to overcome barriers such as those impeding development offshore California or in the Arctic National Wildlife Refuge. 

Or consider Russia, for which the report cites proved reserves equivalent to 21 years of production and slightly exceeding tight oil TRRs.  Russia possesses many of the factors conducive to shale development, including a large drilling fleet and an oil industry accustomed to drilling large numbers of wells, along with oil-transportation infrastructure. It remains to be seen whether Rosneft and other producers will choose to develop the Bazhenov shale and other deposits rapidly, to increase total output and exports, or more gradually, to offset declines in mature fields and maintain current production rates.

The EIA also reported 32 billion bbls of tight oil TRR in China.  Its conventional reserves are comparable to those of the US, supporting current production of less than half America's.  Without tight oil, China's economic expansion and the rapid growth of its vehicle fleet put it on track to displace the US as the world's largest oil importer within a few years.  China-based companies are seeking oil in Africa, South America and North America, so it's hard to envision them leaving their own shale resources undeveloped.

The situation is more complicated for shale-rich OPEC members like Libya and Venezuela.  For example, aside from its current political instability, Libya has nearly 90 years of conventional oil reserves at its current OPEC quota of around 1.5 million bbl/day, before considering the 26 billion bbls of tight oil identified by the EIA.
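The reserve-life figure for Libya implies proved reserves of roughly 48 billion barrels, which matches the commonly cited estimate; treat the inputs below as approximations:

```python
# Reserve life (R/P ratio) for Libya at its OPEC quota.
# Assumed proved reserves: ~48 billion bbl (approximate 2013 figure).
reserves_bbl = 48e9
production_bpd = 1.5e6
reserve_life_years = reserves_bbl / (production_bpd * 365)   # ≈ 88 years
```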

On balance, the latest EIA shale resource assessment presents a wider and more realistic view of shale outside the US than in 2011. That includes tempering some of the previous report's enthusiasm for shale gas prospects in places like Poland, where few wells had been drilled until recently. The new element is the report's portrayal of the tight oil resource base as broad and deep, centered mainly on countries likely to be motivated to develop it. The shale gas revolution may be slow to spread globally, due to much-discussed differences in the conditions for development, compared to those in the US.  By contrast the development of shale oil, or tight oil, faces fewer obstacles and an eager market.

A slightly different version of this posting was previously published on the website of Pacific Energy Development Corporation.

Monday, July 08, 2013

Early Retirement of US Nuclear Plants Is a Step Backward

  • The early retirement of the San Onofre reactor complex could increase California's greenhouse gas emissions by up to 6 million tons per year.

  • Together with other announced and plausible retirements, the loss of existing US nuclear capacity would more than offset new reactors now under construction, along with their contribution to emissions reduction. 

Last month Southern California Edison announced that the utility’s San Onofre Nuclear Generating Station (SONGS), which accounted for half of California's nuclear generating capacity, will close permanently.  The facility had nine years remaining on its operating license. The plant’s two reactors were shut down for repairs in early 2012, and the Nuclear Regulatory Commission (NRC) still hadn’t approved the company’s plan to restart them, despite a protracted review. Although this event is less dramatic than the 2011 Fukushima accident in Japan, its ripples are likely to extend beyond California, where both the state’s electricity market and its greenhouse gas emissions will be adversely affected.

Before considering how the San Onofre closures will affect the nation’s nuclear industry and generating mix, let’s focus on California. While accounting for only 3% of the state’s 2011 generating capacity from all sources, the SONGS reactors typically contributed around 8% of the state’s annual electricity generation, due to their high utilization rates. That’s a large slice of low-emission power to remove from the energy mix in a state that is committed to reduce its emissions below 1990 levels.

How much emissions will increase following the shutdown depends on the type of generation that replaces these units. If it all came from renewable sources like wind and solar, emissions wouldn’t go up at all, but that’s impractical for several reasons. Start with the inherent intermittency of these renewables, and then compound the challenge by its scale. Even in sunny California, replacing the annual energy contribution of the SONGS units would require around 7,200 MW of solar generating capacity, equivalent to nearly 2 million 4-kilowatt rooftop photovoltaic (PV) arrays. That’s over and above the state’s ambitious “Million Solar Roofs” target, which was already factored into the state’s emission-reduction plans.

Grid managers from the state’s Independent System Operator indicated that in the near term much of the replacement power for SONGS will be generated from natural gas. Even if it matched the mix of 71% gas and 29% renewables added from June 2012 to April 2013, based on “net qualifying capacity”, each megawatt-hour (MWh) of replacement power would emit at least 560 lb. more CO2 than from SONGS. That’s an extra 4 million metric tons of CO2 per year, or 8% of California’s 2010 emissions from its electric power sector and almost 1% of total state emissions. If gas filled the entire gap, or if the natural gas capacity used was not all high-efficiency combined cycle plants, the figure would be closer to 6 million metric tons, equivalent to the annual emissions from about 1.5 million cars.
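The 4-million-ton figure can be reproduced with simple arithmetic. The ~16 TWh of annual SONGS output is my assumption (8% of roughly 200 TWh of in-state generation); the 560 lb/MWh emissions delta comes from the text:

```python
# Extra CO2 from replacing SONGS output with a 71% gas / 29%
# renewables mix. Assumed SONGS annual generation: ~16 million MWh.
songs_mwh = 16e6
extra_lb_per_mwh = 560
LB_PER_METRIC_TON = 2204.6
extra_tons_co2 = songs_mwh * extra_lb_per_mwh / LB_PER_METRIC_TON   # ≈ 4.1 million t
```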

The SONGS shutdown brings to four the number of nuclear reactors that have been closed permanently this year, reducing the operating US nuclear power plant fleet to 100 units. Several other plants face severe challenges, including the ongoing legal battle over the “certificate of public good” for Vermont Yankee, strong local opposition to the Pilgrim unit on Cape Cod, and a hotly contested license renewal process for the two Indian Point units near New York City. The early retirement of San Onofre can only embolden the opposition to other nuclear plants.

A few years ago, when the nuclear power sector planned a large new-build program in the US, it seemed reasonable to assume that most existing plants would easily obtain 20- or 30-year license extensions, in line with well-established precedent. That would carry the bulk of the fleet into the 2040s and beyond. Meanwhile, new construction would add many gigawatts of new capacity and enable nuclear power to gain market share against coal and gas. However, between a recession that stalled the growth of US electricity demand and the low natural gas prices brought about by the combination of the same recession and the shale gas revolution, the economics of new nuclear power in the US have become tenuous. Some operators have even canceled relatively low-cost “uprate” projects to increase capacity at existing plants.

As part of its Annual Energy Outlook for 2013, the Energy Information Administration (EIA) of the US Department of Energy looked at various scenarios for nuclear expansion or retrenchment. In addition to the four reactor retirements announced this year, Exelon Corp. has already announced that its Oyster Creek plant in New Jersey will shut down in 2019, after 50 years of operation. If the two Indian Point units were also shut down, then total retirements since 2012 would reduce US nuclear generating capacity of 101,400 MW by more than the 5,580 MW combined capacity of the five new reactors currently under construction and scheduled to start up by late 2018. The difference of around 650 MW would likely be made up by natural gas.
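The capacity arithmetic is a straight sum. The individual unit capacities below are approximate public figures I've filled in, since the post only gives totals:

```python
# Announced and plausible retirements vs. new construction.
# Approximate unit capacities in MW (my assumptions, not from the post).
retirements_mw = {
    "Kewaunee": 566,
    "Crystal River 3": 860,
    "SONGS 2 & 3": 2150,
    "Oyster Creek": 615,
    "Indian Point 2 & 3": 2060,
}
new_build_mw = 5580   # five reactors under construction through late 2018
net_loss_mw = sum(retirements_mw.values()) - new_build_mw   # ≈ 670 MW
```

With these approximate capacities the shortfall lands near the "around 650 MW" cited above, the gap that would likely be filled by natural gas.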

Between now and 2020, despite the first new nuclear power plants in a decade coming on-line, nuclear’s contribution to our energy mix won’t grow by much, and may actually shrink. That will have consequences for consumers and for efforts to reduce greenhouse gas emissions. Retiring fully depreciated power plants that still have many years of potential operating life remaining, and replacing them with new generation of any technology, is bound to increase the cost of electricity in the markets where these plants have operated. And even if the net loss of nuclear capacity were directly replaced with high-reliability renewable generation such as hydropower or geothermal, that’s still that much renewable capacity not available to displace higher-emitting generation. Opponents of nuclear power may see that as progress, but it looks like a step backward to me.

A slightly different version of this posting was previously published on Energy Trends Insider.

Monday, July 01, 2013

The President's Climate Plan Hinges on Natural Gas

  • President Obama's plan for addressing US greenhouse gas emissions depends heavily on expanded fracking of US shale gas resources.
  • Reducing power sector emissions will be expensive, unless implemented with maximum flexibility.
  • The President's endorsement of climate adaptation was helpful, if overdue.
President Obama's much-anticipated speech on climate change emphasized measures affecting our production and use of energy, which accounts for 86% of US emissions of the greenhouse gases implicated in global warming.  In its recognition of the ongoing importance of fossil fuels to the economy and inclusion of climate adaptation, it probably represents the most balanced approach on this subject from this White House.  However, many of the components of this plan could prove costlier than other solutions that have been debated in recent years.

Some context is necessary.  In 2009 the US Congress, controlled by the President's party, took up comprehensive climate legislation centered on "cap-and-trade." In principle, that would have limited emissions while enabling emitters with high abatement costs to purchase offsets from others who could cut emissions less expensively. Unfortunately, the Waxman-Markey Bill's version of cap-and-trade was so distorted by handouts to favored constituencies that its effectiveness at reducing future emissions was highly questionable.  By the time it died in the Senate, it looked more like a piñata of revenue allocation than a serious effort to address climate change.

So depending on your perspective, the President's new climate plan is either a punishment for Congressional failure to pass a climate bill, or the unavoidable sequel to legislative stalemate. Its prescriptive approach to parceling out emissions cuts to different sectors, with a heavy focus on electricity generation, might prove effective at reducing some emissions but will certainly be more expensive than a broad, market-based approach.

The hallmark of the speech was the President's instruction to the Environmental Protection Agency (EPA) to develop carbon emissions standards for both new and existing power plants.  This feature had been leaked in advance, and it was hailed by many environmentalists.  How much it will cost us depends on how EPA constructs the regulations necessary to put it into effect. 

If they approach the power plant carbon standard in a manner similar to the corporate average fuel economy (CAFE) rules applied to each automaker's new car fleet, then the resulting flexibility could moderate both its cost and adverse consequences. Conversely, if they cap emissions at each individual power plant--analogous to specifying the mpg of each new car--we should expect a wave of power plant closures, including at some gas-fired plants necessary to manage peak loads and back up wind and solar power.  The implications for utility bills and electric reliability are significant, so the details will matter enormously.

The President has a point when he says that past predictions of economic harm associated with previous environmental regulations largely failed to materialize, due to the ingenuity of American businesses. However, I believe his dismissal of concerns about EPA regulation of CO2 emissions is overly cavalier, because it ignores two fundamental facts. 

First and foremost, energy-related CO2 has little in common with the pollutants the EPA has regulated in the past.  It results neither from small impurities in fuel, such as sulfur or mercury, nor as a minor byproduct of combustion with air, such as NOx.  Along with water and heat, CO2 is a primary and unavoidable outcome of all hydrocarbon combustion. We can't clean it up cheaply by purifying the fuel or adding a catalytic converter or scrubber to an exhaust pipe or smokestack.  We must either send it to the atmosphere, as we've done since the invention of fire, or chemically separate it from the exhaust, and then compress it and bury it underground, or react it chemically to produce new fuel or some other product.  So far, both of the latter options are expensive and energy-intensive.   There's simply no free lunch to be had in dealing with emissions from the fossil fuels without which, as the President admitted, "Our economy wouldn't run very well."

The other problem, familiar to my long-time readers, is scale.  President Obama said that his new plan would "double again our energy from wind and sun", as it doubled in the last four years. Unfortunately, that further doubling would only yield enough electricity to displace 9.5% of the electricity generated from coal last year, saving less than 3% of total US emissions. New nuclear power plants require many years to build, and those now under construction only offset the announced and plausible retirements of existing nuclear units.  Thus, any shortfall in electricity supply due to the retirement of additional coal plants must be made up mainly by natural gas, which a new report from the Breakthrough Institute calls the "Coal Killer". 
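
The scale point can be checked with round numbers. The 2012 generation and emissions figures below are approximations of EIA data, assumed here for illustration:

```python
# Rough check of the "doubling" arithmetic, using approximate round numbers
# for 2012 (assumptions, not exact EIA data).
wind_2012, solar_2012 = 140, 4   # generation, TWh
coal_2012 = 1514                 # coal-fired generation, TWh
us_co2_gt = 5.3                  # total US energy-related CO2, gigatonnes

added_renewables = wind_2012 + solar_2012   # a further doubling adds this much
coal_displaced_pct = 100 * added_renewables / coal_2012

# Assume roughly 1 tonne of CO2 per MWh of displaced coal generation
co2_saved_gt = added_renewables * 1e6 / 1e9   # TWh -> MWh -> Gt at 1 t/MWh
emissions_saved_pct = 100 * co2_saved_gt / us_co2_gt

print(f"Coal generation displaced: {coal_displaced_pct:.1f}%")   # ~9.5%
print(f"Total US emissions saved: {emissions_saved_pct:.1f}%")   # under 3%
```

Even under these generous assumptions, the further doubling displaces under a tenth of coal generation and saves under 3% of total emissions, matching the figures above.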

For the first time in a generation, it appears we have enough gas to take on such a challenge. Replacing half of 2012's coal-fired power generation would require a 50% increase in the quantity of gas sold to US generating plants last year, supported by another 20% increase in US dry gas production, to nearly 29 trillion cubic feet per year. That level of output would be consistent with the Energy Information Administration's current forecast for US production by the mid-2020's.  Of course the power sector would have to compete with other sectors that are also seeking more gas, including manufacturing and transportation, so the future price of natural gas--even with abundant shale sources--is uncertain.  The bottom line is that President Obama's plan for reducing CO2 emissions from the power sector depends mainly on raising US natural gas production through expanded hydraulic fracturing of shale deposits.
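
The gas-demand arithmetic is easy to reproduce. The 2012 baseline figures below (gas delivered to generators and total dry production, in trillion cubic feet) are assumed round numbers, not exact data:

```python
# Sketch of the gas-demand arithmetic, using approximate 2012 baselines
# (assumed round numbers, in trillion cubic feet per year).
gas_to_power_2012 = 9.3     # gas sold to US generating plants
dry_production_2012 = 24.1  # US dry natural gas production

extra_for_power = 0.5 * gas_to_power_2012     # the 50% increase cited above
new_production = dry_production_2012 * 1.2    # a 20% production increase

# The ~4.7 tcf of extra power-sector demand roughly matches the ~4.8 tcf
# that a 20% production increase would supply.
print(f"Additional gas for power: {extra_for_power:.1f} tcf/yr")
print(f"Implied production: {new_production:.1f} tcf/yr")  # nearly 29 tcf
```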

I should also briefly mention a few other aspects of the speech. The President wants to promote energy efficiency and renewable energy by having the federal government lead in their adoption.  This sounds like motherhood and apple pie, and there's no question that efficiency measures have a role to play in reducing both energy consumption and emissions.  However, they shouldn't be divorced from a broader perspective encompassing financial and operational efficiency.  I would wager that the federal government could cut its energy consumption from buildings just as quickly, and perhaps less expensively, by reducing its vast inventory of owned and leased buildings.

Federal adoption of renewable energy should also be guided by financial and operational considerations.  For example, it is desirable for the military to ensure that as many of its aircraft, ships and vehicles as possible are capable of using alternative fuels, and to certify new systems on such fuels. However, it would be counterproductive verging on irresponsible to prioritize the purchase of uncompetitively priced alternative fuels, when budget cuts and sequestration are grounding fighter squadrons and otherwise impairing readiness.  Funds are fungible, as CFOs are wont to say, and every extra dollar spent on renewable vs. ordinary jet fuel is a dollar that can't be spent on training or maintenance.  And investments in high-cost renewables could achieve more if diverted into support for innovation on improved energy technologies capable of competing without endless subsidies.

The President's remarks on the Keystone XL pipeline were consistently vague. Some supporters of the project were encouraged, while opponents could still conclude a rejection was inevitable.  From my perspective, the only new element was his apparent exclusion of objections based on potential leaks and other local concerns from the final decision.

Finally, there's the issue of adaptation.  President Obama effectively acknowledged that if the climate models that underlie his plan are correct, we face additional warming and other consequences, no matter how much we reduce emissions. He proposes to "protect critical sectors of our economy and prepare the United States for the impacts of climate change that we cannot avoid."  This is overdue and ought to attract bipartisan support, irrespective of whether our elected representatives are convinced that human activities are responsible for the changes we see in the climate.  Too many people and too many assets have been placed in the path of natural disasters that could become more frequent or severe, if the globe continues to warm. Adaptation offers "no regrets" opportunities, though building more resilient infrastructure is only part of the appropriate response.  Expect to hear more about this subject in the weeks and months ahead.

The Executive Branch actions outlined in the President's speech constitute a Plan B for climate change.  They may yield emissions reductions, though likely at a higher cost than broader, more even-handed measures that have become politically unpalatable. However, in a persistently weak economy the resulting higher energy prices are also likely to threaten an emerging source of US competitive advantage. Instead of solving the politics of climate change, the President's plan could deepen the polarization that already exists on this issue.

Thursday, June 20, 2013

It's Time To Reform US Ethanol Policy

  • Virtually all of the assumptions underlying the Renewable Fuels Standard enacted in 2007 have changed, as the US emerges from energy scarcity into abundance.
  • The linkage between the RFS and food prices is controversial, but a new quantitative model underscores concerns, especially for its impact on developing countries.
This April, two separate bills were introduced in the US House of Representatives to reform, or repeal, the federal Renewable Fuel Standard (RFS) that mandates how much ethanol and other biofuels must be blended into gasoline. A similar bill has just been introduced in the Senate. To understand why reform or repeal makes sense now, we should recall the factors that led Congress to enact this standard six years ago and consider how many of the basic assumptions underlying its design have changed since then.

That requires a review of US fuel consumption and import trends, commodity prices, and the impact of the RFS on food prices. After summarizing the other points, I want to focus on the last one, based on an interview I conducted with Dr. Yaneer Bar-Yam, an expert on complex systems who has developed a model that explains the behavior of food prices since the introduction of the first, less ambitious RFS in 2005.

In the fall of 2007, when Congress was debating the Energy Independence and Security Act that included the current, enhanced RFS, the US energy situation looked dire. For four years oil prices had been rising more or less steadily from their historical level in the low-to-mid $20s per barrel (bbl) to around $90, on their way to an all-time nominal high of $145/bbl the following summer. US crude oil production was in its 22nd consecutive year of decline, while our crude oil imports had climbed to 10 million bbl/day, twice domestic production that year.

Even more relevant to the thinking behind the RFS, US gasoline consumption stood at a record 142 billion gallons per year and had been growing at an average of 1.6% per year for the previous 10 years–another 2 billion gallons added to demand each year. In its annual long-term forecast for 2007, the Energy Information Administration (EIA) of the US Department of Energy had projected that gasoline demand would grow to 152 billion gal/yr in 2013 and 168 billion gal/yr by 2020. Meanwhile, US net imports of finished gasoline and blending components had reached a million barrels per day in 2006, equivalent to 15 billion gal/yr–equal to the corn ethanol target set by the 2007 RFS for gasoline blending in 2015. And by the way, US corn prices for the 2006-7 market year averaged $3.04 per bushel (bu). In this environment, policy makers regarded ethanol as a crucial supplement to dwindling hydrocarbon supplies, from a feedstock that was cheap and readily expandable.
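
The "2 billion gallons added each year" figure follows directly from the base demand and growth rate just cited:

```python
# The annual increment in gasoline demand implied by the figures above.
base_demand = 142e9   # US gasoline consumption, gal/yr (2007)
growth_rate = 0.016   # average annual growth over the prior decade

added_per_year = base_demand * growth_rate
print(f"~{added_per_year / 1e9:.1f} billion gal/yr of new demand")  # ~2.3
```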

Without belaboring the events of the last five years, virtually every one of those trends has reversed course. That has occurred partly as a result of the recession and the lasting changes it produced in the US economy, and partly due to an energy revolution that was largely invisible in 2007 but had already begun.

US gasoline consumption peaked in 2007 and has since declined to 133 billion gal/yr last year. The EIA forecasts it to fall to 128 billion by 2020 and 113 billion by 2030. US crude oil output is the highest in 22 years and is set to exceed imports this year, while the US has become a net exporter of gasoline and other petroleum products. Since 2007 US ethanol production has grown from 6.5 billion gal/yr to 13.3 billion gal/yr, and it seems more than coincidental that corn prices had doubled to an average of $6.22/bu by last year.

That brings us to the controversy that has been widely referred to as “food vs. fuel”. In the last several years I’ve read numerous papers attempting to determine by correlation or other empirical methods whether and to what extent the increase in US ethanol production from corn has affected food prices. To put this in context, since 2005 the quantity of corn used for US ethanol production has grown from 1.6 billion bu/yr to 5 billion bu/yr, or from 14% to 40% of the annual US corn crop.

Some studies, such as this 2009 analysis from the non-partisan Congressional Budget Office, found a significant influence on food prices. Others, including an Iowa State study recently cited in a blog post from the Renewable Fuels Association, found a negligible influence. What differentiates the work of Dr. Bar-Yam is that he and his colleagues have developed a quantitative model based on two key factors — corn consumed for ethanol and commodity speculation — that closely fits the behavior of a global price index. Their model also accounts for the “distillers dried grain” byproduct from ethanol plants, which returns about 20% of the corn used in the form of protein-upgraded animal feed.

Before speaking with Dr. Bar-Yam, I was a bit skeptical of his results. Aside from skepticism being my default mode in such situations, I had spent a lot of time looking at claims of speculator influence on crude oil prices in the 2006-8 period and was never convinced that they were more than the “foam on the beer”, rather than a basic driver of prices. However, as I was reviewing his paper prior to our call, a light went on.

The curve his model predicted, which closely matched food price behavior, looked very much like the behavior of a process control loop responding to a ramped change in the set point–forget the jargon and think about how the temperature of your home responds to a steady increase in your thermostat setting: overshooting, then undershooting, before converging. We discussed this and he confirmed that it was effectively an "under-damped oscillator", which can be characterized the same way whether you're talking about an electrical circuit or a market. In effect, the steadily increasing corn demand from the ratcheting up of the RFS started corn prices rising, and the presence of lots of speculators, including “index fund” investors, caused the price to successively overshoot and undershoot the equilibrium price track one would expect.
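
For readers who want to see that overshoot/undershoot pattern concretely, here is a minimal simulation of a generic under-damped second-order system tracking a ramped set point. This is purely illustrative and is not Dr. Bar-Yam's model; the damping ratio and frequency are arbitrary choices:

```python
# Illustrative only: a generic under-damped second-order system tracking
# a ramped set point, integrated with a simple semi-implicit Euler step.
def simulate(zeta=0.2, omega=1.0, ramp_end=20.0, t_end=80.0, dt=0.01):
    x, v = 0.0, 0.0          # "price" and its rate of change
    xs, targets = [], []
    for i in range(int(t_end / dt)):
        t = i * dt
        target = min(t, ramp_end)   # set point ramps up, then holds steady
        # Restoring force toward the target, plus a damping term;
        # zeta < 1 makes the system under-damped, so it oscillates.
        a = omega**2 * (target - x) - 2 * zeta * omega * v
        v += a * dt
        x += v * dt
        xs.append(x)
        targets.append(target)
    return xs, targets

xs, targets = simulate()
# The response lags during the ramp, overshoots once the ramp stops,
# then undershoots and converges -- the thermostat behavior described above.
print("peak response:", max(xs))    # exceeds the final set point of 20
print("final value:", xs[-1])       # settles back near 20
```

With the damping ratio `zeta` below 1, the trajectory exhibits exactly the lag-overshoot-undershoot-converge sequence described above; raising `zeta` toward 1 suppresses the oscillation.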

Dr. Bar-Yam explained that he had arrived at these two factors by eliminating factors that other groups had investigated, but that turned out to have no predictive value. These included shifting exchange rates, drought in Australia, a dietary shift in Asia from grains to meat, and linkages between oil and food prices. In his view the focus on ethanol and speculation is validated by the shift in dialog on this issue away from other, extraneous causes.

He also emphasized that his main concern is not the price of processed foods in developed countries such as the US, for which commodity grain costs are only one input, but rather the price paid for simple foods by poor people in the developing world. From that standpoint he doesn’t just want to see the RFS reformed. “It is important not just to repeal, but to roll back the amount of ethanol used in the US.” He would prefer not 10% ethanol in gasoline, let alone 15%, but about 5%. “The narrative has to shift,” he said, “to recognize that people are going hungry.” Those are powerful words, and I’m still thinking about them.

At current production levels ethanol from corn contributes the energy equivalent of 6% of US gasoline consumption and about 2.5% of total US liquid fuel demand. That’s not trivial, and there’s a whole domestic industry of investors, employees and suppliers who made that happen at our collective request. However, if Dr. Bar-Yam has accurately captured the relationship between ethanol and global food prices, then we urgently need to reassess what we’re doing with this fuel.
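
The energy-equivalence figure follows from ethanol's lower heating value per gallon. The heating values below (roughly 76,000 Btu/gal for ethanol vs. 114,000 for gasoline) are commonly cited approximations, assumed here:

```python
# Checking the energy-equivalence claim: ethanol carries about two-thirds
# the energy of a gallon of gasoline (approximate heating values assumed).
ethanol_gal = 13.3e9    # US ethanol production, gal/yr
gasoline_gal = 133e9    # US gasoline consumption, gal/yr
energy_ratio = 76_000 / 114_000   # Btu/gal ethanol vs. gasoline

share = 100 * ethanol_gal * energy_ratio / gasoline_gal
print(f"Ethanol ~{share:.1f}% of gasoline demand on an energy basis")
```

The result lands between 6% and 7%, consistent with the rounded 6% figure above.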

We are also in a far better position now to consider scaling back our use of ethanol produced from grain than we were when the RFS was established. With increasing production of shale gas, tight oil and various renewables, the energy scarcity that has defined our policies for the last four decades is far less relevant to our policy choices going forward. I’ll tackle the practical aspects of RFS reform, in terms of the so-called “blend wall” and its impact on gasoline prices, in a future post.

A slightly different version of this posting was previously published on Energy Trends Insider.

Thursday, June 13, 2013

"All of the Above" Must Be Weighted by Common Sense

  • "All of the Above" is just a cliché if not tempered by an appreciation of the strengths and weaknesses of different energy sources, and a standard basis of comparison.
  • Renewable energy is gaining market share, but fossil fuels--especially oil and gas--will play crucial roles in the energy mix for decades.

Last month, Real Clear Politics and API hosted an energy summit in Washington, DC entitled, “Fueling America’s Future”. It was intended to provide a quick overview of most of the key technologies and issues associated with an all-of-the-above energy strategy for the United States. Going through the highlights of the webcast gives me an opportunity to summarize my point of view for new readers of this blog. I’d sum that up as “All of the Above”, with asterisks for the proportions and situations that make sense.

This slogan, at least in the manner in which it has been espoused by politicians in both parties, has attracted fair criticism for being overly bland and safe. I suspect that critique reflects a general sense that our energy mix has always been composed of all of the above, or at least all of the technologies that were sufficiently proven and economic to contribute at scale at any point in time. However, as both our technology options and choice criteria expand, our understanding of the evolving energy mix is hampered by metrics and assumptions that are overdue to be revisited.

The summit’s first panel examined the technologies of the mix, in a “lightning-round” format of five minutes apiece. The panel covered oil, natural gas, coal, nuclear and renewables, led by wind power.

Rob Gramlich, the interim CEO of AWEA, the main US wind energy trade association, cited his industry’s progress in reducing the technology’s cost, increasing the domestic content in its US value chain from 25% to 67%, and expanding its market penetration. He was also surprisingly forthright about wind power’s continued dependence on federal subsidies, a point to which I’ll return in future posts.

He began with a statistic indicating that wind power was #1 in new US electric generation capacity last year. This is more than just a talking point, but it calls for some refinement if we’re to see an accurate picture of the changing US electricity mix. When most generating facilities operated within a narrow band of expected utilization, say 60%-80% of the time, comparing their nameplate capacities like this was satisfactory. Exceptions such as “peaking” gas turbines that only operate a few dozen or hundred hours a year were never the recipients of targeted government incentives.

Now, however, our energy mix includes technologies with effective utilization rates, or “capacity factors”, ranging from as low as 10% for solar photovoltaic (PV) installations in cloudy northern locations, to roughly 90% for nuclear power. Wind comes in around 20-35%, depending on site and turbine size. In terms of their likely annual power generation, new natural gas facilities actually led new wind farms by roughly 2:1 last year.
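
A quick sketch shows how weighting nameplate capacity by capacity factor changes the comparison. The capacity additions and capacity factors below are illustrative assumptions chosen to be roughly consistent with the 2:1 figure, not precise 2012 data:

```python
# Comparing new capacity on an expected-generation basis rather than
# by nameplate MW. Additions and capacity factors are illustrative
# assumptions, not exact 2012 statistics.
HOURS_PER_YEAR = 8760

additions = {
    # technology: (nameplate GW added, assumed capacity factor)
    "wind": (13.0, 0.28),
    "natural gas": (10.0, 0.70),   # new combined-cycle units run hard
}

for tech, (gw, cf) in additions.items():
    twh = gw * cf * HOURS_PER_YEAR / 1000   # GW x hours -> GWh -> TWh
    print(f"{tech}: {twh:.0f} TWh/yr of likely generation")
```

On these assumptions, gas additions lead wind by roughly 2:1 in expected annual output, even though wind led by nameplate MW.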

Given the enormous and largely unanticipated natural gas renaissance in the US, that shouldn’t surprise anyone. In my first blog post over nine years ago I posed a series of questions, including whether we were on the verge of an energy technology breakthrough. I had in mind something involving renewable or nuclear energy, energy storage, or vehicle technology. The shale gas revolution was already starting to emerge from obscurity, but I, along with most other energy experts at the time, remained oblivious to it.

The new head of America’s Natural Gas Alliance, Marty Durbin, described gas as clean, abundant and affordable. At least the last two points should be uncontroversial by now, backed up by market prices and resource assessments. We tend to think of gas as a bridge fuel to a lower-emission future, but I think we’ll increasingly hear it called a “foundation fuel,” as Mr. Durbin did.

The spokesman from the Solar Energy Industries Association accurately referred to solar as our fastest growing energy source, though he didn’t explain how it would grow from 0.1% of US generation last year to more than 1% by next year. He alluded to a plausible inflection point based on policy and innovation, but his enthusiasm that solar was expanding rapidly outside California and the Southwest ought to worry us.

Until PV prices fall much lower than they have, a surge of installations in places like Vermont and Wisconsin means that taxpayers and ratepayers are paying more than they should to make that happen. And the global competition and “survival of the fittest” he touted has mainly resulted, not from capitalism, but from dueling government incentives for solar, especially in Europe and Asia. I’m much more positive about solar than the above might suggest, but like other renewables, it will cost less and achieve more for us in locations with high-quality resources.

The discussion on oil was more globally focused, based on BP’s forecasts and annual Statistical Review. Contrary to the widespread view of oil’s continued dominance, it has been losing market share over the last 40 years — including the last 13 years in a row — and stands at its lowest market share in the US since at least World War II.  The representative from BP linked this performance to oil’s concentration in transportation fuel, where it has been squeezed by efficiency, slow economic growth and, to some extent, biofuels (which got short shrift in the session). At the same time, the growth of North American production, another dividend of the shale revolution, puts increasing pressure on OPEC. I’ll come back to this dynamic in future posts.

Wind and solar aren’t the only, or even the biggest, renewables, despite the attention they receive. I was glad to see hydropower–often the forgotten renewable–represented on the panel, though I was disappointed by the absence of geothermal power. Both are more geographically constrained, yet have features that wind and solar could only wish for.  Hydro generated nearly 7% of US electricity last year from just 3% of US dams, with significant potential for growth through retrofitting unpowered dams. The Executive Director of the National Hydropower Association quoted Senator Ron Wyden (D-OR), the new chairman of the Senate Energy and Natural Resources Committee, as saying, “Hydro is back.” That could prompt some interesting discussions.

I’m glad I wasn’t there representing coal, which must surely be the least loved energy source today. It continues to grow globally, with US coal exports playing a role, but the domestic US story is a “decline narrative” as the VP of the National Mining Association described it. He managed to find a note of optimism in the more efficient coal power fleet that will remain after 68,000 MW of old capacity retires by 2020, under pressure from various regulations and competition from natural gas. Unfortunately, efficiency alone isn’t sufficient. From my perspective, carbon capture and sequestration (CCS) is the key to reconciling coal’s convenience and low energy cost with its high emissions. CCS wasn’t mentioned by name, but was only alluded to as “technology that does not exist.” That dismisses it too lightly, as I’ll explain when time permits.

The head of government affairs for the Nuclear Energy Institute spoke last in the lightning round on technology. (The subsequent panel on energy issues is worth your time, too.) He emphasized nuclear’s anchor role in the US electricity mix, with 12% of US generating capacity contributing around 20% of the electricity supply at a cost of 2¢ per kilowatt-hour (kWh). Yet despite five new reactors under construction and a wave of license extensions, post-Fukushima the center of the nuclear industry is shifting to places like China and India. Some 66 reactors are under construction outside the US, mainly in the developing world, because that’s where demand is growing.

I’ve worked in various aspects of energy for more than 30 years, and for much of that time our energy mix and the forces that drive it have been in a state of flux. With that in mind, my recipe for “all of the above”  starts with what we have now, recognizes the inertia of existing fleets and infrastructure, and evolves as costs shift and our emphasis on environmental consequences grows.

Wind and especially solar will grow, but will add the most value when used with, rather than against, the grain of their limitations. Energy storage won’t turn them into reliable, baseload energy sources like nuclear and coal, at least until it is much cheaper. The US natural gas opportunity looks transformative in a way that renewables don’t, yet, with value well beyond power generation. Coal will linger, but without effective CCS it will remain vulnerable from many angles. Meanwhile, oil remains the indispensable fuel for transportation, which is the cornerstone of our global economy. Yet its indispensability will erode in increments each year, as EVs eventually grow from novelty to significance and new biofuels start to emulate oil’s trump cards of convenience and energy density. It’s a great time to be talking about energy, as it has been for the last nine years.

A slightly different version of this posting was previously published on Energy Trends Insider.

Friday, June 07, 2013

Could US Oil Trends Alter How Oil Prices Are Set?

  • Oil prices weren't always set by a transparent global market. Current pricing mechanisms emerged from much less transparent precursors.
  • Resurgent US production, combined with restrictions on US oil exports, could disconnect the US from the global oil market, with unexpected results.
If you follow energy closely, you've likely lost count of the number of times you've heard an economist, executive or government official explain that oil prices are set by the global market, and not by oil companies or the US government.  Although somewhat over-simplified, this statement has been valid for roughly 30 years.  However, it hasn't always been the case. Current trends in US production, together with existing regulations, make me wonder if it will remain accurate in the future, as the US inches closer to what is commonly referred to as energy independence. 

The market-based system of oil prices, with its transparency and easy trading among regions, didn't appear overnight.  Until the early 1970s, Texas played a role similar to Saudi Arabia's current swing producer role within OPEC.  By limiting the output of the state's oil wells, the Texas Railroad Commission effectively determined the global price of oil--to the extent there was one--until Texas had no spare capacity left.  That set the stage for OPEC, a succession of oil crises, and the US oil price controls that were imposed in the 1970s in an attempt to help manage inflation. There was also no single, representative oil price.  Instead, prices were set by producers' contract terms and the discounts large refiners could negotiate, or by federal regulations.  The current system emerged from a series of developments in the 1980s.

When US oil price controls ended in 1981, oil futures trading was just getting underway on the New York Mercantile Exchange.  The heating oil contract was launched in 1980, followed by the West Texas Intermediate (WTI) crude oil contract in 1983. This combined large-scale oil trading with an unprecedented level of transparency.   It was also significant that the US, the world's biggest oil consumer, had become a major oil importer after domestic production peaked in 1970.  Because refineries on the coasts competed for oil supplies with refiners on other continents, the price of WTI couldn't get too far out of line with imported crudes without creating arbitrage opportunities for traders.  And any part of the US connected by pipeline to the Gulf Coast was effectively linked to oil prices in Europe, the Middle East and Asia.

After OPEC miscalculated the response to the very high prices its members were demanding in that period--reaching $100 per barrel in today's dollars--global oil demand shrank by nearly 10% from 1979 to 1983, while non-OPEC production grew by more than 12%.  Prices soon collapsed, and OPEC's dominance of oil markets faded for most of the next two decades, during which the futures exchanges and trading relationships of the modern oil market took hold. 

What could shake the current system of oil prices?  It has already withstood recessions, wars in the Middle East, the collapse of the Soviet Union, and the explosive growth of Asia, with China alone adding oil demand comparable to that of the EU's five largest economies.  However, since the current system is based on the free flow of oil between regions, anything that impedes that flow could undermine the way oil is currently priced.

Setting aside conflict scenarios, consider the potential impact of sustained growth in US production, combined with flat or declining demand and no change in the current prohibition on most US crude oil exports.  The gyrating differential between WTI and UK Brent crude, reflecting rising production in the mid-continent and serious logistical bottlenecks, provides a glimpse of what this could be like.  With much of the new US production coming in the form of oils lighter than those for which most Gulf Coast refineries have been optimized, keeping rising US crude output bottled up here could result in US crude prices diverging even farther from global prices, while forcing US refineries to operate less efficiently and import and export more refined products.  With oil imports drastically reduced and oil exports still banned, US oil prices might be influenced more by the global market for refined products, with its different dynamics and players, than by the global crude oil market.

In some respects, that sounds a lot like what many politicians and "energy hawks" have been seeking for years: a US no longer subject to foreign oil producers' price demands.  Yet this same scenario could yield all sorts of unintended consequences, including a less competitive US refining industry and higher or at least more volatile prices for gasoline, diesel and jet fuel.  And just as we've seen with cheap natural gas, cheaper oil could undermine the economics of the unconventional oil and gas production that makes it possible in the first place. 

US oil export policy merits a thorough reevaluation, and soon, because the regional impacts of a continued no-export stance could become pronounced, even if the US never reached overall oil self-sufficiency.  Such a review should include related regulations, such as the Jones Act restrictions on shipping.  With crude oil exports to Canada--virtually the only allowed export destination for our newly abundant crude types--already rising rapidly, some Canadian refineries may be positioned to supply US east coast fuel markets more cheaply than refineries in New Jersey.  That certainly qualifies as an unintended consequence.

A slightly different version of this posting was previously published on the website of Pacific Energy Development Corporation.

Monday, June 03, 2013

...and Two Steps Back for Cleantech

  • The Better Place bankruptcy ends an interesting effort to circumvent some big impediments to the wider adoption of electric vehicles.
  • DESERTEC's original concept would have matched European solar investment with superior North African solar resources, but was no match for European politics.
Within the last week two of the previous decade's Big Ideas for accelerating the shift from fossil fuels to renewable energy--or at least to electricity generated from a variety of cleaner sources--have come up short.  On May 26th electric-vehicle-battery-swapping firm Better Place filed for bankruptcy liquidation in Israel, and just a few days later the DESERTEC Foundation reportedly "abandoned its strategy to export solar power generated from the Sahara to Europe".  Both of these concepts originally looked promising, and I take no satisfaction in their apparent failure.  However, these events must be telling us something.

Better Place was aimed squarely at two of the largest perceived barriers to wider acceptance of electric vehicles (EVs): the limited range of today's EV batteries and the relatively long times required to recharge them, compared to a typical three-minute fill-up at the gas pump.  Better Place's big idea involved the standardization of EV battery packs on a design that could be quickly removed from the vehicle and robotically replaced with a fully charged battery. This required large up-front investments in facilities and hardware, but the firm didn't fail for lack of capitalization. 

Despite having raised around $800 million since its founding in 2008, and convincing French carmaker Renault to produce vehicles designed to work with their technology, Better Place failed to standardize the emerging EV battery market.  Tesla used a different battery configuration from the start and has focused on its own fast-charging technology, while even Renault's global alliance partner Nissan didn't make compatibility with Better Place a standard feature of its Leaf EV in markets like the US or Australia. That led Better Place to invest in building more-conventional EV recharging networks to accommodate other EVs, diluting both its capital and its concept. 

I see two lessons here.  First, EVs and related services are still a niche market, and in spite of its aspirations Better Place became a niche within this niche, largely dependent on the success of EV manufacturers at growing their potential market.  That's a poor place from which to launch a business that ultimately depends on achieving high volumes.  The other lesson is that when you can't make sense of a company's revenue and working-capital model, there's probably a good reason.  At this stage in their development, EV battery packs are apparently still too expensive to sit idle in large numbers awaiting a swap, when the hardware to exchange them requires the same retail footprint as a car-repair bay.  All of this supports a service arguably worth only a few hundred dollars per year to an EV owner, relative to the cost of simply recharging.
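To put that "few hundred dollars per year" in context, a back-of-the-envelope estimate of what ordinary recharging costs an EV owner annually is useful.  The mileage, efficiency, and electricity price below are illustrative assumptions, not figures from the post:

```python
# Rough context for "a few hundred dollars per year": what does ordinary
# home recharging cost an EV owner annually?  All inputs are illustrative
# assumptions, not figures from the post.

miles_per_year = 12_000    # assumed annual driving distance
kwh_per_mile = 0.34        # assumed EV efficiency
price_per_kwh = 0.12       # assumed residential electricity rate, $/kWh

annual_charging_cost = miles_per_year * kwh_per_mile * price_per_kwh
print(f"Annual home-charging cost: ${annual_charging_cost:,.0f}")
```

Under those assumptions an owner spends roughly $490 per year charging at home, which frames how little headroom a swapping service has to charge a premium.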

DESERTEC's big idea was even simpler than Better Place's.  A well-sited solar array in North Africa would inherently generate at least twice as much electricity per year as the same array in Germany, the Netherlands, or Belgium.  All else being equal, it would make more sense to invest in solar where the sun shines brightly for more than 6 hours a day, on average, and to send it by wire to the cloudy, northern countries that want more green power.  Of course physics can't always trump politics, and I suspect that this has more to do with DESERTEC's withdrawal from its basic concept than the cited concerns about transmission capacity and grid congestion across Spain and France. 
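The insolation arithmetic behind DESERTEC's premise can be sketched with capacity factors.  The values below (~20% for a desert site, ~10% for northern Europe) are illustrative assumptions chosen to reflect the post's "at least twice as much" claim, not measured data:

```python
# Sketch of DESERTEC's premise: the same PV array yields roughly twice
# the annual energy in North Africa as in northern Europe.
# Capacity factors are illustrative assumptions, not measured data.

HOURS_PER_YEAR = 8760

def annual_output_mwh(capacity_mw: float, capacity_factor: float) -> float:
    """Annual energy (MWh) from nameplate capacity and capacity factor."""
    return capacity_mw * capacity_factor * HOURS_PER_YEAR

array_mw = 100  # hypothetical 100 MW PV array
sahara = annual_output_mwh(array_mw, 0.20)   # ~20% CF, high-insolation desert
germany = annual_output_mwh(array_mw, 0.10)  # ~10% CF, cloudy northern Europe

print(f"Sahara: {sahara:,.0f} MWh/yr, Germany: {germany:,.0f} MWh/yr")
print(f"Ratio: {sahara / germany:.1f}x")
```

The same euro invested in panels buys roughly twice the green energy in the desert, which is why the concept looked so compelling before politics and falling module prices intervened.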

Politics enter the story in two main ways.  Renewable energy in the EU is deeply entangled with industrial policy and green jobs.  From that standpoint, it's even better if a PV panel in Germany produces half the output of one in Morocco, because you can sell twice as many, all installed by local firms and workers.  Then there's the interaction between the EU's generous solar subsidies and the solar manufacturing incentives in Asia and elsewhere, resulting in enormous overcapacity, relative to demand, and a now-global wave of solar bankruptcies and defaults.  This has pushed PV module prices down to a level at which the other costs of solar energy, including installation and transmission, begin to outweigh the module costs.  That erodes North Africa's solar advantage relative to its northern neighbors.  Throw in the lingering effects of the financial crisis, and a once-big idea looks like an unworkable dead end, at least for now. 

Neither the failure of Better Place, which might yet find a bargain-hunting savior, nor the retreat of DESERTEC looks like a mortal blow to the long energy transition now underway.  However, they do suggest that the timeline is a little less likely to be shortened by the kinds of big leaps they offered.  EVs will have to gain market share the hard way, with better, cheaper batteries and ample recharging infrastructure--plus continued taxpayer subsidies--while inefficient solar subsidies continue to divert investment away from some of the world's best renewable energy resources, keeping the technology's global contribution smaller for longer.    

Thursday, May 23, 2013

Can Energy Storage Make Wind and Solar Power As Reliable As Coal?

Wind and solar power generated 3.5% and about 0.1%, respectively, of US electricity last year.  These figures reflect rapid growth over the last decade, as the cost of these technologies declined significantly, particularly for solar photovoltaic (PV) modules.  However, other barriers to wider deployment remain, including their intermittent output.  Energy storage is often portrayed as the killer app for overcoming the intermittency of renewables, and a number of interesting developments have occurred on this front, including a new "hybrid" wind turbine with integrated storage from GE.  To what extent could more and cheaper storage enable wind and solar to function as the equivalent of high-utilization, baseload generation? 

Assessing that potential requires, among other things, recognizing that energy storage is neither new nor monolithic. Nor is the intermittency of renewable energy a single challenge.  For example, the output of a wind turbine and the wind farm in which it operates varies on time scales of minutes, hours and days, as well as months and years.  The output of a PV installation varies somewhat more predictably, but no less dramatically. 

Generating companies and project developers have an array of new storage options, involving various battery technologies, flywheels, and compressed air.  Pumped storage, in which water is pumped uphill and later generates power as it flows back downhill, is an old but hardly obsolete option that already operates on a large scale.  According to the National Hydropower Association, the US has 22,000 MW of installed pumped storage, and it remains one of the cheapest forms of power storage in terms of cost per megawatt-hour (MWh) delivered.  This, too, is expanding: enough new projects, in 23 states, have received preliminary permits to more than triple that figure.

All of these storage alternatives have limitations or drawbacks.  Batteries and flywheels, while very responsive, are still expensive.  Compressed air storage often relies on unique local geological features, and some versions essentially function as a supercharger for a gas-fired turbine, resulting in some emissions. Pumped storage works well at a variety of scales but is less responsive than batteries, has a larger physical footprint, and requires suitable terrain. 

What makes GE's "brilliant turbine" with battery storage look clever is that, with the help of predictive models, it requires a very small amount of battery storage--perhaps as little as that in an electric car--to smooth the output of the turbine for 15 minutes to an hour. That provides significant benefits, including financial ones, in terms of integrating it predictably into the power grid. However, it doesn't transform the turbine into a fully dispatchable generator capable of sending power to the grid whenever demanded.  That would require storing much more energy per turbine and delivering it at rates sufficient to replace the entire output of the installation for at least several hours, along the lines of concentrated solar power installations with thermal storage.
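The gap between smoothing and full dispatchability can be sized with simple arithmetic.  The turbine rating, firming duration, and EV pack capacity below are illustrative assumptions, not GE's specifications:

```python
# How much storage would make a single wind turbine dispatchable for a
# few hours, versus the EV-sized smoothing battery described above?
# Turbine rating and pack size are illustrative assumptions.

turbine_mw = 2.5     # hypothetical utility-scale turbine rating
firm_hours = 4       # hours of full-output backup for dispatchability
ev_pack_kwh = 50     # roughly an EV-sized battery pack

dispatch_kwh = turbine_mw * 1000 * firm_hours  # energy to replace full output
packs_needed = dispatch_kwh / ev_pack_kwh

print(f"Dispatchable storage: {dispatch_kwh:,.0f} kWh "
      f"(~{packs_needed:.0f} EV-sized packs per turbine)")
```

Under those assumptions, firming one turbine for four hours takes on the order of 200 EV-sized packs, which is why an hour of smoothing is commercially sensible while full dispatchability is not.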

Even these techniques don't get us to the point at which a dedicated wind farm or solar installation could replace a baseload coal-fired power plant of similar capacity running 80% of the time.  For starters, energy storage doesn't alter the total amount of energy collected from the wind or sun.  In an area with good onshore wind resources, generating the same energy as 100 MW of coal capacity would take around 267 MW of wind turbines, because onshore wind typically achieves a capacity factor of roughly 30%, versus the coal plant's 80%: the wind doesn't blow at optimum speed all the time, and other times it doesn't blow at all.  The wind farm would also need enough storage to absorb any output over 100 MW, and then make up any shortfalls below 100 MW for the longest duration that would be expected.  The figures for a solar installation would be similar.  It just doesn't sound very practical, unless storage became dirt cheap.
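The 267 MW figure follows from capacity-factor arithmetic.  The 30% onshore wind capacity factor below is an assumption consistent with that figure, not a number stated in the post:

```python
# Reproducing the sizing arithmetic: wind capacity needed to match the
# annual energy of 100 MW of coal running 80% of the time, assuming a
# ~30% capacity factor for a good onshore wind site (an assumption here).

coal_mw = 100
coal_cf = 0.80   # baseload coal utilization from the post
wind_cf = 0.30   # assumed onshore wind capacity factor

wind_mw = coal_mw * coal_cf / wind_cf
print(f"Equivalent wind capacity: {wind_mw:.0f} MW")
```

That ratio, roughly 2.7 MW of wind per MW of baseload coal, is before counting any of the storage needed to shape the output.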

Fortunately for renewable energy developers, that isn't what grid operators expect of wind or solar.  In most situations the local grid takes their output whenever it's available, though not necessarily at the price that a generator capable of committing its capacity in advance or responding on demand would receive.  So there's a financial incentive for renewables to add a bit of storage to "firm up" some capacity, while bulk storage appears to be more desirable as a separate asset available to the grid, just like a "peaking" gas turbine, to support multiple renewable sources. Of course in that case there's no guarantee that the power stored would come from renewables.  It's likelier to come from whatever is the cheapest off-peak generation in that market.

So while it's easy to see how improved energy storage can enhance the economics of renewable energy and enable it to be integrated into the grid to a greater extent than otherwise, it's less obvious that even cheap, large-scale energy storage is a panacea for intermittent renewables like wind and solar.  It might even have greater benefits for low-emission but more reliable forms of generation, such as nuclear and geothermal, by allowing them routinely to shift a set portion of their output into more valuable segments of the regional power market. 

Disclosure: My portfolio includes investment in GE, which makes products mentioned above.