Wednesday, August 27, 2014

Threats and Opportunities of Distributed Power Generation

  • Rooftop solar panels aren't the only distributed generation technology that could challenge existing utility business models as this segment grows.
  • Some power companies see DG as an opportunity and are entering this segment in ways that could prove challenging to their start-up competitors.
Two recent news stories highlighted different ways that utilities and large generating companies are beginning to respond to the emergence of distributed generation (DG) as more than back-up power. Arizona Public Service (APS) is launching its version of potentially the most challenging type of DG for utilities, rooftop solar. Meanwhile, Exelon Corp. announced an investment partnership with a provider of gas-powered fuel cells. The success of such ventures and the evolution of DG will have implications for electrical grid stability and our future energy mix, including the role of flexible, large-scale gas-fired generation.

APS is seeking regulatory approval for a program that might be characterized as free rooftop solar. In effect, the utility would lease approved homeowners' rooftops for $30 per month, in order to host a total of 20 MW of solar panels that would be owned and controlled by APS. The idea has generated some controversy, partly due to the utility's rocky relationship with the solar industry over issues like "net metering".

The plan would enable homeowners who might not otherwise qualify for solar leasing from third parties to have solar installed on their homes, although they would apparently still receive their electricity through the meter from the grid, rather than mainly from the rooftop installation. That's a very different model from most DG approaches, though under current market conditions the net benefit to consumers reportedly would match or exceed that from solar leasing.

Exelon's announcement seems aimed at a different segment of the market and is based on a very different technology. The company would finance the installation of 21 MW of Bloom Energy's fuel cell generators at businesses in several states, including California. Bloom made quite a splash when it introduced its "energy servers", including a popular segment on "60 Minutes" in 2010.

Bloom's devices, which come in models producing either 100 kW or 200 kW, are built around solid oxide fuel cells.  At that scale they are too large for individual homes but suitable for many businesses. And because they are modular, they can be combined to meet the energy needs of larger offices or commercial facilities such as data centers. Unlike the fuel cells being deployed in limited numbers of automobiles, they do not require a source of hydrogen gas. Instead they run directly on natural gas from which hydrogen is extracted ("auto-reformed") inside the box.

In that respect, despite their novel technology, Bloom's servers are much closer than rooftop solar to traditional distributed energy, in which a customer owns or leases a small generator to which it supplies fuel. The advantages of Bloom's model are that its servers are designed for highly efficient 24x7 operation, without the expensive energy storage necessary to turn solar into 24x7 power, and with much lower greenhouse gas emissions and local pollution than a diesel generator.

In order to qualify as true zero-emission energy, these installations would need to be connected to a source of biogas, e.g., landfill gas, which effectively creates a closed emissions loop or recycles emissions that would have occurred elsewhere.  Even running on ordinary natural gas, the stated emissions of Bloom's energy servers are roughly a third less than the average emissions for US grid electricity, or 20% lower than the average for other natural gas generation. However, their emissions are over 10% higher than the 2012 average for California's grid.

I find it interesting that Exelon, the largest nuclear power operator in the US and owner of a full array of utility-scale gas, coal, hydro, wind and solar power, would make a high-profile investment in a technology that could ultimately slash the demand for its large central power plants. The company has invested in utility-scale solar and wind power, and as the press release indicated, is already involved in "onsite solar, emergency generation and cogeneration" via its Constellation subsidiary. In fact, it has apparently already achieved its goal of eliminating the equivalent of its 2001 carbon footprint.  However, the press release hints that something else might have attracted them to this deal.

Consider all the changes in store for the power grid. Baseload coal power is declining due to the combination of economic forces and emissions regulations, including EPA's newly proposed Clean Power Plan. Even some nuclear power plants, which have been the workhorses of the fleet for the last several decades, are facing premature retirement for non-operational reasons. At the same time, grid operators must integrate steadily growing proportions of intermittent renewable energy (wind and solar), along with increasingly sophisticated tools like demand response and energy storage. If any of this goes wrong, electric reliability will likely suffer.

From that perspective, Exelon's small--for them--step into DG also looks like a bet on the future value of reliability--"non-intermittent...reliable, resilient and distributed power." That's a bet even an old oil trader can understand: Uncertainty creates volatility, and volatility creates opportunities. I will be very interested to see how this turns out. 

A different version of this posting was previously published on the website of Pacific Energy Development Corporation.

Wednesday, August 06, 2014

The Missing Oil Crisis of 2014

  • While the full impact of the surge in US "tight oil" may be masked by problems elsewhere, it is on the same scale--but opposite direction--as key factors that led to the 2007-8 oil price spike.
  • In that light it does not seem like hyperbole to credit the recent revival of US oil output with averting another global oil crisis.
Several speakers at last month's annual EIA Energy Conference in Washington, DC reminded the audience that energy security extends beyond oil, starting with Maria van der Hoeven, Executive Director of the International Energy Agency (IEA). In her keynote remarks Monday morning she was quick to point out that it also encompasses electricity, sustainability, and energy's effects on the climate and vice versa. Still, the comment that got my wheels turning came from Dan Yergin, author and Vice Chairman of IHS. During his lunch keynote he suggested that without US tight oil production, this year's conference would have been dominated by another oil crisis.

Although shale energy development certainly deserves to be called revolutionary, crediting it with averting an oil crisis calls for a bit of "show me." Yet with problems in Libya, Nigeria and Iraq, while Iranian oil remains under sanctions and oil demand picks up again, even at first glance Mr. Yergin's assertion looks like more than a casual, lunch-speech sound-bite.

Start with current US light tight oil (LTO) production of over 3 million barrels per day (MBD) and estimates of future LTO production rising to as much as 8 MBD--also the subject of much discussion at the conference. As recently as 2008 total US crude oil output had fallen to just 5 MBD and was only expected to recover to around 6 MBD by 2014, with minimal contribution from unconventional oil. Instead, the US is on track to beat 2013's 22-year record of 7.4 MBD, perhaps by as much as another million bbl/day.

With conventional production in Alaska and California declining or at best flat, and with Gulf of Mexico output just starting to recover from the post-Deepwater Horizon drilling moratorium and subsequent "permitorium", the net increase in US crude production attributable to LTO today is in the range of 2.5-3.5 MBD and growing, thanks to soaring output in North Dakota, Texas and other states.

That might not sound like much in a global oil market of over 90 MBD, but it brackets the IEA's latest estimate of OPEC's effective unused production capacity of 3.3 MBD. Spare capacity and changes in inventory are key measures of how much slack the oil market has at any time. When OPEC spare capacity fell below 2 MBD in 2007-8, oil prices rose sharply from around $70 per barrel to their all-time nominal high of $145 per barrel. It took a global recession and financial crisis to extinguish that price spike, and high oil prices were likely a major contributor to the recession.

Global oil inventories are now a little below their seasonal average for this time of the year. Compensating for the absence of over 3 MBD of US tight oil would require higher production elsewhere, lower demand, or a drain on those inventories that would by itself push prices steadily higher.

Concerning production, if the US tight oil boom hadn't happened, more investment might have flowed to other exploration and production opportunities. However, for non-LTO production to have grown by an extra 3 MBD, companies would have had to invest--starting in the middle of the last decade--in the projects necessary to deliver that oil now. Were that many deepwater and conventional onshore projects deferred or canceled because companies anticipated today's level of LTO production more than 5 years ago? And would Iraq, Libya and Nigeria be more reliable suppliers today if US companies hadn't been drilling thousands of wells in shale formations for the last several years? Both propositions seem doubtful.

As for adjustments in demand, US petroleum consumption is already more than 8% lower than in 2007. And as we learned in the run-up to 2008, much of the oil demand in the developing world, where it has grown fastest, is less sensitive to oil price changes than demand in developed countries, due to high levels of consumer petroleum subsidies in the former. Petroleum product prices in developed countries must therefore increase significantly in order to get consumers there to cut their usage by enough to balance tight global supplies. That dynamic played an important role in oil prices coming very close to $150 per barrel six years ago, when average retail unleaded regular in the US peaked at $4.11 per gallon, equivalent to nearly $4.50 per gallon today.

So to summarize, if the US tight oil boom hadn't happened, it's unlikely that other non-OPEC production would have increased by a similar amount in the meantime, or that OPEC would have the capability or inclination to make up the resulting shortfall versus current demand out of its spare capacity. Demand would have had to adjust lower, and that only happens when oil and product prices rise significantly. With oil already at $100 per barrel, it's not hard to imagine such a scenario adding at least $40 to oil prices--just over half the 2007-8 spike. Combined with higher net oil imports, that would have expanded this year's US trade deficit by around $230 billion. US gasoline prices today would average near $4.60 per gallon, instead of $3.54, taking an extra $140 billion a year out of consumers' pockets.
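As a sanity check, the counterfactual arithmetic above can be reproduced in a few lines. The inputs are the figures cited in this post plus two assumed 2014 volumes--roughly 5 MBD of US net petroleum imports and 8.8 MBD of gasoline demand--so treat the results as rough, not precise:

```python
# Rough reconstruction of the counterfactual arithmetic, using the
# post's figures plus assumed 2014 import and demand volumes.

BBL_TO_GAL = 42

actual_price = 100.0           # $/bbl, mid-2014
price_increase = 40.0          # assumed spike absent US tight oil
lto_shortfall_mbd = 3.0        # missing tight oil, million bbl/day
net_imports_mbd = 5.0          # assumed US net petroleum imports

# Trade deficit: more barrels imported, all at the higher price.
extra_bill_m_per_day = ((net_imports_mbd + lto_shortfall_mbd)
                        * (actual_price + price_increase)
                        - net_imports_mbd * actual_price)
annual_deficit_impact = extra_bill_m_per_day * 365 / 1000   # $ billions/yr

# Consumer impact at the pump.
gasoline_mbd = 8.8             # assumed US gasoline demand, million bbl/day
price_gap = 4.60 - 3.54        # $/gal, counterfactual vs. actual
annual_pump_cost = gasoline_mbd * 1e6 * BBL_TO_GAL * price_gap * 365 / 1e9

print(f"Added trade deficit: ~${annual_deficit_impact:.0f} billion/yr")  # ~$226B
print(f"Extra pump spending: ~${annual_pump_cost:.0f} billion/yr")       # ~$143B
```

Both results land within rounding distance of the $230 billion and $140 billion figures above.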

We can never be certain about what would have happened without the current surge in US tight oil, but for a reminder of how a similar situation was characterized just a few years ago, please Google "2008 oil crisis".  If we found ourselves in similar circumstances today, then the heated Congressional hearings and angry consumers to which Mr. Yergin alluded in his remarks would almost certainly have been major topics at EIA's 2014 conference, instead of the realistic prospect of legalized US oil exports.

A different version of this posting was previously published on the website of Pacific Energy Development Corporation.

Tuesday, July 29, 2014

Bakken Shale Gas Flaring Highlights Global Problem

  • High rates of natural gas flaring in the Bakken shale formation are symptomatic of infrastructure limitations that prevent this gas from reaching a market.
  • Although various technical options could reduce flaring from high-output well sites, none matches the benefits of developing large-scale outlets for the gas.
The Wall St. Journal recently reported on the high rate at which excess natural gas from wells in North Dakota's Bakken shale formation is burned off, or "flared."  The Journal cited state data indicating 10.3 billion cubic feet (BCF) of gas were flared there during April 2014. That represented 30% of total gas production in the state for the month.

North Dakota's governor attributed the high volume of gas flared in his state to the great speed at which the Bakken shale has been developed, outpacing gas recovery efforts. Oil output ramped up from 200,000 barrels per day five years ago to just over a million today, in a region lacking the dense oil and gas infrastructure of Texas and other states with a legacy of high production.

Nor is this situation unique to the Bakken. The World Bank has estimated that around 14 BCF of gas is flared every day, globally. Such flaring is a problem for more than governments and other mineral-rights owners that worry about missing potential royalties.  Aside from our natural aversion to waste, flaring natural gas has environmental consequences.

The tight oil produced from the Bakken shale is quite low in sulfur, and so is most of the associated gas, but some of it contains relatively high percentages of hydrogen sulfide (H2S). When that gas is flared, rather than processed, the resulting SOx emissions can affect local or even regional air quality.

Gas flaring also contributes to the greenhouse gas emissions implicated in global warming, although it must be noted that flaring is 28-84 times less climate-altering, pound for pound, than venting the same quantity of methane to the atmosphere.  When annualized, and assuming complete combustion of the gas, North Dakota's recent level of flaring equates to around 6.7 million metric tons of CO2 emissions, or nearly a fifth of total estimated US CO2 emissions from natural gas systems in 2012. That means this one source accounts for around 0.1% of total US greenhouse gas emissions, or somewhat less than US ammonia production.
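That annualized figure is easy to verify with a back-of-envelope calculation. The emissions factor below (~54.5 kg of CO2 per thousand cubic feet of pipeline-quality gas, assuming complete combustion) is an approximation, not a number from the state data:

```python
# Back-of-envelope check on the flaring CO2 estimate above.

flared_bcf_per_month = 10.3     # North Dakota, April 2014
co2_t_per_mcf = 0.0545          # approx. metric tons CO2 per Mcf burned

annual_flared_mcf = flared_bcf_per_month * 12 * 1e6      # BCF -> Mcf
annual_co2_mt = annual_flared_mcf * co2_t_per_mcf / 1e6  # million metric tons

print(f"~{annual_co2_mt:.1f} million metric tons CO2 per year")  # ~6.7
```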

Why would anyone flare gas in the first place? As the Journal pointed out, the oil produced from Bakken wells is worth significantly more than the gas, although the energy-equivalent price ratio favors oil by more like 4:1 than the 20:1 cited in the article. Still, the economics of Bakken drilling are driven mainly by oil, which can be sold at the lease and delivered by pipeline or rail, not by the associated gas. Capturing and processing that gas adds cost, and even then producers must hope capacity will be available to deliver it to a market that, in the Bakken's case, might be hundreds or thousands of miles away. The characteristics of shale wells, with their steep decline curves, raise this hurdle even higher: shale gas infrastructure at the well must pay for itself quickly, before output tails off.
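The 4:1 versus 20:1 distinction is just a matter of units. Dividing both prices by energy content puts them on a common basis; the prices below are illustrative mid-2014 values, not quotes from the article:

```python
# Energy-equivalent oil/gas price ratio, with illustrative prices.

oil_price = 100.0        # $/bbl (illustrative)
gas_price = 4.50         # $/MMBtu (illustrative Henry Hub-style price)
MMBTU_PER_BBL = 5.8      # approximate energy content of a barrel of crude

oil_per_mmbtu = oil_price / MMBTU_PER_BBL
ratio = oil_per_mmbtu / gas_price
print(f"Oil at ${oil_per_mmbtu:.2f}/MMBtu -> ~{ratio:.1f}:1 vs. gas")  # ~3.8:1
```

The 20:1 figure comes from comparing $/barrel to $/Mcf directly, ignoring that a barrel of oil contains roughly six times the energy of a thousand cubic feet of gas.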

There is no shortage of technical options for putting this gas to use, instead of flaring it. An industry conference in Bismarck, ND this spring featured an excellent presentation on this subject from the Energy & Environmental Research Center (EERC) of the University of North Dakota. Among the options listed by the presenter were onsite removal of gas liquids (NGLs), using gas to displace diesel fuel in drilling operations, and compressing it for use by local trucking or delivery to fleet fueling locations. However,  contrary to the intuition of the rancher interviewed by the Journal, none of these options would reduce high-volume flaring by more than a fraction, despite investment costs in the tens or hundreds of thousands of dollars per site.

Even in the case of the most technically interesting option, small-scale gas-to-liquids conversion to produce synthetic diesel or high-quality synthetic crude, EERC estimated this would divert only 8% of the output from a multi-well site flaring 300 thousand cubic feet per day, while requiring an investment of $250 million. And to make this option yet more challenging to implement, of the 200-plus such locations EERC identified in the state, fewer than two dozen flared consistently at that level over a six-month period. The problem moves around as older wells tail off and new ones are drilled.

Significantly reducing or eliminating natural gas flaring ultimately requires a large-scale market for the hydrocarbons being burned off. That's as true in North Dakota as in Nigeria. While various technical options could incrementally reduce gas flaring from Bakken wells, the highest-impact solutions would be those that promote market creation. That would include fast-tracking long-distance gas pipeline projects or building gas-fired power plants nearby. Absent large new customers for Bakken gas, additional regulations on flaring will either be ineffective or impede the region's strategically important oil output.

A different version of this posting was previously published on the website of Pacific Energy Development Corporation.

Friday, July 18, 2014

Condensate Pries Open the Oil Export Lid

  • A US ruling to allow limited exports of condensate, a light hydrocarbon mix similar to light crude oil, has implications for both producers and refiners, though not consumers.
  • Whether or not it leads to wider US exports of condensate and crude, it signals just how much the US energy situation has changed since the oil export ban was first imposed.
Last month we learned that the US Commerce Department gave two US companies permission to export condensate that would otherwise be trapped here under a 1970s-vintage ban on US oil exports. This validates the view, as described in a white paper from the office of Senator Lisa Murkowski (R-AK) earlier this year, that the administration has the statutory authority necessary to allow such exports. An entire session at this week's annual EIA Energy Conference was devoted to the details of this ruling, and whether it paves the way for broader exports of a growing US surplus of condensate and light sweet crude oil.

Over the past several decades US refineries invested an estimated $100 billion to enable them to process the increasingly heavy and sour crude oil types available for import. As a result, most US refineries, particularly on the Gulf and west coasts, are no longer equipped to run large volumes of the extremely light condensates and oils now coming from onshore shale deposits. Allowing producers to achieve world-market prices for their output should boost the economy and raise tax receipts, yet is unlikely to harm consumers.

Condensates are a class of hydrocarbons distinct from crude oil, though they share enough oil-like characteristics to be frequently lumped in with the latter, as in US export regulations. The technical definition of condensates encompasses both the “natural gasoline” extracted during the processing of natural gas produced from oil fields (“associated gas”) and the heaviest liquids separated from “non-associated” gas, i.e., gas from gas fields rather than oil fields.

The condensate being exported in this case comes mainly from liquids-rich shale deposits like the Eagle Ford in Texas, which produces varying proportions of dry gas, “wet” gas containing NGLs and condensate, and crude oil, depending on well location. Condensate apparently accounts for around 20-40% of Eagle Ford “tight oil” output.

Condensate mainly consists of natural gas liquids like ethane, propane and butane, along with substantial quantities of naphtha, a low-octane mix of hydrocarbons that boils in the gasoline range, plus much smaller proportions of diesel and heavier “gas oils” than would be typical of crude oil. The naphtha in condensate can sometimes be blended into gasoline, depending on its specific qualities, or processed in a refinery to yield higher-quality gasoline components.

Subsequent to the phase-out of tetraethyl lead, most gasoline from US refineries has been a blend of higher-octane naphtha produced by catalytic cracking units and the “reformate” from catalytic reforming units, with provision for further blending during distribution with up to 10% ethanol. Last month US refineries set an all-time record for gasoline production, at over 10 million barrels per day. They are unlikely to miss the naphtha exported in condensate.

Historically, the global market for condensate has had important distinctions from the broader crude oil market, based on the inherent characteristics of these liquids and the end-users seeking them. Refiners running mainly heavy oils sometimes buy condensate for blending, to lighten their average inputs and fill gaps in their processing capacities.

With the Gulf Coast now drowning in light “tight oil” from shale, this is becoming too much of a good thing, as refiners increasingly have more light material in their feedstock than their facilities can easily handle. One presenter at the EIA conference described the situation as building toward a "day of reckoning", when the discounts required to induce US refiners to process excess light crude instead of imported heavier crude would reach the level at which producers must throttle back oil production. Another expert with whom I spoke was adamant that that day of reckoning has already arrived. One result is investment in new facilities to provide minimal processing--really just distillation--for condensate.

By contrast, petrochemical producers, particularly in Asia, are expected to import growing volumes of condensate for use in the production of olefins like ethylene and propylene, and aromatics like toluene and benzene, from which to make plastics, solvents and other petrochemicals. In that market, US condensate will compete with condensate from other gas-producing nations, and with exports of refinery naphtha from Europe and elsewhere. This looks like a good opportunity for US producers.

Some advocates of lifting the ban on crude oil exports see the Commerce Department’s ruling as a precedent for allowing exports of all types of oil, or at least a good first step. However, other reports have focused on this ruling as an end-run around the export rules by redefining minimally processed condensates as a petroleum product, and thus exempt from the ban. In that view, the resulting precedent from condensates for exports of true crude oil may be weaker than that from ongoing, permitted oil exports to Canada.

Either way, allowing condensate exports is a smart move that, if continued, should ease crude congestion on the Gulf Coast and reduce the discounts that could make domestic oil less economical to produce, to the benefit of foreign suppliers. It might even push the problem beyond the current election year and enable Congress to consider normalizing all oil exports without the inhibiting effect of populist pressures at the polls. In the meantime, you can bet these condensate exports will be closely scrutinized for any noticeable effects, good or bad.

A different version of this posting was previously published on Energy Trends Insider.

Wednesday, July 09, 2014

ISIS Threatens Iraq's Oil Upside

  • Even if its threat to Iraq's oil exports can be contained, the newly asserted "Islamic State of Iraq and Syria" has altered the political risk of projects there.
  • That could hamper future production that was expected to be a major factor in meeting growing oil demand later this decade.
Last month's blitzkrieg advance of Al Qaeda spinoff ISIS in northwestern Iraq rattled global oil markets and politicians. Oil prices have risen by only a few dollars, reflecting the remoteness of the current threat from Iraq's main producing region and validating OPEC's recent characterization of the global oil market as "adequately supplied." Yet even as the rebel offensive appears to stall, the escalation of risk in Iraq and its neighbors could affect geopolitics, oil supplies and fuel prices for the rest of the decade.

Iraq currently exports around 2.7 million barrels per day (MBD) of oil, or 7% of global oil exports. It is effectively the number two producer in OPEC. Having recovered beyond pre-war levels, Iraq's oil industry has been growing, while Iran's exports are constrained by international sanctions and Libya's output has become highly erratic following that country's revolution.

In the International Energy Agency's latest Medium-Term Oil Market Report Iraq accounts for 60% of OPEC's incremental production capacity through 2019 (see chart below) and nearly a fifth of all new barrels expected to come to market in that period. This is a more conservative view of Iraq's growth potential than in previous scenarios, but it still leaves Iraqi oil, together with "tight oil" in the US and elsewhere, as the bright spots of the IEA's supply forecast.

Following ISIS's capture of Mosul in northern Iraq, the Heard on the Street column in the Wall St. Journal painted a stark picture of how the destabilization of Iraq could limit investment in the country's oil industry, truncating its expansion. That would increase longer-term oil price volatility and make investments elsewhere more attractive, not just in North American tight oil but also in energy efficiency and alternatives to oil.

Warning signs seem ample. The "Islamic State in Iraq and Syria" might never capture Baghdad or directly threaten the giant oil fields of southern Iraq that are reviving with help from international firms like BP, ExxonMobil and Shell. However, ISIS's actions in the territory they now control, and the fears they incite across a much larger swath of Iraq, are sparking renewed sectarian violence and prompting foreign companies to evacuate personnel. This undermines the IEA's medium-term forecast, which despite being "laden with downside risk" will apparently not be revised in light of recent events. It also raises the potential for jumps in nearer-term oil and petroleum product prices.

It is noteworthy that oil prices haven't gone up significantly, as they did when Libya's revolution began. From February 15 to April 15, 2011 the price of UK Brent Crude jumped 22%.  Iraq's troubles added about 5% to the Brent price, some of which has already dissipated. However, average US gasoline prices are $0.21 per gallon ahead of their level for the same week last year, in part because tensions in Iraq and elsewhere have forestalled the typical post-Memorial Day price drop.

The market's relatively muted response could change abruptly if the Iraqi military suffered further setbacks at the hands of ISIS and its allies, or if ISIS turned its attention to the oil infrastructure of central and southern Iraq. ISIS has already attacked the country's largest refinery at Baiji, north of Baghdad, and I have seen conflicting reports of its current status.

As several analysts have noted, anything that threatened the country's oil exports, most of which pass through the Gulf port of Basra, could send oil prices substantially higher. That's because other supply outages have reduced usable spare production capacity elsewhere--oil that isn't now being produced but could ramp up quickly--to less than 4 MBD, a narrower margin than in several years. Even if lost Iraqi output were made up by Saudi Arabia and the UAE, the further contraction of spare capacity would drastically increase price volatility and boost oil prices from today's level, until Iraq's exports--or Iran's--were restored.

Nor would booming domestic oil and gas-liquids production, which is surely helping to hold down global oil prices, insulate US consumers from increases at the gas pump. The oil that US refineries process and the products they sell are still priced based on the global market. If Brent crude spikes, so will US gasoline and diesel. That would have less impact on the US economy than in the past, when imports made up a much higher share of supply, but shifting money from the pockets of consumers to those of oil company shareholders is rarely popular.

An Iraq-driven oil price spike would affect politics and geopolitics, too. An unstable Iraq makes it more difficult to maintain the sanctions pressure on Iran, particularly if the US and Iran ended up coordinating their responses  in Iraq. It's even harder to envision a consensus on keeping  more than 1 MBD of Iran's oil bottled up if oil prices returned to $150/bbl.

That could also complicate the debate over exporting US crude oil, already a tough sell for politicians who came up during the era of energy scarcity. As a practical matter, if exports began while prices were rising sharply for other reasons, convincing US voters that the two factors were unrelated would be challenging. A full-blown oil crisis in Iraq or the wider Middle East would likely result in the idea being tabled for an extended period.

It's tempting to view the success of ISIS in seizing territory on both sides of the Iraq/Syria border as a temporary outgrowth of Syria's civil war. If that were the case, the situation might revert to the status quo ante, once the Iraqi army--with some outside help--mopped up ISIS.

Even if this genie could be rebottled, however, the aftermath of the Iraq War and the "Arab Spring" revolutions is exerting  great stresses on the post-World War I regional order, overlaid on 13 centuries of animosity between Sunnis and Shi'ites.  An accident of history and geology has made this area home to much of the world's undeveloped conventional onshore oil reserves. Can its stability be restored with a few deft military and diplomatic moves, or might that require a complete rethinking of boundaries and nations, as recently suggested by the foreign affairs columnist of the Washington Post?

A different version of this posting was previously published on the website of Pacific Energy Development Corporation.

Monday, June 30, 2014

EPA's CO2 Rule and the Back Door to Cap & Trade

  • Significant differences in EPA's proposed state CO2 targets for the power sector are reviving interest in cap & trade as a way to reduce compliance costs.
  • This compounds the EPA plan's controversy and raises serious concerns about how the resulting revenue would be used.
Earlier this month the US Environmental Protection Agency released for comment its proposal for regulating the CO2 emissions from existing power plants. It follows EPA’s emissions rule for new power plants published late last year but takes a different, more expansive approach. If implemented, the “Clean Power Plan” would reduce US emissions in the utility sector by around 25% below 2005 levels by 2020 and 30% by 2030.

One of its most surprising features is that instead of setting emissions standards for each type of power plant or mandating a single, across-the-board emissions-reduction percentage, it imposes distinct emissions targets on each state. Based on analysis by Bloomberg New Energy Finance, some states could actually increase emissions, while others would be required to make deep cuts. The resulting disparities have apparently triggered new interest in state and regional emissions trading as a means of managing the rule’s cost.

Although emissions trading has become more controversial in recent years, it proved its worth in holding down the cost of implementing previous environmental regulations, such as the effort to reduce sulfur pollution associated with acid rain. It works by enabling facilities or companies with lower-than-average abatement costs to profit from maximizing their reductions and then selling their excess reductions to others with higher costs. The desired overall reductions are thus achieved at a lower cost to the economy than if each company or facility were required to reduce its emissions by the same amount.
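A toy example makes the cost-saving mechanism concrete. The two plants and their abatement costs below are hypothetical:

```python
# Why trading beats a uniform mandate: two hypothetical plants must
# jointly cut 100 tons. Plant A abates at $20/ton, Plant B at $50/ton.

cost_a, cost_b = 20.0, 50.0   # marginal abatement cost, $/ton
required_cut = 100.0          # tons, combined obligation

# Uniform mandate: each plant cuts its own 50 tons.
uniform_cost = 50 * cost_a + 50 * cost_b                 # $3,500

# Trading: A cuts all 100 tons and sells 50 credits to B at a price
# between their costs, say $30/ton.
trade_price = 30.0
total_cost = required_cut * cost_a                       # $2,000 to society
a_net = required_cut * cost_a - 50 * trade_price         # A: $500 vs. $1,000
b_net = 50 * trade_price                                 # B: $1,500 vs. $2,500

print(f"Uniform mandate: ${uniform_cost:.0f}; with trading: ${total_cost:.0f}")
```

The same 100 tons are abated either way, but trading shifts the work to the cheap abater and leaves both plants better off than under the uniform mandate.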

Although the Clean Power Plan doesn’t require that states establish such emissions trading markets, its lengthy preamble includes a discussion of existing state greenhouse gas “cap-and-trade” markets in California and the Northeast. It also points out that measures to comply with the new rule may generate benefits in the markets for conventional pollutants, including those for the recent cross-state pollution rule. Administrator McCarthy also mentioned the benefits of multi-state markets in her speech announcing the new rule.

A patchwork of cap and trade markets across the US, including the addition of new states to mechanisms like the Regional Greenhouse Gas Initiative (RGGI), might help mitigate some of the cost of complying with 50 different CO2 targets. However, it would still be a far cry from the kind of economy-wide, comprehensive CO2 cap-and-trade system once contemplated by the US Congress.

Cap and trade was an idea that had gained significant momentum and even begun to appear inevitable, prior to the onset of the financial crisis in 2008. To supporters, it looked like a better way to limit and eventually cut greenhouse gas emissions than through command-and-control regulations. And the price it would establish for emissions would be based on the cost of achieving a desired level of reductions, rather than being set arbitrarily, as a carbon tax would be, without any guarantee of actual emissions reductions. Opponents viewed it as an unnecessary or unnecessarily complicated drag on the economy and a tax by another name, coining the pejorative term “cap-and-tax”.

Although early US cap-and-trade bills were bipartisan, including one co-sponsored by Senator McCain, the 2008 Republican Presidential nominee, the debate over cap and trade took on an increasingly partisan tone in a period of widening polarization on most major issues. The Waxman-Markey climate bill, with cap and trade as a major provision, was narrowly passed when Democrats controlled the House of Representatives in 2009, but various Senate versions failed to attract sufficient support, even when Democrats held a filibuster-proof supermajority in that body. The chances of enacting cap and trade legislation effectively died when a Republican won the vacant Senate seat for Massachusetts in January 2010. However, viewing this as a purely partisan divide is simplistic, at best.

Aside from opposition by key Senate Democrats, including one whose campaign included a vivid demonstration of his stand against Waxman-Markey, the versions of "cap and trade" debated in 2009 and 2010 bore little resemblance to the original idea. Waxman-Markey was a 1,400-page monstrosity, laden with extraneous provisions and pork. Its embedded allocation of free allowances strongly favored the same electricity sector now being targeted by EPA's Clean Power Plan, at the expense of transportation energy, for which low-carbon options remain fewer and more costly. It would have created a de facto gasoline tax, while yielding fewer net emissions reductions than a system with a level playing field. Subsequent bills, such as the Kerry-Lieberman bill in 2010, took this a step further, removing transportation fuels from cap and trade and effectively taxing them at a rate based on the price of emissions credits.

Along the way, national CO2 cap-and-trade legislation evolved from a fairly straightforward way to harness market forces to deliver the cheapest emissions cuts available, to a mechanism for raising and redistributing large sums of money outside the tax code. In some cases that would have been done directly, such as in the gratifyingly brief Cantwell-Collins “cap-and-dividend” bill, or as indirectly and inefficiently as in Waxman-Markey. It’s no wonder the whole idea became toxic at the federal level.

Although emissions trading for greenhouse gas reduction came up short in the US Congress, it took hold elsewhere. The EU’s Emissions Trading System (ETS) is an outgrowth of the Kyoto Protocol’s emissions trading mechanism, which was included largely at the urging of the US delegation to the Kyoto climate conference in 1997. The ETS is focused on the industrial and power sectors and covers 43% of EU emissions. It has experienced significant ups and downs over the sale and allocation of emissions credits.

Cap and trade also emerged as a preferred approach for some US states seeking to reduce their emissions. California's emissions market was established under the Global Warming Solutions Act of 2006 (A.B. 32), and RGGI currently facilitates trading among nine mostly northeastern states. The relatively low prices of emissions allowances in these systems–particularly in RGGI, which has traded in the range of $3-$5/ton of CO2–suggest that they may still be capturing low-hanging fruit in the early phases of steadily declining emissions caps. Their effectiveness at facilitating future low-cost emissions cuts is hard to gauge, because these markets don't exist in a vacuum.

Except for Vermont, all of the states involved have renewable electricity mandates that by their nature deliver more prescriptive emissions cuts. These markets have also been implemented in a generally weak US economy, which has constrained energy demand, and against the backdrop of the shale revolution, which has yielded significant non-mandated emissions reductions. Nor have these state and regional approaches to cap and trade entirely avoided the debates over how to spend their substantial proceeds that plagued federal cap-and-trade legislation.

For many years my view of cap and trade was that if we needed to put a price on GHG emissions, this was a better, more efficient option than an arbitrary carbon tax, or other top-down method. My experience analyzing more recent “cap-and-trade” legislation left me with serious doubts about our ability to implement a fair and effective national cap-and-trade market for CO2 and other greenhouse gases within the current political environment. Whether on a unified basis or in aggregate across many smaller systems, the enormous sums it could eventually generate are simply too tempting to expect our legislators and government agencies to administer even-handedly.

Whatever its potential benefits and pitfalls, I can’t help seeing cap and trade as a distraction in the context of the EPA’s proposed Clean Power Plan. Even at its most efficient, cap and trade couldn’t render painless the wide disparities of a plan that would require Arizona to cut emissions per megawatt-hour by more than half, and states like Texas and Oklahoma to cut by 36-38%, while Kansas, Kentucky, Missouri, Montana and even California cut by less than a quarter–and under some scenarios might even increase their overall emissions. Cap and trade would merely be a footnote on the scale of transformation the EPA’s plan envisions for the US electricity sector.

A different version of this posting was previously published on Energy Trends Insider.

Thursday, June 19, 2014

EPA's New CO2 Rules Create Opportunities for Natural Gas, for Now

  • EPA's proposed rule for reducing CO2 emissions from power plants could increase natural gas demand in the utility sector by as much as 50%, at the expense of coal.
  • Cutting emissions by regulation rather than legislation entails legal and political uncertainties that could hamper the investment necessary to meet EPA's targets.
Earlier this month the Environmental Protection Agency announced its proposal for regulating greenhouse gas emissions from existing US fossil-fueled power plants. Unsurprisingly, initial assessments suggested it favors the renewable energy, energy efficiency and nuclear power industries--and especially natural gas--all at the expense of coal. However, the longer-term outcome is subject to significant uncertainties, because of the way this policy is being implemented.

EPA's proposed "Clean Power Plan" regulation would reduce CO2 emissions from the US electric power sector by 25% by 2020 and 30% by 2030, compared to 2005. Although it does not specify that the annual reduction of over 700 million metric tons of CO2--half of which had already been achieved by 2012--must all come from coal-burning power plants, such plants accounted for 75% of 2012 emissions from power generation.
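
As a quick sanity check on the size of that reduction, the arithmetic works out using an approximate 2005 power-sector baseline (my round figure, not one from EPA's documents):

```python
# Back-of-the-envelope check (approximate figures): US power-sector CO2
# emissions in 2005 were roughly 2,400 million metric tons.
baseline_2005_mt = 2400            # Mt CO2/yr, assumed baseline
cut_2030_mt = 0.30 * baseline_2005_mt
print(round(cut_2030_mt))          # 720, consistent with "over 700 million tons"
```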

It's worth recalling how we got here. In the last decade the US Congress made several attempts to enact comprehensive climate legislation, based on an economy-wide cap on CO2 and a system of trading emissions allowances: "cap and trade." In 2009 the House of Representatives passed the Waxman-Markey bill, with its rather distorted version of cap and trade. It died in the US Senate, where the President's party briefly held a filibuster-proof supermajority.

The Clean Power Plan is the culmination of the administration's efforts to regulate the major CO2 sources in the US economy, in the absence of comprehensive climate legislation. Although Administrator McCarthy touted the flexibility of the plan in her enthusiastic rollout speech and suggested that its implementation might include state or regional cap and trade markets for emissions, the net result will look very different from an economy-wide approach.

For starters, there won't be a cap on overall emissions, but rather a set of state-level performance targets for emissions per megawatt-hour generated in 2020 and 2030. If electricity demand grew 29% by 2040, as recently forecast by the Energy Information Administration of the US Department of Energy, the CO2 savings in the EPA plan might even be largely negated. EPA is banking on the widespread adoption of energy efficiency measures to avoid such an outcome.
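
The arithmetic behind that concern is straightforward: a rate-based target can be offset by demand growth, because absolute emissions are the product of the emissions rate and total generation. Applying the 30% rate reduction and EIA's 29% demand-growth forecast together:

```python
# Illustrative sketch: how demand growth erodes a rate-based CO2 target.
rate_cut = 0.30        # 30% lower CO2 per MWh (the 2030 target)
demand_growth = 0.29   # EIA's forecast demand growth to 2040

relative_emissions = (1 - rate_cut) * (1 + demand_growth)
print(f"absolute emissions vs. baseline: {relative_emissions:.2f}")  # 0.90
```

In other words, under these assumptions the 30% cut in emissions per megawatt-hour would shrink to roughly a 10% cut in absolute emissions, which is why EPA is counting on efficiency to restrain demand.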

Since we have many technologies for generating electricity, with varying emissions all the way down to nearly zero, many different future generating mixes could achieve the plan's goals, though not at equal cost or reliability. Ironically, since coal's share of power generation has declined from 50% in 2005 to 39% as of last year, it could be done by replacing all the older coal-fired power plants in the US with state-of-the-art plants using either ultra-supercritical pulverized coal combustion (USC) or integrated gasification combined cycle (IGCC).

That won't happen for a variety of reasons, not least of which is EPA's "New Source Performance Standards" rule published last November. That rule effectively requires new coal-fired power plants to emit around a third less CO2 than today's most efficient coal plant designs. That's only possible if they capture and sequester (CCS) at least some of their emissions, a feature found in only a couple of power plants now under construction globally.

It's also questionable how the capital required to upgrade the entire US coal generating fleet could be raised. Returns on such facilities have fallen, due to competition from shale gas and from renewables like wind power with very low marginal costs--sometimes negative after factoring in tax credits. Some interpret EPA's aggressive CO2 target for 2020, and the relatively milder step to 2030, as an indication that the latter target could be made much more stringent later.

So while coal is likely to remain an important part of the US power mix in 2030, as the EPA's administrator noted, meeting these goals in the real world will likely entail a significant shift from coal to gas and renewable energy sources, while preserving roughly the current nuclear generating fleet, including those units now under construction.

If the entire burden of the shift fell to gas, it would entail increasing the utilization of existing natural gas combined cycle power plants (NGCC) and likely building new units in some states. In the documentation of its draft rules, EPA cited average 2012 NGCC utilization of 46%. Increasing utilization to 75% would deliver over 600 million additional MWh from gas annually--a 56% increase over total 2013 gas-fired generation, exceeding the output of all US renewables last year--at an emissions reduction of around 340 million metric tons vs. coal. That would be just sufficient to meet the 30% emissions reduction target for the electricity demand and generating mix we had in 2013.
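
That utilization arithmetic can be reconstructed in a short, hedged sketch. The NGCC fleet capacity (~240 GW) and the per-MWh emissions rates for coal (~1.0 t) and NGCC gas (~0.43 t) are my assumptions, chosen to approximate the figures above, not values taken from EPA's documents:

```python
# Rough reconstruction of the NGCC utilization arithmetic (assumed inputs).
HOURS_PER_YEAR = 8760
ngcc_capacity_gw = 240            # assumed US NGCC fleet capacity
extra_utilization = 0.75 - 0.46   # raising utilization from 46% to 75%

# Additional generation from running the existing fleet harder
extra_twh = ngcc_capacity_gw * HOURS_PER_YEAR * extra_utilization / 1000

# CO2 displaced by substituting gas CC for coal, per MWh
coal_t_per_mwh, gas_t_per_mwh = 1.0, 0.43
co2_saved_mt = extra_twh * (coal_t_per_mwh - gas_t_per_mwh)

print(f"extra generation: {extra_twh:.0f} TWh")  # 610
print(f"CO2 displaced: {co2_saved_mt:.0f} Mt")   # 348
```

Both results land close to the "over 600 million MWh" and "around 340 million metric tons" in the text, which suggests assumptions in this neighborhood underlie the original estimate.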

The incremental natural gas required to produce this extra power works out to about 4.4 trillion cubic feet (TCF) per year. That would increase gas consumption in the power sector by just over half, compared to 2013, and boost total US gas demand by 17%. To put that in perspective, US dry natural gas production has grown by 4.1 TCF/y since 2008.
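
The gas-volume estimate follows from the extra generation via a heat rate. The heat rate and gas heat content below are assumed values that approximate the ~4.4 TCF figure, not numbers from EPA; the 2013 consumption baselines are likewise approximate:

```python
# Hedged reconstruction of the incremental gas-volume estimate.
extra_mwh = 610e6               # incremental NGCC generation (prior paragraph)
heat_rate_btu_per_kwh = 7500    # assumed average NGCC heat rate
btu_per_cf = 1030               # assumed heat content of pipeline gas

extra_tcf = extra_mwh * 1000 * heat_rate_btu_per_kwh / btu_per_cf / 1e12
print(f"incremental gas: {extra_tcf:.1f} TCF/yr")                         # 4.4

power_sector_2013_tcf = 8.2     # approximate 2013 power-sector gas use
total_us_2013_tcf = 26.1        # approximate total 2013 US gas consumption
print(f"power-sector increase: {extra_tcf / power_sector_2013_tcf:.0%}")  # 54%
print(f"total US increase: {extra_tcf / total_us_2013_tcf:.0%}")          # 17%
```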

EPA apparently anticipates power sector gas consumption increasing by just 1.2 TCF/y by 2020, and falling thereafter as end-use efficiency improves. Fuel-switching is only one of the four "building blocks" of the Best System of Emission Reduction that EPA envisions states using; the others are efficiency improvements at existing power plants, increased penetration of renewable generation, and demand-side efficiency measures. The ultimate mix will vary by state and be influenced by changes in gas, coal and power prices.

I mentioned uncertainties at the beginning of this post. Aside from the inevitable legal challenges to EPA's regulation of power plant CO2 under the Clean Air Act, its imposition by executive authority, rather than legislation, leaves future administrations free to strengthen, weaken, or even abandon this approach.

Since EPA's planned emission reductions from the power sector are large on a national scale (10% of total US 2005 emissions) but still small on a global scale (2% of 2013 world emissions), their long-term political sustainability may depend on the extent to which they succeed in prompting the large developing countries to follow suit in reducing their growing emissions.
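
Those two percentages can be checked against approximate emissions totals. The US and world baselines below are my round figures, not from the EPA rule:

```python
# Quick check of the national vs. global scale of the reduction.
power_cut_mt = 700        # planned annual cut, Mt CO2 (from the text)
us_2005_total_mt = 7200   # approximate total US 2005 GHG emissions, Mt CO2e
world_2013_mt = 35000     # approximate 2013 world CO2 emissions, Mt

print(f"share of US 2005 emissions: {power_cut_mt / us_2005_total_mt:.0%}")  # 10%
print(f"share of 2013 world emissions: {power_cut_mt / world_2013_mt:.0%}")  # 2%
```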

A different version of this posting was previously published on the website of Pacific Energy Development Corporation.