Friday, June 29, 2012
A new forecast of global oil production by the end of the decade attracted a fair amount of attention this week. The study, from Harvard's Kennedy School of Government, indicates that oil production could expand by about 20% from current levels by 2020. The Wall St. Journal's Heard on the Street column cited this in support of the view that the influence of "peak oil" on the market has itself peaked and fallen into decline. I was particularly intrigued by a scenario suggested in MIT's Technology Review that this wave of new oil supplies could trigger an oil price collapse similar to the one in the mid-1980s that helped roll back the renewable energy programs begun during the oil crises of the 1970s. That's possible, though I'm not sure it should be the biggest worry that manufacturers of wind turbines and solar panels have today.
The Harvard forecast is based on a detailed, risked, country-by-country assessment of production potential, with the bulk of the projected net increase in capacity--from today's level of around 93 million barrels per day (MBD) to just over 110 MBD--coming from four countries: Iraq, the US, Canada and Brazil. However, the study's lead author, former Eni executive Leonardo Maugeri, sees broad capacity growth in nearly all of today's producing countries, except for Iran, Mexico, Norway and the UK. Although this is a view of oil's trajectory diametrically opposed to the one promoted by advocates of the peak oil viewpoint, it is accompanied by the customary caveats about political and other risks, along with new concerns about environmental push-back. The latter point is particularly important, since much of the expansion rests on what Mr. Maugeri refers to as the "de-conventionalization of oil supplies": the growth of unconventional output from heavy oil, oil sands, Brazil's "pre-salt" oil, and the "tight oil" that has reversed the US production decline.
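For those who like to check the arithmetic, a quick back-of-envelope calculation--a sketch in Python, using only the round figures quoted above--confirms the headline growth rate and shows how modest it is on an annualized basis:

```python
# Back-of-envelope check of the Harvard forecast's headline growth,
# from ~93 million barrels per day (MBD) today to just over 110 MBD by 2020.
current_mbd = 93.0
forecast_mbd = 110.0

growth = (forecast_mbd - current_mbd) / current_mbd
print(f"Implied capacity growth: {growth:.1%}")        # ~18%, i.e. "about 20%"

years = 8   # assumption: 2012 through 2020
cagr = (forecast_mbd / current_mbd) ** (1 / years) - 1
print(f"Compound annual growth: {cagr:.2%} per year")  # ~2.1%/year
```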
Although this de-conventionalization trend is very real, it's one thing to envision a shift to an environment in which oil supplies could accommodate, rather than constrain, global economic growth; it's another to see these new supplies bringing about an oil price collapse. It's helpful in this regard to consider the three previous oil-price collapses that we've experienced in the last several decades. The mid-1980s collapse is the one that Kevin Bullis of Technology Review seems to have latched onto, because much like today's expansion of unconventional oil, the wave of new non-OPEC production that broke OPEC's hold on the market was the direct result of the sharp oil price increases of the previous decade, after allowing for inherent development time lags. The analogy to this period looks even more interesting if the new Administrator of the Energy Information Administration (EIA) at the Department of Energy is correct in speculating that the US government might be willing to allow exports of light sweet crude from the Bakken, Eagle Ford and other shale plays, to enable Gulf Coast refineries to continue to run the imported heavy crudes for which they have been optimized at great expense. That could dramatically alter the dynamics of the global oil market.
However, I see two significant differences in the circumstances of the 1980s price collapse, compared to today. First, oil consumption was then dominated by a small number of industrialized countries, the economies of which were still much more reliant on oil for economic growth than they are today. Second, these economies were already emerging from the major recession of the late 1970s and early '80s--a downturn in which the 1970s' energy price spikes played a leading role. For example, US GDP grew at an annual rate of 7.2% in 1984, the year before oil prices began their slide from the high $20s to mid-teens per barrel. So when new supplies from the North Slope and North Sea came onstream, the market was ready and eager to use them. Lower, relatively stable oil prices persisted for more than a decade.
Current global economic conditions have much more in common with the late-1990s Asian Economic Crisis or with the combined recession and financial crisis from which we're still emerging. Each of those situations included a short-lived global oil price collapse that ended when OPEC constrained output and the economy moved past the point of sharpest contraction. The late-1990s oil price collapse looks especially relevant today, because increased production contributed to it.
A new factor that would tend to make any oil-price slump due to unconventional oil self-limiting is its relatively high cost. Mr. Maugeri makes it clear that his output forecast depends on prices remaining generally above $70/bbl, and that any drop below $50-60/bbl would result in curtailed investment and slower expansion. The picture that this paints for me is one in which new oil supplies would be there if we need them to meet growing demand but not otherwise. That should narrow the implications of such an expansion for renewable energy.
As Mr. Bullis reminds his readers, the connection between oil and renewable energy is much more tenuous than many of the latter's proponents imagine. The US gets less than 1% of its electricity supply from burning oil, so technologies like wind and solar power simply have no bearing on oil consumption, and vice versa. That is less true outside the US, but the trends there are also moving in this direction. So other than for biofuels, a steep drop in oil prices for any reason would have little impact on the rationale for renewables, except perhaps psychologically. The two factors on which renewable energy investors and manufacturers should stay focused are the economy and the price of natural gas, against which renewables actually do compete--and have generally been losing that battle recently.
Time will tell whether the Harvard oil production forecast turns out to be more accurate than other, more pessimistic views. Yet while a drop in oil prices due to expanding supply wouldn't do any good for renewables, the single biggest risk the latter face is the same one that would be likeliest to trigger a major oil price collapse: not surging unconventional oil output, the impact of which OPEC will strive hard to manage, but a return to the kind of weak economy and frozen credit that we should all be able to recall vividly. If anything, the consequences for renewables from that risk look much bigger today than a couple of years ago, because of the global overcapacity in wind turbine and solar panel manufacturing that built up as the industry responded to policy-induced irrational exuberance in several key markets.
Wednesday, June 27, 2012
Does All-of-the-Above Energy Include Long Shots?
An article in Tuesday's Washington Post described the current funding woes of US research into nuclear fusion, focused on anticipated budget and job cuts at the Princeton Plasma Physics Laboratory, MIT and several other sites. Aside from the general challenge of funding all of the Department of Energy's programs at a time of huge federal deficits and ballooning debt, it appears that domestic fusion research is being cut mainly to meet our commitments to the International Thermonuclear Experimental Reactor (ITER) being built in France. The article goes on to suggest that fusion has been excluded from the list of "all-of-the-above" energy technologies that the administration has embraced. That raises questions that would merit attention at any time but seem particularly relevant in an election year.
Before discussing its proper priority in US federal energy research and planning, it's important to recognize, as the article does, that fusion is very much a long-shot bet. We know that nuclear fusion works, because it's the process that powers our sun and all the stars. However, that doesn't guarantee that we can successfully harness it safely here on earth for our own purposes. I've heard plenty of energy experts who think that the only fusion reactor we need is the one 93 million miles away, which remains the ultimate source of nearly all the BTUs and kilowatt-hours of energy we use, except for those from nuclear (fission) power plants and geothermal energy.
Unfortunately, the challenges of harnessing the sun's energy bounty in real time, rather than via the geologically slow processes that produced fossil fuels or the faster but still ponderous growing cycles of biofuels, are distinctly non-trivial--hence the debate about whether and how to overcome the intermittency and cyclicality of wind and solar power through optimized dispersal, clever use of Smart Grid technology, or with energy storage that requires its own breakthroughs if it is to be an economical enabler of wind or solar. A working fusion reactor would provide an end-run around all those problems and fit neatly into our current centralized power grid, with what are expected to be negligible emissions and long-term waste. Who wouldn't want that?
Of course fusion power isn't easy, either; it's the definition of difficult. Scientists around the world have been chasing it for at least five decades. I recall eagerly reading about its potential when I was in my early teens. Then, it was seen to be 30-40 years from becoming commercial, and that's still a reasonable estimate, despite significant progress in the intervening decades. I admit I don't follow fusion research nearly as closely as I used to, in all its permutations of stellarators, tokamaks, laser bombardment chambers and other competing designs, all pursuing the elusive goal of "net energy"--getting more energy back than you must put into achieving the temperatures and pressures necessary to fuse the chosen hydrogen isotopes.
So where does a high-risk, high-reward investment like fusion fit into the concept of all-of-the-above energy that now dominates the energy debate on both sides of the political aisle, and in the trade-offs that must accompany any serious energy strategy or plan for the US? After all, "all of the above" is an attempt to recognize the widely differing states of readiness of our various energy options, the time lags inherent in replacing one set of sources with another, and the need to continue to supply and consume fossil fuels during our (long) transition away from them. While I've never seen an official list of what's in and what's out, my own sense of all of the above is that it's composed of technologies that are either commercial today or that have left the laboratory but still require improvement and scaling up to become commercial. In contrast, fusion hasn't left the lab and it's not clear when or if it will, at least on a timescale that's meaningful either for energy security or climate change mitigation. No one can tell us when the first fusion power plant could be plugged into the grid, and every attempt at predicting that has slipped, badly.
Fusion wasn't mentioned once in the Secretary of Energy's remarks to Congress concerning the fiscal 2013 Energy Department Budget, and it was only shown as a line item in his latest budget presentation. Yet I can't think of any other new technology that's customarily included in all of the above that has even a fraction of fusion's potential for delivering clean energy in large, centralized increments comparable to today's coal or nuclear power plants. We could spend all day arguing whether that's as desirable now (or in the future) as it was just a few years ago, but from my perspective it contributes to the option value of fusion. No one would suggest fusion as a practical near-term alternative, but with the prospect of a shale-gas bridge for the next several decades, it might be an important part of what we could be bridging towards.
Overall, the DOE has budgeted just under $400 million for fusion R&D in fiscal 2013, out of a total budget request of $27 billion. That's not insignificant, and devoting 1.5% of the federal energy budget to fusion might be about the right proportion for such a long-term endeavor that is decades from deployment, relative to funding for medium-term efforts like advanced fission reactors and near-term R&D on renewables and efficiency. The problem is that DOE is cutting deeply into US fusion capabilities, not just at Princeton but also at Lawrence Berkeley Laboratory, Livermore, Los Alamos and Sandia, in order to boost US funding for ITER from $105 million to $150 million next year. Only the fusion budgets for Oak Ridge Laboratory, which is managing the US role in ITER, and for the D.C. HQ grew.
I'm certainly not against international cooperation in science, which has become increasingly important as the costs of "big science" projects expand. However, even if ITER represented the very best chance to take fusion to the next level on its long path to deployment, the long-term implications of these cuts for US fusion science capabilities look significant. As with the space program, once the highly trained and experienced fusion workforce and teams are laid off and broken up, it becomes enormously difficult to reconstitute them, if needed. This is particularly true of those with advanced degrees in fields that have declined in popularity at US universities, or for which the majority of current graduates are non-US students who will return to their countries of origin in search of better opportunities. I wouldn't support keeping these programs going just to provide guaranteed employment for physicists, but we had better be sure that we won't need them later. I am skeptical that we can be sufficiently certain today of the likely deployment pathways for fusion to be able to make such an irreversible decision with confidence.
I understand that in times like these we must make tough choices; that's the essence of budgeting. I'm also sympathetic to those who might think that fusion researchers have had ample time and support to deliver the goods already. Yet I can't help being struck by the contradiction of a DOE budget in which US R&D for such a long-term, high-potential technology is cut, at the same time that Secretary Chu and the President are pushing hard for multi-billion dollar commitments to extend the Production Tax Credit for renewable energy and reinstate the expired 1603 renewable energy cash grant program--a substantial portion of whose past benefits went to non-US manufacturers and project developers. The total 2013 budget cuts for the US fusion labs are equivalent to the tax credits for a single 90 MW wind farm, which would contribute less than 0.01% of annual US power generation. Although we clearly can't fund every R&D idea to the extent researchers might wish, I believe it is a mistake to funnel so much money--about 40% of which must be borrowed--into perpetual support for the deployment of relatively low-impact and essentially mature technologies like onshore wind, when the same dollars would go much farther on R&D.
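For readers who want to see where those comparisons come from, here is the rough arithmetic, sketched in Python. The capacity factor, PTC rate and total US generation figures are my own round-number assumptions for 2012, not figures from the budget documents:

```python
# Rough arithmetic behind the comparisons above (round numbers only).
fusion_rd = 0.4e9            # ~$400 million fusion R&D request for FY2013
doe_budget = 27e9            # ~$27 billion total DOE request
print(f"Fusion share of DOE budget: {fusion_rd / doe_budget:.1%}")    # ~1.5%

# One 90 MW wind farm, assuming a ~30% capacity factor and the 2012
# Production Tax Credit of about $22/MWh (2.2 cents/kWh), over its 10-year term:
capacity_mw = 90
capacity_factor = 0.30               # assumption
ptc_per_mwh = 22                     # 2012 PTC rate, $/MWh
annual_mwh = capacity_mw * capacity_factor * 8760
ptc_10yr = annual_mwh * ptc_per_mwh * 10
print(f"10-year PTC value: ${ptc_10yr / 1e6:.0f} million")            # ~$52 million

# Share of total US generation, assuming ~4,100 TWh/year:
us_generation_mwh = 4.1e9
print(f"Share of US generation: {annual_mwh / us_generation_mwh:.4%}")  # ~0.006%
```

On these assumptions the ten-year credit comes to roughly $50 million, in the same neighborhood as the $45 million being shifted from the domestic fusion labs to ITER.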
Wednesday, June 20, 2012
Does Energy-Related Drilling Trigger Earthquakes?
Last week the National Research Council published a comprehensive study of the seismic hazards and risks of a variety of energy-related drilling activities. Despite widely publicized reports of drilling-related quakes in Ohio and Arkansas, the report concluded that such events are very rare, compared to both the total number of wells drilled and to naturally occurring earthquakes. Nor are the technologies with the highest rates of induced seismicity necessarily the ones that come first to mind. Rather than ignoring these risks because of their rarity, the committee of university and industry experts that produced the report recommended the development of new protocols for monitoring and managing these risks, as well as further research into the potential for induced seismicity from emerging technologies like carbon capture and storage (CCS).
The study encompassed four categories of energy-related drilling: oil & gas exploration and production, geothermal energy, liquid disposal wells, and CCS. Within oil & gas, they looked at conventional production and "enhanced recovery", along with hydraulic fracturing or "fracking"; the latter two techniques involve pumping water or some other fluid into a reservoir to stimulate production. For geothermal, they considered conventional geothermal--both liquid- and vapor-dominated reservoirs--and "enhanced" or engineered geothermal systems, which pump fluid into hot, dry rock to extract useful heat. They found recorded seismic events in all categories and sub-categories, though again the numbers are small, particularly for quakes large enough to cause damage: fewer than 160 recorded events globally over magnitude 2.0 within a period of about 30 years, from a well population in the millions and against a natural annual background of 1.4 million small earthquakes of 2.0 or greater and more than 14,000 larger quakes of 4.0 or greater.
In assessing the incidence of seismic events attributed to or suspected to have been caused by energy activities, the committee set a threshold for what they called "felt seismic events". This is crucial, because all of these technologies routinely cause minuscule events--"microseisms"--that can be detected by a seismometer in close proximity, but would go unnoticed by anyone standing on the surface. Magnitude 2.0 seems to be the lowest-level event likely to be felt by an observer in the vicinity, while an event of 4.0 would be accompanied by more shaking over a larger area, and thus felt by many more people. Having grown up in earthquake country, I can attest to this. Anything below about 4.0 would often be mistaken for a train or large truck passing by, while most damage was due to quakes of 5.0 or greater. For comparison, last year's quake in Mineral, VA that affected the Washington Monument and National Cathedral registered 5.8. Only about a dozen of the induced seismic events included in the study were larger than that.
It's important to note that the mechanisms by which various energy-related drilling and injection processes trigger felt seismic events are fairly well understood. Scientists and engineers have known since the 1920s that human activities can trigger quakes, and the geosciences have advanced enormously since then. The main contributing factors identified in the report were the effect of fluid injection on increasing the pressure in the pores of subsurface rocks near faults, along with the "net fluid balance", which they defined as the "total balance of fluid introduced into or removed from the subsurface." As a result of these factors, drilling approaches in which the net fluid balance isn't materially altered, such as in waterflood enhanced oil recovery, or for which the changes are short-lived, as in hydraulic fracturing, tend to have very low rates of inducing felt seismic events. In particular, the study found only one documented felt seismic event, of magnitude 2.8, attributable to shale fracking, out of 35,000 fracked shale gas wells.
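To put the committee's "very rare" in perspective, here is a quick sketch in Python of the rates implied by the figures quoted above:

```python
# Felt-event rates implied by the report's figures, as quoted above.
fracked_wells = 35_000
felt_events = 1
print(f"Felt events per fracked shale gas well: {felt_events / fracked_wells:.6f}")
# -> 0.000029, i.e. roughly 0.003% of wells

# Induced events vs. the natural background:
induced_m2_events = 160          # fewer than 160 globally, M > 2.0, over ~30 years
period_years = 30
natural_m2_per_year = 1_400_000  # natural quakes of M 2.0 or greater per year
share = (induced_m2_events / period_years) / natural_m2_per_year
print(f"Induced events as a share of the natural background: {share:.8f}")
# -> about 0.0000038, a few per million
```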
By contrast, liquid disposal wells, which steadily increase subsurface pore pressure over time, along with several types of geothermal production, exhibit somewhat higher rates of felt seismic events, though these are still relatively rare and generally minor in impact. At least theoretically, CCS seems to have a somewhat higher potential for causing seismic events, although this has apparently not been manifested in the substantial number of wells injecting CO2 for enhanced oil recovery--cited in the report as 13,000 as of 2007 and many more today. Surprisingly, the largest quakes attributed to human activities were associated with conventional oil production, including a couple of magnitude-6+ quakes in California and one measuring 7.3 in Uzbekistan.
One of the most interesting findings in the report was that there is no single government agency in the US with jurisdiction over induced seismic events associated with energy production. Responsibility--and capabilities--appear to straddle the Environmental Protection Agency, US Geological Survey, Forest Service and Bureau of Land Management, along with various state agencies. The committee proposed the development of new coordination mechanisms to address these events, as distinct from the ad hoc cooperation that has taken place to date.
I'm not sure what policy makers--the report was commissioned by the Chairman of the Senate Energy and Natural Resources Committee--and the public will make of these findings. At least from a statistical perspective the technologies assessed here look safe in terms of their seismic risks, and it would be hard to justify sweeping new regulations on the basis of this report. (I don't know how practical the "traffic light" monitoring system the authors propose would be.) On the other hand, with the exception of a few people in naturally quake-prone areas--including one neighbor back in California who thinks they are "fun"--earthquakes are fear-inducing, in both anticipation and experience. Arriving at a consensus on how low a risk of felt seismic events is acceptable might not be easy, especially where natural earthquakes are rare. Although the public's appetite for reassurance seems to be fairly low these days, it's clear that the National Research Council, an arm of the private, non-profit National Academies chartered by Congress during the Lincoln administration, sees no reason to panic about the seismic hazards and risks entailed in energy-related drilling.
Friday, June 15, 2012
Politics and The Global Cleantech Shakeout
For all the enthusiastic comparisons of the cleantech sector to infotech or microelectronics that we've encountered in the last decade, one rarely employed analogy is turning out to be more apt than the rest: Cleantech seems just as capable as dot-coms and chip makers of undergoing an industry shakeout and consolidation at the same time it experiences growth rates that most other industries would envy. US and European solar firms continue to fall by the wayside, and this week saw the sale by the world's leading wind turbine manufacturer, Vestas, of one of its Danish plants to a China-based competitor. Because the cleantech industry has been driven mainly by policy rather than market forces, and has thus been deeply intertwined with politics, the global shakeout now underway will continue to have political repercussions. Should Europe's monetary problems unleash a new financial crisis, then both the cleantech shakeout and its political fallout could expand.
The strained comparisons this week between the failures of Solyndra and Konarka, a much smaller solar panel maker, likely won't be the last example of this that we'll see this year. I can understand the temptation to link the two situations, but the contrast should have dissuaded anyone from raising the issue: on one side, an award-winning company that took more than eight years to go bankrupt in an economic and competitive environment vastly different from the one in which it was launched; on the other, a business that was already doomed on the day its half-billion dollar federal loan was inked. The analogy looks even worse when you realize that Solyndra was only able to undertake the massive expansion that drove it into bankruptcy because of serious deficiencies in the DOE's due diligence process, which failed to spot the crash in polysilicon prices--the previous spike in which had underpinned Solyndra's business model.
Past shakeouts have left other industries in excellent shape, despite the pain they entailed. Numerous US automakers went out of business during the Great Depression, which was also a period of great innovation that set up the survivors to become a pillar of the US economy for the next half-century. It's premature to write the epitaph of US cleantech, which could yet emerge much stronger. At the same time, have we ever experienced such a shakeout in an industry so dominated by government subsidies and industrial policy, against the backdrop of globalized competition with similarly supported industries in Europe and Asia? The ultimate outcome looks highly uncertain.
In the long run, the administration's investments in cleantech will either look farsighted and courageous or tragically mistaken, rooted in a "green jobs" fallacy that emerged as an expedient Plan B after successive failures to legislate a price on CO2 and other greenhouse gas emissions. Of course this year's election won't take place with the benefit of history's verdict. Its energy aspects are likely to be dominated by the behavior of oil and gasoline prices and a potential string of further high-profile cleantech bankruptcies, if the economy remains weak. (The list of DOE loan guarantee recipients doesn't lack for candidates.) Is it due to defects in our system or merely human nature that such events seem destined to overshadow the positive energy visions that both sides will present to voters?
Wednesday, June 13, 2012
The Summer Oil Slump
Instead of US consumers facing $5 gasoline this summer, as some analysts had predicted, we now find prices slipping well below $4 per gallon as oil prices respond to weakening demand, a stronger dollar, and steady supply growth. Yet as welcome as this is, it's largely the result of a mountain of bad news: Not only does financial turmoil threaten the very existence of the European Monetary Union and its currency, the Euro, but economic growth in the large emerging economies is also slowing, at least partly in response to the weakness in the developed countries that constitute their primary export markets. The engine of global growth for the next year or two just isn't obvious. That's the backdrop for this week's OPEC meeting in Vienna.
Before we become too enthusiastic about the prospect of a period of cheaper oil, we should first put "cheap" in context. Even ignoring West Texas Intermediate (WTI), the doldrums of which I've discussed at length, the world's most representative crude oil price, that of UK Brent, has been trading consistently below $100 per barrel for the first time since the beginning of the Arab Spring in 2011. Yet even if it fell another $10/bbl, to about where WTI is currently trading, it would still exceed its annual average for every year save 2008 and 2011. So while oil might be less of a drag on the economy at $90/bbl than at $120, that's still short of the kind of drop that would be necessary for it to provide a substantial positive stimulus, particularly when much of the drop reflects buyers around the world tightening their belts.
The US is in a somewhat better position, thanks to surging production of "tight oil" in North Dakota and onshore Texas. This has more than made up for the inevitable slide in output from the deepwater Gulf of Mexico, two years after Deepwater Horizon and the ensuing drilling moratorium. With much of the new production trapped on the wrong side of some temporary pipeline bottlenecks, parts of the country are benefiting from oil prices that are $10-15/bbl below world prices, although short-term gains are a poor reason to perpetuate those bottlenecks, rather than resolving them and allowing North American production to reach its full potential.
Then there's the issue of speculation, which some politicians blamed for the recent spike in oil prices. To whatever extent that was true--and I remain skeptical that the impact was nearly as large as claimed--we could be about to see what happens when the dominant direction of speculation flips from "long" to "short"--bullish to bearish--as noted in today's Wall St. Journal. Since the main effect of speculation is to increase volatility, we could see oil prices temporarily drop even further than today's weak fundamentals would suggest they should.
All of this will be on the minds of the OPEC ministers meeting in Vienna Thursday, along with the usual dynamics between OPEC's price doves and hawks. The pressures on the latter have intensified as Iran copes with tighter sanctions on its exports and Venezuela's ailing caudillo faces a serious election challenge. OPEC meetings are rarely as dramatic as last June's session, but the global context ensures a keenly interested audience for this one. Given the impact of gas prices on US voters, both presidential campaigns should be watching events in Vienna as closely as any traders. $3.00 per gallon by November isn't beyond the realm of possibility. It would only require a sustained dip below $80/bbl.
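That last assertion follows from a simple rule of thumb for translating crude prices into US pump prices. Here's a sketch in Python; the tax figure approximates the 2012 average of combined federal and state gasoline taxes, and the refining/distribution margin is my own round assumption:

```python
# Rough crude-to-pump translation (one barrel = 42 US gallons).
crude_per_bbl = 80.0                  # the sustained-dip threshold mentioned above
crude_per_gal = crude_per_bbl / 42    # ~$1.90 of crude in each gallon

taxes_per_gal = 0.49   # approx. 2012 average combined federal + state gasoline tax
margin_per_gal = 0.60  # assumption: refining, distribution and retail margin

pump_price = crude_per_gal + taxes_per_gal + margin_per_gal
print(f"Implied retail gasoline price: ${pump_price:.2f}/gal")   # ~$3.00
```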
Thursday, June 07, 2012
Five Stars for Robert Rapier's "Power Plays"
It's a pleasure to have the opportunity to recommend a new book by a fellow energy blogger, especially when the blogger in question has the kind of deep, hands-on industry experience that makes Robert Rapier's work so authoritative. Robert has been communicating about a variety of energy-related topics for years, first at his own "R-Squared Energy" site, where I encountered him in about 2006, and lately at Consumer Energy Report and at The Energy Collective. You should not assume from the book's title, "Power Plays: Energy Options in the Age of Peak Oil", or the image on its cover that it is just another in a long line of recent bestsellers proclaiming an imminent and permanent global oil crisis. Robert's description of the risks of peak oil is nuanced and balanced, as is his assessment of the many other timely subjects included in the book. The chapter on "Investing in Cleantech" is worth the price of the entire book for would-be inventors and investors, as well as for those setting or administering government renewable energy policies and programs.
In some respects this is the hardest kind of book for me to review. It covers much of the same territory as my own writing, drawing on similar educational and career experiences, so I'm hardly representative of its intended or ideal audience. It is also very close to the book that I've long been tempted to write, myself, after well over a thousand blog posts on the same set of topics and issues. With those caveats, I enjoyed reading "Power Plays", mainly because despite superficial similarities, our perspectives are still different enough that I found it thought-provoking. I even picked up a few new facts. And I should make it very clear that although the book certainly reflects the large body of writing Robert has produced over the last half-dozen years or so, it does not read like a collection of recycled blog posts. It is also as up-to-date as any project like this could be, including assessments of the Keystone XL pipeline controversy, the Fukushima nuclear disaster, and other recent events.
"Power Plays" is structured as an overview of the complex set of energy sources and applications in use today, including their intimate connection to domestic and geopolitics. (The book includes a sobering, non-partisan analysis of the efforts of eight US presidents to promote energy independence.) It is also based on an explicit point of view about the need to reduce our dependence on fossil fuels and to attempt to mitigate human influence on climate change, while being exceptionally realistic about our available options and likely success. Robert has definite ideas on energy policies that would be useful, particularly in guiding our long transition away from oil. I don't agree with all of them, but they're well-reasoned and well-articulated.
The book is also very sound on the facts. I didn't spot any notable errors, with the possible exception of a brief explanation of why hybrid cars are more efficient than conventional cars--in my understanding this derives from the optimization of engine output and the recapture of energy otherwise lost in braking, rather than from inherent differences in energy conversion efficiencies between electric and combustion motors. Otherwise, aside from the natural differences of interpretation one would expect, Robert delivers 250 pages of straight talk about energy.
One word of warning along those lines: If you come to this book as a firm and uncritical advocate of any particular energy technology to the exclusion of most others, you should prepare either to have your feathers ruffled or find yourself questioning some of your beliefs. That is particularly true for renewable energy and biofuels, which constitute Robert's current main focus as Chief Technology Officer of a forestry and renewable energy company. On the other hand, if you'd like to learn more about why fuels like corn ethanol are less-than-ideal substitutes for oil, and why cellulosic biofuel is more challenging to produce and scale up than the promoters of many start-up companies would like you to think, this is a great place to start. And in addition to the obligatory assessment of vehicle electrification and electric trains, his chapter on oil-free transportation features a serious discussion of bicycling and walking, something it might never have occurred to me to include. All of this is handled with rigor, ample references, and a leavening of tables and graphs that shouldn't overwhelm those who are more comfortable with words than numbers or data.
I highly recommend "Power Plays" to my readers. It is available in print and e-book formats from Barnes & Noble and Amazon, where it has garnered exclusively five-star ratings at this point. I intend to post my own five-star review there when time permits.
Monday, June 04, 2012
Does A Golden Age of Gas Depend on Golden Rules for Gas?
Last Friday I was in Washington, DC for the presentation of the International Energy Agency's latest report on natural gas, "Golden Rules for A Golden Age of Gas." It is a follow-up to last year's IEA scenario describing the enormous gas potential now being unlocked by new combinations of technology. According to IEA's chief economist, Fatih Birol, who was the lead speaker at the event at the Carnegie Endowment for International Peace, the new report addresses the key uncertainties in delivering on that potential, including the possibility that new gas supplies could "fracture established balances in the world energy system." In IEA's view the resources and technologies are in place, but environmental and social challenges represent serious potential roadblocks; overcoming them calls for a new set of principles along the lines of the ones included in the report. Fundamentally, as Dr. Birol put it, the industry must focus on its "social license to operate" if it is to develop the massive global resources of shale and other unconventional gas to the extent now being envisioned. I believe many in the industry would agree that that license can't be taken for granted.
The report spells out seven principles that IEA sees as prerequisites for securing the necessary concurrence from governments and publics. While several of them merely enunciate common sense, others will likely be controversial on one side or the other--if not in theory then in their implementation. IEA's description of these principles can be found in the report's executive summary. I would paraphrase them as:
1. Operate with transparency
2. Choose appropriate sites
3. Contain potential contaminants
4. Be vigilant with water
5. Control emissions
6. Recognize scale
7. Regulate carefully
None of these is likely to startle my regular readers, since I've been writing about shale gas extraction and its potential economic, environmental and geopolitical consequences for several years. The aspect of these principles that got my attention during Friday's presentation concerned IEA's admonition to "Be ready to think big." Dr. Birol cited statistics indicating that there are around 100,000 unconventional wells in the US today--a figure that might include unconventional oil wells. Supplying the levels of shale gas forecast by IEA and other agencies would require on the order of one million wells, compared to a total US well population of roughly a half-million. Drilling on that scale requires getting it right, because across a million wells even rare problems would be multiplied many times over; the rough calculation below illustrates the magnitude of the effort. However, Dr. Birol also made it very clear that in the view of the IEA, the industry is entirely capable of getting it right.
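To put those well counts in perspective, here's a quick arithmetic sketch. The counts are the round figures cited above, while the timeframe is my own assumption, purely for illustration:

```python
# Rough scale of the drilling effort implied by the forecasts cited above.
# Well counts are the round figures from the text; the 25-year horizon
# is my own illustrative assumption.

unconventional_today = 100_000   # current US unconventional wells (approx.)
total_us_wells = 500_000         # total US well population (approx.)
wells_needed = 1_000_000         # order-of-magnitude forecast requirement

additional_wells = wells_needed - unconventional_today
multiple_of_total = wells_needed / total_us_wells

assumed_years = 25
wells_per_year = additional_wells / assumed_years

print(f"Additional wells required:  {additional_wells:,}")
print(f"Versus total US wells:      {multiple_of_total:.0f}x today's population")
print(f"Over {assumed_years} years: ~{wells_per_year:,.0f} new wells per year")
```

However the timeframe is sliced, the implied pace of drilling underscores why the margin for error on each individual well must be kept extremely small--which is exactly the point of IEA's "think big" admonition.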
The findings of this report--at least at a high level--have been widely embraced across the environmental and business spectrum. Groups welcoming it range from the American Petroleum Institute to the Investor Environmental Health Network, while EPA Assistant Administrator Gina McCarthy, who was also on Friday's panel, seemed to place her agency's regulatory approach to shale gas in the context of IEA's principles. The harshest criticism I've seen so far is that while the report acknowledges the industry's work on best practices, it fails to recognize that much of this is already standard practice, at least in the US. Along those lines, API and the American Natural Gas Alliance (ANGA) are jointly issuing a new report today on methane emissions from hydraulically fractured ("fracked") gas wells.
IEA's report and the early reactions to it clearly illustrate that despite the many thousands of unconventional gas wells that have already been drilled, and the dramatic impact of shale gas on both natural gas prices and gas-dependent industries, we are still in the early days of a possible global energy revolution. The extent of that revolution hasn't yet been determined, and it will be shaped as much by the reactions of numerous stakeholders as by the investment plans of producers. Whether you see that as a good or bad thing, it's an indisputable feature of the world in which we now live. However, I don't think it's appropriate to view IEA's seven principles exclusively as a set of rules to be imposed on a reluctant industry; they're as much about getting the rest of society comfortable with an energy resource that could provide enormous economic and environmental benefits globally, particularly with regard to greenhouse gas emissions. Although Dr. Birol emphasized that unleashing all this shale gas won't be sufficient to solve the climate problem, he also demonstrated that without it our chances of reining in emissions look even worse, because the main trade-off globally is not gas vs. renewables, but gas vs. coal--a point the rough comparison below helps quantify. Getting this right is crucial for many reasons, and the IEA's report looks like a helpful contribution to the dialogue that must take place.
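To illustrate why that gas-for-coal substitution matters so much for emissions, here is a rough comparison using typical published emission factors and plant efficiencies. These are round-number assumptions on my part, not figures drawn from the IEA report:

```python
# Approximate CO2 intensity of gas- vs. coal-fired electricity.
# Emission factors and plant efficiencies are typical round values,
# used here as assumptions rather than figures from the IEA report.

KWH_PER_GJ = 277.8

ef_gas_kg_per_gj = 50    # natural gas combustion, kg CO2 per GJ of fuel
ef_coal_kg_per_gj = 95   # bituminous coal combustion, kg CO2 per GJ of fuel

eff_ccgt = 0.50          # modern combined-cycle gas plant
eff_coal = 0.35          # average existing coal plant

gas_intensity = ef_gas_kg_per_gj / KWH_PER_GJ / eff_ccgt
coal_intensity = ef_coal_kg_per_gj / KWH_PER_GJ / eff_coal

print(f"Gas CCGT: ~{gas_intensity:.2f} kg CO2/kWh")
print(f"Coal:     ~{coal_intensity:.2f} kg CO2/kWh")
print(f"Shifting a kWh from coal to gas cuts "
      f"~{1 - gas_intensity / coal_intensity:.0%} of its emissions")
```

On those assumptions, each kilowatt-hour shifted from coal to modern gas-fired generation cuts CO2 emissions by roughly 60%, which is why the gas-vs.-coal trade-off looms so large in any realistic emissions outlook.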