Our energy problems would be challenging enough even if a majority of Americans weren't laboring under a set of unhelpful misconceptions about the structure and functioning of the energy industry. If that sounds like a paranoid statement, check out the poll that was just conducted on behalf of the API, the trade association representing most US oil and gas companies, gauging the country's "Energy IQ." Before looking at the results, you might want to take the quiz yourself. While the answers shouldn't surprise many of my readers, I hope you will be as dismayed as I am at the general lack of knowledge about this key sector, with crucial policy decisions imminent. We will only make good decisions about energy, personally and nationally, if we know the facts.
Consider the distorted view of energy that the typical responses reflect. If the poll is representative, most people apparently see the US as being much more reliant on a handful of unstable foreign governments than we actually are, but at the same time, they greatly inflate the importance of US companies in a global industry increasingly dominated by large, national players--as was recently demonstrated in Venezuela, where the top US firms lost billions of dollars of assets to Sr. Chavez's Bolivarian Revolution. Without diminishing the importance of energy security, the fact that most Americans mistakenly think we get more oil from the Middle East than from our NAFTA partners, combined with a belief that alternative energy will soon supply a large fraction of our needs, might explain the current appeal of the unrealistic notion of energy independence, despite its elusiveness for three decades. Reality is more complicated, and as the head of the API suggested during Wednesday's blogger conference call on the survey, "There is room for all fuel sources to be part of the energy equation."
Without excusing anyone for not being better informed, I think I understand how we got to this point. If oil companies were ever widely trusted, it was in the period when they were personified by polite, uniformed service station attendants who cleaned your windows and checked your oil, while your tank filled with 30 ¢ gas. Throughout my adult life, oil companies have been viewed with disdain or hostility, depending on the current price at the pump. Their credibility has suffered from a historical lack of responsiveness and transparency and from the identification of their products and processes with pollution, compounded by the misdeeds of firms like Enron. When the people who arguably know the most about a subject are among the least likely to be believed, the resulting void looks like the Internet: a sprinkling of fact surrounded by opinion, rumor, and deliberate misinformation.
There might be some avoidance of cognitive dissonance here, as well. If oil companies are just doing their job as large businesses, in the same way as the companies that produce our food, deliver our packages, or make our iPods, then maybe they're not to blame for the high prices at the gas pump. Perhaps we bear some of that responsibility, either through the aggregation of all our consumption choices, or through our support for policies restricting access to natural resources and the construction of energy infrastructure. That insight could either empower or paralyze us.
The point of all this is not to hold a pity party for the oil industry, which is doing quite well and will likely continue to prosper in spite of--and sometimes because of--the regulations we throw their way. Many of the misconceptions highlighted by the API's poll are not harmless, however, and the interests at stake are not just the industry's, but the country's. We face complex problems, and our attempted solutions will be more successful if they are grounded in reality, not distortion. It will be interesting to see whether blogs like this one can help to close that gap over time, by sharing informed but independent perspectives on energy.
Providing useful insights and making the complex world of energy more accessible, from an experienced industry professional. A service of GSW Strategy Group, LLC.
Friday, June 29, 2007
Thursday, June 28, 2007
The Real CAFE Debate
Reform of US vehicle fuel economy standards has passed two out of three hurdles. The President proposed a 4% per year improvement in Corporate Average Fuel Economy targets in his State of the Union address, and last week the Senate passed a bill that would increase the CAFE standard to 35 miles per gallon by 2022. The last piece of the puzzle must come from the House of Representatives, which may defer consideration of CAFE until it takes up greenhouse gas limits later this year. In the interim, it's worth looking at two aspects of the CAFE system to get a better sense for what implementation of stricter standards might actually mean.
Last weekend, at a cocktail party at a friend's home, I chatted with some auto industry folks about the energy bill that had just passed. They were genuinely concerned about the ability of the US industry to deliver 35 mpg across the entire fleet, and I didn't get the impression this was just a question of whether they could do it and still make money--though that's not a trivial consideration, either. But this is not just a matter of technical feasibility. As I pointed out in a posting earlier this year, the European brands of two of the Big Three already meet the proposed target, with Ford's 2005 models averaging 36.3 mpg and GM's Opel and Vauxhall models coming in at 35.2 (after converting from grams of CO2 per km to mpg). When you look at how they do it, two factors are obvious: size and fuel choice. Europeans drive smaller cars, with smaller engines, and more than half of new cars burn diesel fuel, which provides about a 30% boost in fuel economy versus a comparable gas engine.
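For readers curious about that conversion, here is a rough sketch of how grams of CO2 per km map to mpg, using the EPA's figure of roughly 8,887 g of CO2 per gallon of gasoline. European fleet averages mix gasoline and diesel (which has a higher per-gallon emissions factor), so treat the result as approximate:

```python
KM_PER_MILE = 1.609
G_CO2_PER_GAL_GASOLINE = 8_887  # EPA combustion figure for a gallon of gasoline

def g_co2_km_to_mpg(g_co2_per_km, g_per_gal=G_CO2_PER_GAL_GASOLINE):
    """Convert a CO2-intensity rating (g/km) to fuel economy (mpg)."""
    grams_per_mile = g_co2_per_km * KM_PER_MILE
    return g_per_gal / grams_per_mile

# A mid-2000s European fleet average of ~152 g/km works out to the mid-30s mpg
print(round(g_co2_km_to_mpg(152), 1))  # → 36.3
```

The 152 g/km input is an illustrative figure, chosen to show how a typical European fleet rating lands right around the proposed 35 mpg target.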
Dropping a diesel into a big SUV would only take you from 17 mpg to 23. Hybridization of the gas model gets you to roughly the same point. Increasing overall fleet fuel economy by 10 mpg will certainly require Detroit to produce more models with more efficient power trains, but it will also require US consumers to shift their preferences towards smaller, lighter vehicles, probably sacrificing some performance in the process. Unless gas prices keep rising, I have my doubts about how rapidly that shift will take place. After all, if everyone wanted a Toyota Camry Hybrid, we could meet the new target next year.
And that brings me to the aspect of CAFE that I haven't heard discussed at all: enforcement. What happens if carmakers miss these new mpg targets? Under the current system, they pay fines, and the record of fine collection makes interesting reading. When you compare the fines assessed with the fuel economy of the vehicles sold by each firm, you see that falling short by one or two mpg hasn't been very expensive; averaged across a company's entire sales, it works out to under $100 per vehicle. Even for Ferrari, which is routinely 40% below target, the cost is around $700 per car. If the new targets are implemented under the existing system of penalties, then there's little need for additional "offramps" to shelter manufacturers. However, if the penalties are strengthened, US carmakers could find themselves caught between the vise jaws of consumer inertia and regulatory pressure, despite having designed cars that would meet the target. I can understand why the Big Three might be uncomfortable about a national debate on fuel economy that avoids talking about the carrots and sticks that might nudge consumers towards the car choices that constitute an essential ingredient of meeting these goals.
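To make the scale of those fines concrete, here is the arithmetic under the long-standing CAFE civil penalty rate of $5.50 per tenth of an mpg of shortfall, per vehicle sold. Treat the rate as an assumption for illustration; NHTSA sets the actual figure:

```python
FINE_PER_TENTH_MPG = 5.50  # assumed CAFE civil penalty, $ per 0.1 mpg per vehicle

def cafe_fine_per_vehicle(target_mpg, actual_fleet_mpg):
    """Per-vehicle fine for a fleet average falling short of its CAFE target."""
    shortfall_tenths = max(0.0, (target_mpg - actual_fleet_mpg) * 10)
    return shortfall_tenths * FINE_PER_TENTH_MPG

# Falling 1 mpg short of a 27.5 mpg car standard:
print(cafe_fine_per_vehicle(27.5, 26.5))   # → 55.0 dollars per vehicle
# A maker 40% below target (roughly the Ferrari situation):
print(cafe_fine_per_vehicle(27.5, 16.5))   # → 605.0 dollars per vehicle
```

At those rates, a one-to-two mpg miss costs $55-110 per car, consistent with the "under $100" observation, and an 11 mpg miss lands near the ~$700 figure cited for Ferrari.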
Wednesday, June 27, 2007
Mandate To Nowhere?
Of all the provisions of the Senate energy bill that were debated over the last several weeks, the greatly expanded mandate for renewable fuels was probably the least controversial. It increases the amount of renewable fuel--chiefly ethanol--that fuel marketers must sell annually from the previous target of 7.5 billion gallons by 2012 to 36 billion gallons by 2022, or the equivalent of about 1.6 million barrels of oil per day. It's easy to understand the appeal of this from an energy security perspective, with ethanol providing modest climate change benefits, as well. And with 21 billion of the 36 billion gallons slated to come from cellulosic ethanol, rather than from corn ethanol that competes with food, it sounds like a blueprint for a better energy future for the country. But are we placing too large a bet on the relatively unproven technology of producing ethanol from converted crop waste and non-food energy crops? An article in Slate raises some worrying questions about this strategy. After reading it, I came up with a few concerns of my own.
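A quick sanity check on that barrel-equivalent figure, assuming 42 gallons per barrel and ethanol carrying roughly two-thirds the energy of petroleum fuels (the energy ratio is an approximation):

```python
GAL_PER_BBL = 42
ETHANOL_ENERGY_RATIO = 0.67  # ethanol's energy content vs. gasoline, roughly

mandate_gal_per_year = 36e9
bbl_per_day = mandate_gal_per_year / GAL_PER_BBL / 365
oil_equivalent = bbl_per_day * ETHANOL_ENERGY_RATIO

print(round(bbl_per_day / 1e6, 2))     # ≈ 2.35 million bbl/day of ethanol volume
print(round(oil_equivalent / 1e6, 2))  # ≈ 1.57 million bbl/day oil-equivalent
```

The energy-adjusted result lines up with the "about 1.6 million barrels of oil per day" equivalence in the text.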
The author's critique of cellulosic ethanol focuses on the resources and the rapid pace of development that would be necessary to produce the volumes mandated by the Senate. He also questions the net energy benefits of cellulosic ethanol, based on a forthcoming study from the University of Colorado. Finally, he expresses doubts about whether the total impact of ethanol will be worth its cost. The article is worth reading, and all of these issues ought to be addressed rigorously before the House of Representatives takes up this proposal in a few weeks.
Like many people, I've generally accepted most of the claims about the potential of cellulosic ethanol to provide a useful petroleum substitute that will be bigger, better, and ultimately cheaper than the corn-based variety, about which I have had serious concerns for 25 years. Chalk it up to the natural American enthusiasm for new technology. But after reading the Slate piece, I started thinking about the implications of a biofuels mandate this large. If gasoline demand continues growing at 1%/year, 36 billion gallons would cover about 21% of the total gasoline consumed in that year, after adjusting for ethanol's lower energy content. That compares to about 3% last year. And if conservation and the new CAFE standards actually manage to slow the growth in demand, the proportion would be even higher. That makes this a high-stakes gamble, indeed.
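Here is one rough way to check that share, assuming current US gasoline demand of about 142 billion gallons per year (an assumption; see EIA data for precise figures). This version is a simple volumetric share, consistent with the ~3% comparison for last year; an energy-adjusted share would come out somewhat lower:

```python
BASE_GASOLINE_BGAL = 142   # assumed 2007 US gasoline demand, billion gallons/year
GROWTH = 0.01              # 1%/year demand growth
YEARS = 15                 # 2007 to 2022
MANDATE_BGAL = 36          # renewable fuels mandate, billion gallons/year

demand_2022 = BASE_GASOLINE_BGAL * (1 + GROWTH) ** YEARS
share = MANDATE_BGAL / demand_2022

print(round(demand_2022))  # ≈ 165 billion gallons of projected 2022 demand
print(round(share * 100))  # ≈ 22%, in the ballpark of the "about 21%" figure
```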
How reasonable is it to expect the current ethanol industry, the oil industry, or a new set of players--including a number of high-tech types--to grow a 21 billion gallon per year cellulosic ethanol business from zero in the next 15 years? I'm less skeptical on this point than Slate. Even if cellulosic ethanol plants cost twice as much to build as the traditional kind, the total cumulative investment involved would be on the order of $40 billion. If there's one thing American business can do, it's raise money. Of course, I don't hear many people talking about just what would attract that money: profits. The only way you're going to build this many ethanol plants is if someone thinks there are big profits to be made, either on the wholesale price of ethanol or from government subsidies. If one of my clients asked me, I'd tell them to focus on price, because investing that kind of money on the prospect of indefinite federal largess looks pretty risky. Bottom line, don't expect cellulosic ethanol to be much cheaper than gasoline, or it won't happen.
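The investment figure can be sketched as follows, assuming a conventional corn ethanol plant costs on the order of $1 per gallon of annual capacity (an assumption; actual costs vary with scale and site):

```python
CORN_PLANT_COST_PER_GAL = 1.00   # assumed $/gallon of annual capacity, corn ethanol
CELLULOSIC_MULTIPLIER = 2        # "twice as much to build as the traditional kind"
CAPACITY_BGAL = 21               # cellulosic mandate, billion gallons/year

total_investment_b = CAPACITY_BGAL * CORN_PLANT_COST_PER_GAL * CELLULOSIC_MULTIPLIER
print(total_investment_b)  # → 42.0, i.e. "on the order of $40 billion"
```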
So if the economics work, and if the plants get built, what else could go wrong? Ignoring the basic technology risk, there are a couple of potentially serious constraints that could bite. First, consider logistics. The supply chain for cellulosic ethanol looks much more logistics-intensive than for gasoline. Producers will have to haul large quantities of low-grade plant matter to their facilities and then ship large quantities of ethanol out by rail and truck, because it can't share pipeline space with oil products. The feedstock alone would comprise about 200 million tons per year of additional haulage. Developers will also face the old dilemma of locating near their raw materials or their markets. Whether that will be dictated by transportation limitations or the economies of manufacturing scale remains to be seen.
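The feedstock tonnage follows from an assumed yield of about 100 gallons of ethanol per dry ton of biomass, an optimistic figure at this stage of the technology:

```python
MANDATE_GAL = 21e9        # cellulosic ethanol mandate, gallons/year
YIELD_GAL_PER_TON = 100   # assumed yield, gallons of ethanol per dry ton of biomass

feedstock_tons = MANDATE_GAL / YIELD_GAL_PER_TON
print(feedstock_tons / 1e6)  # → 210.0 million tons/year, near the ~200 Mt figure
```

A lower real-world yield would push the haulage requirement even higher than 200 million tons.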
Then we have the problem of consumption, which hardly sounds like a problem at all, until you realize that 98% of the cars on the road today can't handle ethanol in concentrations over 10% of gasoline without modifications. The new 15 billion gallon corn ethanol mandate is sufficient to bring every gallon of gasoline sold today up to that fraction. Effectively, cellulosic ethanol producers would either have to displace corn ethanol from standard gasoline or create a large enough market for E-85 to absorb it. That means growing the current fleet of 5 million flexible fuel vehicles (FFVs) to around 44 million within 15 years. Doing that will require that 20% of all cars sold from here on out be FFVs, to the tune of 3 million per year.
And if that weren't enough to get the Congress just a little bit worried about the risks of forcing that much ethanol into the system, we come to the key vulnerability of the whole plan. An FFV is just that: flexible. No one can ensure that every FFV will fill up with E-85 every time, even if the fuel were readily available across the country, as the Senate bill requires. Sooner or later, drivers will figure out that unless E-85 is consistently 25% cheaper than gasoline, they will get more miles per dollar on the latter. But that discount can only exist if wholesale ethanol is cheaper than wholesale gasoline everywhere, or if the subsidy is large enough. And if wholesale ethanol is always cheaper than gasoline, the economics of rapid ethanol capacity expansion start to look shaky. The net result of this feedback loop is that the lower the FFV E-85 usage rate is, the more FFVs we will need, in order to burn 21 billion gallons/year.
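Both the break-even discount and the feedback loop can be made concrete with some hedged arithmetic. The sketch below assumes ethanol carries about 67% of gasoline's energy per gallon and that a dedicated E-85 driver would burn roughly 560 gallons of E-85 per year; both are illustrative assumptions:

```python
ETHANOL_ENERGY = 0.67                        # vs. gasoline, per gallon, roughly
E85_ENERGY = 0.85 * ETHANOL_ENERGY + 0.15    # ≈ 0.72 of gasoline's energy
breakeven_discount = 1 - E85_ENERGY          # ≈ 28%: near the ~25% rule of thumb

ANNUAL_E85_GAL = 560   # assumed E-85 gallons/year for an FFV fueled only on E-85
MANDATE = 21e9         # cellulosic ethanol mandate, gallons/year

def ffvs_needed(e85_usage_rate):
    """FFVs required to absorb the mandate, given the share of fills that are E-85."""
    ethanol_per_ffv = ANNUAL_E85_GAL * 0.85 * e85_usage_rate
    return MANDATE / ethanol_per_ffv

for rate in (1.0, 0.5, 0.25):
    print(rate, round(ffvs_needed(rate) / 1e6))  # → 44, 88, and 176 million FFVs
```

The halving of the usage rate doubles the required fleet: exactly the feedback loop described above, which is why a ~44 million FFV fleet is the floor, not the ceiling.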
Cellulosic ethanol could still turn out to be a wonderful boon, overcoming all of these obstacles and going beyond to truly begin to wean the US off imported oil. But what we're asking of this untested technology is analogous to the growth of the automobile and its fuel infrastructure in the first third of the 20th century, telescoped down to 15 years. After a little reflection, I find that I'm less comfortable with the idea of this transformation happening by fiat, without a demonstration of competitive superiority in the marketplace.
Tuesday, June 26, 2007
Aligned Interests
I know it seems counter-intuitive to suggest that the interests of big oil companies and American consumers and voters might be aligned, particularly in light of the strained mutual dependence manifested at the gas pump these days. However, there is at least one aspect of the ongoing Congressional energy debate in which we all should be rooting for the oil company lobbyists to be successful, as they were--at least temporarily--last week. The issue in question relates to the imposition of a new severance tax on domestic oil and gas production from the federal waters of the Gulf of Mexico, in order to fund alternative fuels incentives and projects. This amounts to a larger-scale recycling of California's failed Proposition 87. You don't have to like Big Oil or be skeptical of alternative energy to see the flaws in this approach.
The logic behind the proposal seems compelling. Oil companies are making record profits, some of which accrue from Gulf Coast leases on which they were granted relief from federal royalties, when oil prices were under $20 per barrel in the late 1990s. At the same time, Congress would like to encourage the production of alternative fuels and the adoption of more efficient technology. Where better to find the money for that than from funds to which the government would have been entitled, absent royalty relief? Getting oil companies to pay for alternative energy sounds like a smart and popular notion. Unfortunately, the consequences of that simple logic turn out to be counterproductive, at least if the overall goal of the legislation is to reduce America's dependence on imported energy.
US oil production peaked in 1970 and has been in steady decline since the late 1980s. Geology had a lot to do with that, but it is not a coincidence that real-dollar oil prices had started falling in the early 1980s, a trend that only reversed in the last few years. For a free-market producer like the US, oil production is intimately related to its expected profits. Compounding this problem, US natural gas production has apparently reached a plateau. The more of both commodities we must import, the higher their domestic prices will go. The more oil and gas we can produce here, the less we will import, and the less we will compete for supplies with the rapidly growing economies of Asia.
In the meantime, we are reaching a national consensus on the importance of reducing our oil consumption by conserving, by improving vehicle efficiency, and by expanding our production of alternatives, at least those with the potential to become competitive without large, permanent subsidies. But that does not mean that it makes sense to pit alternative energy against domestic oil, particularly when in the process we stand to reduce oil and gas production by more than the net energy contribution of our current alternative fuels efforts. And that's the crux of the problem here. By making the largest remaining accessible oil and gas resources in the US less attractive, a severance tax could actually shrink our overall energy supplies, particularly if the alternative energy and efficiency projects do not contribute as much or as quickly as the foregone oil and gas production. That would increase our oil and gas imports.
This artificial dilemma can be solved easily. If the $29 billion worth of alternative energy incentives and projects targeted by the Congress have merit, then we should fund them from a source that doesn't treat domestic energy production as a zero-sum game. For example, over the 10 years in question, a surtax of less than 2 cents per gallon of gasoline would do the trick, while providing a small incremental incentive for conservation. But wherever we find the money, it doesn't make sense to take it from companies that invested billions of dollars of their shareholders' capital to increase US oil output, at a time when that looked very risky.
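A rough check on that surtax arithmetic, assuming current US gasoline demand of about 142 billion gallons per year (an assumption; EIA publishes the precise figures) and ignoring demand growth over the decade:

```python
GASOLINE_BGAL_PER_YEAR = 142   # assumed US gasoline demand, billion gallons/year
SURTAX_PER_GAL = 0.02          # dollars per gallon
YEARS = 10

revenue_b = GASOLINE_BGAL_PER_YEAR * SURTAX_PER_GAL * YEARS
print(round(revenue_b, 1))  # → 28.4 billion dollars, close to the $29 billion target
```

With any demand growth at all over the period, 2 cents per gallon comfortably clears $29 billion.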
Monday, June 25, 2007
Peak Preparation
Following on from Friday's posting on the uncertainty about how close we are to a peak in global oil production, I want to focus on a question I think is actually more important: However close we are to a peak--whether it is already here, or 5, 10, or even 20 years away--are we doing enough to prepare for the possibility of one? The short answer is no, but that doesn't mean we aren't doing anything. In fact, many of our strategies for addressing climate change and energy security also provide some insurance against the consequences of Peak Oil or its forerunner, a sustained period in which liquid fuel supply doesn't grow as fast as potential demand, and the oil-market discontinuity such a shortfall would trigger.
On a basic level, oil is important for two main reasons. It is the source of most of our transportation fuels and many useful petrochemicals and lubricants, and it also accounts for 35% of the world's primary energy production. Preparing for a gap between oil supply and demand requires addressing both of these aspects of oil's value to the economy, and in that regard it dovetails neatly with the concerns about global warming and energy security that are prompting big changes in our energy policies.
For example, while improved energy efficiency is a primary strategy for countering climate change and reducing oil imports, it looks equally important in preparing for a future oil shortfall and price spike. Peak Oil worries could lend urgency to the debate over CAFE and appliance energy standards. At the same time, efforts to expand biofuels production and bridge electricity into transportation via plug-in hybrids and electric vehicles, though driven by emissions and energy security calculations, are also excellent prescriptions for mitigating Peak Oil's impact and even delaying its onset.
There are a few areas in which this one-size-fits-all logic fails. The conversion of solid and gaseous hydrocarbons into liquid fuels--CTL and GTL--looks quite useful from a Peak Oil perspective. Viewed through a climate change lens, however, it looks like an expensive diversion or downright counterproductive. And while natural gas has oddly fallen from favor with those most concerned about climate change, despite its relatively low CO2 emissions, improving our access to gas (imported and domestic) looks like another key leg of the energy security/Peak Oil axis. If Peak Oil is a significant risk, we would certainly not want to face it in the midst of an emerging natural gas crisis.
For me, all of this boils down to effective large-scale risk management. For the next decade Peak Oil remains a big uncertainty, not a given, but prudent planning must take it into account. Where it reinforces other concerns, it may prompt accelerated timetables. Where it conflicts, as on some aspects of climate change, we need a candid debate about which problem looms larger, and which consequences would be most damaging or costly. At a minimum, we should improve our monitoring capabilities, including the means of auditing global production and reserves data for all liquid fuels, not just the conventional oil on which most Peak Oil predictions are focused.
Friday, June 22, 2007
How Near Is the End?
Although Peak Oil has faded somewhat as a "front page" issue this year, after a couple of years in the limelight, yesterday I received a question suggesting that a peak was either imminent or already upon us. That prompted a quick review of global oil production data to see whether there had been any changes that might support that view. I'm generally agnostic on the whole idea of an imminent geologically-driven peak in production, as distinct from one that might occur as a result of OPEC policy or problems queuing up the necessary drilling kit, personnel and investments to keep production rising ahead of demand. As complex as this issue is, however, there is one statistic that I think provides a pretty good barometer for the proximity of a peak; based on that measure, at least, we're not there yet.
Without going through the whole Peak Oil argument again, technical and otherwise, I want to focus on one aspect of peak oil that ought to be fairly non-controversial, among both peak adherents and peak skeptics. The global distributions of oil reserves and current production are remarkably different, as a function of the upside-down economics of the oil industry, in which the low-cost producers constrain their output and the high-cost producers go flat out. OPEC countries (excluding the newest member, Angola) hold 60% of the world's proved reserves but account for only 40% of production. Fundamentally, if there is a geologically-based peak in oil production waiting for us, OPEC is much farther from it than the rest of us, so it must manifest first in non-OPEC production.
So what do the numbers tell us? Has non-OPEC production stalled or gone into decline, as many expect? After looking at the most recent data available from the Energy Information Administration (EIA) of the US Department of Energy, the International Energy Agency (IEA), and the just-released BP Statistical Review, the clear answer seems to be no. Between 2004 and 2006 non-OPEC production grew by an average of 0.5%/year, and the IEA expects growth >1% this year, in a predictably lagged response to four years of sustained high oil prices. I don't see how that would be possible if we were as close to a global peak as pessimists believe.
There are two important caveats about the above figures, and if I didn't mention them, I know my readers would keep me honest. If you subtract from non-OPEC production the contribution of Canadian oil sands projects and the rising output of Angola, the residual trend looks like a plateau, at least over the last three years. But it no longer makes sense to look at non-OPEC supply without including oil sands--which are now a fact of life--just as we routinely include natural gas liquids. For that matter, anyone looking at peak oil ought to be counting the growing contribution of biofuels and any CTL or GTL that comes along, because what matters to the market is total liquid fuel supply, not just conventional oil. As to the change in Angola's status, it highlights OPEC's recent cleverness and reinforces the significant shift in market power that is underway.
The net result of all this leaves us just as uncertain as we were before about the timing of a future peak in "oil" production, but increasingly vulnerable to OPEC's production decisions. While much of that vulnerability is the inescapable result of the maturity of the producing basins in North America and Europe, some of it is self-imposed, and we ought to be doing some serious soul-searching about the consequences of that choice. Improved fuel economy and more biofuels will help, but we could dig our way out of this hole faster with some help from the oil we've chosen to place off-limits to development.
Thursday, June 21, 2007
In the Meantime
For all the discussion on Capitol Hill about energy legislation--the subject of my last two postings--I have yet to hear a disclaimer that the impact of these proceedings on actual energy supply, demand, or prices in the next five years will be minimal. And yet, when you consider the time lags associated with the activities covered by these policies, whether relating to the expansion of alternative energy production, improvements in fuel economy, or even the easing of some restrictions on offshore oil and gas drilling, that must be true. It's not obvious that the public has figured that out yet, but if they are expecting help from Congress at the gas pump any time soon, they are probably going to be disappointed. Unless they are warned in advance, that could have political consequences.
Consider the proposed new fuel economy standards. The bill by Senators Feinstein and Durbin, the "Ten-in-Ten Fuel Economy Act", would increase the CAFE standard for new cars to 35 mpg by 2020 and close the "SUV loophole" treating SUVs differently from passenger cars by 2013. The bill doesn't prescribe how this target would be phased in, but even if new car fuel economy were increased by 1 mpg/year starting next year, by 2012 the total improvement in the entire US car fleet of 243 million vehicles would only be about 3%. That's the equivalent of roughly 300,000 barrels per day (bpd) out of a gasoline market that exceeds 9.3 million bpd.
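As a rough sanity check on that fleet math, here's a back-of-the-envelope sketch in Python. The baseline fleet economy (~22 mpg), annual new-vehicle sales (~16 million), and equal miles driven per vehicle are my own illustrative assumptions, not figures from the bill; only the fleet size and the +1 mpg/year phase-in come from the text.

```python
# Back-of-the-envelope: effect of +1 mpg/year on total US fleet fuel use by 2012.
FLEET_SIZE = 243e6        # total US car fleet (from the text)
ANNUAL_SALES = 16e6       # assumed new-vehicle sales per year
BASELINE_MPG = 22.0       # assumed average fleet fuel economy
YEARS = 5                 # 2008 through 2012

# Fuel use is proportional to vehicles / mpg, assuming equal miles per vehicle.
baseline_fuel = FLEET_SIZE / BASELINE_MPG

# Replace five years of sales with cohorts that are +1, +2, ... +5 mpg better.
old_remaining = FLEET_SIZE - YEARS * ANNUAL_SALES
new_fuel = old_remaining / BASELINE_MPG + sum(
    ANNUAL_SALES / (BASELINE_MPG + i) for i in range(1, YEARS + 1)
)

reduction = 1 - new_fuel / baseline_fuel
print(f"Fleet fuel-use reduction by 2012: {reduction:.1%}")
```

Under these assumptions the reduction comes out in the low single digits, consistent with the roughly 3% (about 300,000 bpd) cited above; the exact figure depends on the sales and mileage assumptions.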
Now add in the biofuel mandates in Senator Dorgan's bill, S-875, the "SAFE Energy Act of 2007." Looking at its effect between now and 2012, we see the quantity of ethanol in the gasoline pool increasing from 4.7 billion gallons to 13.2 billion gallons per year. Ignoring concerns about the potential impact on food prices, and after adjusting for ethanol's lower energy content, that incremental supply works out to just under 400,000 bpd, or about 4% of current consumption.
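The ethanol conversion is easy to verify. The 0.67 energy-content ratio of ethanol to gasoline is my assumption for this sketch (published figures generally fall in the 0.65-0.70 range); the gallon figures come from the text.

```python
# Convert the incremental ethanol mandate into gasoline-equivalent barrels/day.
current_gal = 4.7e9       # gallons of ethanol per year today (from the text)
mandated_gal = 13.2e9     # gallons per year under S-875 by 2012
ENERGY_RATIO = 0.67       # assumed: ethanol carries ~2/3 the energy of gasoline
GAL_PER_BBL = 42
DAYS = 365

incremental_bpd = (mandated_gal - current_gal) * ENERGY_RATIO / (GAL_PER_BBL * DAYS)
print(f"Gasoline-equivalent incremental supply: {incremental_bpd:,.0f} bpd")
```

That lands just under 400,000 bpd, matching the figure in the text.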
Together, these two measures, which appear to offer the largest near-term fuel price impact of any of the provisions under consideration, would displace about 7% of our current gasoline consumption within five years, through a combination of efficiency gains and alternative supply. If everything else stood still, that would almost certainly be enough to exert significant downward pressure on gasoline prices--barring any new taxes that might be imposed in the interim. But how likely is it that everything will stand still? The recent price spike only slowed gasoline demand growth to about 1% per year. That means that over five years, the underlying growth in the car population and in total miles driven could erode all but 1-2% of the benefit of the new energy policies. That would reduce the impact on fuel prices to a level indistinguishable from the background noise.
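The offsetting effect of demand growth can be sketched the same way; the only input beyond the figures in the text is the choice to compound 1% annual growth over five years.

```python
# How much of the ~7% displacement survives five years of ~1%/yr demand growth?
displacement = 0.07           # efficiency gains + biofuels (from the text)
growth_rate = 0.01            # recent gasoline demand growth per year
years = 5

demand_growth = (1 + growth_rate) ** years - 1   # ~5% compounded
net_benefit = displacement - demand_growth
print(f"Net reduction vs. today's consumption: {net_benefit:.1%}")
```

The residual benefit works out to roughly 2%, squarely in the 1-2% range described above.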
That doesn't mean that changes in energy policy are unnecessary or futile. After all, most of the measures under discussion are designed to have their biggest effect after 2017. What it does mean, though, is that for at least the next few years gasoline prices will continue to be influenced primarily by the same things that have affected them in the past. That's why factors such as the expansion of US refinery capacity and the diversity of our sources of crude oil and refined product imports remain highly relevant, even as our efforts to reduce their importance in the future make these issues much more complicated. Rather than creating high expectations that can't be met in the near term, we ought to acknowledge the complexity and potential volatility of the transition period ahead.
Wednesday, June 20, 2007
Security vs. Emissions, Round I
It's probably premature to describe yesterday's Senate votes on energy as another turning point for coal in this country. Two separate amendments promoting coal-to-liquids (CTL) were voted down by healthy margins, as described in today's Washington Post. That doesn't automatically derail the industry's interest in producing liquid fuels from coal, but it seems to ensure that the final energy legislation coming out of Congress this year will include neither federal funding for CTL, nor a privileged place for its output within the liquid alternative fuels mandate of 35 or 36 billion gallons per year. While the US clearly can't ignore the energy bounty of the coal under our land, it looks increasingly likely that concerns about climate change will constrain coal's future contribution to sectors in which most of its CO2 emissions can be prevented from entering the atmosphere. That represents a real energy milestone.
In arriving at yesterday's decisions, it might appear that the Congress is expressing skepticism about the potential for Carbon Capture and Storage (CCS) technology to put CTL on an equal emissions footing with petroleum products. I don't think that's the case, because Senator Dorgan's SAFE Energy Act of 2007 (S-875), which is the centerpiece of the Senate's current debate on energy policy, spells out the importance of CCS in its charge to the Secretary of Energy to undertake R&D for CCS. If anything, the importance of CCS as an enabling technology for coal (and shale) has been elevated, at the same time that CTL has been recognized as a less attractive path towards low-emissions energy than biofuels or electricity.
I don't mean to rehash yesterday's posting, which addressed some of these same issues. Nor do I think that these votes rule out CTL entirely, because it could still emerge on a purely commercial basis. But I think it's worth noting that on its first opportunity to choose between the two main priorities that have emerged for national energy policy, enhancing energy security and reducing greenhouse gas emissions, the Congress has set the latter higher than the former. That could create a precedent that will carry beyond the current Congress and into the next Administration, regardless of who wins in November 2008, Democrat, Republican, or independent.
Tuesday, June 19, 2007
Do No Harm
The Senate and House of Representatives are both feverishly working on new federal energy legislation, and it's a reasonable bet that a bill will end up on the President's desk within a few months. However, it is still anyone's guess as to precisely what provisions will survive or be added along the way, as the process converges toward an eventual conference to iron out the differences between the two chambers' energy visions. As the final legislation takes shape, however, we can only hope that our elected representatives will see the wisdom of adopting the credo of at least doing no harm. The potential for wasteful and counterproductive energy policy is enormous, particularly in two areas: the functioning of the petroleum products market and the promotion of alternative fuels.
The API ran a full page ad in today's Washington Post with a tag line of, "It's 2007, not 1977." That echoes a theme I've expounded here for several years. Many of the measures introduced to deal with the energy crisis of the 1970s were either ineffective or downright harmful. We should have learned from that experience, and from the much more successful market-oriented approaches of the subsequent decades. While fuel prices may be high again, we have seen none of the incredibly disruptive gas lines and runouts that plagued us then. In particular, the "anti-gouging" provisions espoused by some in Congress look like standby price controls, aimed at the point in time when the ability of the market to rebalance supply and demand is most essential, as we saw after the hurricanes of 2005. This idea clearly fails the "do no harm" test.
Turning to alternative fuels, it's rare that I agree with the editors of The New York Times on energy policies, but their editorial of May 30th on the impact of coal liquefaction on energy security and climate change was spot on. "A policy designed to solve one problem should not make the other worse," they said, citing the high greenhouse gas emissions associated with coal-to-liquids (CTL) plants. A recent posting on the Green Car Congress site provides useful supporting data from a Carnegie Mellon study comparing CTL, conventional fuels, and plug-in hybrids.
So in this regard, it is one thing to codify a greatly increased biofuels mandate that relies on production from unproven cellulosic ethanol technology to meet its long-term goals, but quite another to turn the understandable ambitions of coal-state legislators into a national policy that would double down our bet on the equally unproven technology of carbon capture and sequestration (CCS). Even if cellulosic ethanol didn't take off as expected, we would still end up with liquid fuels that--however costly at the pump and the supermarket--could reduce both our oil imports and our greenhouse gas emissions by modest amounts. However, if we went ahead with CTL, but CCS proved either ineffective or uneconomical, we'd end up with a synthetic fuels industry that would roughly double our greenhouse gas emissions per gallon of gasoline or diesel. That would make a farce of any national effort to reduce those emissions via cap-and-trade or some other mechanism.
If we are indeed headed for a "grand compromise" on energy that would incorporate meaningful elements of energy efficiency and conventional and alternative energy supply, then those crafting a compromise must hold firm in excluding provisions that would sabotage either the ability of the fuel marketplace to respond to sudden shocks, or our first steps toward reducing our enormous greenhouse gas emissions. In the give-and-take world of Capitol Hill that won't be easy.
Monday, June 18, 2007
Global Energy Decarbonization
Last week I had the opportunity to hear Professor Jeffrey Sachs address a small “new energy” gathering hosted by Merrill Lynch in Manhattan. Dr. Sachs, who heads the Earth Institute at Columbia University, spoke about reducing greenhouse gas emissions, and I was impressed by his authoritative command of a subject that’s at least somewhat removed from his primary expertise in global development. But then, climate change could prove to be the ultimate global development issue, in terms of its potential impact and of the opportunities its solutions could create.
Dr. Sachs began by briefly reviewing the evidence for anthropogenic global warming and projections of its future progress. He made a strong case for the need to reduce humanity’s emissions of greenhouse gases--especially carbon dioxide--expeditiously, and for why that won’t happen without a serious and intentional global effort. The bulk of his remarks focused on the policy and technological tools required to stabilize atmospheric CO2 emissions between 450 and 560 parts per million (ppm), and preferably at the lower end of that range.
On the technology front, Professor Sachs expressed doubts that renewable energy and improved efficiency by themselves will deliver the energy sector emissions reductions that will be necessary in the next several decades. Based on his work with these countries, he sees the growth of China and India continuing to be fueled largely by coal, and for that reason regards carbon capture and sequestration (CCS) as “the indispensable technology” for achieving meaningful CO2 reductions from fossil fuels. When I expressed my concern, prompted by the recent MIT Future of Coal study, that the challenge of retrofitting CCS to existing power plants has been underestimated, he cited modeling work at the Earth Institute showing that even if applied only to new construction, CCS could reduce CO2 emissions from the power sector to very low levels by 2050, provided old plants were retired after 40 years.
The most interesting discussion, from my perspective, centered on the policy recipe for inducing sufficient emission reductions across the global economy. Dr. Sachs was optimistic about the prospect of achieving a follow-on agreement to the Kyoto Protocol among all of the large emitters, including China and the US, by 2009 or 2010. At the same time, he seemed quite skeptical about the practical aspects of a cap-and-trade mechanism, but stopped short of endorsing a simple carbon tax, because of the enormous financial transfers that would entail. He seemed to prefer a more flexible approach, tailored to each country’s situation and incorporating a mix of carbon pricing, mandates and performance standards for specific industries. (Something like that could be very compatible with the “stabilization wedge” approach proposed by Robert Socolow of Princeton.)
I thought the most reassuring element of the whole conversation wasn’t in the technical or policy details, but in Dr. Sachs’s conclusion that greenhouse gases can indeed be stabilized at a cost that won’t wreck the global economy, perhaps less than 1% of global GDP. When I hear that from climate scientists or politicians, I take it with a grain of salt. But coming from someone who has devoted his career to advancing the cause of economic growth and the extension of its benefits to the world’s poor, it gives me more confidence that taking action to retard climate change--in spite of the residual uncertainties--represents the right cost and risk trade-off.
Friday, June 15, 2007
Conflicting Signals - Revised
Several news stories caught my attention this week. They all relate to things I've covered at length in past blogs, and together they send conflicting signals about our energy future:
Item: In conjunction with the release of its annual statistical review for energy, BP has dismissed the prospect of an imminent peak in oil output due to production constraints. They see sufficient reserves to support another 40 years at current consumption levels. I'm not sure how that view squares with a variety of forecasts that show oil demand growing by 30-50% over the next 20 years. The difference between consuming 85 million barrels per day (MBD) for forty years versus growing to 120 MBD in twenty years and sustaining that level for another twenty is about 370 billion barrels, or roughly the current proved oil reserves of Saudi Arabia plus Iraq. (This corrects the math error in the earlier version of this posting; I don't believe it changes the point I was trying to make.)
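For readers who want to check the arithmetic, here is a quick back-of-the-envelope calculation. The 85 and 120 MBD figures come from the forecasts above; the linear ramp over the first twenty years is my own simplifying assumption:

```python
# Cumulative oil consumed under two scenarios, in barrels.
DAYS_PER_YEAR = 365

# Scenario A: flat consumption at 85 million barrels per day for 40 years.
flat = 85e6 * DAYS_PER_YEAR * 40

# Scenario B: demand ramps linearly from 85 to 120 MBD over 20 years,
# then holds at 120 MBD for another 20 years.
ramp = (85e6 + 120e6) / 2 * DAYS_PER_YEAR * 20
plateau = 120e6 * DAYS_PER_YEAR * 20

gap_billion_bbl = (ramp + plateau - flat) / 1e9
print(round(gap_billion_bbl))  # → 383, in the neighborhood of the ~370 billion barrels cited
```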
Item: Chevron is reported to have deferred drilling on the giant Jack discovery in the Gulf of Mexico until next year, because there aren't enough deepwater rigs available. (Perhaps this is what Senator Kerry was referring to yesterday, when he said that the Congress didn't need to open up more of the Gulf for drilling, because oil companies weren't drilling the areas that were already open.) Jack made headlines last fall, because it pointed to the potential of up to 15 billion barrels of unexploited reserves in the deepwater Gulf, in the "lower Tertiary" layer. While Jack's reserves might still incrementally support the BP view that there's plenty of oil left, the question of practical importance is not how much oil is still in the ground, but whether and how quickly we can extract it. Turning reserves into production is not as simple as it used to be, with the necessary access, hardware, and personnel all in short supply.
Item: The Congress is debating increasing fuel economy standards either to 35 mpg for all light-duty vehicles or to 36 for passenger cars and 30 for SUVs, depending on which version of the legislation you're looking at. There are two main ways to achieve these higher averages, either by raising the fuel economy of most vehicles by 50%, or by concentrating on converting about 15% of the new car fleet to ultra-efficient technologies, such as 100 mpg plug-in hybrids (PHEVs). (See below.)
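A rough calculation shows where those two figures come from. The ~23.3 mpg baseline is my own illustrative assumption, chosen to reproduce the percentages above under a simple sales-weighted average; note that CAFE's actual harmonic (gallons-weighted) averaging would demand an even larger PHEV share:

```python
# Two paths to a 35 mpg fleet average, from an assumed ~23.3 mpg baseline.
baseline, target, phev = 23.3, 35.0, 100.0

# Path 1: raise every vehicle's fuel economy uniformly.
uniform_gain = target / baseline - 1
print(f"uniform improvement needed: {uniform_gain:.0%}")  # → 50%

# Path 2: convert a share s of new sales to 100 mpg PHEVs, so that
# target = s * phev + (1 - s) * baseline (simple sales-weighted average).
share_simple = (target - baseline) / (phev - baseline)
print(f"PHEV share, simple average: {share_simple:.0%}")  # → 15%

# CAFE actually averages harmonically (gallons per mile), which requires more:
share_harmonic = (1/target - 1/baseline) / (1/phev - 1/baseline)
print(f"PHEV share, harmonic average: {share_harmonic:.0%}")  # → 44%
```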
Item: The timing of PHEVs was dealt a blow this week, when Toyota backed away from using lithium-ion batteries in its 2008 model Prius hybrid, apparently because of the "flaming laptop" problem. Li-ion batteries, which can be cycled many times without degrading and are much lighter than competing batteries, are widely regarded as essential to providing adequate range to make PHEVs practical at an acceptable penalty in bulk and weight. This is probably not insurmountable, but if PHEVs are delayed, then auto makers may be forced to start down the harder path of making all their new models much more efficient.
So while there may be plenty of oil left in the world, the pace of global economic growth and the nature of the accessible resources are stretching the industry's capacity to expand production fast enough to meet demand. And although the US is finally approaching consensus on the need to improve automobile efficiency, there is still no single off-the-shelf technology that will achieve it painlessly, without significant tradeoffs in vehicle cost, performance, and/or weight.
Thursday, June 14, 2007
A Wave of Wind
A long train ride provided the opportunity to review the new report from the Department of Energy on the state of wind power in the US. For those who like numbers, the ones in this document are fascinating. Significantly, they show that for the second year in a row, wind power capacity additions were second only to those for natural gas-fired turbines and ahead of coal, even when adjusted for "capacity factor," the average fraction of nameplate capacity actually utilized. Furthermore, in at least a few states wind power is now contributing shares of total electricity sales that are comparable to the national shares from conventional sources such as hydropower and nuclear energy. At least regionally, wind is becoming mainstream, rather than niche.
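To see how the capacity-factor adjustment works, consider this sketch. The megawatt and capacity-factor figures below are purely illustrative assumptions of mine, not the DOE report's actual numbers:

```python
# Capacity factor converts nameplate MW into the average output a plant
# actually delivers over a year.
def effective_mw(nameplate_mw, capacity_factor):
    """Average output in MW implied by nameplate capacity and capacity factor."""
    return nameplate_mw * capacity_factor

# Hypothetical one-year capacity additions: (nameplate MW, assumed capacity factor)
additions = {
    "natural gas": (3000, 0.40),
    "wind":        (2400, 0.35),
    "coal":        (600,  0.85),
}

for source, (mw, cf) in additions.items():
    print(f"{source:12s} {effective_mw(mw, cf):7.0f} MW average output")
```

With numbers like these, wind's adjusted additions still exceed coal's, even though each wind megawatt produces far less than a coal megawatt.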
The DOE report reflects the reality that practical wind power is not a national phenomenon, partly because of differences in state renewable energy policies, but also because wind resources are unevenly distributed. Twenty states account for almost 99% of all US wind power capacity, both incrementally for 2006 and cumulatively since the 1980s. Within that group, just six states account for 76% of 2006 additions and 72% of total capacity, with Texas, California, Minnesota and Washington making both lists.
In four relatively sparsely-populated states, New Mexico, Iowa, North Dakota, and Wyoming, wind made up over 5% of total electricity supply last year. That isn’t quite up to Danish levels of 21%, but it approaches the shares in Spain and Germany, the world leaders in installed wind capacity, and it greatly exceeds the US and global averages of just under 1% of total power from wind.
The data make it clear that consistent federal policy is the key to sustaining this kind of growth, and in particular avoiding the previous situation in which the Production Tax Credit for renewable power was at risk of expiring nearly every year, and periodically did. The Energy Policy Act of 2005 provided a two year extension of the PTC, and the Tax Relief and Health Care Act of 2006 tacked on an extra year, through the end of 2008. A permanent extension of the PTC, even one that included a gradual phaseout of the benefit, would put wind on an equal footing with conventional power and support the growth of domestic wind turbine manufacturing. The proliferation of state Renewable Portfolio Standards (RPSs) and the prospect of a national RPS from pending energy legislation are also helping to expand the market for wind.
The report provides some useful information on the structure of the US wind industry, which is dominated by independent power producers, including many small developers focused entirely on wind power, but which has recently seen significant consolidation and acquisitions by large, non-utility players such as oil companies and investment banks.
Some of the most interesting data concern the power price garnered by wind generators. Although this had been falling steadily throughout the decade, it appears to have turned up slightly last year. The DOE attributes this to rising turbine costs, but it must surely also reflect the higher price of the fuel for wind’s chief competitor, gas turbines, as well as the induced demand from state RPSs. However, we shouldn’t conclude from this uptick that wind is likely to be any less competitive in the future. That will depend on improvements in transmission capacity and load management, as well as the expansion of the turbine manufacturing base. It also raises the larger question of what should happen to the specific subsidies for wind under a national climate change policy that puts a price on carbon emissions from wind’s conventional competitors.
Wednesday, June 13, 2007
Cellulosic Spindletop
Thus far, much of the interest in cellulosic ethanol has focused on the development of enzymes to promote fermentation, rather than on the provision of the crop waste or non-food crops that would be converted into fuel. An article in Monday's San Francisco Chronicle looks at the activity at the "upstream" end of this future value chain. Even if the comparison to a "Spindletop" of cellulose is hyperbole at this point, it's intriguing to consider the risks that these developers are taking on, as they push to develop and plant new energy crops in advance of the full commercialization of the process they are intended to feed.
When I first ran across the research into methods of turning cellulose into ethanol a few years ago, the Department of Energy folks who were promoting it were targeting "corn stover", the waste from corn ethanol production, as their likely initial feedstock. That still probably makes sense, since this material will already have been harvested and collected in the course of producing the feed for current-generation ethanol plants. It's not hard to imagine first and second generation facilities operating side by side, until the cellulosic process becomes cheap enough to displace the grain-based version--if ever. However, others are looking beyond corn stover to switchgrass and the miscanthus hybrid mentioned in the Chronicle. The allure of this tall, rapidly growing "superweed" is understandable, although those pioneering this energy crop will have to manage two key issues.
First, they must find alternative outlets for their production, in case of delays in making cellulosic ethanol processing fully operational. That probably means lining up biopower plants to consume their biomass. This sector, which involves firing or co-firing smallish thermal power plants with wood and other biomass, has been a modest success, despite receiving much less publicity than wind or solar power. The key hurdle involved would be proximity, given the economics of transporting low-density fuels long distances.
The other issue concerns real or perceived hazard to other crops. Even if the ultimate variety of miscanthus chosen for energy cropping doesn't involve genetic modification, there's a long history in this country of foreign plants that have gotten out of control or failed to work out as intended. In California brittle, messy Eucalyptus trees were introduced from Australia as wind breaks. I'm particularly attuned to this example, since one of these ill-considered trees dropped a 9-ton limb on our house when I was growing up. The southern experience of kudzu is probably even more relevant to miscanthus.
Whatever the ultimate feedstock limit on biofuel production, it's clear that that limit will be much higher if it is based on a wide variety of hardy, low-input energy crops, rather than on a few food crops. Once the cellulosic ethanol process has been fully demonstrated at an industrial scale and the cost of its enzymes falls sufficiently to make it competitive with corn ethanol, its ultimate success will depend on the availability of high-quality, low-cost biomass. Outside the corn belt, that will mean dedicated energy crops such as miscanthus and switchgrass, if their commercial risks can be managed successfully. Perhaps the players in the different segments of this nascent value chain should start considering vertical integration or alliances.
Tuesday, June 12, 2007
A Predictable Bottleneck
Two weeks ago (5/25/07) I looked at the risk that higher biofuels mandates would deter the expansion of US refineries. Today's Wall Street Journal examines another constraint on those expansions: an industry-wide shortage of experienced personnel, along with higher construction costs. This problem is another consequence of the extended "wilderness years" from which the sector has only recently emerged. Two decades of low margins and large investments to comply with expanding environmental regulations imposed cost pressures that forced refineries to run as leanly as possible. In the meantime, an entire generation found employment in other, flashier industries. Unfortunately for consumers, the bill is coming due now.
The refining business changed in many ways after I started work at Texaco's Los Angeles Plant in 1979, right out of engineering school. Most of the large, sophisticated refineries were then owned by major oil companies, while the independents typically had smaller, simpler facilities that operated more opportunistically. But the increases in oil prices and energy costs in the late 1970s, along with the subsequent requirements to remove lead from gasoline and most of the sulfur from all petroleum products, weighed heavily on refinery economics, in a period in which capacity exceeded demand. Small, marginal refineries shut down, while larger but still unprofitable refineries were sold by the majors to the remaining independent refiners. The industry endured several such cycles, and in each one experienced workers were offered early retirement or simply made redundant.
There's no question that refinery modernization and automation reduced the minimum manpower requirements of these facilities. But in a period when refinery managers had little discretion in their investment budgets and couldn't control their fuel and feedstock costs, clever human resource strategies offered one of the few levers for "making your own margin," a mantra I recall from the late 1980s and early 1990s. Staff was reduced below the levels necessary to handle major annual maintenance, with much of this work outsourced to contractors. Some facilities went further, outsourcing even routine maintenance. That made sense, because a smaller number of shared workers could service the entire industry's needs, as long as work was scheduled carefully.
Events outside the industry undermined this strategy in ways that didn't really become apparent until the aftermath of the hurricanes of 2005, when a number of refineries needed major repairs, competing with the normal maintenance cycles. Had the US economy resembled that of the 1980s, refineries and their contracting companies would have simply staffed up again, and this aspect of those events would have been a blip on the charts. Instead, the transformation of the country during the 1990s created essentially a full-employment economy, and most of the workers with previous refining experience had found good jobs elsewhere, ones that didn't involve such noisy, smelly, and potentially hazardous facilities. At the same time, the whole refining sector is facing the same demographic challenge as other mature industries, with many of its most experienced workers approaching retirement age within the next five years. It is no longer hypocritical to say that people are these companies' most important assets.
Throw in all the other shifts that we see, and you have a scenario in which the prospect of ever expanding US refineries by enough to cover all of domestic demand looks not only unlikely, but perhaps even unattainable at any reasonable cost. Some level of expansion--or even a brand new refinery or two--looks desirable, because it would make us less vulnerable to events in other markets. However, our present reliance on gasoline imports creates an opportunity to expand biofuel production and introduce large numbers of diesel automobiles without causing further dislocations within the existing domestic refining industry.
Monday, June 11, 2007
Trash and Energy
Over the weekend I read an interesting article in the Economist on the merits of recycling municipal solid waste (MSW). Normally "premium content", the article appears to be accessible through a sponsored link. It got me thinking about the future of trash, and its two main energy implications. The Economist provides some interesting statistics on the energy and greenhouse gas savings associated with recycling, but largely ignores the potential conversion of non-recyclable waste into energy streams, either electricity or fuels. Such efforts have gotten significant media attention recently, including a high-profile project of ConocoPhillips and Tyson Foods. Given the volume of waste generated globally, the energy benefits of getting smarter about its management are too big to ignore.
The Economist starts with the question of whether recycling is truly beneficial and quickly confirms this, based on an analysis of life-cycle studies. I was struck by the potential to improve recycling rates in the US, which they cite at about 30%, to levels comparable to Germany's more than 50%. The example of San Francisco, which apparently recycles 69% of its waste, shows what is possible, though it might be hard to achieve this nationally. Even at current rates, recycling apparently saves 49 million tons of CO2 emissions here, the equivalent of about 0.7% of our total emissions. Doubling that won't get us off the hook for reducing emissions from other sectors, but together with making better use of the energy content of the non-recyclable waste, it could prove to be one of the cheaper sources of emissions reductions.
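As a quick sanity check on those two figures, 49 million tons and 0.7% together imply a total for US emissions that is consistent with the commonly cited scale of roughly 7 billion tons of CO2 per year:

```python
# Implied total US CO2 emissions, if 49 million tons saved is ~0.7% of the total.
recycling_savings_tons = 49e6
share_of_total = 0.007

implied_total_tons = recycling_savings_tons / share_of_total
print(f"{implied_total_tons / 1e9:.1f} billion tons CO2")  # → 7.0
```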
The article gives some hints about how recycling rates could be increased, and better technology seems to be the key. Much of our current recycling effort depends on households and businesses separating recyclable material from other waste, and sorting it into categories of plastic, glass, metal, paper, etc. The "single-stream" recycling technology mentioned in the article eliminates the pre-sorting but still requires separation of recyclables from non-recyclables. That step might benefit from some financial incentives, such as crediting can and bottle deposits at the point of trash collection, rather than returning them to the point of purchase, generating additional fuel and handling savings in the process. Finally, it ought to be possible to increase the scale of the sorting equipment at trash facilities to eliminate entirely the need for pre-sorting into categories by waste producers. With the entire MSW stream separated into recyclables, energy feedstocks, and residual non-recyclables, only a small fraction would still need to be landfilled. The environmental advantages of that are obvious, but the energy benefits should be quite meaningful, as well.
Friday, June 08, 2007
CAFE Reform
As I was reading the accounts yesterday of the Congressional meeting with the heads of Detroit's big three automakers on the subject of fuel economy, I was struck by Senator Dorgan's comment to Mr. Wagoner of GM, "I think this issue is over." While it's clear that he intended that as an admonition against more foot-dragging, I hope it isn't literally true. What we need is not just higher fuel economy standards, but a reform of the whole CAFE system, to make it more compatible with the other energy and environmental policies that will be forthcoming in the next few years. It's also important that we limit its potential for creating further unintended consequences.
In my blogging, I suspect I have seemed fairly negative about CAFE in general, because of the problems it has created, and because by itself it remains an incomplete mechanism for saving fuel or reducing emissions. The current system is a legacy of the first energy crisis, and while it has been updated somewhat, its main features were crafted in a world that worried much more about the gas going into a car's tank than the gases coming out of its tailpipe. Let's imagine that Congress could start with a blank piece of paper, rather than merely boosting what's already in place by 10 mpg or so. Here are some key attributes I believe a new CAFE system should have, in order to provide the maximum benefit for both energy security and climate change:
- SUV loophole phaseout - While the current system is morphing into a footprint-based scale, rather than two simple SUV and passenger car fuel economy targets, this is still a sop to Detroit's current advantage in big, heavy vehicles. By some date certain, say 2015, all vehicles intended for consumers should be treated equally under CAFE. Let's recognize that this will hit US carmakers disproportionately and give them some other benefit to offset it that doesn't just perpetuate this disparity.
- Shared penalty burden - Car companies have long argued that they can only make the cars that consumers will buy. In addition to penalties on manufacturers for missing their CAFE target, the gas guzzler tax on new vehicles should be increased and applied to all cars falling short of the CAFE for their class. Furthermore, this tax should be collected every time a vehicle is sold, not just on new cars. This would align consumers' and manufacturers' interests in cars that will meet the tougher standards.
- Fuel neutrality - If our goal is reducing energy consumption, it ought not matter what fuel a car uses. An SUV getting 15 mpg on E-85 may be credited with an effective 100 mpg on gasoline, but it is still guzzling ethanol that could displace gasoline in other cars. A "flexible fuel vehicle" uses the same amount of energy as the conventional model, and the time to close this wasteful loophole is now, before there are 50 million FFVs on the road. In the same way, the electricity consumption of plug-in hybrids, once available, should be counted along with their gasoline consumption. Electricity takes energy to generate, and 70% of the US supply produces greenhouse gas emissions. Ditto for hydrogen cars.
- Compatibility with CO2 targets - Even if CAFE does not eventually shift to tallying pounds of CO2/mile, rather than miles/gallon, it ought to incorporate a backup standard that addresses greenhouse gases directly, rather than indirectly through fuels with widely varying specific emissions.
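To see why the fuel-neutrality and CO2-compatibility points matter, here is a sketch of the dual-fuel credit and the mpg-to-CO2 conversion. The 0.15 divisor and 50/50 harmonic averaging reflect my reading of the statutory dual-fuel treatment, and the vehicle numbers are purely illustrative; the 8,887 grams of CO2 per gallon of gasoline is the standard combustion figure:

```python
def ffv_cafe_mpg(gasoline_mpg: float, e85_mpg: float) -> float:
    """CAFE rating of a flexible-fuel vehicle under the dual-fuel credit:
    alternative-fuel economy is divided by 0.15, then harmonically
    averaged 50/50 with the gasoline figure."""
    adjusted_e85 = e85_mpg / 0.15            # 15 mpg on E-85 counts as 100 mpg
    return 1.0 / (0.5 / gasoline_mpg + 0.5 / adjusted_e85)

def co2_grams_per_mile(mpg: float, grams_per_gallon: float = 8887.0) -> float:
    """Tailpipe CO2 per mile for a gasoline vehicle."""
    return grams_per_gallon / mpg

# An SUV rated at 20 mpg on gasoline and 15 mpg on E-85:
print(f"FFV CAFE rating: {ffv_cafe_mpg(20.0, 15.0):.1f} mpg")
print(f"Actual CO2 at 20 mpg: {co2_grams_per_mile(20.0):.0f} g/mile")
```

The vehicle's CAFE rating rises well above its real-world 20 mpg even if it never burns a drop of E-85, while its tailpipe CO2 is unchanged; that gap is exactly what a CO2-denominated backup standard would close.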
I don't imagine for a moment that changing our fuel economy standards along these lines would be easy, politically, but considering the scale of the problems created by the old system, it's worth taking the time to design a new CAFE system properly, especially if it's going to be one of our primary tools for getting oil imports and greenhouse gas emissions under control. Building a new CAFE program along these lines will also send important signals to the market and to consumers, in advance of the enactment of pricing greenhouse gases, via cap-and-trade or a carbon tax.
Thursday, June 07, 2007
OPEC vs. Biofuel?
I've devoted a lot of space this year to "food vs. fuel" concerns about biofuel production, and last month I looked at the possible impact of expanded biofuel mandates on US refinery additions. Now we see OPEC warning that biofuels put their oil expansion plans at risk and coining a lovely new phrase, "security of demand." While it's not hard to see that subsidized ethanol competes directly with the output of expensive new refinery units here, how likely is it that the national oil companies (NOCs) of major producing countries would scale back their oil exploration and production investments out of fear that biofuel will limit their market or profitability?
On the face of it, this notion sounds silly, given the relative scale and economics involved. Total US ethanol production this year is less than the equivalent output of OPEC's smallest crude oil producer, Qatar, even without adjusting for BTU content or energy inputs. And while not all OPEC countries are low-cost producers, there still seems to be a lot of oil remaining in the Middle East that could be developed at costs far lower than any biofuel, including cellulosic ethanol. And even at our targeted 35 billion gallons of biofuel per year, the extra 1.4 million barrels per day of equivalent supply would only cover a fraction of the expected global demand growth over the next couple of decades. There appears to be plenty of room for biofuel and OPEC fuel, unless the global economy slows, or the whole world gets really serious about climate change.
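The conversion behind that 1.4 million barrels per day figure is worth showing. A rough check, with the BTU ratio of ethanol to gasoline as my assumption (about two-thirds, before netting out energy inputs to production):

```python
# Converting the 35-billion-gallon biofuel target to oil-market units.
GAL_PER_BBL = 42
ETHANOL_BTU_RATIO = 0.66     # ethanol carries roughly 2/3 the energy of gasoline

gallons_per_year = 35e9
bbl_per_day = gallons_per_year / GAL_PER_BBL / 365
equiv_bbl_per_day = bbl_per_day * ETHANOL_BTU_RATIO

print(f"Volumetric: {bbl_per_day / 1e6:.2f} million bbl/day")
print(f"Gasoline-equivalent: {equiv_bbl_per_day / 1e6:.2f} million bbl/day")
```

The gasoline-equivalent result lands close to the 1.4 million barrels per day cited above, and either way it is small against expected global demand growth.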
Aside from the obvious PR element of such a comment, there might also be something deeper here that requires a little historical context to interpret. The last time oil prices were this high for this long, things ended badly for OPEC. The combination of price-driven conservation and price-stimulated non-OPEC production started to squeeze OPEC's barrels out of the market and set off a fight for market share that turned a slide into a collapse. Remember "netback pricing?" OPEC's members were suddenly thrown into a world in which access to markets could no longer be taken for granted, and that led to some interesting joint ventures and acquisitions. "Security of demand" evokes that period, and it might be a signal to international companies trying to gain access to OPEC reserves that the price might include guaranteed long-term outlets. That view is reinforced when you see the same OPEC official suggesting the cartel will need to attract foreign capital in order to expand output. The NOCs have many competing calls on their cash flow, besides reinvestment.
If the figures in the Financial Times article are correct, adding another 4 million barrels per day of OPEC production capacity through 2012 at a cost of $130 billion works out to about $2.50/barrel of reserves. At that rate OPEC has much less to fear from competition from biofuel than biofuel investors do from oil price risk. Either OPEC is sending a different message between the lines, or they have tired of being the butt of all our talk of energy independence.
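The $2.50/barrel figure implies an assumption about how long that new capacity produces. Working backward from the cited numbers (the producing-life values below are my assumptions, not the FT's):

```python
# What producing life makes $130 billion of capacity worth ~$2.50/barrel?
capex = 130e9                  # cited investment through 2012, dollars
capacity_bbl_per_day = 4e6     # cited capacity addition

for years in (20, 30, 35):
    reserves = capacity_bbl_per_day * 365 * years
    print(f"{years}-year producing life: ${capex / reserves:.2f}/bbl")
```

A producing life on the order of 35 years reproduces the ~$2.50/barrel figure; even at 20 years, the cost per barrel remains far below any biofuel.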
Wednesday, June 06, 2007
Warming Targets
Climate change policy is very much in the news these days, and it's been interesting to observe the intense focus by the media and commentators on the differences among various parties on this issue: the US, EU and China; the Administration and Congress; and the states and the federal government. As a practical matter, these differences must be bridged, or at least addressed, in order for meaningful action on climate change to occur. But while these gaps can't be ignored, they shouldn't blind us to the remarkable convergence that has emerged on the broader aspects of the issue--something that seemed nearly unattainable only a few years ago. It's also worth applying a little common sense to how we regard the remaining points of disagreement.
Although last week's announcement on climate change by President Bush is seen by many as too little, too late, it is noteworthy that he now embraces the idea of a global goal on reducing greenhouse gas emissions, encompassing not just the current signatories to the Kyoto Protocol but all the world's large emitters. Unless the US and the large developing economies participate in a global effort to reduce emissions, the Kyoto process is doomed to fail, as the countries bound by it account for an ever smaller fraction of future emissions. Even the EU, which has implemented sweeping measures to manage greenhouse gases, will not meet its target to reduce emissions to 8% below 1990 levels without significant new steps. So while cynics might see the President's proposal as a gambit to undermine the mandate of the UN Framework Convention on Climate Change--under the auspices of which Kyoto was established--it is clear that the focus must now be on the post-2012 period not covered by Kyoto, and on bringing the US, China and India to the table with the EU, in whatever venue they can all convene.
The differences between the US administration and the EU are magnified by the current G-8 meeting underway in Germany. The host government, supported by other EU members, has proposed a target of reducing emissions to a level that would prevent exceeding a 2 degree Celsius increase in temperature versus pre-industrial levels. Neither the US nor China will accept such a limit, for reasons that I find eminently sensible. While the appeal of the 2 deg. target is understandable, the inertia of our past emissions and present capital stock make achieving it look like a real stretch. Even at an atmospheric CO2 concentration of 400-475 ppm, a level we look almost certain to reach within the next several decades, a variety of climate models suggest we could exceed the 2 deg. goal. That goal has other problems, not the least being that we're already about halfway there. For that matter, it's not entirely clear that it makes sense to use as a baseline a period when temperatures were rebounding from the "Little Ice Age," a real phenomenon that was not invented by deniers of global warming. A goal based either on an absolute level of emissions or atmospheric CO2 concentration seems much more useful, given the range of uncertainties in the correlation between either of those and actual temperature.
Meanwhile, a draft House bill on climate change would derail efforts by states to limit their own emissions independently of a federal target, and China continues to point to inequities in per-capita emissions as a "get out of jail free" argument on emissions reductions. Call me naive, but I don't see how we are going to get past all these differences unless we start from where we concur: the planet is warming, due in large part to our emissions, and this is a global, not a regional or local problem. That much agreement seems finally to be in place, and we ought to build on it in whatever venue will move the ball down the field.
Tuesday, June 05, 2007
How Elastic?
Having just returned from my second visit in two months to the state with the highest gasoline prices in the lower 48, I am frustrated that a vital dimension is missing from the national debate about gasoline prices. For all of the renewed focus on raising the corporate average fuel economy (CAFE) of the new vehicle fleet, most of us still seem to behave as if the only influence we had on fuel economy was in our choice of a new vehicle, every 4-7 years. In fact, Americans make billions of choices every day that affect both their personal expenses and our aggregate fuel demand. If $3.50 gasoline isn't enough to force us to rethink jack-rabbit starts and 75 mph highway speeds, then what price would it take to actually reduce gasoline consumption year after year for decades?
The Energy Information Administration put out some analysis last week suggesting that the public has reacted to higher prices to some degree. As prices have climbed this spring, the seasonally-adjusted growth in gasoline demand has apparently fallen from well over 2% to just over 1%. So it took a 40% increase in prices from January to May to trim demand growth by roughly one percentage point. While high prices--whether from the market or from taxes--send the right long-term signal to encourage conservation and smarter vehicle technology, at least in the short term they may not deliver the persistent changes in consumption that will be necessary to reverse the trends on our oil imports and greenhouse gas emissions from transportation.
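Those EIA figures imply a startlingly low short-run price elasticity. A rough arc calculation, treating the one-percentage-point slowdown in demand growth as a ~1% demand reduction versus trend (my simplification):

```python
# Implied short-run price elasticity of gasoline demand from the figures above.
price_change = 0.40              # ~40% price rise, January to May
demand_change_vs_trend = -0.01   # growth slowed by about one percentage point

elasticity = demand_change_vs_trend / price_change
print(f"Implied short-run elasticity: {elasticity:.3f}")
```

An elasticity of -0.025 is well below the -0.1 to -0.3 range typically reported in the economics literature for gasoline, underscoring just how insensitive near-term demand has become.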
That ought to be of more than academic interest to legislators contemplating some form of carbon tax and/or emissions cap-and-trade. The relatively inelastic demand for gasoline is the Achilles heel of these policies, when applied to the transportation sector. If consumers don't respond by consuming less fossil fuel, carbon taxes will be fruitless for the environment and a dead weight on the economy. In order to be effective, a carbon tax would have to be delivered with a strong message about altering our mindsets on personal transportation. For the 95% of us who won't buy a new car this year, that means changing when and how we drive, rather than waiting for expensive new technology to deliver a car we like with the fuel economy we could have had if we had simply chosen a smaller/lighter/less powerful conventional car in the first place.
As long as Americans persist in seeing high gas prices driven by conspiracy, rather than supply and demand, we will feel divorced from the responsibility for our own actions in this area, and consumption and emissions will continue to grow. Witch hunts on "gouging" only reinforce that dysfunction. If consumers won't claim the effective 15-cent or so per gallon price cut that a little restraint on acceleration and top speed can provide on their next fill-up, they may just shrug off the 30-50 cents per gallon of carbon externality that various Senators are proposing to add to their gas bill. Where would that leave the levers of energy and climate change policy?
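For what it's worth, the 15-cent figure above is consistent with a quite modest driving adjustment. The efficiency gain below is my illustrative assumption; gentler acceleration and lower highway speeds plausibly deliver at least this much:

```python
# What fuel-economy improvement corresponds to a 15-cent/gallon "discount"?
pump_price = 3.50          # $/gallon, roughly the price discussed above
efficiency_gain = 0.043    # ~4.3% better mpg from easing off the pedal (assumed)

# Burning X% less fuel per mile is equivalent to a matching per-gallon
# discount on the miles actually driven:
effective_discount = pump_price * efficiency_gain
print(f"Effective saving: {effective_discount * 100:.0f} cents/gallon")
```

In other words, a driver who can squeeze even 4-5% better mileage out of an existing car captures a per-gallon saving comparable to the carbon charges now being debated.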
Monday, June 04, 2007
Soft Targets?
I was flying back from the west coast when news of the plot to blow up the JFK airport jet fuel system broke. Other than noting the distinctly different coverage by the two cable news channels available on Jet Blue, I had plenty of time to think about the implications of such an attack. As the authorities have stressed repeatedly, it's highly unlikely that the plotters could have produced the carnage they desired, though they might have disrupted air travel across the eastern seaboard for days. This story serves as a useful reminder that energy infrastructure remains an attractive, and in many cases "soft" target for terrorists. While noting the success of law enforcement at unraveling this particular scheme, it's equally important to ensure that the operators of such infrastructure have contingency plans in place to deal with the consequences of an actual attack.
As the story in the Washington Post notes, the built-in safeguards of the airport's fuel distribution system would have prevented any explosion or fire in the pipeline or tank farm from spreading to the terminal area. Like many large airports, JFK has a hydrant system, which delivers jet fuel from an underground pipeline directly "into wing", minimizing the number of trucks carrying fuel on airport premises. Instead, refueling trucks just carry a pump and the requisite hoses to connect aircraft to the hydrant valve. However, while this approach reduces the risk of fuel-related accidents on the tarmac, it also increases the system's vulnerability to disruptions at the airport's tank farm.
Destroying all the fuel at JFK--an act apparently beyond the competence of this daft-sounding group--would have dried up the hydrants and forced suppliers to bring both fuel and refueling trucks in from other nearby airports. Doing that on short notice would be a lot more effective if someone has spent time in advance identifying where all those trucks are, who owns them, and what else would be required to pull it off. I'd like to think all this was done right after 9/11, but I'll bet there's a flurry of activity this week, as Homeland Security officials, airport authorities and jet fuel providers review their procedures, in light of this scheme.
The most worrying aspect of this story is its reminder of the human element in infrastructure security. The key to this plot was a former airport employee with knowledge of the fuel system at JFK. The petroleum products industry alone has many thousands of current and former employees who possess this kind of information. Add the natural gas and utility industries, and even if only a small fraction of these personnel have the motivation and inclination to turn what they know to mischief, the number of potential threats is large enough to be worrisome. At a minimum, facility operators should make sure that flow diagrams, plot plans, and aerial photography that would be useful to anyone planning an infrastructure attack are not widely available.