Horizontal Scale- Numbering Up

As a backgrounder on this topic, an article I wrote while working at Zeton may be instructive:


Once we reach a certain maximum practical vertical scale for a particular piece of equipment, it becomes impractical to build a bigger unit, or to transport and erect it once it’s built, as noted in the previous article. At that point, we shed a few tears, because we know that the continually decreasing capital cost per unit of value created is now more or less over for that unit. But generally, this is not encountered in every unit on a plant at the same scale. When we’ve got the largest pump, filter, reactor etc. that can be made in practical terms, the next step isn’t to build two complete twin plants on the same site, much less two complete plants on two different sites to save on distribution. Rather, we will usually “number up” the largest practical thing, running several of them in parallel.

Sometimes we do that even though we’re below the maximum practical scale. Some plants have “trains” of duplicated units which run in parallel, such that one train can be shut down for maintenance, or because the market dries up. Common infrastructure is maintained, and that gives us some of the benefit of vertical scale- just not to the same extent as if we made each unit bigger.

An example is the Shell Pearl GTL project pictured in the 1st article.

(Shell Pearl GTL- a mammoth gas to liquids plant installed in Qatar- photo credit, Shell)

That project produces liquid hydrocarbons ranging from LPG to waxes, starting with fossil gas. The process, known as Fischer-Tropsch, takes CH4 apart to CO and H2 (and lots of CO2) and then back-hydrogenates CO to -CH2- and H2O. It is so inefficient, as a result of wasting all that hydrogen “un-burning” CO to make water, that the only way it can make money is:

  1. You must do it at positively mammoth scale to drop the marginal capital cost of the overall plant to the lowest practical level
  2. You must pair it with a gas source which is both enormous and basically free
  3. You must be able to dispose of fossil CO2 to the atmosphere, again basically for free

Accordingly Pearl GTL is positively mammoth- it must be to make money for its owners, at product prices the market is willing to pay. Capital cost was on the order of $20 billion.

It is in fact so huge that it “numbers up” its reactors.

Each reactor is a giant pressure vessel- weighing 1,200 tonnes- containing 29,000 catalyst tubes in a common shell from which the heat is removed. Each reactor is as big as Shell could make them, without what Shell considered to be excessive “heroics”.

There are two trains of reactors. And in each train, there are 12 reactors operating in physical parallel.

Shell Pearl’s reactors, installed. Photo Credit www.oilandgasmideast.com

This is basically an object lesson in “numbering up”. Each catalyst tube is as big as it can be without making the wrong products. As many such tubes are put into a single pressure vessel as practical, to make the cost of physical paralleling of these tubes as low as practical. And then, large numbers of these pressure vessels are installed again in physical parallel, arranged in trains.

The rest of the plant is similarly divided up into single or multiple units in accordance with the maximum practical scale at which the process itself can be carried out. If I recall correctly, it has two of the largest air separation plants ever built, a huge autothermal reformer etc.

Shell didn’t pursue this giant scale for fun and games. Apparently they looked at this project numerous times over more than a decade, before deciding to pursue it. They ran the numbers and convinced themselves, quite correctly, that the only way to make money from Fischer Tropsch, even with a nearly free gas supply, is to go big- so big in fact that numbering up the reactors was the only practical option.

“Numbering Up” to Minimize Scale-Up Risk

There’s another reason some people give for pursuing a “numbering up” rather than a scaling-up strategy. Sometimes, making the bigger unit with higher production capacity is easy- but sometimes, it’s very risky indeed. The larger unit we design might not work at all, or might make a different product, or undesired byproducts etc.- even if we retain experts to help us make the best stab at the larger unit that we can. The risk here isn’t just money, it’s time. A development project for a larger unit can take considerable time. And if customers are knocking down your door already today, you may not want to wait. Under those circumstances, numbering up small units which have already been proven to work seems a tempting option.

The Downsides of Numbering Up

Unfortunately, for every unit we have to run in physical parallel (because we had to number up rather than scaling up), we now have multiple devices to procure, install, connect, control and test. That means more valves, more switches, more wires, more instruments and controls, more installation labour, quality control, testing etc.- and more cost. It also means more likelihood of an individual failure of some kind, even though the failure of one unit might only reduce production by a small amount of the total (that’s an advantage of numbering up in “trains” as noted above).

If making a larger unit is possible, even with some risk, it’s likely worth doing- if the market can support that much product. What we are unlikely to get away with is to instead build multiple identical smaller units and operate them in physical parallel- especially if the individual units are a small fraction of the total production that the larger plant could produce.

Of course if we take “numbering up” or “horizontal scale” to its logical conclusion, we see some of the many things we use on a daily basis- articles that are mass produced in plants which themselves take advantage of as much vertical scale as possible, so that each commodity product item (computer, solar panel, car etc.) itself is as cheap as it can be.

This is where the proponents of certain schemes fall into the ditch. They frequently confuse the apparatus making the commodity goods with the commodity goods themselves!

Why “Mass Production” of the Means of Production Can’t Win

But surely if we mass-produce entire plants to make our commodity, those plants will get cheaper and their capital cost per unit of production will drop?

No, they won’t. The S^0.6 scaling relationship is a power law, and its economy of scale positively destroys any benefit which mass production of the plant itself could possibly generate.

Furthermore, the sorts of things we’re talking about here aren’t suitable to true mass production.

Modular Construction

Sure, you can build a complete chemical plant in a factory, in pieces of a size and weight suitable to be shipped by whatever means you like. Such construction is referred to as “modular”, and I was in the business of designing and building small modular chemical plants, used as pilot, demonstration and small commercial units, for over two decades. Modular construction offers many advantages when it’s done right- faster schedule, better build quality, and higher labour productivity among others.


And despite this, I can say, unequivocally, on the basis of that considerable experience, that nobody would ever achieve lower cost per unit of production by getting ten modular plants of identical design, built at the same time as modular projects, if building a 10x larger plant on site (referred to as a “stick built” rather than modular plant) was a practical alternative. Whereas a modular design/build operation might be able to offer ten plants of unit cost 1 for 10^0.9 = ~7.9x the cost of the first one, the 10x larger plant- including the extra cost associated with “stick building” the parts of it that were too big to modularize- would cost only 10^0.6 = ~4x. These figures are, of course, very rough, but they give you the basic idea.
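That arithmetic is easy to check. A minimal sketch, using the rough exponents above (0.9 for repeat modular builds, 0.6 for vertical scale- illustrative figures, not measured data):

```python
def total_cost_modular(n_plants, unit_cost=1.0, learning_exp=0.9):
    """Total cost of n identical modular plants, with a modest
    series-build discount (rough 0.9 exponent from the text)."""
    return unit_cost * n_plants ** learning_exp

def cost_scaled_plant(scale, base_cost=1.0, scale_exp=0.6):
    """Cost of one plant 'scale' times larger, per C2 = C1 * S^0.6."""
    return base_cost * scale ** scale_exp

print(total_cost_modular(10))   # ~7.9x the cost of one plant
print(cost_scaled_plant(10))    # ~4.0x - the single large plant wins
```

The single large plant delivers the same total production for roughly half the capital, which is the whole point.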

If you want real mass production, order 10,000 of them at the same time…but how likely are you to do that?

Where Horizontal Scale is Your Only Choice

The first article in this series examined the conditions under which the economy of vertical scale was valid. If any of those conditions are violated, horizontal scale may be your only choice.

If your product is unstable, e.g. ozone, you have no choice but to put an ozone generator on every site that needs ozone. That those ozone generators are mass produced, however, does not make ozone a cheap chemical! Its cost is high not just because it is energy-inefficient to make, but also because the need to make it on site in tiny ozone plants makes inefficient use of capital, even though the ozone units themselves are built in factories. That inefficiently used capital makes every kg of ozone that much more expensive.

If your feed or product can’t be distributed readily, you may also have no choice but to go for horizontal scaling, on separate sites. Of course that is no guarantee whatsoever that the resulting product, made using equipment with poor capital utilization efficiency (high marginal capital cost), is worth enough to make the enterprise into a business.

And no, “mass production” of the necessary plant equipment, absolutely won’t save you.

In the next articles we’ll use these concepts to evaluate a number of claims in the renewable/alternative energy world, to see whether or not they make sense.

Economy of Vertical Scale

Shell Pearl GTL project in Qatar- a project which can barely make money despite giant scale

There are lots of proposals emerging seemingly every day, based around the notion that we will mass produce some device, plant or process, and then use those mass-produced devices to produce some commodity product- frequently a product made by devices, plants or processes already operated commercially at much larger scale. A few examples seemingly popular at the moment include:

  • small modular nuclear reactors for power generation
  • distributed hydrogen generation (particularly for refuelling vehicles)
  • small units to generate value from fossil gas that would otherwise be flared, by converting it to fuels or chemicals
  • distributed units to process a distributed resource- waste products from agriculture, municipal solid waste, batteries- you name it

The idea is simple enough: we all know that when things are made in large numbers, a couple of things happen. One is that we get better at making them, and that learning drives down the cost of each unit produced. The first one costs a lot because it’s a prototype. The 2nd, if it is identical, is easier and hence cheaper because we’ve already proven the concept. And so it goes, with capital cost falling by a certain percentage with each doubling of cumulative production- a principle known as Wright’s Law.

Another is that when we increase the scale of the manufacturing plant (made possible by increased numbers of units being sold), we can benefit from the savings associated with automation etc. This is actually one of the features which enables Wright’s Law for the manufacture of certain types of devices.
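Wright’s Law is easy to express numerically. A quick sketch (the 15% learning rate below is an illustrative figure of my choosing, not one taken from any specific industry):

```python
import math

def wright_unit_cost(n, first_unit_cost=1.0, learning_rate=0.15):
    """Cost of the n-th unit under Wright's Law: each doubling of
    cumulative production cuts unit cost by the learning rate."""
    b = math.log2(1.0 - learning_rate)   # negative exponent
    return first_unit_cost * n ** b

print(wright_unit_cost(1))   # 1.0    - the prototype
print(wright_unit_cost(2))   # 0.85   - one doubling: 15% cheaper
print(wright_unit_cost(4))   # 0.7225 - two doublings
```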

The fundamental thesis of the sorts of schemes I’m going to take on in this article series can be stated more or less as follows: building big plants is hard. It takes time and lots of capital. So instead, we’ll make a very small plant, do it very well, and then mass produce the very small plant and operate many of them in physical parallel, either on the same site or on a multitude of sites, the latter to save the costs of distributing the product (or to eliminate the need to build infrastructure to distribute the product). And it’s my job in this series to explain to you why this idea has a rather tall stack of engineering economics- arising from basic physics- in the way of its success.

It’s important to provide a little context here, so that people can make sense of where such approaches are necessary, where they make sense, and where they’re just somebody playing around with your lack of knowledge of engineering economics and hoping you won’t notice.

Economy of Vertical Scale

You’ve probably noticed that we make many things in large, centralized plants. We distribute feeds of matter and energy and labour to those plants, and we distribute products from those plants to the people/businesses who need them. Why do we do that?

The answer comes from very basic physics, which leads in a very direct way to engineering economics.

Take the simplest example: a piece of pipe to carry a fluid from point A to point B.

Let’s say we’re moving a commodity with that pipe- doesn’t much matter what commodity. Let’s compare two pipes: one has a diameter of X, and the next has a diameter of 2X.

The first pipe can carry a given amount of product per unit time at a particular rate of energy lost to friction. The correct size of pipe is determined based on what’s referred to as an “economic velocity”- the linear velocity which optimally balances the cost of pumps/compressors and the energy lost to pressure drop in the pipe (higher for smaller pipes) against the capital cost to build, test and maintain the pipe (higher for larger pipes). A different optimal velocity exists for a chemical plant’s piping, for instance, than for a pipeline carrying fluids across a country (with the latter favouring lower velocities).

When we compare pipes with diameter X and 2X, we find right away that we can move four times as much material per unit time in the larger pipe, because the cross sectional area varies as D^2. Indeed it’s even more than 4x, because we get a benefit from an improved ratio between wetted perimeter (where wall friction happens), which varies with D, and cross sectional area which varies as D^2.

But the real benefit is this: the pipe capital cost doesn’t increase by anything near four times.

We’ve just discovered the physical basis for the economy of vertical scale, or “economy of scale” for short. It arises because relationships such as the surface area to volume ratio become more favourable with increasing scale.
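The pipe comparison itself takes only a few lines to verify:

```python
import math

# Doubling the diameter: flow capacity scales with cross-sectional
# area (~D^2), while wetted perimeter- where wall friction acts-
# scales only with D, so the area-to-perimeter ratio improves.
def pipe_area(d):
    return math.pi * (d / 2.0) ** 2

def pipe_perimeter(d):
    return math.pi * d

d1, d2 = 1.0, 2.0
print(pipe_area(d2) / pipe_area(d1))   # 4.0 -> four times the flow area
ratio_gain = (pipe_area(d2) / pipe_perimeter(d2)) / (pipe_area(d1) / pipe_perimeter(d1))
print(ratio_gain)                      # 2.0 -> twice as favourable area/perimeter
```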

Similar physics are active for all things on a project- every pump, valve, tank, heat exchanger, transformer, motor- you name it. The bigger you make it, the cheaper it gets (in capital cost) to produce a unit of value from that device.

Capital Cost Versus Scale

Let’s say we have two plants: the first plant produces 1 unit of production per day (doesn’t matter if that’s tonnes per day of a chemical, MW of electricity etc.), and the 2nd one produces 10 units per day of the same undifferentiated thing. We say that plant 2 is 10/1 = 10 times the scale of plant 1, i.e. we have a scale factor of 10.

To a first approximation, because of relationships like the one for the pipe example, it can be shown that:

C2 ≈ C1 S^0.6

Where C2 is the capital cost of the larger plant, C1 is the capital cost of the smaller one, S is the scale factor (the ratio of production throughput of plant 2 to plant 1), and 0.6 is an exponent which is the average for a typical plant. In fact, each thing in the plant has a similar relationship, with an exponent which ranges from about 0.3 (for centrifugal pumps) to 1 (for things like reciprocating compressors above a certain minimum size). Normalized over the cost of a typical plant, the exponent of 0.6 gives the best fit.

Let’s say that 1 unit of production generates 1 unit of revenue per day, so ten units would generate 10 units of revenue per day. And let’s say that 1 unit of production rate costs us $1 million in capital. Ten units of production rate would therefore cost us 10^0.6 x $1 million = ~$4 million in capital. The capital cost per unit produced is therefore $1 million/unit/day for the first plant, and $4/10 = $0.4 million/unit/day for the 2nd plant.
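The worked example above, as a few lines of Python (a sketch of the scaling equation only- the 0.6 exponent is the typical average discussed in the text):

```python
def plant_capital(base_cost, scale, exponent=0.6):
    """Capital cost of a plant 'scale' times larger: C2 = C1 * S^0.6."""
    return base_cost * scale ** exponent

c1 = plant_capital(1.0, 1)    # $1 million for 1 unit/day
c2 = plant_capital(1.0, 10)   # ~$4 million for 10 units/day
print(c2)                     # ~3.98
print(c1 / 1, c2 / 10)        # capital per unit/day: 1.0 vs ~0.4
```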

The marginal capital cost per unit of production is dramatically lower for the larger plant- assuming:

  • there’s a market big enough to consume all the production of the larger plant
  • there’s feedstock sufficient to feed the larger plant
  • the product and its feedstocks are both legal and possible to transport by practical means
  • we’re within the limits of the scaling equation, meaning that each thing we’re using in the plant, simply gets bigger
  • we’re making a commodity which is fungible, meaning that it’s interchangeable with the same product made elsewhere

We’ve just discovered the reason we do stuff at large scale! It doesn’t matter what undifferentiated fungible commodity product we’re making, as long as it meets our assumptions above (or only bends them a little), every unit of production (every tonne of product, or kWh of electricity etc.) becomes cheaper if we make it in a plant of larger scale.

Limits of Vertical Scaling

Of course there ultimately will be an optimization here too. We rarely think it’s a good idea to make all the world’s supply of any one thing of value in a single plant at one location on earth. That’s putting too many eggs in one basket. Distribution isn’t free of charge, much less free of risk, and logistics limits how far you can move a particular product before the cost of distributing it overwhelms the capital savings. Similarly, the feedstocks are often distributed and their logistics matters too.

Some products- and some starting materials – are too voluminous or unstable or dangerous to make the trip. Doesn’t matter how badly we want to make ozone, for instance, in one centralized plant to make it cheaper, because in 90 minutes, even under ideal conditions, it’s gone- it falls back apart to oxygen again. If you want ozone, you must make it on site and use it as it’s made.

With hydrogen as a feed, unless we’re using a tiny amount, it is generally better (in economic terms) either to set up production near an existing hydrogen plant, or to transport something else to make hydrogen from and then build a small to medium sized plant of our own. The infrastructure to economically move more than very small quantities of a bulky gas like hydrogen doesn’t exist beyond a few “chemical valley” type situations where large numbers of plants are co-located in the same geography, and bespoke new infrastructure suitable for moving pure hydrogen is very costly and slow to build.

The same goes for hazardous wastes: we may find it very efficient to process them in one giant plant, but there are often rules about transporting wastes across borders etc. that make it impossible to do so.

Vertical scale, within those limits, is king. It’s the reason we have centralized power plants, oil refineries, chemical plants, car manufacturing plants etc., rather than having one in every town, or every home. The resulting economy of scale can pay for considerable distribution infrastructure too- within limits.

Additional Advantages to Vertical Scale

Many other factors tend to generate lower costs of capital for larger projects rather than smaller ones. The proportion of a project spent on factors like engineering, permitting, controls and instrumentation, accessory facilities, civil/structural work, utilities etc. all tend to be lower per unit of production rate for larger rather than smaller plants, with exceptions of course.

When capital cost intensity decreases, so does the incremental cost of improvements to save energy such as heat integration. Whereas small projects often heat using fuel and cool using a cooling utility, heat integration becomes economically possible as projects become larger. And when plants are integrated into even larger facilities, energy integration from one plant to another becomes possible. Plants can share utilities such as steam, such that surplus steam from one plant is used for motive power or heating by another.

Is There Such A Thing As “Too Big”?

Absolutely. At a certain point, things are just too big to build in practical terms. With some pieces of equipment, you get to the point where there’s only one company in the world who would even try, and they get to name their price and delivery schedule. Sometimes, the issue is shipping the finished article to the site. Sometimes it’s a matter of not being able to afford to build the thing in place, because doing so requires basically building a factory with specialized equipment only for the purpose of building the one unit, squandering much of the benefit of greater scale.

All of these factors lead to the conclusion that there is a maximum practical scale for most things. And beyond that maximum practical scale, you’re pioneering- you’re going one larger, and taking onboard all the learnings of doing so on just your project. Future projects might look at your ruins and laugh, or they may benefit from your suffering, but you’re going to suffer either way.

A certain amount of “heroics” in terms of specialized logistics, heavy cranes, special crawler trailers, or site construction is necessary in any big project. But when a project goes too far, the result can be a higher cost than if you’d simply built two or even four smaller units which didn’t require heroics to the same extent. You can bet that major project teams agonize over these details, in an effort not to become a signpost on the road of project development which says, “go no further”.

In the next article in this series, we’ll discuss what you do when you reach the limits of vertical scaling.

Recommended Reading: “Capital Costs Quickly Calculated” – Chemical Engineering magazine, April, 2009

Are German Gas Pipelines “Fundamentally Suitable” for Hydrogen?

leaking hydrogen pipeline, as imagined by DALL-E

A recent study (https://www.dvgw.de/medien/dvgw/forschung/berichte/g202006-sywesth2-steel-dvgw.pdf), carried out by Open Grid Europe GmbH with the assistance of the University of Stuttgart and paid for by DVGW (Deutscher Verein des Gas- und Wasserfaches, the German Association for Gas and Water), did rather careful, extensive and thorough testing of a wide and characteristic variety of pipeline steels in hydrogen atmospheres of various pressures.

The report draws a shocking conclusion that has been parroted on high by the #hopium dealers at Hydrogen Europe and various other pro-hydrogen lobby groups:

“Hence, all pipeline steel grades investigated in this project are fundamentally suitable for hydrogen transmission.”

Well that’s it- case closed then!  All gas transmission pipelines are fundamentally suitable to transmit pure hydrogen!  The fossil gas distribution industry is saved! The “sunk cost” of all that infrastructure is rescued!   And all those worry-warts like myself who were pointing out the hazards of such a conversion were just wrong!

While I’m totally happy to find out when I’m wrong, so I can change my opinion to be consistent with the measured facts, I’m afraid that in this case, the answer is rather more complex than just “Paul Martin is wrong- gas pipelines are safe for use with hydrogen”.

TL&DR Summary: extensive materials testing in this study proves that molecular hydrogen does cause pipeline materials to fatigue crack faster (up to 30 times faster than they would in natural gas) and to lose as much as 1/2 their fracture toughness (making them more likely to break). But if you reduce the design pressure of the pipeline substantially- to 1/2 to 1/3 of its original design pressure- the gas industry would consider that “safe enough” under the rules intended for designing new hydrogen pipelines. That would of course drop the capacity of the existing gas pipeline by a lot, requiring that either the lower capacity be accepted or the line be “twinned” or replaced if it were switched to hydrogen. And a host of other problems per my previous article on this topic, are also unresolved.

What Was Studied

Modern gas transmission pipelines are generally made of low alloy, high yield strength carbon steels typified by API 5L grades X42 through X100.  The study examined steels commonly used in pipeline service in Germany, ranging from mild steels of low yield strength such as historical grade St35 (35,000 psi yield), through API 5L X80 (80,000 psi yield strength), including some steels used in the manufacture of pipeline components such as valve bodies.  In many cases, specimens were prepared in such a way that the bulk material of the pipeline, a typical weld deposit and the heat affected zone of the parent metal were all tested.  Thorough, careful work.

The specimens were tested in a cyclic (fatigue) testing apparatus which could be filled with hydrogen atmospheres of varying pressures.  The major factors examined were fatigue crack growth rate and fracture toughness, because these parameters are known, not merely suspected, to be affected in a detrimental way in these steels by the presence of hydrogen.

What They Found

To hopefully nobody’s surprise, the testing found that the presence of hydrogen does greatly accelerate fatigue crack growth, and significantly negatively affects fracture toughness in the tested steels.

Specifically, they were able to build a good model of the fatigue cracking behaviour of these materials.  They found, to quote p. 169 of the study:

  • At lower stress intensities and hydrogen pressure, crack growth is comparable with crack growth in air or natural gas
  • At higher hydrogen pressures, crack growth very rapidly approaches the behaviour at a partial pressure of H2 = 100 bar (~ 1500 psi) , even at lower stress intensities
  • The position of the transitional area from “slow” crack growth to H2-typical rapid crack growth (my emphasis) depends on the hydrogen pressure, although it cannot be predicted exactly

They also found that fracture toughness Kic was negatively affected by the presence of hydrogen.  Fracture toughness was, as expected, reduced even in low yield strength steels like St35, even when small amounts of hydrogen were added.  Fracture toughness was strongly reduced in higher yield strength steels such as L485 (a common modern pipeline steel used in Germany).  Even 0.2 atm H2 dropped fracture toughness greatly, and fracture toughness continued to drop steeply as pH2 was increased.  

fracture toughness vs H2 concentration per DVGW study
(source: DVGW study p. 176)

Hmm…so how did they draw the conclusion that these steels are “fundamentally suitable for hydrogen transmission”?

By comparison against the requirements of the hydrogen pipeline design/fabrication code/standard, ASME B31.12. 

The study found that the crack growth rate was consistent with the assumptions used in the hydrogen design de-rating method used in B31.12. They also found that in all the steels tested at pH2 = 100 bar, the minimum required Kic value of 55 MPa·m^½ was exceeded.

The TL&DR conclusion here is as follows:  yes, hydrogen causes pipeline steels to fatigue crack faster and to lose fracture toughness to a considerable extent, relative to the same steels used in air or natural gas.  But that’s okay…because it doesn’t crack faster or lose more fracture resistance than expected in a design code used for dedicated hydrogen pipelines.

A design code that fossil gas pipelines are not designed and fabricated to, by the way!

What Does This Mean?  Hydrogen’s Impact on Pipeline Design Pressure

Transmission pipelines are designed, fabricated and inspected in accordance with codes and standards which vary from nation to nation.  The common standards in use in the USA, which serve as a reference standard in many other nations, are ASME B31.8 for fossil gas and other fuel pipelines, and ASME B31.12 for bespoke hydrogen pipelines.  While the latter do exist (some 3000 km of dedicated hydrogen pipelines in the USA alone), the former are much more extensive (some 3,000,000 km of them in the USA).  And if you a) own such a pipeline or b) depend on it to supply the gas distribution network you own, and c) know that without hydrogen, you’ll be out of business post decarbonization, you will be very motivated to conclude that you can re-use your gas pipeline to carry hydrogen in the future.  Hmm, sounds like a bit of a potential conflict of interest, no?   

In both ASME standards, the design pressure of the pipeline is determined via a modification of Barlow’s hoop stress equation, involving the specified minimum yield strength of the piping (S), the pipe nominal wall thickness (t), the pipe nominal outer diameter (D), a longitudinal joint factor (E), a temperature de-rating factor (T), and a design safety factor (F) which depends on service class/severity and location. For hydrogen per B31.12, a new factor Hf, a “material performance factor”, is applied to effectively de-rate carbon steel pipeline design pressure to an extent rendering it (arguably) safe for use with hydrogen:

P = (2 S t / D) F E T Hf

These helpful tables excerpted from ASME B31.8 and B31.12 were borrowed from Wang, B. et al., Int. J. Hydrogen Energy, 43 (2018) 16141-16153

Tables of Hf and F from Wang et al
from Wang et al (reference above)

Design factor F, used in both codes, varies between 0.8 and 0.4 in ASME B31.8 depending on “location class”, which reflects factors including proximity to occupied buildings.

B31.12 for hydrogen has two design factor tables:  one for new, purpose-built hydrogen pipelines, with F values matching those in B31.8 for fossil gas (option B), and one for re-use of pipelines not originally designed to B31.12, which uses a lower (more conservative) table of F values ranging from 0.5 to 0.4 (option A).  The latter, option A, would apply to any fossil gas pipeline repurposed to carry hydrogen.   

For many existing gas pipelines, repurposing the line to carry hydrogen would require de-rating of the design pressure from the current level- often 72% or 80% of specified minimum yield strength- to perhaps 40-50%.

For hydrogen piping, the material de-rating factor Hf ranges from 1 for low yield stress piping materials used at low pressures, to 0.542 for high tensile, high yield strength materials operating at high system design pressures.  No such material de-rating factor is required in ASME B31.8 for the design of fossil gas pipelines.

In the extreme case, a pipeline designed and fabricated for fossil gas per ASME B31.8 in a low criticality (class 1 division 1) location far away from occupied buildings, made of a high yield strength steel, would have its design factor reduced from 0.8 to 0.5, and an Hf applied of 0.542.  The result would be a reduction in design pressure to 34% of the original value, i.e. a reduction of almost three-fold.
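That extreme case can be checked directly. Since S, t, D, E and T are unchanged when an existing line is repurposed, the ratio of new to original design pressure reduces to (F_new × Hf) / F_original. A sketch using the F and Hf values quoted above:

```python
def repurposed_pressure_ratio(f_original, f_new, hf):
    """Ratio of B31.12-repurposed to original B31.8 design pressure.
    From P = (2*S*t/D)*F*E*T*Hf with S, t, D, E, T unchanged:
    only F changes, and Hf is newly applied."""
    return (f_new * hf) / f_original

# Class 1 location, high yield strength steel:
# F drops from 0.8 to 0.5, and Hf = 0.542 is applied.
print(repurposed_pressure_ratio(0.8, 0.5, 0.542))   # ~0.34, i.e. ~34% of original
```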

A reduction in design pressure represents a very significant reduction in pipeline energy carrying capacity and would require either “twinning” of the line with new pipe, or replacement with new pipe. 

So:  Can We Use Existing Gas Transmission Pipelines for Pure Hydrogen?

The answer is much more complicated than a simple yes or no! 

Can they be re-used?  Maybe- but the pipe material isn’t the only issue.  There are many others, covered in my paper here:

(which I will shortly update with this new information in relation to piping materials- that’s why I love LinkedIn as a publishing medium, because it makes updates easy!)

Can they be re-used at their existing design pressure and hence at their existing energy carrying capacity?  The answer to that is almost certainly NO.  At bare minimum, de-rating of the design pressure would be required, likely to a significant extent.  This would necessitate either twinning the line with new pipe to carry the same amount of energy, replacing the existing pipe, or accepting the reduced capacity.

Will they blow up and kill people if used for hydrogen?  Well…they will crack much faster, even at reduced stress, and will be much more likely to break, than if they carried fossil gas without hydrogen in it. Gas pipelines are often operated at a pressure which varies with respect to time, cycling frequently, whereas dedicated hydrogen pipelines tend to be run at more constant pressures, resulting in less rapid fatigue.   But if the design criteria of a code (B31.12) not used in the design and construction and testing of the original pipe are retroactively applied to the existing pipeline, the industry might consider that to be “safe enough”.  The DVGW testing demonstrates that the design assumptions used in the hydrogen pipeline design code to set its “hydrogen design de-rating factor” are met, in metallurgical terms.

Let’s just say, that’s far from a ringing endorsement of the concept.  If I were a regulatory body in charge of ensuring that gas utilities keep their pipelines safe, I’d be paying very close attention to any pipeline being re-purposed for hydrogen.  The gas industry itself is at the very least in a potential conflict of interest in regard to this matter, and the regulatory bodies will need to step up and ensure that if any pipeline is converted to carry hydrogen- even hydrogen blends- this is done in a way that is truly safe.

The Myth of Hydrogen as an Energy Export Commodity

The Suiso Frontier, transporter of coal-derived #hopium in bulk from Australia to Japan!

There is a popular myth in the marketplace of ideas at the moment:  the notion that hydrogen will become a way to export renewable electricity in a decarbonized future, from places with an excess of renewable electricity, to places with a shortage of supply and a large energy demand.  It seems that the hydrogen #hopium purveyors are rarely satisfied with the notion that any particular place- my home and native land of Canada for instance- might make enough green hydrogen to satisfy its own needs for hydrogen, but rather, push on to sell the idea that we will become a hydrogen exporter too!

And like all myths, the notion of hydrogen as an export commodity for energy is separated from an outright lie by a couple of grains of truth.

The Lands of Renewable Riches

There are places in the world which have huge potential to generate high capacity factor renewable electricity, and which have no significant local use for that electricity (hint- that’s not Canada, folks!  Any hydroelectricity we have in excess has a ready market in the USA).  This is particularly true of special locations- deserts with oceans to the west- which are also so distant from electricity markets that the option of transporting electricity via high voltage DC (HVDC) is costly and challenging to imagine.  Places like Chile, Western Australia, Namibia and other points on the west coast of Africa come to mind.  Remember that high capacity factor renewables are essential if green hydrogen production is ever to become affordable- electrolyzers and their balance of plant are unlikely ever to get cheap enough to make cheap hydrogen from just the fraction of renewable electricity that would otherwise be curtailed.

The Energy Beggars

There are also places in the world with large, energy-hungry populations on small landmasses, who aren’t particularly fond of their nearest land neighbours:  South Korea and Japan come immediately to mind.  The option of importing HVDC electricity via a cable which can be “stepped on” by an unfriendly neighbour every time they’re irritated with you is clearly not appealing, if the lessons of the Ukraine war and Russian gas supply are of any use!  And there are numerous other places in the world which don’t want the cost and inconvenience of building out huge renewable generation and storage infrastructure for renewables with poor capacity factor, which would necessitate broader grids, storage and overbuilding.

These places also have a long history of importing fossil fuels by ship or by pipeline from distant countries- and, usually, a long history of trying unsuccessfully to get un-stuck from that situation for strategic reasons. 

The simpleminded approach to decarbonizing their economies is to import chemical energy, just in another form, this time without the fossil carbon- assuming that is both technically possible and affordable- as long as it’s by ship, so they can switch suppliers in an emergency.  

Hydrogen Exports to the Rescue!

Matching that obvious source of supply with that obviously thirsty demand seems a no-brainer.  And at first glance, hydrogen seems to fit the bill as a way to connect the two.  It is already produced at scale in the world:  we make 120 million tonnes of the stuff per year as pure H2 and as syngas, albeit almost all of it produced from fossil fuels, without carbon capture, right next to where it is consumed.

We do know how to move and store it, though we don’t do much of either.  Only about 8% of world H2 production is moved any distance at all, and most hydrogen is consumed immediately without meaningful intermediate storage.  And whereas there are about 3,000 miles of hydrogen pipeline in the USA, which sounds like a lot, that compares with 3,000,000 miles of natural gas pipeline in the USA.  Most hydrogen pipelines are used for outage prevention among refineries and chemical plants, and to serve smaller chemical users, in “chemical valley” type settings such as the US gulf coast, where you can’t throw a stone without hitting a distillation column.  The long distance transmission of hydrogen is, with very few exceptions, basically just not done.  It’s not impossible- we do know how to design and build hydrogen pipelines and compressor stations- it just doesn’t make sense to do it, relative to moving something else (natural gas, for instance), and then making the low density, bulky hydrogen product where and when it’s needed.

If you have energy already in the form of a chemical- particularly a liquid- moving that liquid by pipeline is the way to move it long distances with the lowest energy loss, lowest hazard and lowest cost per unit energy delivered.  When your energy is already in the form of a gas, it’s almost, but not quite, as good.  So at first glance, pipelines look appealing as a way to move hydrogen around- assuming that you already have hydrogen, that is! 

The re-use of existing natural gas pipelines for transporting hydrogen, either as mixtures with natural gas or as the pure gas, has been dealt with in another of my papers:


…and so we won’t re-hash the argument here.  But I concluded, with good evidence:

  1.  The re-use of natural gas long distance transmission pipelines for hydrogen beyond a limit of about 20% by volume H2 is not feasible in most pipelines due to incompatible metallurgy.
  2. 20% H2 in natural gas represents about 7% of the energy in the gas mixture, and hence isn’t as significant as it sounds in energy or decarbonization terms.
  3. Hydrogen, having a lower energy density per unit volume than natural gas, consumes about 3x as much energy in transmission as natural gas does in a pipeline, and would require that all the compressors in the pipeline be replaced with compressors of 3x the suction capacity and 3x the power.
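
Point 2 is simple volumetric arithmetic, using standard round-number lower heating values for the two gases (a sketch; ~10.8 and ~35.8 MJ/Nm3 are textbook figures for H2 and CH4 respectively):

```python
# Energy fraction contributed by H2 in a 20 vol% H2 / 80 vol% CH4 blend.
LHV_H2 = 10.8   # MJ/Nm3, volumetric LHV of hydrogen
LHV_CH4 = 35.8  # MJ/Nm3, volumetric LHV of methane

def h2_energy_fraction(vol_frac_h2):
    e_h2 = vol_frac_h2 * LHV_H2
    e_ch4 = (1.0 - vol_frac_h2) * LHV_CH4
    return e_h2 / (e_h2 + e_ch4)

print(f"{h2_energy_fraction(0.20):.1%}")  # ~7% of the blend's energy is from H2
```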

We are therefore really talking about using new long distance transmission infrastructure to move hydrogen around.  We won’t be able to simply repurpose the old natural gas transmission network, as desperately as the fossil fuel industry wants us to believe we can.  We can’t, even if we were to manage to take care of all the problems with the distribution network and all the end-use devices for natural gas that are also not compatible with pure hydrogen.

I had a careful look at a recent academic paper, which compared the shipment of hydrogen and other fuels by pipeline, against the shipment of similar energy via high voltage DC (HVDC):


Costs of transmission only, from De Santis, Lyubovsky et al Cell Press 2021 https://www.cell.com/iscience/fulltext/S2589-0042(21)01466-8

However, the paper commits what I have been calling the 2nd Sin of Thermodynamics:  it confuses electrical energy (which is pure exergy, i.e. can be converted with high efficiency to mechanical energy or thermodynamic work) with chemical energy (i.e. heat, which cannot be), just because they are both forms of energy with the same units.  They’re not equivalent, any more than American dollars and Jamaican dollars are equivalent simply because they’re both money, measured in units of dollars!  There’s an exchange rate missing…Note in the figure above, electricity and fuels are compared per unit of LHV (lower heating value).  Convert that back to equivalent units of exergy and you’ll see that hydrogen, at $2-4/kg, is vastly more expensive as a commodity than the on-shore wind electricity from which it would presumably be made.
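
The missing “exchange rate” can be sketched in a few lines.  The 55% hydrogen-to-electricity conversion efficiency and the wind price band below are illustrative assumptions of mine, not figures from the paper:

```python
# What hydrogen at $2-4/kg costs per kWh of *electricity* (exergy),
# if converted back at an assumed fuel-cell/CCGT-like efficiency.
LHV_H2_KWH_PER_KG = 33.3  # kWh (thermal, LHV) per kg of H2
CONVERSION_EFF = 0.55     # assumed H2-to-electricity efficiency

for usd_per_kg in (2.0, 4.0):
    usd_per_kwh_heat = usd_per_kg / LHV_H2_KWH_PER_KG
    usd_per_kwh_elec = usd_per_kwh_heat / CONVERSION_EFF
    print(f"${usd_per_kg:.0f}/kg H2 -> ${usd_per_kwh_heat:.3f}/kWh as LHV heat,"
          f" ${usd_per_kwh_elec:.3f}/kWh as electricity")
# Compare with roughly $0.03-0.05/kWh for good onshore wind:
# the two "currencies" are clearly not at par.
```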

The paper’s authors make other confusing choices, such as running the hydrogen at a considerably lower velocity in the line than indicated by normal pipeline design methods, and these choices affect the conclusions considerably.  So whereas the energy loss for H2 versus natural gas should be three times as high per unit of energy delivered, they conclude it is actually lower than for natural gas.  The losses stated for HVDC, of 12.9% per 1000 miles, are also considerably over-stated relative to the industry’s metrics of performance (see JRC97720 as just one example). 

When you consider that the energy loss involved in just making hydrogen from electricity is on the order of 30% best case (relative to H2’s LHV of 33.3 kWh/kg), and that this energy needs to be fed as electricity (work), it soon becomes quite clear that comparing the cost of transmission by pipeline versus HVDC is quite foolish if what you’re really looking at is the cost to move exergy (the potential to do work) from one place to another.  If you start with electricity, the cost of using hydrogen as a transmission medium for that electricity includes an electrolyzer and a turbine or fuelcell at the discharge end of the pipeline.  The pipeline itself isn’t actually the controlling variable!

Another paper I recently reviewed:  d’Amore-Domenech et al., Applied Energy, Feb. 2021

This paper looked at both subsea pipelines for carrying 2 GW of energy to distant locations, and at 0.6 GW delivery from offshore to onshore locations.  This is getting closer to the sort of thing which might be considered to move hydrogen from North Africa to Europe, or perhaps one day from Australia to anywhere else.

It turns out that both subsea pipelines and HVDC cables on the order of 1000 km, already exist.  In fact, much longer HVDC lines are currently under study, including one proposed from Darwin, northern Australia, to Singapore, and another from Morocco to the UK.

The paper’s authors assume that HDPE pipe would be used to transmit the hydrogen at electrolyzer discharge pressures of ~ 50 bar(g), to avoid subsea compressor stations ($$$$$).  The pipeline loses hydrogen by permeation through the HDPE pipe (losses of a high-GWP gas to the ocean and hence the atmosphere), and the pipe is increased in diameter along its length as the hydrogen expands due to frictional pressure loss.

Sadly, the paper’s authors also commit the 2nd Sin of Thermodynamics, comparing a MWh of delivered electricity (pure exergy) as if it were worth the same as a MWh of hydrogen higher heating value (HHV).  This is a rather glaring error that seems to have passed right through peer review without comment, and it affects the conclusions significantly.

The authors include an 80% (state of the art best case) efficiency for converting electricity to hydrogen HHV at 50 bar(g), and look at this over a 30 yr lifetime.

The energy lost over 30 yrs for HVDC is 1.2×10^4 TJ.

The energy lost over 30 yrs for the H2 electrolyzer and pipeline is 1.2×10^5 TJ, i.e. ten times higher.

Despite this, they conclude that the lifecycle cost of transmitting energy in the form of hydrogen is a little lower for a pipeline than for HVDC at > 1000 km in length.  That is, of course, more than cancelled out by the 50% conversion factor and the cost of the device at the end of the pipe required to convert hydrogen HHV back to electricity again, both of which the paper ignores entirely.  In other words, entirely opposite to their conclusion, their paper leads us to conclude that HVDC is actually considerably cheaper on a lifecycle basis.

For distances longer than 1000 km, the paper concludes that liquid H2 transport is the better option.  We’ll deal with that one next…

We won’t even discuss the shipment of compressed gas in cylinders.  A US DOT regulated tube trailer carrying hydrogen at 180 bar(g) (2600 psig), i.e. the biggest tank of hydrogen gas currently permissible to ship over US roads, contains a whopping 380 kg of H2.  While one day US DOT may permit pressures to increase to 250 or even 500 bar(g), it should be clear that shipping BILLIONS of kilograms of hydrogen as a compressed gas in cylinders across transoceanic distances is utterly a non-starter.
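
To put a number on “non-starter” (a sketch using only the 380 kg trailer payload quoted above):

```python
# How many 180 bar(g) tube-trailer loads per billion kg of H2 shipped?
KG_PER_TRAILER = 380  # payload of the US DOT-regulated trailer described above

loads = 1_000_000_000 / KG_PER_TRAILER
print(f"{loads:,.0f} trailer loads per billion kg of H2")  # ~2.6 million loads
```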


Liquid Hydrogen (LH2)

Michael Barnard’s article on the subject is well worth a read:


Here’s my stab at evaluating the export of hydrogen as a cryogenic liquid.

Hydrogen becomes a liquid at atmospheric pressure at a temperature of around -253 C, or 20 kelvin, i.e. 20 degrees above absolute zero.  At that mind-bogglingly low temperature, it is still not very dense.  Whereas compressed hydrogen at 10,000 psig (700 bar(g)) is about 41 kg/m3, liquid hydrogen is only 71 kg/m3.  The improvement in energy density per unit volume is not spectacular.  And whereas compressing hydrogen from the 30-70 bar output pressure of an electrolyzer to 700 bar(g) can be accomplished for about 10% of the energy in the hydrogen (in the form of work, i.e. electricity, mind you!), liquefying hydrogen takes a mind-boggling 25-35% of the LHV energy in the product hydrogen- again, in the form of electricity to run the compressors.  That compares to ~10% for liquefying methane (LNG).
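
A quick sketch of the trade, using the densities and percentage tolls quoted above (the per-kilogram electricity figures are just those percentages applied to hydrogen’s 33.3 kWh/kg LHV):

```python
# Liquefaction vs 700 bar compression: density gained and electricity spent.
LHV_H2 = 33.3      # kWh/kg, lower heating value of hydrogen
RHO_700BAR = 41.0  # kg/m3, compressed H2 at ~700 bar(g), ambient temperature
RHO_LH2 = 71.0     # kg/m3, liquid H2 at ~20 K, 1 atm

print(f"Density gain from liquefying vs 700 bar: {RHO_LH2 / RHO_700BAR:.2f}x")
print(f"Compression toll  (~10% of LHV): ~{0.10 * LHV_H2:.1f} kWh(e)/kg")
print(f"Liquefaction toll (25-35% of LHV): "
      f"{0.25 * LHV_H2:.1f}-{0.35 * LHV_H2:.1f} kWh(e)/kg")
```

A ~1.7x density gain, bought for roughly three times the compression energy: that is the whole liquefaction bargain in two lines.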

Take the exergy of the hydrogen itself into account by applying a conversion efficiency of 50% to the hydrogen at destination to convert it back to electricity, and even without the energy involved in transport of the liquid hydrogen (i.e. whatever energy it takes to move the ship etc.), you get a loss on the order of 50-60%, i.e. you are making very poor use of electricity at the source from which you’re making hydrogen and then liquefying it.

Today, we use liquid H2 as a hydrogen transport medium only very rarely.  The major uses for liquid hydrogen are the upper stages of rockets, plus the truck distribution of merchant hydrogen where volumes are too small to justify a pipeline.  That’s about it- there’s no other meaningful use which justifies the extreme complexity and cost of handling a 20 kelvin liquefied gas.

The problems of hydrogen liquefaction are considerable, and very technical.  First, hydrogen heats up when you expand it any time you start at a temperature above about -73 C (200 K)- hydrogen’s Joule-Thomson coefficient is negative above that inversion temperature.  That means, if you want to liquefy hydrogen, you first have to cool it down considerably as a gas.  Generally liquid nitrogen precooling is used for this purpose, necessitating an air liquefaction plant as part of the works.  After precooling, the hydrogen can be liquefied by either a helium refrigeration cycle or a hydrogen Claude cycle (where hydrogen itself is the refrigeration fluid).


(image source:  Linde)

The energy input required is considerable as a result of the difficulty of rejecting heat to the ambient world when starting at such a low temperature.  And although that would be bad enough, hydrogen has another wrinkle:  spin isomerization.  The nuclear spins of the two hydrogen atoms in a hydrogen molecule can be either aligned (ortho) or opposite (para).  When you condense gaseous hydrogen, you get a mixture of about 75% ortho and 25% para-hydrogen.  As the liquid sits in storage, ortho gradually converts to para, releasing heat.  And that released heat escapes the only way it can- by boiling hydrogen you’ve spent so much energy to cool and condense.  A catalyst is required to carry out the conversion more quickly so the heat can be removed prior to storage, rather than causing excessive boil-off while the H2 is being stored.

Keeping heat out of liquid hydrogen at 20 kelvin, however, is easier said than done.  Vacuum insulated “dewar” type tanks can be constructed, and for applications like this, spherical containers are the optimal shape with the lowest surface area per unit volume.  A land-based LH2 dewar tank about as big as you can make it reportedly has excellent performance, with only 0.2% of the hydrogen in the tank boiling off each day.  Any tank smaller than that, or of a less optimal cylindrical shape, allows even MORE than 0.2% of the hydrogen to boil off per day.  And in transit, on a ship or truck, recapture and re-condensation of the boil-off gas is not possible.  The best you can do is to burn it, hopefully as a fuel- or, if in port, to just flare it to prevent it from becoming a greenhouse gas.  H2’s global warming potential (GWP) is at least 11x as great as CO2’s on the 100 yr time horizon, and even higher on the more relevant 20 yr time horizon.

Once you get to the size of tank possible to put on a truck, 1% boil-off per day is about the best you can do.  Want to make it worse?  Just use a smaller tank!  
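
Boil-off compounds daily, so voyage length matters.  Here is a sketch using the two boil-off rates above and an illustrative 20-day voyage (my assumption, roughly the scale of an Australia-to-Asia round trip):

```python
# Cumulative boil-off: daily fractional losses compound over the voyage.
def remaining_fraction(daily_boiloff, days):
    """Fraction of the original LH2 cargo left after `days` of storage."""
    return (1.0 - daily_boiloff) ** days

for rate, label in ((0.002, "best-case land tank, 0.2%/day"),
                    (0.010, "truck-sized tank, 1%/day")):
    left = remaining_fraction(rate, 20)  # illustrative 20-day voyage
    print(f"{label}: {1 - left:.1%} of cargo lost in 20 days")
```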

Hydrogen’s low density, even as a liquid, is another problem.  Liquid hydrogen, at 2,800 kWh/m3 HHV, contains only about 44% of the HHV energy per unit volume of liquid methane (6,300 kWh/m3), i.e. LNG.  On an LHV basis, i.e. if we need work or electricity at the destination instead of heat, the gap persists- about 2,360 kWh/m3 for liquid hydrogen versus roughly 5,900 kWh/m3 for LNG, i.e. about 40% of the energy density per unit volume.  That means either larger energy cargo ships, or several ships, to carry the same amount of energy- even if boil-off is managed.
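
These volumetric figures follow directly from density times per-kilogram heating value (a sketch; the ~423 kg/m3 LNG density is that of liquid methane, and 33.3 and 13.9 kWh/kg are the LHVs of hydrogen and methane respectively):

```python
# Volumetric LHV energy density: liquid hydrogen vs LNG.
RHO_LH2, LHV_H2 = 71.0, 33.3    # kg/m3, kWh/kg
RHO_LNG, LHV_CH4 = 423.0, 13.9  # kg/m3 (liquid methane), kWh/kg

e_lh2 = RHO_LH2 * LHV_H2   # ~2,360 kWh/m3
e_lng = RHO_LNG * LHV_CH4  # ~5,900 kWh/m3
print(f"LH2: {e_lh2:,.0f} kWh/m3, LNG: {e_lng:,.0f} kWh/m3,"
      f" ratio {e_lh2 / e_lng:.0%}")
```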

Converting Hydrogen to Other Molecules for Shipment

Confronted with these obvious difficulties, which make hydrogen rather a square wheel for the transport of energy across transoceanic distances, hydrogen proponents don’t give up!   Naturally, they try to shave the corners off hydrogen’s square wheel by converting it to another molecule with more favourable transport properties.  The four main candidates are ammonia, methanol, liquid organic hydrogen carriers (LOHCs), and metal hydrides.  We’ll take these one at a time.

Ammonia

While making green ammonia to replace the black ammonia we rely on to feed about half the humans on earth is inarguably a high merit order use for any green hydrogen we might afford to make in the future, some have gone on to suggest ammonia as a vector by which hydrogen itself may be transported.  

Ammonia is discussed in some detail in my paper here:


The advantage is that it is made from nitrogen which can be collected anywhere from the air.  The downsides are many:

  • Heat is released at the point of manufacture, where energy is already in excess, hence it is likely this energy will be wasted
  • The Haber-Bosch process, while efficient after ~ 110 yrs of optimization, must be operated continuously to have any hope of being economic.  It is high pressure and high temperature, and hence not suitable to cyclic operation as energy supply rises and falls.  This necessitates considerable hydrogen storage if the feed source is renewable electrolysis
  • Breaking ammonia apart again to make hydrogen takes heat, at the place where you’re short of energy, and at fairly high temperature (so waste heat from fuelcells isn’t likely to be useful)
  • Ammonia is a poison to fuelcell catalysts
  • When burned in air, ammonia generates copious NOx, requiring yet more ammonia to reduce these toxic and GWP-intensive gases back to nitrogen again (the nitrogen oxides include N2O- a gas with ~300x the GWP of CO2, persistent in the atmosphere; NO- a transient species; and NO2- the toxic one, water soluble and not persistent in the atmosphere but a precursor of photochemical smog etc.  Burn ANYTHING- hydrogen, ammonia, gasoline, your old boss’s photograph- and you get all three)
  • Ammonia itself is dangerously toxic, especially in aquatic environments
  • Large shipments of ammonia would be insidious targets for terrorism
  • Cycle efficiencies, starting and ending with electricity, for processes involving ammonia, are on the order of 11-19%, meaning that you get 1 kWh back for every 5-9 kWh you feed
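
That last bullet’s “1 kWh back for every 5-9 kWh you feed” is just the reciprocal of the quoted 11-19% cycle efficiency:

```python
# Electricity-in per electricity-out for the ammonia energy cycle,
# at the 11-19% round-trip efficiencies quoted above.
for eff in (0.11, 0.19):
    print(f"{eff:.0%} cycle efficiency -> {1 / eff:.1f} kWh in per kWh out")
```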

Because substantially all ammonia used in the world is of fossil origin- made from black hydrogen, itself made from fossils with methane leakage and without carbon capture- and because its use literally feeds the people of the earth, I see any use of ammonia as a fuel before black ammonia is replaced with green ammonia as basically energetic vandalism.  It has an objective clearly different from that of decarbonization, in my view.

Methanol

Methanol is currently made exclusively from syngas (mixtures of H2 and carbon monoxide) produced from natural gas by reforming or from coal by gasification.  It can also be made from an artificial syngas produced by running the reforming reactions backward- starting with CO2 and H2 and catalytically producing CO and H2O.  While that energy loss, generating water by basically “un-burning” CO2, is substantial, as long as a CO2 source of biological or atmospheric origin can be used, methanol has a series of attractive properties:

  • It is a liquid at room temperature, not just a liquefied gas, so its cost of storage is very low per unit energy (though tanks do need inerting, which is unnecessary for gasoline or diesel)
  • It is toxic, but nothing even close to the toxicity of ammonia
  • Its energy density is lower than that of gasoline and diesel, but once made, it is considerably more favourable as an energy transport or storage medium than ammonia or hydrogen
  • It may be reformed at modest conditions back to synthesis gas again
  • It is a versatile chemical used to make many other molecules, including durable goods such as plastics, and if we are not foolish enough to burn those materials at end of life, it can be a mechanism for carbon sequestration

The big challenge for methanol is that source of CO2.  Direct air capture wastes too much energy in a needless fight against entropy, so forget about it as a source of CO2 to make methanol in my opinion.  Unless a concentrated source of non-fossil CO2 (a brewery, anaerobic digester or biomass combustor) is colocated with the source of electricity and hence hydrogen, the shipment of liquid CO2 by sea to make methanol from, replicates many of the economic challenges of LNG and liquid hydrogen.

While making green methanol is also a clearly no-regrets use of any green hydrogen we may happen to make, methanol as an “e-fuel” is a challenging issue for the above-noted reasons.  Obtaining decent economics per delivered joule would seem very challenging indeed.  Therefore, the hopes of companies like Maersk that they will be able to fuel their ships on fossil-free methanol in the near future, seem perhaps decades premature at best.

The use of methanol as a vector for the transmission of hydrogen for use as hydrogen, makes no sense to me at all.  Reforming the resulting CO back to CO2 and more H2 again using water is possible, but too costly and lossy to make energetic sense to me.   

Liquid Organic Hydrogen Carriers (LOHCs)

These are liquid organic molecules like methylcyclohexane, which can be dehydrogenated to produce hydrogen and toluene.  The toluene, also a gasoline-like liquid, can be shipped back to wherever hydrogen is in excess, and hydrogenated to produce methylcyclohexane again.  Numerous molecule pairs are candidates, each with its suite of benefits and disadvantages.

The big disadvantages of LOHCs are similar to those of ammonia:

  • Parasitic mass is considerable – for MCH/toluene, only 6% of the mass of MCH is converted into hydrogen at destination, and the other 94% of the mass has to be shipped in both directions.  On this basis alone, LOHCs are not good candidates as transportation fuels (i.e. fuels for use to move ships, trucks etc.) in my view
  • Like with ammonia, heat is produced at the place where you have energy in excess, and energy is required (again at high temperature) to supply the endothermic heat of dehydrogenation at destination.  The temperatures required are too high for waste heat to be used
  • There will inevitably be some loss of the molecules in each step.  Yields will never be 100%
  • Considerable capital and operating/maintenance cost will be required at both ends, for the hydrogenation/dehydrogenation equipment.  These are chemical plants, not simple devices like fuelcells or batteries, and hence they will be economical only at very large scale if ever

LOHCs don’t seem to have a good niche in my view. They are useless as sources of hydrogen for transport, below the size of perhaps a ship.  While some, such as Roland Berger in a recent report:


…tend to conclude that LOHCs are a better way to do “last mile” transport of hydrogen under certain circumstances than some of the other options, that is again really a desperate reaction to the impracticality of hydrogen itself as an energy distribution vector, rather than a vote of confidence in the technology itself.

Solid Metal Hydrides

Hydrogen reacts with both the alkali metals (Li, Na) and alkaline earth metals (Ca, Mg), as well as with aluminum and other elements, to form hydrides, i.e. compounds in which hydrogen is present as the H- ion.  These hydrides can form at the surface of the metals, providing a means of “chemi-sorption” for storing hydrogen at lower pressures than required for the pure compressed gas.  However, the price of the lower storage pressure is a greatly higher parasitic mass- useless for transport applications- plus the need for heat (generally provided by electric heating) to desorb the hydrogen when required.

The hydrides themselves can also be made as pure solid substances, such as “alane” (AlH3), magnesium hydride (MgH2) or NaBH4 (sodium borohydride).  These metal hydrides react with water, producing twice as much hydrogen as is found in the original hydride molecule.  For instance:

MgH2 + 2 H2O ⇒ 2 H2 + Mg(OH)2
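
The stoichiometry above checks out- half of the delivered hydrogen actually comes from the water, not the hydride:

```python
# MgH2 + 2 H2O -> 2 H2 + Mg(OH)2: hydrogen yield per unit of hydride.
M_MG, M_H = 24.305, 1.008   # atomic masses, g/mol
m_mgh2 = M_MG + 2 * M_H     # MgH2: ~26.3 g/mol
m_h2_out = 2 * 2 * M_H      # 2 H2 produced: ~4.0 g/mol

print(f"H in the hydride: {2 * M_H:.2f} g/mol")
print(f"H2 delivered:     {m_h2_out:.2f} g/mol (exactly double)")
print(f"Yield: {m_h2_out / m_mgh2:.1%} of the hydride mass as H2")  # ~15%
```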

Sadly, there’s the rub:  aside from the considerable problem of parasitic mass, in each case, the re-formation of the original hydride involves two steps:

  • Production of the metal again from its hydroxide, and
  • Production of the hydride by reaction with hydrogen at high temperature and pressure

The energy cycle efficiency of all such schemes involving metal hydride reactions with water is therefore dismal, tending to be in the single digits, because the process of re-making the metal and then the hydride is so energy-intensive.  Wasting 10 joules merely to deliver 1 joule at destination is not something we’re going to do at scale- or at least that’s my hope!


The export of hydrogen, either as hydrogen itself or as molecules derived from hydrogen for use as fuels directly or as sources of hydrogen to feed engines or fuelcells, seems to be an idea which, although technically possible, is extremely difficult to imagine becoming economic.  Given the energy losses, capital costs and other practical matters standing in the way, the push for hydrogen or hydrogen-derived chemicals as vectors for the transoceanic shipment of energy seems to be rather more a result of #hopium addiction being spread by interested parties than something derived from a sound techno-economic analysis.

What Should We Do Instead?

It’s clear to me that the opportunity of high capacity factor renewables from hybrid wind/solar installations along the coasts in places like Chile, Western Australia etc. is considerable, and so is the potential for these green energy resources to decarbonize our society.

In my view, however, we’re thinking about it wrong.

We should be thinking about Chile, western Australia etc., becoming hubs for the production of green, energy-intensive molecules and materials- things that we need at scale, which represent large GHG emissions because we currently make them using fossil energy or fossil chemical inputs.  The list includes:

  • Ammonia, and thence nitrate and urea, for use as fertilisers (NOT as fuels!)
  • Methanol, for use as a chemical feedstock, again not as a fuel
  • Iron- hydrogen being used to reduce iron ore to iron metal by direct reduction (DRI), which can then be made into steel at electric arc mini-melt mills wherever the steel is needed
  • Aluminum, and perhaps one day soon, magnesium too- neither of which involve hydrogen really, but both of which will need electricity in a big way if we want to decarbonize them
  • Cementitious/pozzolanic materials- though these are such bulky and low value materials that shipment across transoceanic distances is hard to imagine we’ll be able to afford
  • who knows- maybe diamonds and oxygen! (Just kidding!)

For locations such as north Africa, the obvious solution is to skip the hydrogen and indeed the molecular middleman entirely, and simply to export electricity via HVDC directly to Europe.  Although that doesn’t address the need for energy storage, the renewable resources required for the manufacture of economical green hydrogen already imply high capacity factor, and proximity to the equator makes their seasonal variation considerably lower as well.  Clearly, in my view, making hydrogen simply to permit electricity to be stored for later use is very hard to justify, given that the best case cycle efficiency of hydrogen itself- without hydrogen long distance transport and distribution taken into account- is on the order of 37%.  That is far too lossy a battery to be worth major investment.  Drop that even further by adding lossy things like hydrogen liquefaction or interconversions to yet other molecules, and it looks just too bad to take seriously.
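
One plausible decomposition of that ~37% round-trip figure (a sketch; the two stage efficiencies are my illustrative assumptions, consistent with the best-case numbers discussed earlier):

```python
# Hydrogen as a "battery": electricity -> H2 -> electricity, best case.
ELECTROLYSIS_EFF = 0.70  # assumed: electricity to H2 LHV, best case
FUEL_CELL_EFF = 0.53     # assumed: H2 LHV back to electricity, generous

round_trip = ELECTROLYSIS_EFF * FUEL_CELL_EFF
print(f"Round-trip efficiency: {round_trip:.0%}")  # ~37%, before any
# transport, storage, compression or liquefaction losses are counted.
```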

What About Fossil Energy Importers?

Countries like Japan and South Korea, frankly, are in big trouble in a decarbonized future, especially if they make themselves dependent on importing energy in the form of hydrogen or hydrogen-derived molecules.  What kind of cars they drive is really irrelevant:  the energy-intensive industry that is the basis of their economies, will simply need to move offshore, given that their economic competitors would be using energy which costs 1/10th as much per joule, and using that energy directly rather than through a lossy middleman.  Either that, or they’ll need to switch to a service economy and focus on extreme energy conservation- which might be best.

However, what concerns me is that neither the Japanese nor the Koreans are ignorant in these matters.  If I saw both countries building out renewable offshore wind generation like mad, or even going nuts building new nuclear plants, perhaps I’d believe that their interest in decarbonization via hydrogen was truly in earnest- a way to sop up, even at great cost, the residual that they can’t manage to supply locally as electricity.  Rather, the focus on hydrogen looks more like an attempt to put off the energy transition until some future date when hydrogen becomes “economic” as an option, burning fossils and fooling around with meaningless pilot projects in the meantime (JERA burning ammonia in 30% efficient coal-fired power plants, anyone?  Or worse still, this brown coal gasification with liquid hydrogen shipment nonsense?).  Because, frankly, looking at the various importation options, the future in which hydrogen as an energy transport vector becomes “economic” across transoceanic distances is likely “never”, relative to more sensible options.

Disclaimer: whereas I always try to be accurate, I’m human and therefore fallible. If you find anything wrong in my article, which you can demonstrate to be wrong via good, reliable references, I’ll be happy to correct it. That’s why I publish on a vehicle like LinkedIn, rather than in journals that remain unedited and therefore preserve my errors in amber!

Oh, and if you don’t like my opinion on these matters, by all means feel free to contact my employer, Spitfire Research Inc.

The president (i.e. myself) will be happy to tell you to get lost and write your own article, with even better references, if you disagree.

Blackish Blue Bruise Coloured Hydrogen Part 2: the Ghost of Blue Hydrogen’s Future


As we found in Part 1, conventional hydrogen production from natural gas using steam methane reforming (SMR) coupled with carbon capture and storage (CCS) is easily written off as a waste of everyone’s time and money. It’s fooling nobody. Because nearly half the CO2 emissions come from combustion in the tube furnace and other combustion equipment, there’s no more benefit to going after high CO2 captures off this equipment than from fossil gas power plant flues or the like. We’d need something new such as SMR run with renewable electric heating to give SMR a shot at survival into a decarbonized future.

Undaunted, hydrogen-as-a-fuel advocates reach for another technology: autothermal reforming (ATR). ATR isn’t something new or unknown: it is a process used at very large scale today when syngas mixtures with high carbon monoxide (CO) to hydrogen ratios are required for processes like Fischer Tropsch or methanol synthesis.

Autothermal reforming works the same way, in thermodynamic terms, that SMR works: you heat a mixture of methane and steam to high temperatures and pass it over a catalyst, so that endothermic reactions can transform it into a mixture of CO and H2. But whereas SMRs combust fuel and transfer the heat through catalyst tubes in a tube furnace to the syngas mixture inside them, in an ATR pure oxygen is fed with the steam and methane in a special burner or partial oxidation catalyst inside a refractory-lined pressure vessel. The heat is produced by in situ partial combustion, and the feed stream is massively overheated before being passed over the catalyst, rather than having heat supplied as the reactions proceed. The result is that some of the product hydrogen and CO are also combusted and hence wasted, but all the combustion products (CO2 and water) are contained in the syngas product and hence are easier to capture and remove. The downstream steps (heat recovery, water-gas shift reactor(s) and gas purification) are otherwise identical to the SMR’s, with the exception that there’s now no fuel-hungry tube furnace to dump the gas purification system’s waste hydrogen into.

The process is, as already mentioned, less efficient than SMR if the target is pure hydrogen. Published “efficiency” figures are usually muddied by credit given to “export steam”, which makes comparisons challenging, but that ATR is less energy-efficient at making H2 than SMR, assuming CO2 can be disposed of the good old way (to the atmosphere), is not in doubt. The process is currently optimized for the production of higher CO/H2 ratio syngas mixtures, which are then blended with SMR output to produce the desired ratio for F-T, methanol or other syngas uses, because ATR can do that and SMR can’t without a risk of soot generation and other problems. Air-blown ATR is also used in a multi-step reformer package to make H2/N2 mixtures as a feed for the Haber Bosch ammonia process.

Of course ATR is reached for by blue hydrogen advocates as soon as somebody realizes that the best possible capture from an SMR, without major changes, is going to be about 50%, thanks to SMR’s horrible burner box- and that’s even ignoring methane emissions. So: how “blue” can we make an ATR?

If you a) take a few hits from the #hopium pipe, b) ignore methane emissions and c) restrain the project to using only renewable/nonemitting energy sources to run the CCS equipment, the answer is “as blue as you can afford!”.

In purely technical terms, there is no problem at all in removing 99.999% of the CO2 from the resulting hydrogen stream. If that were NOT possible, the product hydrogen would never meet fuel cell specifications, which require the total of CO + CO2 to be below 10 ppm to avoid killing the fuel cell’s catalysts. Of course “removal” is just the first and easiest of the steps- you next need to capture, purify and store the CO2 away permanently.

Recall that the Shell Quest project targeted an 80% capture of the CO2 in the syngas mixture from the SMR, which it achieves, except when it doesn’t. That’s a fairly easy capture target from a stream with a high partial pressure of CO2 at a modest absolute pressure, and of course it’s only 80% of about 1/2 the CO2 emitted by the SMR. But as you increase the fraction of the CO2 you want to capture, two things increase: 1) energy use and 2) hydrogen wasting. The former is true in any capture process, arising from an increasingly fraught battle against entropy. While the thermodynamics and energetics are complex and vary greatly from flowsheet to flowsheet, to a first approximation, the energy to remove each order of magnitude of mass is about equal, i.e. 99% capture costs twice as much energy as 90%- for capture, not for capture and storage.
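That first-approximation rule can be sketched numerically. This is my own illustration of the article’s approximation, not a rigorous thermodynamic model: the relative capture energy scales with the number of “nines” of capture, normalized so that 90% capture costs one unit.

```python
import math

def relative_capture_energy(capture_fraction):
    """Rule-of-thumb relative capture energy, normalized so 90% capture = 1.
    Each further order-of-magnitude reduction in the CO2 left behind
    costs roughly one more unit of energy (capture only, not storage)."""
    return -math.log10(1.0 - capture_fraction)

# 90% capture = 1 unit, 99% = 2 units, 99.9% = 3 units
for f in (0.90, 0.99, 0.999):
    print(f"{f:.1%} capture: {relative_capture_energy(f):.1f} units")
```

On this rule of thumb, 99% capture indeed costs twice the energy of 90%, and each added “nine” costs as much again.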

The hydrogen mass loss is likely the more challenging problem, and not just because of the need to throw away valuable product. Removing hydrogen from the captured CO2 would require yet more equipment, energy and flowsheet complexity. Unlike in the “good old days”, when hydrogen was treated as something you could vent without consequence, we now know that hydrogen itself is a GHG, with a GWP between 11 and 60x that of CO2. And whatever CO2 you don’t capture in your sequestration train is going to come out in a swamp of H2 and other gases in the gas purification pressure swing adsorption (PSA) unit. Without combustion equipment to easily use this material, and with the option of recompressing and feeding this stuff back into the ATR feed being unpleasant for a number of reasons, it is likely that one would be tempted to just burn this stream in some unabated combustion unit somewhere and vent the flue gas. That of course means you will have unabated CO2 emissions coming from that combustion stack.

The real killer to this idea however is those pesky initial assumptions we made when we were high on #hopium.

We might, because of foolish or ignorant regulation, be permitted to just ignore or discount the methane emissions, as is clearly the case with the Shell Quest project which doesn’t even mention them in its public reporting. Remember that methane emissions take Quest’s net CO2 capture from 35% to a real CO2e capture around 21%- from poor to positively craptastic. But the atmosphere won’t give us the luxury of accepting the design philosophy of Mediocrates, whose response to criticism is, “Meh- good enough!” Global warming potential (GWP) is GWP, whether it’s from CO2 or methane emissions. You can’t just pretend that methane doesn’t matter!

If you remember from Part 1, methane leakage at the world average rate of 1.5% adds about 4 kg of CO2e for every kg of H2 produced, i.e. methane leakage adds about 40% to the ~10 kg of actual CO2 released per kg of H2 by an unabated SMR. So if you’re looking for CO2e capture (the total of CO2, methane, N2O and other GHGs from the process) to be very good, you basically would be forced to use the very best, lowest-leakage fossil gas sources on earth, i.e. Norwegian gas, the gold standard as far as leakage goes, with leakage below 0.05%. My suggestion would be to forget about any blue hydrogen production from LNG, irrespective of its source- you need access to excellent gas via pipeline, because the leakage/venting from the extra steps to liquefy, transport and revapourize LNG will take you over the top.
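A quick sanity check of that ~40% figure. The 70% LHV conversion efficiency and pure-methane heating values below are my assumptions, consistent with the round numbers used in Part 1:

```python
H2_LHV, CH4_LHV = 120.0, 50.0   # MJ/kg, lower heating values
efficiency = 0.70               # gas LHV in to H2 LHV out, typical SMR
gwp20_ch4 = 84                  # IPCC 20-year GWP of methane

gas_per_kg_h2 = H2_LHV / efficiency / CH4_LHV  # ~3.4 kg gas per kg H2
leaked_ch4 = 0.015 * gas_per_kg_h2             # 1.5% world-average leakage
co2e_leaks = leaked_ch4 * gwp20_ch4
print(f"{co2e_leaks:.1f} kg CO2e/kg H2 from leakage alone")  # ~4.3 kg
```

Against ~10 kg of direct CO2 per kg of H2, that leakage term is indeed roughly a 40% adder.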

Remember also that you need green electricity, and lots of it, to provide the capture and sequestration energy. ATR will generate the needed surplus heat as a result of its reduced efficiency relative to SMR, but you’ll need lots of green electricity- backed up with storage- to be able to run your ATR continuously (the only way hot units like this can run). You’ll need it to run the air separation plant, and the capture equipment, and the storage and CO2 transport compressors. Of course if you were a moron, you’d run this equipment off your product hydrogen, and watch your economics swirl down the toilet.

Finally, we can’t forget about carbon sequestration, i.e. storage. Remember that Shell Quest has an ideal storage reservoir only 60 km away from the plant, 2 km below ground. Because of the economics of CO2 sequestration, we’ll need to be near a sequestration reservoir with enough capacity to take the effluent from our plant over its entire 30+ yr design life. And NO, we can’t accept enhanced oil recovery use for this CO2, because if we do, the atmosphere not only gets the CO2 released when the produced oil is burned, it also gets about 40% of the injected CO2 back when the oil comes to the surface. EOR is not CCS!

So: to make ATR capable of making truly blue hydrogen, any project has to line up just exactly right. It must have:

1) pipeline access to ultralow leakage fossil gas of adequate capacity

2) pipeline access to a nearby disposal reservoir of adequate capacity (forget about liquefying and shipping the CO2- that’s just going to break the bank and blow the CO2 emissions budget by the time you’re done)

3) access to high capacity renewable electricity, backed up with storage to permit continuous operation (begging the question- why not just make green hydrogen!?!?!)

When I ran the numbers assuming 95% CO2 capture, 0.05% methane leakage, a world-class ATR efficiency, and renewable electricity running the entire carbon capture and storage infrastructure, I was able to get the emissions down to about 1.2 kg of CO2e per kg of hydrogen produced- only about 20% more emissive than what green hydrogen made by electrolysis using renewable electricity can already achieve today.

As to the cost of doing this, let’s just set that aside and dream the dream for a moment!

Hmm, are we seeing any ideal locations on earth yet? Which meet all these criteria, to make truly blue hydrogen?

Me neither.

Are we seeing enough of them to pump up the deflated dream of wasting hydrogen as a natural gas replacement fuel?


I am, however, seeing lots of places which meet a couple of those criteria, whose promoters are hoping that nobody notices that the resulting hydrogen is still very blackish-blue and bruise coloured, and that a credulous government somewhere will turn a blind eye and write a blank cheque.

As to the notion of this kind of project being “transitional” while we wait for green hydrogen to get cheaper, please remember that this is nothing like bolting a CCS unit onto an existing SMR plant already feeding black hydrogen to a refinery or upgrader somewhere. We’re talking about bespoke new equipment, at scale so it’s cheap enough per kg, with a design life of at least 30 yrs. Equipment that doesn’t make economical hydrogen relative to today’s standard, but which is built this way simply because it allows the energy of fossil gas to be made into fossil hydrogen with CO2e emissions that some government regulator somewhere finds tolerable. There’s no way anybody is going to build this stuff unless they’re guaranteed to be able to run it for at least those 30 yrs, unless somebody else is paying all the capital.

Finally, a reminder that even black hydrogen is not a cheap fuel. Wholesale Henry Hub gas on the US Gulf Coast has averaged around $3.50/MMBTU over the past ten years or so. Such gas can be made into saleable wholesale black hydrogen for about $1.50/kg- the aspirational target price for green hydrogen in, say, 2040 if you’ve had a few hits from the #hopium pipe, and 2050 or beyond (or never) if you’re skeptical like me. That hydrogen already costs $11/MMBTU. While I know full well that North Americans have access to incredibly cheap gas, and by world standards $11/MMBTU doesn’t sound like a very high price, I remind you that this is a wholesale cost and includes zero cost for storage or distribution. Those costs are going to be high- higher than for fossil gas for reasons of basic physics- and will be much higher than many imagine, because the existing fossil gas infrastructure cannot be re-used for hydrogen distribution.
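The $/kg to $/MMBTU conversion is straightforward. I’ve assumed hydrogen’s higher heating value of ~142 MJ/kg, since gas is conventionally priced on an HHV basis:

```python
H2_HHV_MJ = 141.8            # MJ/kg, higher heating value of hydrogen (assumption)
MJ_PER_MMBTU = 1055.06       # 1 MMBTU = ~1055 MJ

price_per_kg = 1.50          # USD/kg, wholesale black hydrogen per the article
price_per_mmbtu = price_per_kg / (H2_HHV_MJ / MJ_PER_MMBTU)
print(f"${price_per_mmbtu:.0f}/MMBTU")  # ~$11/MMBTU
```

On an LHV basis (120 MJ/kg) the same hydrogen would look even worse, at about $13/MMBTU.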

Personally I think that blue hydrogen should be taken off the barbeque because it’s been charred on all sides.

It’s done.

And while some governments are waking up to the same conclusion, as a result of the efforts of the @Hydrogen Science Coalition and numerous other groups, we can’t forget that blue hydrogen is of existential importance to the fossil fuel industry. Blue hydrogen is their “get out of jail” card in the energy Monopoly game. Even as a mere idea, rather than an actual technology, blue hydrogen allows fossil gas producers and distributors to pretend that they have a future in the energy supply market post decarbonization. They will not give up on this idea easily, and economics will drive them to put makeup over the bruises rather than making hydrogen truly blue, because the latter is very geographically limited and will also be very costly. It’s my hope that you won’t be fooled.  

Disclaimer: everything you’ve just read was written by an ordinary human being, fallible and capable of error in the most mundane ways. If you find something that I’ve done wrong, and can provide references or calculations which demonstrate where I’ve gone wrong, I’ll be grateful and happy to correct my work.

If what I’ve written makes you angry merely because it puts your future ride on the fossil fuelled gravy train in doubt, then be sure to take it up with Spitfire Research.

Blackish-Blue Bruise Coloured Hydrogen

My readers will know that I have never liked the “colours of hydrogen” meme that has been spread by the hydrogen-as-a-fuel lobby.

There is only really one kind of hydrogen in the world right now.

Hydrogen- 98.7% of it, by generous estimate- is made from fossils, without meaningful carbon capture. It is, at the barest minimum, 30% blacker per joule, in CO2 and methane emissions, than the source fuel it was made from. That 30% is a best-case figure, corresponding to about 10 kg of fossil CO2 emitted per kg of hydrogen produced, i.e. a 70% efficiency of converting natural gas lower heating value (LHV) to hydrogen LHV. Start with lignite (brown coal) and it can be much worse- 30 kg CO2 per kg H2.
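The ~10 kg figure follows directly from that 70% LHV efficiency. Pure-methane heating values are assumed below for simplicity:

```python
H2_LHV, CH4_LHV = 120.0, 50.0        # MJ/kg, lower heating values
efficiency = 0.70                    # gas LHV in to hydrogen LHV out

ch4_in = H2_LHV / efficiency / CH4_LHV   # ~3.4 kg CH4 fed per kg H2 made
co2_out = ch4_in * 44.0 / 16.0           # all the carbon ends up as CO2
print(f"{co2_out:.1f} kg CO2 per kg H2")  # ~9.4, i.e. "about 10"
```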

By definition, that’s not brown, or gray. That’s black. In fact, it’s ultra-black. It’s black-hole black.

We make up for that by not wasting hydrogen as a fuel, of course. Very little hydrogen is wasted as a fuel today. It is made, and used, as a chemical- sometimes to desulphurize or deoxygenate other fuels, and sometimes as a component in making molecules like methanol, which are sometimes used as fuels or to make fuels (e.g. biodiesel).

I love memes- they can be an extremely effective communication tool, and as my friend Alex Grant says, “money follows memes”- people invest in memetic ideas, sometimes for good, sometimes carelessly, and often using taxpayer money.

When people use memes, especially when they use them carelessly or illegitimately, it’s fun to riff on them, as I’ve done with my headline. Sometimes it is a very useful way to get the opposing point of view across.

The hydrogen-as-a-fuel lobby have a pretty colour euphemism for another way to make hydrogen. If you make it from a fossil energy resource, but capture (some of) the CO2 released and dispose of it in some durable way, they call that “blue” hydrogen. And remember what “blue” hydrogen is, after all: it’s the last grand scam of the fossil fuel industry. It’s the only way the fossil fuel industry can pretend to have a future in the energy system post-decarbonization, aside from as a materials and chemicals supplier (about 15-25% of their current business value). Not a happy prospect, if you’re in that business, seeing your revenue shrink by 75-85%! So, any port in a storm as they say, and “blue hydrogen” to the rescue! Since carbon capture at every point source user of fossil fuels- or even at a multitude of the larger ones- is an economic and practical absurdity, the simpleminded idea is as follows:

  1. convert natural gas to hydrogen centrally
  2. capture the CO2, disposing of it (hopefully by enhanced oil recovery so we get paid twice)
  3. sell hydrogen as a fuel
  4. party like it’s 1989, before we really thought AGW was worth bothering with


So- is it a go? How “green” IS blue hydrogen?

That question was famously asked by Drs. Howarth and Jacobson in their delightfully sh*t-disturbing 2021 paper in Energy Science & Engineering (12 Aug 2021) https://onlinelibrary.wiley.com/doi/full/10.1002/ese3.956

Truly rare for an academic paper, this one had the heads of certain “hydrogen is fossil fuel’s great salvation” advocates spinning like that of the child in The Exorcist…

(warning: language and religion are both scary…)

But I’m asking a more finessed version of the same question: how blue is blue hydrogen? How blue can it really ever be? And the way I’m going to do that is a little different than what Bob and Mark did in their paper. I’m going to look at a “real”-ish “blue hydrogen” project. Not a pilot project- one done at considerable scale, which buries 1 million tonnes per year of CO2 and doesn’t try to pretend that using CO2 for enhanced oil recovery is real carbon storage, either- that’s the #1 play in the fossil fuel playbook where CCS is concerned.

And there is one to look at, but only one in the whole world. Here it is:


Quest, originally built by Shell, largely (or perhaps entirely) using money from the Canadian federal and Alberta provincial governments, is a carbon capture and storage project at the Scotford Upgrader. Hydrogen, made by steam methane reforming (SMR)- the conventional method by which most of the world’s hydrogen is made today- is used to desulphurize and partially upgrade bitumen (aka “tar sands” heavy oil) for sale into the (largely USA) fossil transport fuels market. That’s a use we sincerely hope won’t be needed soon, because we need to stop burning fossils as fuels in a decarbonized future.

The great thing about Quest is that because it was largely (nearly completely) publicly funded, its data is available as a precondition of public money being spent on the project. So all you non-Canadians, please feel free to send cheques to us for all the learnings we funded on your behalf…I’ll be setting up the GoFundMe page shortly! (grin!)

Let’s look at a simplified flowsheet of a SMR so we can understand what Quest does, and doesn’t do.

SMR emissions diagram

The Steam Methane Reformer, Redux -(C) Spitfire Research Inc. 2021

A steam methane reformer takes natural gas (mostly methane), purifies it, mixes it with steam, preheats it, then sends it to a number of reformer tubes in parallel, suspended in a tube furnace. Each tube is filled with solid catalyst and is heated on the outside by flue gas produced by burning fuel gas, heating the tubes to a very high temperature (well above 800 C). The reforming catalyst allows methane to react with water to form a synthesis gas mixture consisting of carbon monoxide, carbon dioxide, unreacted water vapour, a little unreacted methane, and hydrogen. The overall reactions are endothermic, i.e. they require heat input, and that heat is supplied by the burning of fuel gas outside the tubes. That means we ultimately have to feed at least 30% of the energy we get out of the product hydrogen into the tube furnace to supply the heat that drives those reactions.

CH4 + H2O + heat <==> CO + CO2 + H2 with proportions depending on conditions

SMR photo and schematic diagram

(image credit: Air Science Technologies Inc.)

If we want syngas, we’re done- we can separate out the water and maybe the CO2 and feed the gas on to do something useful such as making methanol or reducing iron ore to iron metal (called direct iron reduction or DRI). But we want hydrogen, so the next thing we do is cool down the hot gas in a heat recovery steam generator (HRSG) which produces the steam we need to feed the process plus perhaps a little excess. We then feed the cooled syngas mixture to one or more water-gas shift reactors. These perform the magical water-gas shift (WGS) reaction:

CO + H2O <==> CO2 + H2 + heat

…which, because it produces heat, generates more H2 the colder we run the reaction. Sometimes two stages of WGS, with heat removal between them, are used. We’re now left with a stream which consists of H2, CO2, some water vapour, a little unreacted CO, and some unreacted methane, all under some pressure (about 30 bar or so).

Since we want H2, we need to remove the CO2. And here, Shell Quest becomes relevant.

In a normal SMR, the CO2 removed from the syngas is usually just vented to the atmosphere, because that’s the very cheapest thing we can do with it. And let’s face it, CO2 is not a commercial product: it’s a low Gibbs free energy waste molecule, generated by energy-producing reactions like combustion, and tolerable as such principally because, until recently, it was easy and cheap to dump it to the atmosphere.

Instead, Quest captures the CO2 using conventional amine absorber/stripper technology (something routinely used in chemical engineering, done at giant scale even before AGW was a “thing”). It generates a nearly pure CO2 stream which it then compresses, dries and dumps into a pipeline for disposal in a nearly perfect disposal reservoir, 2 km underground, some 60 km away from the plant. The absorber takes some pumping energy, and the stripping step takes some heat, at a reasonably high temperature, and CO2 compression takes (considerable) electricity, so we have to find that energy somewhere. We might get some from waste heat from the SMR by heat recovery, but that heat could generally be used elsewhere in the upgrader so it’s not really “free”.

…but even then, we’re not done. We still have CO, some CO2, and some unreacted methane to contend with. Generally, a pressure swing adsorption (PSA) system is used to capture and remove these contaminants. The PSA adsorbs the contaminants at pressure onto a solid adsorbent, which is then periodically depressurized to vent off a gas mixture which is, sadly, mostly hydrogen contaminated with these left-over materials. No matter, as we have a huge, hungry firebox ready to soak up all that otherwise wasted energy- the PSA’s tail gas is sent back to the tube furnace for combustion, and all the CO and CO2 leave the flue as CO2.

Shell Quest captures about 80% of the CO2 in the syngas stream. That results in about 45% capture of the CO2 from hydrogen production at the plant, per the government website. That also means that only about 56% of the CO2 emitted by hydrogen production in the Quest SMRs is available for capture in the syngas. The other 44% comes mostly out that tube furnace flue, at atmospheric pressure, in a giant swamp of nitrogen and water vapour (some of it also comes out of a natural gas power plant somewhere). The low partial pressure (total pressure times % CO2 in the stream) means there’s a higher entropic hill to climb to capture that CO2, and that costs us energy; such a high hill in fact that it is no easier to capture the CO2 from the tube furnace flue than it would be to just skip hydrogen-as-a-fuel entirely, burn ALL the methane in, say, a gas turbine power plant, and capture the CO2 from its flue. So Quest, wisely, doesn’t even try. It makes itself quite satisfied with capturing 80% of 56%, i.e. 45% of the CO2. The easy 45%. And it does this pretty consistently, except when it doesn’t. Then, it just vents the CO2, like every other SMR on earth.

Source: Government of Alberta website, Shell Quest capture figures, 2019

(note: this 80% of the CO2 in the syngas is only 45% of the CO2 produced by the SMR unit)

Quest has been in operation for over six years, since we generous Canadians built it. And in each of those years, it has captured about 1 million tonnes of CO2 and buried it deep down from whence it will hopefully never return.

Of course Shell, on its various websites, crows about how wonderful this all is- how it’s equivalent to the emissions of about 1.25 million cars each year etc. etc. Why shouldn’t they crow? They’re not paying for it- we Canadian taxpayers are!

Cost and Schedule

How much does all this cost? And how long did it take?

Engineering started in 2009, and the CCS system was operational in 2015. Not quick…but it worked, so I guess Fluor did a good job!

The financial figures are a bit muddier. While the public money dumped into the project is very clear- $120 million from Canada and $745 million from Alberta (all Canadian dollars)- the NRCan website talks about “total project costs” of $1.31 billion. The operating costs are on the order of $50 million per year over a 20 yr project life- another $1 billion- so the $1.31 billion figure can’t be a “total project cost” that includes operation and maintenance. The actual total capital cost is quite unclear. Shell itself provides no other figures that I can find easily.

But frankly it doesn’t matter that much to me. What’s half a billion between friends? Here’s why: let’s assume it cost the public “only” $865 million and neither Shell nor anybody else put a cent into it. Let’s assume, for fun, that capital is “free”, so we don’t have to argue about discounting rates. And let’s assume a steady $50 million a year, and the 20 yr operating life. Let’s ignore inflation, like we all did until recently… Roll that together at an average of 1 million tonnes per year of CO2 captured and buried, and we get ~ $93/tonne of CO2. Sounds great, right? Canada’s carbon tax is heading to $170/tonne by 2030- very soon the project will be self-funding! Shell further crows, on its Quest 5th anniversary page, that capital costs would drop by 30% next time.
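Here’s that arithmetic, using the two stated public contributions and the same simplifying assumptions (free capital, no discounting, no inflation):

```python
capital = 120 + 745              # $ millions CAD: federal + Alberta contributions
opex = 50 * 20                   # $ millions: $50M/yr over a 20 yr life
tonnes = 1.0 * 20                # Mt CO2: ~1 Mt/yr captured for 20 yrs

# millions of dollars divided by millions of tonnes = dollars per tonne
cost_per_tonne = (capital + opex) / tonnes
print(f"${cost_per_tonne:.0f}/tonne CO2")  # ~$93/tonne
```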

Sadly, we’ve forgotten or ignored a few things. Isn’t that how my papers usually go? It looks so simple, until we get into the nasty details?!

Here’s a better, more accurate version of that first slide I showed you- it represents what’s really going on in an SMR with or without carbon capture.

SMR with emission pathways

Methane Leakage

For every kg of H2 we get out of Quest, by my estimates (based on a Shell presentation whose slides I do not have permission to share with you), we need to feed about 47.5 kWh of methane LHV to the SMR itself. At world average methane leakage rates of 1.5%, that results in about 0.055 kg of methane being leaked for every kg of H2 produced at Quest. Before abatement, Quest generates about 9.7 kg of CO2 per kg of H2 produced, in round numbers. Add 0.055 kg of methane at the 20 yr, 84x global warming potential of methane per the IPCC, and we’re looking at unabated emissions of 9.7 (direct) plus 4.5 (methane) = 14.2 kg of CO2e per kg of H2, before we do any CCS. Ignoring the methane leakage makes the CO2e emissions look much nicer, doesn’t it?!
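Putting those numbers together. The natural gas LHV of ~47 MJ/kg below is my assumption for real pipeline gas (slightly lower than pure methane’s 50 MJ/kg):

```python
feed_kwh = 47.5                 # kWh of gas LHV fed per kg of H2 (per the article)
gas_lhv = 47.0                  # MJ/kg, typical natural gas LHV (my assumption)
leak_rate = 0.015               # world-average methane leakage
gwp20 = 84                      # IPCC 20-year GWP of methane

gas_kg = feed_kwh * 3.6 / gas_lhv       # ~3.6 kg gas consumed per kg H2
leaked = leak_rate * gas_kg             # ~0.055 kg CH4 leaked per kg H2
total_co2e = 9.7 + leaked * gwp20       # direct CO2 + leakage CO2e
print(f"{leaked:.3f} kg CH4 leaked, {total_co2e:.1f} kg CO2e/kg H2")
```

The result lands within rounding of the article’s 14.2 kg CO2e per kg of H2.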

Of course if you’re a fossil fuel apologist, you’ll use the 100 yr, ~30x CO2 GWP figure for methane from the IPCC instead, and knock that 4.5 extra kg of CO2e/kg H2 down to 1.7 kg- only about 18% of the gross CO2 emissions, so not that bad. But remember- we’re considering H2 production from fossils with CCS to be a transitional strategy, because we all know it’s second (or third, or fifth…) best relative to GREEN hydrogen made by electrolysis of water using pure renewable electricity. You can, as some Exorcist head-spinners have done, write off Howarth and Jacobson’s paper (and my analysis) as mere hyperbole, but we are simply taking the hydrogen-as-a-fuel lobby at its word about “blue” hydrogen being a stop-gap measure. Frankly, anybody using the 100 yr GWP for methane needs a smack upside the head- forgive me, that’s just me being ornery because I’m writing about this, rather than relaxing with a glass of bourbon watching Netflix.

CCS Energy Consumption

Per the public reports,


…the CCS system on Quest takes 0.65 MJ of electricity and 2.1 MJ of heat per kg of CO2 captured. Most of the electricity (0.55 of the 0.65 MJ) is used to run the compressors. The plant also consumes 10 T/month of amine and 1 T/month of triethylene glycol used for dehydration. Forgetting about the reagents and focusing only on the energy inputs, we discover how much it takes to capture 80% of the CO2 in the syngas- which, remember, is only 45% of the CO2 emitted by the SMR. Taking the (unabated, post-combustion) CO2 emissions of that energy into account, but again forgetting about the methane leakage, the net capture of CO2 by Quest drops from 45% to 35%. Just capturing the easy 45% of emissions requires 10/35 = about 1/3 more CO2 to be emitted, in a post-combustion form which is uneconomical to capture.
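A rough reconstruction of that 45% to 35% drop. The grid and boiler carbon intensities below are my assumptions for a fossil-heavy grid like Alberta’s; Quest’s actual energy sources and accounting may differ:

```python
gross_co2 = 9.7                 # kg CO2 per kg H2, unabated SMR
captured = 0.45 * gross_co2     # kg CO2 captured per kg H2 (80% of the syngas CO2)

elec_mj = 0.65 * captured       # MJ electricity (mostly CO2 compression)
heat_mj = 2.10 * captured       # MJ reboiler heat for the amine stripper
elec_ci = 0.18                  # kg CO2/MJ(e): ~0.65 kg/kWh grid power (assumption)
heat_ci = 0.062                 # kg CO2/MJ heat: gas at ~56 g/MJ via a ~90% boiler (assumption)

ccs_co2 = elec_mj * elec_ci + heat_mj * heat_ci   # roughly 1 kg CO2 per kg H2
net_capture = (captured - ccs_co2) / gross_co2
print(f"CCS energy emits {ccs_co2:.2f} kg CO2/kg H2; net capture {net_capture:.0%}")
```

Under these assumptions the CCS energy penalty lands near the article’s figure, pulling net capture down to roughly the 35% mark.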


“Blue” H2 by SMR, not quite so “redux”- (C) 2021, Spitfire Research

Witnesseth that CCS, even from a high partial pressure stream, takes sh*tloads of energy. Imagine the illegitimi, talking about doing this from 416 ppm CO2 in the atmosphere with direct air capture…

Add in the methane emissions associated with that CCS energy- another 0.44 kg CO2e (20 yr basis) on top of the 0.96 kg (direct) CO2 emissions unabated to run CCS, and Quest’s capture drops to about 21% in net CO2e terms.

$0.85 to $1.3 billion, plus $50 million per year, to capture 21% of net CO2e emissions from hydrogen production.

I’m calling that rather blackish-blue, bruise coloured hydrogen at very best. Because that’s what Quest really produces.

It’s also no model, whatsoever, for the use of fossil hydrogen, especially as a fuel, in a decarbonized future.

What Else Could We Do?

Aside from the obvious, i.e. make green hydrogen from renewable electricity and water, to replace all that black hydrogen we’ll need post-decarbonization?

You may have read previously that my former client Monolith Materials is doing an exciting project in Nebraska, and has received a tentative offer of $1 billion USD in loans from USDOE to expand the project to full commercial scale. The project takes natural gas, or perhaps biogas methane (future plans), and pyrolyzes it into carbon black and hydrogen using electricity from a disused nuclear power plant (with future wind/solar plans). Carbon black is a valuable product in itself, normally made from heavy oil by a filthy, emissive partial combustion process. They are making the hydrogen into ammonia, to serve the ~40% of US ammonia consumers who are within a 100 mile radius of their plant.

A brilliant project, of great decarbonization benefit- but sadly, not scalable enough to ever be a major source of hydrogen in terms of world H2 consumption. Replacing the 90 million tonnes of H2 we’ll need post-decarbonization (at least initially) by such a process would make 270 million tonnes of carbon- more than 10x the world market for carbon black and graphite combined. While some are betting that throwing away 1/2 the energy and 3/4 of the mass of the feedstock will be paid for by the greater ease with which solid carbon may be buried relative to burying CO2, the jury’s still out on that. This process has the euphemistic tag of “turquoise hydrogen”. How black that turquoise is depends on many very project-specific factors, including methane leakage and the carbon intensity of the electricity used to run the pyrolysis.
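The mass balance behind that 270 million tonne figure is simple pyrolysis stoichiometry:

```python
# CH4 -> C + 2 H2: each mole of methane yields one carbon and two hydrogens
M_C, M_H2 = 12.0, 2.016         # g/mol molar masses
c_per_kg_h2 = M_C / (2 * M_H2)  # ~3 kg carbon co-produced per kg H2

world_h2_mt = 90                # Mt/yr H2 demand post-decarbonization, per the article
carbon_mt = c_per_kg_h2 * world_h2_mt
print(f"{carbon_mt:.0f} Mt carbon per year")  # ~268 Mt, i.e. roughly 270
```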

We could also switch from SMR to another process- oxy-blown autothermal reforming. That process is commercial already, being used in methanol and Fischer-Tropsch plants to make high CO:H2 syngas mixtures. It is less efficient than SMR if the target is pure hydrogen, but (nearly) all the CO2 ends up at high partial pressure in the syngas stream, making higher % captures much easier than with SMR. I will write about this in part 2.

We could also use electric heating, or burn product hydrogen, in the tube furnace of the SMR. The former, called E-SMR, is already under development by several companies, but again has limitations to the ultimate CO2e achievable due to methane leakage and CO2 generated from gas purification. The latter is just a way to burn up money in my opinion.

Finally, I will leave you with the single most authoritative, comprehensive and accurate review of the objectives, practicality and real motivations of carbon capture and storage that I’ve seen. Warning, it may make you laugh, and then cry, and the language might make even a sailor blush a bit.

Disclaimer: I’m human, and hence can easily misinterpret things, make mistakes, push the wrong button on my calculator etc. If you find errors in what I’ve written, and can show me with references or calculations where I’ve gone wrong, I will correct the text with gratitude.

If you can’t find anything wrong with what I’ve said other than that it makes you unhappy, or worried about your continued employment, then perhaps you need to reconsider things a bit. If it makes you angry enough to yell at my employer, Spitfire encourages you to try!

Part 2: the Ghost of Blue Hydrogen’s Future

Waste to Energy/Waste to Fuels: the Great Greenwashing Machine

Most of us find waste to be viscerally repugnant. The smell and even the appearance of garbage revolts us. And yet we all generate it, in seemingly endless quantity.

In “developed” (rich) nations, we therefore have built extensive systems to get this repugnant material out of sight, and hence out of our minds, as quickly as possible. Sure, many of us dutifully separate our waste at source into the compartments required by the local waste authority, and sometimes that makes us feel good. But we’re left with a lot of questions:

1) Is the effort to separate waste at source worth the bother? And not just the effort- is the energy used to collect physically separated materials at source a net environmental improvement?

2) What happens to the waste after we dispose of it and it disappears from our consciousness?

3) We hear about waste from developed countries being “shipped overseas”. Is this a disposal strategy? Are we paying others to improperly dispose of our waste?

4) Wouldn’t we be better off to just burn the whole works? Or if that doesn’t sound appealing, couldn’t we do something “smarter” than burning, to harvest the energy contained in that waste for beneficial uses?

I’ve learned that “deep dives” into complex and important topics like this tend to bore people and don’t get read. So I won’t be diving deep into this one- I’m leaving most of these complex questions unanswered! I just need to make a few quick points, based on decades of experience working on “waste to energy” schemes of a bewildering variety. Because frankly, “waste to energy” and “waste to fuels” projects and proposals are popping up now like dog strangling vine on my farm. Whereas the dog strangling vine is just a pure green menace, waste to energy proposals are seen as a chance to kill two birds with one stone- to deal with the smelly, unsightly, land-consumptive and potentially GHG-emissive problem of waste landfills while, at the same time, making some energy that we need.

The problem here is that it sounds too good to be true. Because it’s not true. Or, more accurately, it is true in such a limited set of circumstances that it’s basically the exception rather than the rule.

Municipal solid waste (MSW) is an extremely heterogeneous mixture which varies in composition from place to place, from time to time, and as a result of the policies (or lack thereof) of the municipalities responsible for providing waste disposal as a public service. In places where waste source separation isn’t practiced at all, the waste stream contains a lot of materials that can be, should be, and in most other places, ARE recycled. In some cases, some of these materials aren’t source separated, but rather are separated manually or by machinery at the waste handling facility or transfer station. Let’s talk about groups of materials in general terms and consider them one by one.


Metals

Metals are sorted out because all metals are highly recyclable, and recycling them reduces GHG and toxic emissions dramatically relative to making fresh pure metals from their native ores. Of course, only reduced metals in solid form (i.e. cans) are readily recovered by sorting.

To give you an idea: a typical aluminum can in 2014 weighed 15 grams. Aluminum has an embodied energy on the order of 200 MJ/kg, which means that a can has about 3 MJ of embodied energy associated with it. That’s about the same as the same can, 1/3 full of gasoline. Even with recycling, aluminum represents about 11.5 kg of CO2 emissions per kg. Anybody wasting aluminum cans needs to have their head examined if they also claim to care about the environment. Those thinking that it makes sense to turn aluminum, made from aluminum oxide by electrolysis, into hydrogen, are really energetic vandals.
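As a sanity check, here’s the can arithmetic in a few lines. The 15 g, 200 MJ/kg and 11.5 kg CO2/kg figures come from the text above; the ~34 MJ/L heating value of gasoline and the 355 mL can volume are assumed round numbers:

```python
# The aluminum can arithmetic, spelled out. The 15 g, 200 MJ/kg and
# 11.5 kg CO2/kg figures are from the text; the 34 MJ/L gasoline heating
# value and 355 mL can volume are assumed round numbers.
can_mass_kg = 0.015
embodied_mj_per_kg = 200.0
can_energy_mj = can_mass_kg * embodied_mj_per_kg        # ~3 MJ per can

gasoline_mj_per_l = 34.0
can_volume_l = 0.355
third_can_of_gasoline_mj = (can_volume_l / 3) * gasoline_mj_per_l  # ~4 MJ

can_co2_kg = can_mass_kg * 11.5                         # ~0.17 kg CO2 per can
print(can_energy_mj, round(third_can_of_gasoline_mj, 1), round(can_co2_kg, 2))
```

The two energy figures come out within about 30% of each other- close enough to justify the “can a third full of gasoline” comparison.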

Other Inorganics

There’s a lot of non-degradable, non combustible stuff in the average MSW stream. Lots of dirt, concrete, brick, rock, gypsum etc. from demolition.

There’s also a lot of glass and ceramic materials. The former are recyclable- the latter, aren’t. The best you can do with waste ceramics is grind them up and use them as a replacement for sand or gravel in concrete.

Wet Organics

This is food and yard waste, diapers, pet waste and the like. And frankly the only thing that makes sense to do with this stuff is to remove it at source and not landfill it. When you landfill wet organics, you encourage anaerobic degradation which converts the waste into biogas- a nearly equal mixture of CO2 and methane, the latter being 86x worse than CO2 on the 20 yr horizon in terms of global warming potential (GWP).
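To put a rough number on why landfilling wet organics is so bad, here’s a hedged sketch. The 50:50 biogas split and the GWP-20 of 86 come from the text; landfill gas capture and incomplete degradation are ignored, so treat it as an upper-bound illustration:

```python
# CO2-equivalent impact of 1 kg of organic carbon degrading anaerobically
# in a landfill vs. aerobically (compost). Assumes a 50:50 CH4:CO2 biogas
# by mole and a methane GWP-20 of 86, per the text; landfill gas capture
# and incomplete degradation are ignored.
GWP20_CH4 = 86.0
M_C, M_CH4, M_CO2 = 12.0, 16.0, 44.0

c_kg = 1.0
ch4_kg = (c_kg / 2) * (M_CH4 / M_C)    # half the carbon leaves as CH4
co2_kg = (c_kg / 2) * (M_CO2 / M_C)    # the other half leaves as CO2
landfill_co2e = ch4_kg * GWP20_CH4 + co2_kg    # ~59 kg CO2e

compost_co2e = c_kg * (M_CO2 / M_C)    # aerobic: all carbon to CO2, ~3.7 kg
ratio = landfill_co2e / compost_co2e   # anaerobic is ~16x worse over 20 yr
print(round(landfill_co2e, 1), round(compost_co2e, 2), round(ratio, 1))
```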

Paper and Wood

Same deal- paper and wood need to be source separated, though not primarily because of the risk of biodegradation. While paper is compostable, it’s not readily degradable in a typical anaerobic landfill. I’ve personally seen newspapers which were buried in landfill 100 years ago that were still perfectly legible. While some biodegradation does happen in landfill, landfills are not designed to be bioreactors- quite the opposite in fact.

Paper is, however, of fairly high value for recycling, particularly cardboard and paperboard. Recycling corrugated cardboard is an environmental no brainer. Cardboard is a high value material and even businesses which really don’t care about the environment at all collect cardboard separately for recycling, because it reduces their waste disposal costs. And wood, meaning lumber and the like rather than yard waste, can always be made into paper. Wood demolition waste is already harvested for this purpose in some locales, if it can be properly source separated. And of course if there’s a surplus, both can be burned as a solid biofuel.


Plastics

Plastics are nearly 100% of fossil origin. While they can be recycled, how much they are actually recycled depends on the nature of the plastic, the nature of the source collection (which determines how clean it is and how hard it is to sort), whether it’s a thermoplastic or a thermoset such as those used in composite materials and rubber (thermosets are basically not recycled, because recycling them requires the breaking of chemical bonds which don’t come so easily unstuck), and of course, what purposes the waste can be put to. Recent reports put the average rate of plastic recycling at around 9%, which is far from stellar.

PET, the material used in beverage bottles, is very easily recycled mechanically and is fairly easy to source separate. However, just like with metals, we don’t generally recycle PET bottles back into PET bottles. Rather, we make PET bottles into things like carpet fibre, which don’t care about things like leachable content, colour and transparency quite as much as a clear food grade beverage bottle does.

However, the largest volume plastics are polyethylene (PE), polypropylene (PP) and polystyrene (PS). These materials, though readily mechanically recycled, are often used in the form of films, foams or thin-walled goods which can come back from consumers very mixed, dirty, coloured and otherwise hard to separate. And while you can make SOME goods out of PE/PP blends, most uses require quite pure material. Accordingly MOST PE and PP and most PS foam are not in fact recycled, but rather are landfilled.

There are myriad other plastics, used either alone or in layers with other materials to provide the desired properties. Some, like polyvinylchloride (PVC), are at once extremely valuable and useful and also basically a bomb, waiting to go off when you do the wrong thing at the end of life of that plastic material. And there are a myriad of uses for all those plastics, varying from inarguably dumb single uses like tie hangers or individual wrappers for plastic cutlery, to convenient but questionable uses like plastic grocery bags, to life-saving uses like IV bags, catheters, oxygen tubing, disposable syringes and the like.

When you compare the LCA data for plastics against materials they compete against in the marketplace for similar uses (such as paper, glass, aluminum or natural fabrics), plastics tend to come out on top. They use less energy and water, weigh less, have superior properties, and can be recycled. They can also sometimes save giant amounts of other waste- the thin PE wrapping on English cucumbers comes instantly to mind. This wrapping reduces the waste of cucumbers from field to table by at least 50%- a reduction in the mass and impact of waste of around a thousand fold. My friend Chris DeArmitt is a great source of the research into this topic- he knows more about it than just about anyone.

But of course, we all love to hate plastics. With some good reasons, and some bad ones. They are over-exploited in packaging. They have become so extraordinarily inexpensive that they have come to exemplify “cheap”, non-durable, consumptive and wasteful, by virtue of the many dumb uses we’ve come up with for them- which sadly people in the marketplace have rewarded by buying. Many plastic products are optimized for cost and aesthetic function, not for recyclability. And they enable convenience that suddenly seems mandatory- something you can’t opt out of without trying very hard indeed.

Energy From Waste to the Rescue!

The usual pitch for a “waste to energy” scheme is as follows:

  • get rid of the cost and inconvenience of source separation
  • eliminate the problem of methane generation in landfill
  • offset some fossil burning
  • save precious land by reducing landfilling

Who couldn’t love those things!

The Devil in the Details

As usual the devil lurks in the details.

First of all, we really need source separation, for two reasons. First, getting people involved in source separation helps them focus on minimizing waste generation. Second, it improves recoveries (dramatically) of the highest value, most energetically favourable materials to recycle, i.e. metals, cardboard/paperboard etc.

Secondly, waste to energy isn’t the highest value use of the wet organic content. That material contains some energy, but it also contains a lot of water. Burning, gasifying or pyrolyzing waste (heating it up in the absence of oxygen) requires us to boil off all that water, and that takes a lot of energy. In net terms, most of the energy in the wet organic fraction is needed just to provide the energy to boil off the water it contains. MSW which has been source separated for recyclables in even a perfunctory way therefore doesn’t contain ANY net energy of biological origin.
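A rough illustration of that water penalty (the 18 MJ/kg dry-matter heating value and the moisture fractions below are assumptions for illustration, not figures from the text):

```python
# Fraction of the gross heating value of wet organic waste consumed just
# evaporating its own water. Illustrative sketch: the 18 MJ/kg dry-matter
# LHV and the moisture fractions are assumptions, not figures from the text.
LHV_DRY = 18.0       # MJ per kg of dry organic matter (rough biomass value)
H_EVAP = 2.26        # MJ per kg of water, latent heat of vaporization
C_WATER = 0.004186   # MJ per kg per K, sensible heat of liquid water
DT = 75.0            # heating the water from ~25 C to 100 C

def water_penalty(moisture):
    """Fraction of gross heating value spent boiling off the waste's water."""
    gross_mj = (1.0 - moisture) * LHV_DRY
    water_mj = moisture * (H_EVAP + C_WATER * DT)
    return water_mj / gross_mj

for m in (0.50, 0.75, 0.85):
    print(f"{m:.0%} moisture: {water_penalty(m):.0%} of gross energy to water")
```

The penalty climbs steeply with moisture content- at food-waste-like moisture levels, most of the gross energy goes to making steam, before counting any other process losses.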

There’s an alternative which doesn’t mind the water. It’s anaerobic digestion, to produce biogas. That’s what we do in Toronto with our green bin waste.

But there’s still lots of energy in that MSW, right?

Yes. Even more so if you don’t source separate the waste…

Sadly, that energy is waste plastic. All of which is of fossil origin.

MSW is therefore, in net calorific (heat content) terms, almost entirely a fossil fuel.

Finally, when you heat up waste materials to burn, gasify or pyrolyze them, you carry out chemical reactions which can produce emissions of significant toxicity, carcinogenicity, and leachate toxicity. It is not uncommon for a waste incinerator to dramatically reduce the volume of the feed waste, and much less dramatically reduce its mass- while also rendering that material leachate toxic whereas the feed material wasn’t. That means the process of burning “mobilizes” species that can leave in the (nearly inevitable) leachate, which must be collected from the bottom of the landfill and treated prior to disposal. That adds both cost and environmental impact that could be avoided. Incineration also has to be made less energy efficient to cope with these toxic materials, both by manipulating combustion conditions to avoid generating the worst species, and by virtue of the energy used to scrub out or adsorb/desorb the toxic materials that are produced.

Waste to Fuels- Incineration in a Sexy Green Dress

Of course people have pictures in their heads of incineration that are very unpleasant, and some of that is unfair to incineration. Modern incinerators can generate quite clean exhausts, well scrubbed of toxic chemicals- if at the cost of generating only a fraction of the energy contained in the feedstock as a result. But that’s not the problem.

Ms Waste to Fuels- hot stuff, as she’s incineration in a sexy green dress!

The problem is the plastic.

The problem is the energy derived from the fossil origin materials in the waste stream.

The problem is the needless dumping of fossil CO2 into the atmosphere.

Since the net energetic value in the waste is of fossil origin, the waste itself is, in net terms, a fossil fuel.

Accordingly, some greenwashing is needed to recondition the image of waste to energy schemes, by dressing incineration up in a sexy green dress called “waste to fuels”.

By converting part of the energy, and maybe even some of the embodied carbon, in the MSW feed into a new fuel, proponents hope you won’t see incineration under its new clothes.

These schemes are varied, but they usually involve converting waste to simpler chemicals by means of endothermic (heat-consuming) reforming reactions, via processes called pyrolysis or gasification, which differ only in degree, energy input and desired suite of products. In both cases, heat, usually generated by burning part of the waste feedstock or part of the products or byproducts, is used to break big molecules into smaller ones. Sometimes oxygen or air is added to the feed to produce some of that energy right in the reactor, and sometimes it is produced outside the reactor and transferred into it via heat exchangers. Sometimes solid materials, often inorganic constituents of the waste itself, are used to help transfer this heat.

The typical strategy is to make a very light gaseous material called synthesis gas- typically a mixture of carbon monoxide, carbon dioxide, hydrogen, sometimes methane, along with water vapour, nitrogen, and acid gases like hydrogen chloride, hydrogen sulphide etc. (remember, waste is very heterogeneous- it’s going to contain some PVC, some brominated fire retardants, some fluoropolymers…a host of nasty molecules can result when you break these bigger molecules down). The syngas can then either be burned for energy in a turbine after some basic cleanup, or it can be cleaned up much more thoroughly and converted over catalysts to molecules like hydrogen, methanol or Fischer-Tropsch liquids (waxes, diesel etc.). These materials, including the hydrogen, are generally intended for use as fuels. Yields vary depending on the feedstock, process and conditions, but let’s be clear: a considerably smaller fraction of the energy in the feed material is converted to these secondary fuels than would have been obtained if you simply burned the waste in an incinerator.

What happens to the fossil CO2? It all ends up in the atmosphere. When hydrogen is the product, all that CO2 goes to the atmosphere directly, and more CO2 is released per kg of hydrogen produced than if you started with fossil methane instead. That we can say with certainty just from looking at the nature of the feeds: plastics, the major energetic content of MSW, have a typical formula of (-CH2-), whereas methane has a formula of CH4. The higher the C:H ratio of the feed, the higher the CO2:H2 ratio in the products. The result might be better than coal, but is definitely worse than methane. The only difference between the two is that fossil methane comes along with a burden of production and distribution methane leakage that the plastic waste doesn’t. Depending on where you’re making the “black” hydrogen from fossil methane, that leakage can vary between a significant and a very significant CO2e impact, given methane’s GWP of 86x CO2 on the 20 yr timeframe. Sorry folks, but that alone isn’t going to get you my sympathy for making hydrogen from garbage. It’s going to be close to a wash, at best.
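The C:H argument can be made concrete with idealized reforming-plus-shift stoichiometry. This is a simplification- round atomic masses, and process heat is ignored, so real emissions are higher- but the ranking by feed C:H ratio holds:

```python
# Idealized reforming + water-gas-shift stoichiometry: kg of CO2 emitted
# per kg of H2 produced, using round atomic masses. Process heat is
# ignored, so real plants emit more- but the ranking by feed C:H holds.
M_CO2, M_H2 = 44.0, 2.0

def co2_per_h2(mol_co2, mol_h2):
    """kg CO2 per kg H2 for a reaction yielding the given mole amounts."""
    return (mol_co2 * M_CO2) / (mol_h2 * M_H2)

methane = co2_per_h2(1, 4)   # CH4 + 2 H2O -> CO2 + 4 H2        : 5.5
plastic = co2_per_h2(1, 3)   # -CH2- + 2 H2O -> CO2 + 3 H2      : ~7.3
coal = co2_per_h2(1, 2)      # C + 2 H2O -> CO2 + 2 H2 (ideal)  : 11.0
print(methane, round(plastic, 2), coal)
```

Hydrogen from a polyolefin-like feed sits between methane and coal, a third worse than methane at the ideal limit- before any real-world losses are counted.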

Can You Make it Worse?

Sure. You can do the source separation, separate out the waste plastic, and then gasify it. That’s even worse!

Why is it worse?

Because the alternative would be to simply landfill the waste plastic.

Waste plastic degrades when left in the environment, losing mechanical properties and fragmenting into smaller and smaller pieces. Those degradation processes however are driven by two things: oxygen, and sunlight. Sunlight provides the high energy needed to break the bonds in these synthetic materials, that natural processes like biodegradation were not evolved to break apart. And oxygen can react with the polymers both in the dark and in the light.

What happens when we bury plastics in a landfill? Those degradation mechanisms simply stop. There’s no more driving force for the molecules to fall apart, so they don’t. Waste plastics in a proper anaerobic (covered) landfill are stable for millennia. They don’t break down into microplastics. They don’t leach into the groundwater. They just stay there. Their environmental impact simply ends.

As does their fossil carbon content. It stays there. Sequestered. Durably.

For millennia.

What We Should Do Instead

We should stop believing in ideological fantasies of “circular economy”. We should instead begin to think about optimal recycle. And we should focus on making lots and lots of truly renewable and low emissions energy, because as we do that, not only will we feel less need for fuels made from garbage and waste plastic, we will also increase the optimal amount of recycle- because we will reduce the environmental impact arising from the energy used to drive recycling.


What does an optimal recycling system look like for plastic waste?

Here’s my view:

1) You start with good public policy. Policy which looks holistically at the use and end of life of products, using reliable, disinterested 3rd party LCAs as a guide. Stop making decisions on the basis of the “natural is better” fallacy, like the totally idiotic decision to replace polypropylene drinking straws with waxed paper ones. The waxed paper ones are inferior in function, alter the taste of the beverage, aren’t durable, can’t be re-used, use more water and create more emissions in manufacture and transport than their PP cousins, and yet neither PP nor paper degrade in a landfill. The only, minor benefit of mandating paper straws is that the paper ones degrade a little faster when they’re deposited as litter. Litter makes up perhaps 1% of our disposed waste packaging materials in developed nations. Doing worse with 99% of a product to partially mitigate the impact of 1% of its end of life disposal is not sensible.

Better still: don’t offer a straw of any kind unless it is asked for.

2) Mandate “deposit return” for goods which otherwise don’t end up recycled well. This would apply to goods ranging from beverage bottles to cellphones. Users are quite willing to return goods for cash if there’s cash to be had. This generates cleaner, better sorted waste streams for either re-use or recycling.

3) Maximize mechanical recycling of plastics. And don’t concern yourself about the fact that most recycling- of plastics and of metals and other materials, is really “down-cycling”. Just as we don’t recycle pure copper wire into copper wire again, we don’t recycle PET bottles into bottles again. Why not? Because down-cycling copper wire to copper pipe, and PET bottles to carpet fibre, makes more sense energetically.

4) When you have a stream of mixed plastics that are partially degraded, you can do a limited amount of chemical down-cycling to make materials such as waxes, asphalt extenders, printing inks and the like. Doing this makes sense, but will only be a limited endpoint for the plastics.

5) Most of the degraded, mixed and dirty plastics will end up being useless after they’ve been optimally recycled. How should we deal with them? We should bale them and then bury them in properly constructed landfills. They represent the cheapest, lowest impact, lowest risk post-consumer fossil carbon sequestration strategy imaginable. All you have to do is not burn them.

Finally: if you don’t have space for landfill, you have two choices: work harder at steps 1-4, or pay somebody else to landfill the waste plastic for you.

Disclaimer: this is not a thorough examination of the topic, it has been kept brief so that people might read it. This is an enormously complex topic involving the interrelation of society, technology and values, environmental impact, decarbonization and economics.

I’m not at all saying that there can never be a “waste to X” or even a “waste to energy” scheme that makes environmental sense. I’m specifically attacking waste plastic or MSW to energy or fuels schemes, because I think it’s quite clear they aren’t good waste management practice and aren’t in the interest of decarbonization either.

If you don’t like what I’ve said, that’s OK. If you think I’ve materially erred, that’s entirely possible as I’m human and make mistakes like anyone. Provide good references demonstrating where I’ve gone wrong and I’ll correct my piece with gratitude for your input.

Global Warming Risk Arises From Three Facts

Anthropogenic global warming (AGW) is a real risk for future generations including my own children. It’s a risk I’ve personally taken seriously, and have taken personal action against, since the late 1980s when I was in university. And while we’ve seen some extremely positive developments in the past 30 years such as the creation of new industries to generate wind power, solar power, electric vehicles, biofuels, LED lighting etc., this has barely moved the needle on the root causes of AGW: fossil greenhouse gas (GHG) emissions and land use changes made by humans.

Why have we not taken more action? We knew about AGW thirty years ago- the science was quite solid even back then. The reality is, we ignored the science because many people- ordinary voters AND the people in power who report to them- refused to believe it. Many continue to do so to this day. And why is that? Human motivations are complicated, but I see two key root causes. One is that the worst harms from AGW aren’t likely to be experienced by the generation making the emissions, but rather by future generations, i.e. people may love their children, but not enough to avoid spending their inheritance in this sense. The other is that they’ve been fed a series of lies, in part by parties interested in profiting from the status quo as long as possible, which allow people to cling to a shred of doubt about the underlying science which is, frankly, not supportable by the facts.

The risk of AGW hinges ultimately on three facts. These are indeed facts- things we know, based on measurements- generally multiple measurements which compare favourably with one another. Each of the three facts also has sound theoretical underpinning, meaning that we not only know them to be facts, but we know both why they’re facts and also why they’re important. And these three facts are not the subject of credible dispute in the scientific community. They are not the topic of active discussion in the peer-reviewed journals on the subject, which has another name- the repository of the current state of human knowledge on the topic.

Here are the three facts, one by one, along with peer-reviewed scientific references, or more accessible references which themselves refer to the underlying scientific papers, which will allow you to assure yourself that I’m not just making this up.

Fact #1: Atmospheric CO2 Concentrations Have Increased

We started burning fossil fuels in earnest in the early 1700s, when the first, highly inefficient steam engines were invented. These engines were in part used to power pumps- in coal mines. Steam engines were in that sense a recursive technology, i.e. one that enables and magnifies its own success. The burning of fossils freed us in a sense from what was at the time the terrible burden of energy sustainability without modern technology. We no longer needed to balance our need to stay warm against the rate at which trees grew to make firewood for us, as just one example.

For the millennium before that, atmospheric carbon dioxide (CO2) concentrations were stable, bouncing around near 280 ppm. They did change a bit as we de-forested much of Europe to provide firewood, through the so-called Little Ice Age and Medieval Warm Period, and then as we hewed down North America’s forests and burned them too. But for 1000 years, the concentrations remained more or less stable.

(Figure: NOAA measurements of atmospheric CO2, methane and N2O concentrations vs. time)


This means that the carbon cycle was more or less in balance. Flows of CO2 and methane up into the atmosphere from respiration of animals and plants, desorption of CO2 from the oceans, decay of organic matter, emissions from methane seeps and volcanoes etc., were in balance with flows of CO2 out of the atmosphere due to photosynthesis, dissolution into the oceans, soil organic carbon generation, oxidation of methane to carbon monoxide (CO) and CO2 in the upper atmosphere, weathering of silicate rocks and the big final sinks- oceanic sequestration, i.e. the conversion of CO2 into carbonate rocks and the permanent burial of oceanic sediments containing biomass. That both the natural up- and down-flows of CO2 and methane are positively massive in fact doesn’t matter- what matters is that they were in balance.

But when we look at the concentration of CO2 in the atmosphere, as measured primarily in bubbles of air trapped in ice cores, what we see is that the CO2 concentration was surprisingly consistent: in the Law Dome ice core data, the precision of the CO2 concentrations is estimated at +/- 1.2 ppm, and the observations over the pre-industrial period back to about 1006 AD fall between 275 and 284 ppm. The concentrations started to rise as we started to burn fossils in earnest.


Since the 1960s, independent groups have been continuously monitoring CO2 concentrations in the atmosphere, most notably at the Mauna Loa observatory in Hawaii. The concentrations show a continual increase year after year, with a “sawtooth” of small seasonal changes up and down each year. The “sawtooth” arises from changes in seasons on Earth- there is more photosynthetic plant life in the northern hemisphere than the southern, so when the north is in summer, CO2 concentrations drop a little- only to rise again in winter.

(Figure: NOAA Mauna Loa CO2 concentration vs. time, recent decades)


CO2 concentrations have recently reached 415 ppm- a concentration not encountered in at least the past 800,000 years covered by the ice core record. CO2 has never been this high since there was anything recognizable as a human on the planet.


(Figure: CO2 and temperature vs. time over the past 800,000 years, from Lüthi et al.- source: https://www.nature.com/articles/nature06949)

CO2 has gone up, rapidly, from a stable level, and continues to increase as I write this. So, sadly, have the concentrations of methane, N2O, and other so-called “greenhouse gases” (GHGs).

This is a fact, not something a credible person can argue with. 

Fact #2: We Caused The CO2 Increase, Primarily By Burning Fossils

It isn’t sufficient to say that CO2 went up “suspiciously” as our fossil fuel emissions went up- that is an argument from correlation, and correlation does not prove causation. The increase in CO2 concentration is certainly consistent with the theory that fossil fuel burning caused the rise, but however suspicious it looks, that alone isn’t sufficient proof.

Source http://withouthotair.com/c1/page_9.shtml

(Aside: if you care at all about AGW, or renewable energy, or both- reading the late David Mackay’s brilliant work at www.withouthotair.com from beginning to end is your first minimally necessary step in educating yourself about the issues we’re up against in my opinion. It’s very accessible and its conclusions very clear: dealing with AGW is absolutely necessary but it will be a very challenging problem because we use a lot of energy and hence burn a lot of fossil fuels right now)

There are however two measured facts which prove conclusively that the new CO2 in the atmosphere is primarily there as a result of our burning of fossil fuels.

The first is simple carbon mass balance accounting. We can fairly accurately estimate how much fossil fuel we’ve burned, since fossil fuels a) cost money and b) are taxed. A scientific accounting of the amount of fossil fuel burning does demonstrate that not only did we produce enough CO2 to cause atmospheric concentrations to rise by the amount they did (proven by measurement above), but we actually emitted TWICE THAT MUCH:


Where did the other half go? Some of it went into the oceans, as would be expected by anyone who understands a little physical chemistry. As CO2 dissolves in water, the pH of the water decreases. Acidification (decreasing pH) of the surface oceans has indeed been measured, and has occurred because of the increased CO2 concentration in the atmosphere. This too is of concern to ocean life such as corals and shelled sealife which rely on carbonate fixation as part of their lifecycle.

The rest went into the biosphere.
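A rough sanity check of that accounting (the ~2.13 GtC-per-ppm conversion is standard; the cumulative emission totals are approximate round numbers assumed here, not figures from the text):

```python
# Rough carbon accounting. The 2.13 GtC-per-ppm conversion is standard;
# the cumulative emission totals are approximate round numbers (assumed,
# not from the text).
GTC_PER_PPM = 2.13                 # GtC added per ppm of atmospheric CO2

ppm_rise = 415 - 280               # pre-industrial to recent
atmosphere_gtc = ppm_rise * GTC_PER_PPM        # ~288 GtC now airborne

fossil_gtc = 470                   # cumulative fossil fuel emissions, approx.
land_use_gtc = 190                 # deforestation and land use, approx.
emitted_gtc = fossil_gtc + land_use_gtc        # ~660 GtC emitted in total

airborne_fraction = atmosphere_gtc / emitted_gtc   # ~0.44: roughly half
print(round(atmosphere_gtc), emitted_gtc, round(airborne_fraction, 2))
```

With these round numbers, only about 44% of what we emitted is still airborne- consistent with the “we emitted twice as much” point, with the balance absorbed by oceans and biosphere.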


This animated visualization of the carbon cycle, showing how carbon moved from fossil reservoirs into the atmosphere and then down again into the oceans and biosphere, is most helpful in demonstrating what happened:

The 2nd proof of the anthropogenic origin of the new CO2 in the atmosphere is isotopic measurements of the carbon in atmospheric CO2. The ratios between stable 12C and 13C and radioactive 14C (continuously generated by cosmic rays and continuously decaying to nitrogen) have long been known to have been affected by the addition of ancient carbon to the atmosphere. Living things, while living, have roughly the same ratio between 14C and 12C as the atmosphere. But fossil fuels have been dead and separated from the atmosphere for a long time- many, many half-lives of 14C- and hence are nearly free of 14C. The result of our fossil burning has been a gradual decrease of the 14C to 12C ratio in the atmosphere. The ratio of 13C to 12C demonstrates the same thing- the new CO2 is of fossil origin- it hasn’t desorbed from the oceans, been released by rotting biomass etc. The following references are provided if you want to learn more.
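The “many half-lives” point is easy to quantify; 5,730 years is the accepted half-life of 14C, and the ages below are illustrative:

```python
# Why fossil carbon is essentially 14C-free: decay over geologic time.
# 5,730 years is the accepted half-life of 14C; the ages are illustrative.
HALF_LIFE_YR = 5730.0

def fraction_remaining(age_yr):
    """Fraction of the original 14C left after age_yr of decay."""
    return 0.5 ** (age_yr / HALF_LIFE_YR)

print(fraction_remaining(5730))    # one half-life: 0.5
print(fraction_remaining(57300))   # ten half-lives: ~0.001
print(fraction_remaining(100e6))   # fossil deposit ages: effectively zero
```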


https://en.wikipedia.org/wiki/Suess_effect (providing a brief but clear explanation of the isotopic measurements and what they mean)

http://uscentrist.org/platform/positions/environment/context-environment/docs/Revelle-Suess1957.pdf (Suess’s original paper)

This link contains a figure with the isotope data- it’s a big .pdf so it will take a while to download.


It should be noted that the CO2 emitted from volcanoes is also low in 14C- but the amount emitted by volcanoes is actually quite small relative to the amount emitted by humans as a result of burning fossil fuels. Two gigantic volcanic eruptions in the recent past- Mt. Pinatubo and Mount St. Helens, as examples- produced barely a blip in global CO2 concentrations measured at Mauna Loa. Let that sink in for a moment: we fossil fuel-burning humans are by far the biggest volcano on earth, in terms of emissions.

Again, that the new CO2 in the atmosphere is primarily a result of us burning fossil fuels and dumping the resulting CO2 into the atmosphere, is not in credible scientific dispute. It’s a fact, based on multiple replicate measurements which agree with one another. It’s a fact that we’ve known about for a long time too.

Fact #3: Extra CO2 (and other GHGs) Causes Climatic Forcing

This one again is not a supposition- it is a fact, arising from both an understanding of basic physics known since the late 1800s, and direct measurements.

Most of the atmosphere is NOT CO2. The atmosphere consists mostly of nitrogen, oxygen, argon and water vapour.  CO2 is a minor constituent at only 415 ppm or 0.04%. But everyone should realize that a small concentration of something can have an out-sized effect. If you don’t believe this, breathe some air containing a very small amount – say 400 ppm or 0.04%- of carbon MONoxide…but please don’t do that! You should already know the outcome- and hopefully have a CO detector in your home too, to make sure you don’t do so by mistake.

(Note: CO2 is also toxic- symptoms of CO2 intoxication start at concentrations above about 5,000 ppm or 0.5%. That’s a far higher concentration than the atmosphere will ever get to, but it certainly can reach that level in poorly ventilated spaces, particularly underground.)

That is NOT to say, however, that CO2 is an unimportant constituent! It, along with water and solar energy, is one of the three fundamental building blocks of life on earth. But it isn’t “plant food”, in exactly the same way that cement blocks aren’t food to construction workers. It contains zero useful chemical potential energy- which is why it is the end product of processes intended to liberate chemical energy, such as combustion or respiration. It’s merely a material that plants can collect and, using solar photochemical energy, convert along with water into biomass. Having more building blocks at hand when CO2 concentrations are higher simply means the plant has to expend less energy to build a given amount of biomass.

CO2 is also a strong absorber in the infrared. The earth receives solar energy at a range of wavelengths ranging from ultra long radio waves to high energy X rays- with the peak of the energy emitted in the visible wavelength band between 400 and 700 nm.

(picture: sun’s spectral irradiance)


The magnetosphere and the upper atmosphere fortunately filter out a lot of the nastier, most damaging short wavelength radiation from the big fusion reactor in the sky before it gets to us on the surface.

Some is reflected, but the earth absorbs the remaining solar energy, with plants capturing only a small amount of it to store in the form of chemical potential energy. The rest, per the 1st law of thermodynamics, has to go somewhere. And it does- it ultimately goes back into outer space from whence it came. Because the earth is quite cold relative to the ball of fusion plasma in the sky, it re-radiates the energy absorbed from the sun not as short-wavelength UV or visible light, but as infrared (IR) light. The average solar energy input, minus whatever is stored in the form of biomass and hidden away from being eaten and respired again to CO2 and water, raises the average temperature of the earth until the amount of heat leaving through the atmosphere equals the amount falling on the earth.

Of course if this were all that was happening, the earth would be a snowball and humans wouldn’t exist… 
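That "snowball" claim can be checked with a simple zero-dimensional energy balance. The following sketch uses standard textbook values- the solar constant and albedo are my assumed inputs, not figures from this article:

```python
# Equilibrium temperature of an earth with NO greenhouse effect.
SOLAR_CONSTANT = 1361.0  # W/m^2, solar irradiance at earth's orbit
ALBEDO = 0.30            # fraction of sunlight reflected straight back
SIGMA = 5.67e-8          # Stefan-Boltzmann constant, W/(m^2*K^4)

# Sunlight is intercepted by a disc (pi*r^2) but the earth radiates
# from a sphere (4*pi*r^2), hence the factor of 4 in the average flux:
absorbed = SOLAR_CONSTANT * (1.0 - ALBEDO) / 4.0

# The planet warms until emitted IR (sigma*T^4) balances what's absorbed:
t_kelvin = (absorbed / SIGMA) ** 0.25
print(f"{t_kelvin:.0f} K, i.e. {t_kelvin - 273.15:.0f} deg C")
```

This comes out near 255 K, roughly -18 °C: a frozen planet. The observed mean surface temperature is around +15 °C, and that ~33 °C gap is the greenhouse effect described in the next paragraphs.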

Instead, the atmosphere absorbs some of the infrared energy radiated by the earth, and re-emits it back to the earth, giving us a 2nd exposure to some IR light that would otherwise be transmitted again to the blackness and cold of space. This partial absorption and re-emission of the radiative emissions of heat from the earth, back to the earth, by gases in the atmosphere, occurs mostly as a result not of CO2, but of water vapour. Water is also a strong IR absorber- and is the earth’s predominant “greenhouse gas” (GHG).

…but before you start to worry about water vapour emissions from your shower or teakettle warming the planet, remember that water vapour is in rapid physical equilibrium with liquid water in the oceans, soils and biosphere. The mean water vapour content of the atmosphere therefore depends on the mean global temperature, and is not meaningfully affected by human emissions of water vapour to the atmosphere.

The absorption spectrum of water vapour and the other “permanent” gases in the atmosphere has a “notch” in it- a range of wavelengths through which IR light can escape unimpeded. 

This is part of the reason that frost and condensation (such as dew) can form on surfaces even when the bulk air temperature is above the frost point or dew point. On a clear, cloudless night, surfaces such as your car windshield have a narrow wavelength window through which they “see” the blackness of space, at about 3 kelvin above absolute zero- and hence can lose heat through this window, surprisingly dropping to a temperature lower than the ambient air temperature. Special surfaces which enhance this re-emission are an interesting area of study for reducing the energy consumption of cooling systems:


This notch in the IR absorption spectra of the normal constituents of the atmosphere is called the IR re-radiative “window”. And it turns out that CO2, methane, N2O and a number of other gases, absorb IR strongly in this range of wavelengths along with others. They always have done. And we’ve known this for a long time- ever since we were able to measure the IR absorption spectra of these molecules- since around the late 1800s. These GHGs narrow the IR re-radiative wavelength window into outer space, making the earth “dimmer” as an IR emitter and requiring the earth’s mean temperature to rise until it can shove enough IR through the remaining portion of the window. 

Of course this means that if we add EXTRA CO2, methane, N2O etc. to the atmosphere, this will narrow the IR re-radiative window even further. And that, obviously, forces the climate – it requires earth to warm to satisfy the new balance between the in-flow and out-flow of energy. Input minus output equals accumulation, and if we restrict the out-flow of IR from the earth to space, the earth MUST warm to satisfy the new balance point.

It is also true that a doubling of CO2 does not result in a doubling of the resulting climatic forcing- the forcing grows roughly logarithmically with concentration, so extra CO2 has somewhat diminishing returns as a greenhouse gas. Contrary to the claims of many denialists, however, the effect of extra CO2 is not “saturated”- as one easily found peer-reviewed reference states:

“We conclude that as the concentration of CO2 in the Earth’s atmosphere continues to rise there will be no saturation in its absorption of radiation and thus there can be no complacency with regards to its potential to further warm the climate.”


Warming also increases the amount of water vapour in the atmosphere- remember, water is still the primary GHG. That is a powerful positive feedback.

Note that so far we’ve discussed only CO2- but in reality, humans have also increased the amount of methane and of N2O and other GHGs too through our actions and inaction.

Three Facts- Not In Scientific Dispute

These three facts: CO2 went up, we caused it by burning fossils, and extra CO2 narrows the IR re-radiative wavelength window, forcing the climate – these are not in credible scientific dispute. Nobody is arguing about the fundamental validity of any one of the three of them in the scientific literature. It’s not that they’re scared to, or worried about losing their funding if they did- it’s simply not worth arguing over because it is so well demonstrated as fact, consistent with all the data we know of. And if you hear anyone denying any of these three facts, the conclusion is clear: they’re either ignorant, or they’re lying – to you, to themselves, or both. It’s that simple.

This is not a matter of orthodoxy, it’s a matter of simple measured fact.

Three Facts = RISK of AGW

The three facts I’ve pointed out lead inescapably to the conclusion that there is a very real, fact-based RISK of global warming caused by human activities (i.e. AGW). There can be no other conclusion.

It is of course perfectly valid to then say, “so what?” Let’s say that we accept that CO2 went up, we caused it, and the extra CO2 forces the climate. The last bit of wiggle-room for either denial or skepticism remaining is to claim that the resulting forcing will be minor, and hence not a problem for humans or the rest of the biosphere.

How does one translate the risk of AGW, which is certain, into a particular amount of warming of the earth? We do know the temperature must go up as a result of the facts we know- but how do we know by how much it is likely to increase, and how quickly?

The answer is that the earth is a complex system, with many inter-related factors, all of which can affect the climate in one direction or another, to a greater or a lesser degree. Some of these effects occur instantaneously, like the changes in the IR absorption. Some happen on a timescale of days, others years, others centuries, and others, millennia or even longer. And many of them are very much predictable.

Science therefore must have recourse to models, to estimate the effect of the additional radiative forcing on the earth’s mean temperature.  Those models have to be very complex to tell us anything meaningful and reliable. And unfortunately, there are no replicate Earths available for us to do tests on, with an accelerated timescale so we can see results soon enough to understand our model’s validity. 

And that’s where some skeptics hang their hats- they make the claim that climate models are fundamentally untestable, and hence not reliable enough to use to draw any conclusions from.

That statement however, fails on two basic points:

1)     There IS a system on which we can test the validity of our models. That system is our earth in its recent PAST.

2)     Merely being unable to precisely estimate a risk, in no way absolves us from the responsibility to act to mitigate that imprecisely-quantified but otherwise certain risk. Any engineer who has ever attended a Hazop review understands this fact intimately (or shouldn’t be involved in Hazop reviews!)

Some throw up their hands and say that it’s impossible to make decisions such as bringing to an end the burning of fossil materials such as coal, petroleum and natural gas, with the discharge of the effluent to the atmosphere, on the basis of something as unreliable as a model of the earth’s climate. But these people do not understand how we engineers make decisions related to risk as we practice our profession. We have a duty to hold the public safety as paramount- and that duty does not allow us to simply throw up our hands and say, “prove it!” before we will take a mitigating action. 

We engineers see risk as the probability of a bad outcome, multiplied by the severity of that outcome. Merely bad things which happen very frequently are a high risk. And truly terrible things- like raising the mean temperature of the earth by, say, four degrees Celsius in a period of a century or so? Those things don’t need a very high probability of happening before we MUST take action, because the risk is too high.

Others argue that global temperature measurements are difficult, or have been “manipulated”, and hence the data against which the models’ predictions are measured are suspect. Global mean temperature is a very hard thing to measure, and a very noisy signal. It’s also slow to respond- something as large as the accessible portion of the Earth has an enormous heat capacity. But those people are either conspiracy theorists, who think scientists are deliberately lying about climate change in order to further their careers (rather than being the one scientist who blows open such a conspiracy and gets a Nobel prize…), or they’re ignorant of just how many different measures of, and proxies for, global mean temperature have been used to check the models.

While my fundamental opinion is that we should defer to the knowledge and experience of the people who actually study the climate as their principal area of study in their area of expertise, I have seen a few compelling things that demonstrate to me that the models are in fact on the right track.

Here’s the first one: a clever animated infographic which shows that alternative explanations of the rise in global mean temperature that HAS in fact been observed, do not explain the increase. This uses, as I mentioned, the recent past of the earth as a means to test these various hypotheses of why the temperatures we’ve observed have in fact increased by the amount they have done:


No, it’s not the sun- the sun’s output DOES affect the climate, as do orbital and earth-tilt cycles. But those things don’t explain the increases we’ve seen this time around either.

It’s not aerosol emissions from pollution or volcanoes, ground-level ozone, deforestation/land use changes etc. It’s clearly a result of the increased concentration of GHGs in the atmosphere. The risk is real- it’s having results on the earth’s climate.

The 2nd is this graphic, for which I can thank @Mark Tingay for posting repeatedly in response to AGW denialists’ comments here on LinkedIn. It shows the consistency of the models with the measured temperature data, expressed as the “anomaly”, i.e. the measured excess over the 1980 reference temperature:

This is the most up-to-date version- from Gavin Schmidt’s Twitter feed. (He’s the director of NASA’s Goddard Institute.)

Both the models and the measurements of temperature have error bars on them, as does the risk resulting from the three facts I’ve discussed above. But here’s the key point: the error bars on these measurements and calculations do not extend to giving us hope of there being no effect, i.e. an effective risk of zero. And they certainly do not extend to giving us hope that we can continue to burn fossil fuels in the profligate way we’ve been doing for the past century or so, without the negative consequences of AGW. 

We’re seeing rather clear, obvious evidence of those consequences even now- the temperatures have absolutely risen in a significant way. And it stands to reason, and to common sense, that adding significantly more energy to a system like the earth’s climate is going to have some serious negative outcomes- some of which, such as the melting of Arctic permafrost, would themselves be gigantic positive feedbacks on AGW, making it even worse. I’m not going to get into listing those impacts here- if you care to hear about the potential nightmare scenarios we could be generating for our progeny and theirs unless we smarten up and curtail our direct burning of fossils as fuels, you can find lots of that stuff elsewhere. Just ask that little Swedish girl that so many seem to be terrified and angered by. She’ll give you an earful- and yes, we all deserve it.

Many people who accept, perhaps grudgingly, the reality of the risk of AGW, are turned off the whole thing as a result of what they see as hyperbole used by those who want to convince us that AGW risk is a very serious issue which needs our prompt and serious attention. They react badly in particular to what they think is “alarmism”- for instance when someone says that we have less than 20 years to avert “global catastrophe”. I sympathize- the shrill rantings of uninformed people make me angry too, and I spend a lot of my time combatting the untruths, half-truths and distortions they are spreading on the Internet.

But it is important to clarify this one point: that we have limited time to avoid locking in potentially catastrophic warming. That point is consistent with the facts actually- but what isn’t being said, or perhaps isn’t being emphasized, is that they’re not saying that the catastrophic warming itself will be encountered in the next 20 years. It won’t be. The earth has a lot of heat capacity, so it heats slowly. It should therefore be no surprise that young people are most concerned about AGW and its effects- because they and their children will be the ones who live long enough to encounter them. We, their parents and grandparents, are not likely to be alive to experience the worst of those effects.

Final Thoughts About AGW and Conservatism

Sadly, some people are motivated to dismiss AGW and pretend it’s not a problem out of a misguided notion of what it means to be “conservative” or “skeptical”.

Fossil fuels are a precious, finite resource- on the human timescale rather than the geological one. They are used to make ten thousand molecules and materials that are every bit as essential to modern life as energy is. And as someone who has helped people try to do this for a couple decades, I can say from first-hand professional experience that replacing some (many) of those molecules and materials with alternatives or substitutes derived from renewable resources is very difficult indeed- far more difficult than it is to make energy by renewable or non-emitting/low emission means.

Being “conservative” means valuing conservation, not being stodgy and unwilling to change or adapt, and certainly not being willfully blind toward new information when it comes to light. There is plenty of reason to conserve those precious, finite fossil resources for uses of highest value to humankind rather than squandering them as fuels. This would be true even if AGW were a total crock of horse effluent. Future generations will scold us not just for AGW, but also because we squandered their birthright in such a wasteful way and made their lives more difficult as a result.

Finally, being skeptical doesn’t mean rejecting anything you don’t understand. It certainly doesn’t mean relying on your current worldview entirely to inform you about what information you should accept as true and what you should reject as false. Being skeptical merely means being able to say “I don’t know” until you have sufficient proof to actually know. And there is no room for skepticism, whatsoever, in relation to the three facts which underpin the RISK of AGW. Your choice is to be either in denial of reality, or not. Please choose wisely!

Acknowledgements: thanks to @Mark Tingay and many others active here on LinkedIn for tirelessly chopping mutually inconsistent heads off the 9-headed hydra of climate change denial. Many give up, leaving the discussion floor to the denialists unchallenged and leaving the general public the view that there is room for doubt where there is none. It is a hard fight, but a worthwhile one.

Thanks also to Brian Dunning and this particular Skeptoid episode, which was the inspiration for this line of reasoning on my part. Skeptoid articles are fun to listen to, and give you the option to read instead if you, like me, read faster and more accurately than you listen:


DISCLAIMER: everything I say here on LinkedIn is my own opinion. And my opinion is not infallible- and is subject to change when presented with new data with good references. If I’ve made any errors here- and I likely have- then by all means message me or comment to my article here and let me know where I’ve gone wrong. Do it respectfully though- if you go ad hominem, I’ll block you- life’s too short for that kind of horseshit. 

Finally, my employer, Zeton Inc., takes no opinion in these matters, does not endorse my statements, and loves what it does- designing and building pilot plants for the whole breadth of the chemical process industry. If you take issue with anything I say, please take it up with me and leave Zeton out of it.

What Are the Energy Solutions?


UPDATED 25/06/2024,  because LinkedIn messed up the formatting, deleted every apostrophe etc. How the hell does that stuff happen? No idea…but I’ve cleaned up the mess, and added a lot of links to new articles on related topics.  And I’ve also admitted that my predictions about Ontario’s tendency to re-kindle a romance with boring old nuclear turned out to be wrong, sadly.  Well, at least if we build nuclear, we won’t need to build quite so much gas- but it would take a change of provincial government to get us to do the more sensible thing and build more renewables and storage.

Oh Paul, you’re so critical! You just crap on other people’s green solutions, telling us why they won’t work in your opinion! Where are YOUR solutions? Put up or shut up!

First Premises First: what are we trying to solve, exactly? Anthropogenic global warming. (Any denialist posts to this article will be deleted. Want to argue about AGW? Do it in response to my article on the subject!)


Toxic pollution that shortens people’s lives. That’s obviously bad too, and it’s long past time we dealt with it.

There are lots of others, but those are the two biggies.

Don’t agree that these are problems worth solving? Wondering what all the fuss is about, and why we don’t just keep burning fossils with mad abandon? Then please- do us a favour and just go somewhere else. Discussing solutions to problems with people who don’t believe the problems are real is just a total waste of time. We’ve been doing that for the past 30 yrs, and there isn’t another 30 for us to waste.

What Must End?

We have to stop burning fossils as fuels. Not immediately, not completely, but eventually, over time. We need to stop burning all of them, but we can start with the worst one- coal- and work through the rest in order of decreasing C:H ratio. That’s not a bad way to rank them in weighted terms, i.e. considering both toxic and fossil GHG emissions: not pretending that methane leakage isn’t a problem, but not focusing ONLY on GHG emissions either.

We don’t need to replace the primary energy we use today in the form of fossil fuels though.


What we want here is to accomplish as many of the benefits of modern life as we can, just without burning fossils on purpose to do it. We don’t need to burn fossils to move people around. Nor to have comfortable homes. Nor to provide lighting. Nor to provide healthy food, in quantity and variety.

We do burn fossils for all those purposes, in part or in total. Why? Because they’re cheap. And they’re cheap, in part, because we don’t charge much to use the atmosphere as if it were a giant, limitless public sewer.

SOLUTION #1: Carbon Pricing

That’s no solution at all, Paul! Yes, you’re quite correct on that point. It’s not a solution in and of itself. It is, however, the absolutely, minimally necessary precursor to ANY SOLUTION to the problem of AGW.


Don’t agree with me? You’re not being serious, or you’re ignorant. Did that sound harsh? I meant it to. We need steep, increasing, durable, sustained and ultimately VERY HIGH carbon taxes- north of $150 USD per tonne of CO2e. And yes, they MUST apply to fossil methane leakage too- taxed at 86x the CO2 rate, per methane’s 20-year global warming potential. We need those carbon taxes to be investment grade. Otherwise, you’d be an idiot- as a business or an individual- to make the expensive, long-term capital investments necessary to avoid those CO2e emissions.
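To illustrate how a leakage-inclusive carbon tax would be computed, here's a minimal sketch using the $150/tonne floor and the 86x methane factor from above (the emission quantities in the example are hypothetical):

```python
# Hypothetical carbon-tax bill including fossil methane leakage.
TAX_USD_PER_TONNE_CO2E = 150.0  # the minimum rate argued for above
GWP_METHANE = 86.0              # leaked methane counted at 86x CO2

def carbon_tax_usd(co2_tonnes: float, ch4_leak_tonnes: float) -> float:
    """Tax owed on direct CO2 plus leaked methane, expressed as CO2e."""
    co2e = co2_tonnes + ch4_leak_tonnes * GWP_METHANE
    return co2e * TAX_USD_PER_TONNE_CO2E

# Example: 1,000 t of CO2 from combustion plus just 2 t of leaked CH4.
# The leak alone adds 172 t CO2e- small leaks carry big tax bills.
print(carbon_tax_usd(1_000, 2))  # 1,172 t CO2e -> $175,800
```

The point of the multiplier is visible immediately: a leak of a couple of tonnes of methane costs as much as hundreds of tonnes of CO2 up the stack.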

They need to be international, so that laggard or cheater nations don’t get a trade advantage by not imposing their own taxes. And because you know some nations WILL be laggards or will try to cheat, we also need carbon tariffs on the goods and services of laggard or cheater nations. Doing anything less will eliminate the willingness of people to bear being taxed. Everybody has to pay the tax, or no one will.

Carbon taxes are more efficient than cap and trade schemes, and harder to defraud. And while they are necessary, they are not, by themselves, sufficient. They will need to be paired with regulatory controls, or else the tendency for demand to be somewhat inelastic to increasing price will take over.

SOLUTION #2: Electrify Everything

Well, not everything. Just absolutely everything we can. As soon as we can.

And we need to do that, again, in order. Starting with all the places right now where we take an input of heat (chemical energy) but require an output of work (mechanical energy). That means transport, as well as the obvious: we have to stop burning fossils to make electricity.

Canada has the problem of burning fossils to make electricity basically already licked. Yay Canada! About 80% of us have access to a grid which is 40 g CO2/kWh or less. The remaining 20%, in Alberta, Saskatchewan and some of our smaller eastern provinces, are also working hard to decarbonize electricity- at least by means of the half step of transitioning from coal to gas.

But all of us will need to make a lot more electricity using renewable or non-GHG-emitting sources. That means lots more wind, solar, geothermal, hydro, biomass, tidal etc. And in some places, it also means lots more nuclear. But beware- nuclear won’t be cheap, or fast. And building electricity supply against a 10-15 year forward demand forecast means we’re bound to get it wrong- and getting it wrong can be very expensive.

SOLUTION 3: Over-Build Renewables

Wind and solar were once expensive, and totally dependent on subsidy. Not any more! They’re now the cheapest kids on the block, and not by a small amount. When they’re available, they’re cheaper than any other way to make electricity. And they’re not done getting cheaper, either.

All electricity production technologies must engage with that fact. Unless governments actively PREVENT the use of solar by homeowners, and prevent the use of wind power, grids everywhere are going to have to learn to cope with intermittent renewables eating part of their pre-paid lunch.

Wind and solar are, of course, fundamentally intermittent. Solar has predictable daily and seasonal average variation, and unpredictable weather-related variation. Wind has variability which again depends on location and the seasons- but fortunately, at least the wind blows at night some of the time- and may blow more in winter than in summer in some locales. The combination, therefore, is more valuable than using either in isolation.

Wind and solar do, however, have a variability which results in annualized average capacity factors well below 100%. They don’t make power steadily, 24/7/365, nor necessarily when we need it. In Toronto, solar’s average capacity factor is about 14%, meaning that a 1 kW nameplate solar panel makes only about 1,200 kWh in an average year, not 8,760 (i.e. 24×365). And Toronto is SOUTH of most of Germany, in case you’re wondering.

As for wind, its capacity factor varies depending on location and turbine size. Big turbines get access to steadier winds, as do offshore locations. Capacity factors for onshore wind are on the order of 30%, and for big turbines offshore are around 45%. For comparison- which is very important here- hydro power which is often talked about as if it were steady year-round, also has an annualized capacity factor of only about 43% (EIA, US, 2018) and coal, only about 53% (same source, 2018) (UPDATE- capacity factors for coal continue to fall- even in China!).
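The arithmetic behind these capacity-factor figures is simple enough to sketch- a quick illustration using the numbers quoted above:

```python
# Annual energy = nameplate power x capacity factor x hours in a year.
HOURS_PER_YEAR = 24 * 365  # 8,760

def annual_kwh(nameplate_kw: float, capacity_factor: float) -> float:
    """kWh produced per year by a given nameplate capacity."""
    return nameplate_kw * capacity_factor * HOURS_PER_YEAR

# Capacity factors quoted above, applied to 1 kW of nameplate capacity:
for source, cf in [("Toronto solar", 0.14), ("onshore wind", 0.30),
                   ("offshore wind", 0.45), ("US hydro, 2018", 0.43)]:
    print(f"{source:>15}: {annual_kwh(1.0, cf):5.0f} kWh/yr")
```

A 1 kW panel in Toronto delivers about 1,200 kWh a year- matching the figure above- while the same nameplate of offshore wind delivers roughly three times as much.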

Nuclear is the stand-out at about 93% capacity factor- but just like coal, its capacity factor isn’t purely a feature of its reliability to make power when needed, but rather a consequence of how its owners operate it. Nuclear is expensive to build and has a near-zero fuel cost, so it must be run as close to full capacity as possible to amortize its cost over as many kWh as possible.

Coal, on the other hand, makes more money running when power is needed, but cannot turn on and off in a matter of minutes- so sometimes it is more economical to just not run the plants. Coal’s capacity factor in 2013 was 60%.

The low-ish capacity factors and the moment-to-moment variability of wind and solar make some dubious about building a future on these unreliable energy sources. And it certainly makes them challenging to use economically for some purposes- making chemicals such as hydrogen is the obvious example.

There are two ways to solve this problem however. The simple-minded one, which many assume for some reason to be the only practical solution, would be to pair renewables with storage. If we need 1000 GWh of electricity, we build enough wind and solar to make say 1100 GWh of wind and solar over the year, and then use storage to fill in all the mismatches between supply and demand. That is, frankly, just nonsense.

The obvious solution with wind and solar getting cheaper by the minute, is to simply build more than we need, and spill (curtail) the rest.

Unlike a thermal power plant, it costs nothing to stop making power from a solar panel or a wind turbine. A solar panel can be making full output and then making ZERO output a millisecond later- all you need to do is switch it off! It doesn’t break anything- there’s no water treatment to run, no steam to condense or vent. There’s no cost. It merely means that you don’t get to amortize its capital cost over quite so many kWh per year, so each kWh gets a little more expensive.

If it costs half what we’re paying now for electricity, the obvious solution would be to build 50% more than what we need, use the surplus if we can, and not cry a tear when we don’t need it. If it costs 1/4 as much…and it will…well, you get the picture!
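The overbuild trade-off can be sketched in a few lines. This is a hypothetical illustration- the 10 cent reference price is my assumption, not a quoted figure:

```python
# Cost per USED kWh when we overbuild renewables and spill the surplus.
def cost_per_used_kwh(gen_cost: float, overbuild_factor: float) -> float:
    """gen_cost: cost per generated kWh; overbuild_factor: built/needed.

    We pay for everything we build, but only count the kWh we actually
    use, so the effective cost scales up by the overbuild factor.
    """
    return gen_cost * overbuild_factor

GRID_PRICE = 10.0  # cents/kWh, hypothetical current grid price
RENEWABLE = 5.0    # cents/kWh, renewables at half that price

# Build 50% more than needed and spill a third of the output:
print(cost_per_used_kwh(RENEWABLE, 1.5))  # 7.5 cents- still a win
```

Even throwing away a third of the generation, the delivered power is cheaper than the reference price- and the cheaper the panels and turbines get, the more overbuild you can afford before the advantage disappears.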

I already do this on my farm. We are too far away from power poles, and use too few kWh per year, to make a grid connection worth having. So we have enough panels to make the power we need at peak during the day, we store a bit for use at night in batteries, and we just dump the rest. It’s by far the cheapest solution. It’s far cheaper than having fewer panels and MORE batteries, too.

SOLUTION 4: Flatten Renewable Supply With Storage

We’ll need to flatten the peaks of demand and troughs of supply with storage. We’ll need quite a bit of short term storage, even more medium-term storage (on the order of hours), and we might also need some longer-term storage (up to a day or two).

How much? Depends how aggressive we are on the other solutions, and how cheaply we can make storage.

The world’s minds are bent on the problem of storage, and while each storage technology has its own limits, nothing stands in the way of solving the storage problem- making storage cheaper, more reliable, and less dependent on geography, special materials, or burning stuff.

What storage options?

  1. Using existing hydro – by operating hydro as a modulating source rather than steadily as baseload, we can better flatten the more intermittent renewables. That’s not a feasible option for all hydro installations (e.g. run-of-river plants, or where reservoir levels must be controlled, or there is no reservoir at the base of the dam), but right now the potential for modulating hydropower is basically untapped- we run the dams flat out most of the time because this power is cheap and green, and the capex is already sunk. Quebec alone apparently has enough water in its upper reservoirs to power Quebec for a year. The potential for Quebec to become a giant battery for a good chunk of North America seems pretty enormous to me.
  2. Li ion batteries for the shortest duration, highest power, highest value grid support services. The Hornsdale battery in Australia has already paid itself off doing just that- the rest of its lifetime is pure gravy for its owners. UPDATE:  LFP prismatic cells are $90 retail per kWh, delivered to Canada from China, per our recent order in March of 2024.  Guaranteed for 6000 cycles.  These batteries return kWh for about 2.5 cents each, plus whatever interest you want to pay yourself on the investment to buy them.  Building a pack out of them costs $20 and takes ½ hour.  The notion that batteries AREN’T the future of storage is just insanely wrong.  And sodium ion has, over the next decade, the potential to make battery storage so trivially cheap that even long distance sea freight may go electric.
  3. Flow batteries: a flow battery is like a fuel cell in that it de-couples its energy storage from its power generation. More stored energy is just more electrolyte stored in big plastic tanks- it has a cost, but it doesn’t go bad or self-discharge. And when the power unit- sized to make and absorb the peak power you need- is damaged or degraded, it can be repaired, unlike a Li-ion battery, which must be recycled because it is a sealed unit. Flow batteries are useless for transport applications but seem ideal for grid storage- if they can be commercialized, which means if they can be made cheaply enough at scale. That remains to be seen.
  4. Pumped Hydro: geographically limited, but not as much as you might think- see Michael Barnard’s piece in CleanTechnica for details. Not perfect- high capital- but high efficiency and low operating cost.
  5. Compressed Air or Liquid Air Storage: the former is well known, not very efficient, and very much dependent on having giant volumes of storage for the very low energy density storage medium. The latter is unproven at scale as a storage medium. But both appear to be far more efficient than their big putative rival, which is hydrogen.
  6. Heat Storage: or cold storage. If we’re using heat, or keeping things or people cool, we don’t need to do that instant by instant. We can buffer heat, or cold, easily enough. And by so doing, we can shift when we draw electricity to make heat, or cold. Brine ice storage, molten salts etc. all have a role here.
  7. Fuels: we’ll also need some stored fuels for emergency response. Here, we have week-long periods where a high pressure area sets in, winds stop, and snow covers the solar panels. We’ll need some stored fuels to handle those periods: biofuels and hydrogen are possibilities, but even if we use fossil fuels for 2 weeks a year, we’ve already won.

SOLUTION 5: Flatten Renewable Supply with Wider Grids

The storage problem gets dramatically easier to solve, the easier it is to move power around from where it’s made to where it’s used. Grids are already quite efficient- the US average from plant gate to home meter is about 5% loss, which is basically identical to the loss from well to meter for natural gas (and about 1/3 of the equivalent loss for hydrogen, by the way). We’re already moving power vast distances by means of HVDC lines. Power made in Labrador and northern Quebec’s giant hydro dams is right now powering the eastern US by this means.

SOLUTION 6: Flatten Demand By Smart Demand Management

EV charging is a perfect example of this already, even without a smart grid to control it. Users plug in at night to take advantage of cheap overnight power prices. As long as they are back up to the desired level of charge by 7am, they don’t care when or how fast they take power.

Similarly, we never dry our laundry or run our dishwasher before 7pm in the evening. Time of use pricing has trained us well!

(Update:  in the future, you’ll push the button on the dryer and it will say, “Starting at 11:01pm when electricity is cheap.  Press the start button again if you really need your clothes to be dry this instant”.  Devices will come pre-programmed to help you not use expensive energy)
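That pre-programmed dryer is just a one-line optimization over a time-of-use tariff. A minimal sketch of the idea- the prices and hours below are made up for illustration, not any real utility’s schedule:

```python
# Pick the cheapest permissible start hour for a deferrable load.
# Hypothetical time-of-use tariff: peak 7am-7pm, shoulder 7pm-11pm, off-peak overnight.
tou_cents_per_kwh = {h: 28 if 7 <= h < 19 else (18 if 19 <= h < 23 else 8)
                     for h in range(24)}

def cheapest_start(earliest, latest, run_hours=1):
    """Cheapest start hour in [earliest, latest], cost summed over the run."""
    candidates = range(earliest, latest - run_hours + 2)
    return min(candidates, key=lambda s: sum(tou_cents_per_kwh[(s + i) % 24]
                                             for i in range(run_hours)))

# Press "start" at 8pm; the dryer defers to the off-peak window:
print(cheapest_start(earliest=20, latest=23))  # -> 23, i.e. "starting at 11pm"
```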

My local utility also pays me for the right to shut off my air conditioner for 15 minute periods when peak demand is happening. The house doesn’t heat up appreciably in 15 minutes, so frankly I have no idea whether or not they’ve ever done this. If they have, I’ve never noticed. Why wouldn’t you do this with many similar big loads?

There are many other examples of things that can be operated when power is cheap, or available, rather than the instant they’re needed. And many can be interrupted any time power is in short supply, without consequences. Some of them aren’t really storage per se- they’re just being smarter about WHEN we use energy.

SOLUTION 7: Build Nuclear, if You Can Afford It

We won’t be using this solution in Ontario where I live. (UPDATE:  our closeted climate change denialist premier is betting on future small, modular nuclear reactors- four of them to replace the output of a single reactor at Darlington.  This is economic nonsense, as the reactors in question are neither small nor meaningfully modular, but it suits his political ideology- including pushing real energy solutions far enough into the future to become somebody else’s problem.)


We make 55% or so of our power from nuclear right now. It was a brilliant decision on the part of my parents’ generation to build huge CANDU plants instead of the only real alternative at the time- more coal burning. The decision literally prevented hundreds of thousands of premature deaths. It was also likely the CHEAPER solution, even without trying to figure out what all those person-years of saved life were worth in some kind of macabre accounting exercise.

There have been ZERO deaths in Ontario from nuclear power accidents. There has been no significant radioactive release, except perhaps at the uranium mines. We know where all the waste is- it was not broadcast across the countryside.

But as much as I think there is good evidence that nuclear can generate very safe, very low GHG and toxic emissions power on a steady 24/7 basis, it certainly isn’t cheap power.

Case in point: Darlington is the last nuclear plant built in Ontario. It cost about $14.4 billion to build, on a budget of $4 billion, in the early 1980s. It has four reactors of 878 MW (electrical) each, for a total capacity of about 3.5 GW of electricity. It has been a very good plant, making power very safely and reliably since it was built. It is currently being refurbished for another 35 yrs of operation (to 2055), at a cost so far of $13 billion. Lots of kWh generated over a very long projected lifetime (75 years!), but not cheap…In 2013, Ontario looked at building a B unit to make Darlington’s twin. The bids came back way too high, and the idea was shelved. No politician in Ontario will touch the construction of a new full-size CANDU plant, even with somebody else’s bargepole.

(Update:  I was wrong about this- see above.  We’re apparently going to build four new reactors)
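Rough arithmetic on those Darlington figures gives a feel for “not cheap.” This is a sketch only: it naively adds the early-1980s build cost to the current refurbishment cost in nominal dollars, ignoring inflation, interest during construction, and operating costs, all of which matter enormously for nuclear economics:

```python
# Very rough capital intensity of Darlington, using the figures in the text.
build_cost = 14.4e9    # $, early-1980s build (nominal)
refurb_cost = 13e9     # $, refurbishment cost so far (nominal)
capacity_w = 3.5e9     # 3.5 GW electrical

# Naive nominal-dollar capital cost per kW of capacity:
dollars_per_kw = (build_cost + refurb_cost) / (capacity_w / 1e3)
print(round(dollars_per_kw))  # -> 7829, i.e. roughly $7,800 per kW
```

For comparison, that is several times the per-kW capital cost of utility wind or solar- though of course the nuclear plant runs near-continuously for decades, which is the whole point of the 75-year lifetime argument.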

Here, what should have happened is the truly sensible thing: we’ll run our existing nuclear plants right into the ground- as long as they can be run safely. The first one built, Pickering, goes lights-out for good in 2024. Two of the six reactors are shut down already; the other four, about 2 GWe worth, go down in 2024.

(Update:  we’re considering refurbishing them, likely throwing considerable good money after bad)

And nobody in Ontario knows what we’ll replace it with, except for one thing: it will not be another full-size CANDU plant.

(Update:  although there’s talk again about building a Darlington B, or more reactors at Bruce, that talk is likely just talk)

If in your society you can build nuclear, and you think that’s a good deal, then you should do it. As long as you will cope with your own nuclear waste, that is.

Don’t hold out hope for the small modular nuclear reactor however. It will not make cheap kWh, period. It is pure #hopium, predicated on a fundamental misunderstanding of engineering economics. I’ll deal with that in a subsequent article (link given above).

SOLUTION 8:  insert your solution here (as long as it’s NOT hydrogen or e-fuels!)

SOLUTION 9: Be More Efficient, and Use Less

We’ve had a good party for the past 300 yrs on the stored solar energy the earth put up for us in the form of fossil fuels. But sadly, the party’s over.

We learned some foolish energy use practices in that time, like dragging two tonnes of steel around per person everywhere we go. We’ll need to un-learn that to some degree. We’ll need to be smarter. We’ll need to move information rather than people whenever that’s practical (COVID taught us that this was far more practical than we imagined!). We’ll need to combine moving our bodies to where we want to go with our need for physical exercise. And we’ll need to build more public transit, and densify our communities to make that transit a viable option. That’s however a 70+ yr, multi-trillion dollar exercise. It’s not going to happen tomorrow.

This also means that we use thermodynamic work as work, not stupidly as if it were heat!

That means we use EVs, not engine-driven vehicles, everywhere that’s feasible.


As soon as practical!

That’s basically almost all cars and light trucks, most heavy transport, and some planes, ships and trains.

The rest? Biofuels, burned in engines.

They’re cheaper and more effective than hydrogen.  They still make toxic emissions, but those matter a lot less when emitted between cities, at 30,000 ft, or mid-ocean, rather than inside densely populated areas.

Make me benevolent dictator and I’d ban new sales of non-hybrid ICEs tomorrow. I’d ban new sales of mild hybrids in 2030, and I’d ban fossil fuels for engines by 2035.

That, plus carbon taxes, would transition transportation completely.

And no, there are no materials availability nor other impediments to doing that. It’ll just be very expensive- and then as we get better at it, and we finally accept reality, it’ll get cheaper!

Can we scale biofuels enough for the rest? For those applications EVs can’t work for- yet? Yes, I’m convinced we can (2024 update:  I remain convinced of this, more solidly than ever). But not without carbon taxes. Biofuels are much more expensive than fossils. They’re however cheaper than hydrogen, as well as being more effective than hydrogen (more practical). And that means, biofuels are ALSO cheaper than any so-called e-fuel, all of which are actually hydrogen-derived fuels.

Don’t believe me? Don’t like the idea of biofuels? No problem- except YOUR problem just got far harder and more expensive to solve than mine!

You may not realize it, but we are already supplying about 10% of our gasoline in Canada and the US in the form of ethanol, and a fraction of our diesel as well. It is driven stupidly by mandates rather than by carbon taxes, so the fuels aren’t as fossil-GHG-efficient as they could be. But the capacity to make vastly more from cellulosic stocks (at much higher cost) is absolutely there. And those stocks can be combined WITH green hydrogen- if that ever gets cheap enough- to make even more.

Note that the whole food versus fuel thing is real, but the real problem with agriculture isn’t biofuels production, it’s an uncontrolled case of parasitism.  You pay 9 out of every 10 dollars for some foodstuffs, to people OTHER than the farmer.  That’s a travesty bordering on criminal in my opinion.  If you’re concerned about food prices, THAT is the problem to focus on solving!


This also means using heat pumps for comfort heating. They’re expensive, but only because gas is so cheap when you get to dump fossil CO2 to the atmosphere nearly for free.

Home heating is much harder to electrify than transport. It will take longer, and it will take more than carbon taxes to make it happen. We should start with steeper requirements on new construction for energy efficiency. We’ve been ramping these up since the 1st energy crisis in 1973, and that’s a good thing, but we can go MUCH farther if we put our minds to it.

SOLUTION 10: Replace Non-Fuels Uses of Fossils

One of the dividends we’ll get when we stop burning fossils is that we’ll have lots of fossil petroleum and natural gas for uses other than burning. We can make chemicals and plastics from fossils without burning fossils in the process, or in some cases, with a little carbon sequestration being necessary. Not a single new invention is required to do that- just a steep enough tax on carbon to make it pay.


One of the most important transitions is to stop making hydrogen from fossils. That’s how we make 98.7% of hydrogen right now (Update:  it’s still the same- we basically are NOT transitioning away from black hydrogen toward green hydrogen in any meaningful way), and solving that problem is literally existential for humankind if we really do want to kick the fossil-burning monkey off our backs.

We literally depend on that BLACK hydrogen, made from fossils without carbon capture, to continue eating in the post fossil world. Most of our food calories and those of our food animals are dependent on nitrogen fertilizers made from ammonia made using black hydrogen. We have to kick that habit, and we also need to be smarter about how we use nitrogen fertilizers too. N2O is a durable and extremely potent GHG, and it is made any time we over-dose our soils with artificial nitrogen. Just transitioning about 1/2 of the 120 megatonnes of H2 we use in the world yearly right now- in the form of H2 and of H2-containing syngas- to non-emitting sources, is a giant task. We’ll be at that for decades even if we work at it as hard as we possibly can. And we literally haven’t even started yet.
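To get a feel for just how giant that task is, here is a back-of-envelope sketch. The 120 Mt/yr and “about 1/2” figures are from the text; the ~50 kWh per kg of electrolytic hydrogen (including losses) and the ~4,000 TWh/yr of total current US generation are my assumed round numbers, not figures from this article:

```python
# Scale of replacing half of world H2 production with electrolytic hydrogen.
h2_total_mt = 120    # Mt/yr of H2 used worldwide, from the text
fraction = 0.5       # "about 1/2", from the text
kwh_per_kg = 50      # assumed: typical electrolyzer energy per kg, incl. losses

kg_per_year = h2_total_mt * fraction * 1e9     # megatonnes -> kilograms
twh_per_year = kg_per_year * kwh_per_kg / 1e9  # kWh -> TWh
print(round(twh_per_year))  # -> 3000 TWh per year of new non-emitting electricity
```

Three thousand TWh a year is on the order of the entire current US electrical generation- for half of one industrial feedstock. “We’ll be at that for decades” is not an exaggeration.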

Needless to say, I therefore do not see us using surplus green hydrogen to solve our GHG emission problems. That’s just a #hopium hallucination you’re being sold by people who would profit from you believing in it.

What About CCS?  Or Direct Air Capture (DAC)?

Carbon capture and storage is likely going to be necessary to some degree. But right now, it’s as much a fantasy as green hydrogen. Carbon taxes in the world are too low to pay for it.

Can you imagine an industry the size of the current fossil fuel industry, moving TWICE as much mass but in the opposite direction, paid for entirely by carbon taxes?

Sorry, I can’t!

So I see carbon capture and storage as necessary for things like cement manufacture and perhaps a few other things- but NOT for energy production. We should really endeavor to minimize how much of it we use though, as the risk of us suddenly having a giant CO2 eruption somewhere seems quite real.

What we should not be fooled by is enhanced oil recovery (EOR). That’s a scam being run by the fossil fuel industry, which is trying to get carbon credits for something it does for its own profit. When the extracted oil is burned, we end up with MORE CO2 in the atmosphere, not less. Giving credits to oil and gas companies for EOR is madness. It’s the kind of bad public policy that we can expect going forward though, unless we’re very vigilant.

Lastly, the idea of direct air capture of CO2- moving 1600 tonnes of air through giant absorbers to recover each tonne of CO2- is just something that should be rejected out of hand until AFTER we’re done burning fossils for good. You patch the hole in the hull first, before you try to bail the boat!


What We Should Not Do

You’ll hear about these endlessly in my posts and comments here on LinkedIn. But here are a few:

  1. small modular nuclear reactors (SMNRs)
  2. hydrogen for heating or transport (green H2 to replace black H2 is critical though!)
  3. hydrogen for high temperature heating- there’s nothing in heating that H2 can do that electricity can’t do BETTER, unless you’re looking at avoiding the retrofitting of existing fired equipment. Then it’ll be cheaper to do the retrofit, and electricity STILL wins
  4. making ammonia to move green hydrogen- use the green ammonia to replace BLACK ammonia instead!
  5. making e-fuels (from hydrogen) to keep using our beloved ICE vehicles. Biofuels make far more sense for that, at lower cost- for those vehicles EVs can’t replace yet, that is!
  6. waiting for fusion: the fusion reactor 93 million miles up in the sky suffices. The time into the future that fusion power will be a practical alternative is one of the constants of the universe
  7. waiting for whatever other fancy deus-ex-machina technological solution you imagine is coming- because we have all the solutions we need to do this NOW

There are innumerable other dumb things that people are going to try to convince us to do. While it’s very important to separate the truly impossible from the merely difficult or uneconomic, it is not sensible to take the “all of the above” approach. We can and must work from what we know of the limits of each technology to select which ones have the most promise, and pursue those. And we can’t delay! Nor can we dream that the transition is at all possible without solution #1- carbon taxes.

In Conclusion

I intend to edit this as I learn and hence change my mind. Convince me where I’ve gone wrong! I know I have somewhere. That’s the fun of it: by acknowledging what we’re trying to do and then working together to solve it, we can accomplish incredible things! But only if we don’t hold our own ideas as too precious to be changed by better ones.

We’ve had 30 solid years- my entire career, so far- to tackle this problem, and we haven’t. Why not? Because we’ve been wallowing in grief instead. Denial and bargaining- they’re stages of grief. That’s where our energy has gone, not into solutions. As natural as grief is, and as unhealthy as it is to suppress it, at a certain point we all must move on. Take off the black clothes and the arm band, put on the workboots, fire up the simulator, dust off the calculator, put on the lab coat, mark an X beside the right candidate’s name in every election.

 Let’s get on with it!

Disclaimer:  this article was written by a human, and humans are known to get things wrong from time to time.  Explain to me how I’ve gotten this wrong, with good references, and I’ll be happy to correct my work. 

If you’re dissatisfied with the article merely because I’ve taken a dump on your pet idea, feel free to contact my employer, Spitfire Research Inc., who will be quite happy to tell you to piss off and write your own article.

Join in the discussion on LinkedIn, as I don’t have comments enabled here- too much spam to make it worthwhile.

So: Exactly How Much Electricity Does it Take to Make a Gallon of Gasoline?

Images from Google

First, why do I care what the answer is? And why should you care? A little history may help you understand.

My son Jacob and I took on a project a little over three years ago, to convert my 1975 Triumph Spitfire roadster into a fully electric vehicle, which we call the E-Fire. The project flowed from my personal and professional interests in the transition to renewable energy sources and in reducing the environmental impact of global energy consumption. It was also spurred on by my purchase of our first Prius in 2008. I have driven nothing but Priuses since: I’m fascinated by the seamless way that Toyota managed to integrate the EV drivetrain with the Atkinson cycle gasoline engine. The vehicle not only has exceptional fuel economy, but toxic emissions are also greatly reduced. 

(We won’t be needing THIS any more! 80 pounds of gasoline with a flip-top lid, separated from the passenger compartment by a 1/8″ thick sheet of pressboard covered with vinyl, always had me a little worried…)

(Jacob wiring the front battery pack: 22 LiFePO4 batteries. Another ten went where the gas tank used to be. The result is 100 km (60 miles) reliable range on a charge, without leaving the cells at a damagingly low depth of discharge)

The E-Fire project was a great opportunity for me to integrate these interests and to work together with my son at an age where he could be both a learner and a real helper too. That said, I would have considered the project a failure if I didn’t also produce a car which was useful for my commute, at a cost (ignoring our labour of course!) far less than that of the cheapest EV available to me at the time, a Gen 1 Nissan Leaf. 

(If you’re going to put up with a 75 mile round-trip commute daily in the Toronto-Hamilton region’s notoriously disgusting traffic, you might as well do it in something fun!)

The result is spectacular: a car which is an absolute blast to drive, and which turns heads, but which has also reduced my commuting energy consumption by 80% and greenhouse gas emissions by 97%. That result is due to both the efficiency of the EV drivetrain and to Ontario’s unbelievable 40 g CO2/kWh electrical grid. After over 11,000 miles of driving, my face is still sore from the “EV grin”. The instant, quiet torque of an electric motor is quite intoxicating!

The project of course also fed into my interests in the feasibility and limits of both renewable generation and energy efficiency improvement measures by which we could meet the incredible challenge of transitioning away from fossil fuels- something which an honest evaluation of the science on the topic makes quite clear to be an absolute necessity. A lot of my LinkedIn commentary is now tied to issues related to efficiency, renewable generation and the threat of global warming, because frankly I see a tremendous amount of uninformed rubbish being spewed daily, in the form of unfair criticism or denial, unsubstantiated claims, crazy predictions or hype related to particular technologies. What often passes for “science journalism” in the Internet era just makes my blood boil: how can we expect non-technical people, which most of our voters and political leaders are, to make reasonable decisions on technical matters if we keep feeding them this garbage?

Case in point is the subject of this article: just exactly how much electricity does it take to refine a gallon of gasoline?

Here’s the quote from Elon Musk, in an interview related to the release of the movie “Revenge of the Electric Car”, which I found here:


Elon: “Exactly. Chris has a nice way of saying it which is, you have enough electricity to power all the cars in the country if you stop refining gasoline. You take an average of 5 kilowatt hours to refine gasoline, something like the Model S can go 20 miles on 5 kilowatt hours. You basically have the energy needed to power electric vehicles if you stop refining.”

Ding! My “hype detector” went off on that one, big time! This had to be an exaggeration. If it were true, it would make no economic sense at current retail gasoline prices, for one thing. But Elon Musk is a smart guy, so there’s no doubt a grain of truth in it, as there is in most myths. It was time for a little digging to find the source of the error.

I sat down and read the GM/Argonne National Laboratories well to tank and well to wheels studies to figure this out, as part of an effort to do accurate calculations for my own converted EV’s energy and GHG performance.


The GM/ANL study and the resulting GREET model are very complex, as you would expect given that oil refineries produce a lot of different fuels: gasoline, diesel and jet fuel, home heating oil and bunker fuel for ships, propane etc. But refineries produce a host of other products, and also internally recycle a lot of material and energy, often sharing energy and materials across the fence with numerous petrochemical plants which don’t even count as part of the refinery itself, and with the greater electrical grid.

The end result of a very careful and well-documented analysis via the GREET model, which takes all of this complexity into account, is that from well to gas tank, gasoline is about 81 to 83% source energy efficient. The figures for the US average in 2001 were 98% for production/recovery, 84.5% for refining, 98.41% for distribution/transport to and from the refinery, and over 99.8% for storage. All the figures are from the GM/ANL well to tank study (2001) and the subsequent well to wheels study, except the last one, which is from Environment Canada. The figure compares almost exactly with the more recent EU JRC well to tank study (2014), which gives a comparable figure of about 82%. Put simply, this means that roughly 122 J of energy in crude oil is used to produce 100 J of energy in the form of gasoline. About 22 J is “lost” in the process.
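The chained stage efficiencies simply multiply, which is easy to verify with the figures quoted above:

```python
# Well-to-tank efficiency as the product of the stage efficiencies in the text.
stages = {
    "production/recovery": 0.98,
    "refining": 0.845,
    "distribution/transport": 0.9841,
    "storage": 0.998,
}

well_to_tank = 1.0
for eff in stages.values():
    well_to_tank *= eff

print(round(well_to_tank, 3))  # -> 0.813, i.e. ~81%, within the 81-83% range
```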

As an engineer, such a comparison leaves me a bit cold. Even though heat and chemical energy and thermodynamic work and electrical energy all have the same units (joules), that does not mean these forms of energy are all equivalent! The gods of thermodynamics take their tithe any time energy changes form.

OK, let’s look at Elon’s claim on its face: 5 kWh/US gallon is about 5/35.3, or about 14%, of the lower heating value (LHV) of a gallon of gasoline- but only if you were to convert that LHV (heat units) into electrical energy (kWh) at 100% efficiency. Ding! There’s the error! Of course, aside from small home or jobsite generators, nobody burns gasoline to make electricity (regrettably there is still a lot of diesel burned though…). In general, simple fuel-burning power plants are on the order of 30% efficient based on the lower heating value (LHV) of the fuel. A modern gas-fired, combined cycle power plant can now reach about 60% efficiency, but that’s really not a fair comparison here.

Looking at the biggest energy sink in the gasoline production chain, which is refining at roughly 15% of the source energy, it’s important to know that most of the energy used in the refinery isn’t electricity. Fuel gas is used either directly in fired equipment or indirectly to produce steam, and it represents most of the source energy loss. A lot of that fuel gas in most refineries consists of byproduct streams from within the refinery itself, so it really does originate from the source, i.e. from crude oil itself. 

Converting 15% of the chemical energy in a gallon of gasoline to electrical kWh at 100% efficiency is disingenuous because doing so in reality is impossible. On average, only about 15% of the energy used in a refinery is used in the form of electricity. Some refineries are electrical importers, and some are exporters from their cogen facilities, so only the average across a country or region is worth considering.

For a fair evaluation of Elon’s claim, the 15% of the 15% of the source (crude oil) energy used in the refinery in the form of electricity should be converted from source energy (fuel gas) at closer to 30% efficiency. Crude oil has an LHV of about 35.6 kWh per US gallon, which is very close to the 35.3 kWh/gal LHV figure for a typical gasoline. So 15% x 15% x 35.6 kWh/gal is about 0.8 kWh/gallon of actual electrical energy use. Converted back to the equivalent fuel gas used to make that electricity at 30% efficiency, you end up with an overall real well to tank efficiency for gasoline of about 80%, i.e. 2-3% worse than the figure which treats electrical energy and fuel LHV as if they were equivalent. The GHG emissions figures in GREET work out to roughly the same result when you back-calculate.

My converted EV could drive about 3.2 miles at 250 Wh/mile on the electricity saved for every gallon of gasoline it didn’t use (that 250 Wh/mile figure is what my car averages, taking into account the efficiency of the charger and battery). Pre-conversion, my Spitfire got about 29 miles to the US gallon if driven conservatively, so on the saved electricity we’d make it roughly 1/10th as far. Elon’s Model S is a much heavier and larger car, so it consumes more like 400 Wh/mile and would only travel about 2 miles on that energy. Elon’s estimate is therefore out by an order of magnitude, and my “hype detector” was proven to be reasonably well calibrated.

We could do the more ridiculous comparison, taking the overall 20% of the source crude energy lost between well and tank, converting it all back to kWh at say 30% efficiency. That yields 20% x 30% x 35.6 kWh/gallon = 2.1 kWh, or enough electricity to take Elon’s model S a whopping 5.2 miles. Even on this ridiculous basis, Elon’s estimate is still out by a factor of about four.
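The whole check of the “5 kWh per gallon” claim collapses to a few lines. The figures are the ones used above; the per-mile consumption numbers are this article’s estimates for the two cars:

```python
# Re-run the article's check of the "5 kWh to refine a gallon" claim.
lhv_kwh_per_gal = 35.6   # crude oil LHV per US gallon, per the text
refinery_share = 0.15    # fraction of source energy consumed by refining
electric_share = 0.15    # fraction of refinery energy used as electricity
gen_eff = 0.30           # simple fuel-burning power plant efficiency (LHV basis)

# Actual electricity used per gallon of gasoline:
electricity_kwh = refinery_share * electric_share * lhv_kwh_per_gal
print(round(electricity_kwh, 1))  # -> 0.8 kWh, not 5 kWh

# Miles of EV driving on that saved electricity:
for name, wh_per_mile in [("converted Spitfire", 250), ("Model S", 400)]:
    print(name, round(electricity_kwh / (wh_per_mile / 1000), 1), "miles")
# Spitfire ~3.2 miles, Model S ~2.0 miles: an order of magnitude short of 20.

# The "ridiculous" upper bound: all 20% of well-to-tank losses, converted
# to electricity at 30% efficiency:
upper_kwh = 0.20 * gen_eff * lhv_kwh_per_gal
print(round(upper_kwh, 1))  # -> 2.1 kWh, good for ~5 Model S miles
```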

Clearly, the amount of electricity used in refining and distributing gasoline is not insignificant, but it is nowhere near as high as Mr. Musk claimed. We’re going to have to find renewable sources of electricity if we want to convert all our gasoline cars to battery EVs- much less all the diesel cars, aircraft and ships dependent on crude oil. And more importantly, we have to remember that even if we decide tomorrow that fossil gasoline and diesel, jet and bunker fuel are no longer needed in any quantity (and I guarantee you that we won’t do that tomorrow, or in two decades either!), we would still be running crude oil refineries, adjusting the refinery processes to make a different suite of products. We’ll need to do that to produce the host of other materials and products that are every bit as necessary to modern life as electricity, but which are far harder to make from renewable sources.

I’ll leave you with the measured and calculated energetic and environmental performance of my E-Fire versus itself prior to conversion, and versus my most efficient vehicle- my Prius C hybrid. The results should be crystal clear- in a region like Ontario with a good grid, EVs are an environmental godsend.

Battery EVs are a tremendous technology- one that will greatly help us toward a future which allows us to enjoy most if not all of the benefits we’ve derived from fossil energy, but without the GHG consequences. They will allow us to retain the freedom of individual transport, without discharging toxic emissions directly into the breathing zone of passersby. And the efficiency of the EV drivetrain and lithium-ion battery combination mean that we won’t have to build out as much expensive renewable infrastructure as we would if we were going to rely on other, less efficient renewable fuels such as biofuels or hydrogen- fuels which require more lossy steps of chemical or energy conversion. EVs don’t need hype or exaggerated claims to make them out to be more than they are: they can definitely stand on their own merits.