All my concerns about hydrogen #hopium, in one convenient place!
Hydrogen is being sold as if it were the “Swiss Army knife” of the energy transition. Useful for every energy purpose under the sun. Sadly, hydrogen is rather like THIS Swiss Army knife, the Wenger 16999 Giant. It costs $1400, weighs 7 pounds, and is a suboptimal tool for just about every purpose!
Why do you hate hydrogen so much? I DON’T HATE HYDROGEN! I think it’s a dumb thing to use as a fuel, or as a way to store electricity. That’s all.
I also think it’s part of a bait and switch scam being put forward by the fossil fuel industry. And what about the electrolyzer and fuel cell companies, the technical gas suppliers, natural gas utilities and the renewable electricity companies that are pushing hydrogen for energy uses? They’re just the fossil fuel industry’s “useful idiots” in this regard.
When you look at two cars with the same range that you can actually buy, it turns out that my best case round-trip efficiency estimate- 37%- is too optimistic. The hydrogen fuel cell car uses 3.2x as much energy and costs over 5.4x as much per mile driven.
For trucks- I agree with James Carter- they’re going EV. EVs will do the work from the short range end of the duty, and biofuels will take the longer range, remote/rural delivery market for logistical reasons. Hydrogen has no market left in the middle in my opinion.
Trains: same deal.
Aircraft? Forget about jet aircraft powered by hydrogen. We’ll use biofuels for them, or we’ll convert hydrogen and CO2 to e-fuels if we can’t find enough biofuels. And if we do that, we’ll cry buckets of tears over the cost, because inefficiency means high cost.
(Note that the figures provided by Transport and Environment over-state the efficiency of hydrogen and of the engines used in the e-fuels cases- but in jets, a turbofan is likely about as efficient as a fuel cell in terms of thermodynamic work per unit of fuel LHV fed. The point of the figure is to show the penalty you pay by converting hydrogen and CO2 to an e-fuel- the original T&E chart over-stated that efficiency significantly)
Ships? There’s no way in my view that the very bottom-feeders of the transport energy market- used to burning basically liquid coal (petroleum residuum-derived bunker fuel with 3.5% sulphur, laden with metals and belching out GHGs without a care in the world) are going to switch to hydrogen, much less ammonia, with its whopping 11-19% round-trip efficiency.
Fundamentally, why do we burn things? To make heat, of course!
Right now, we burn things to make heat to make electricity. Hence, it is cheaper to heat things using whatever we’re burning to make electricity than it is to use the electricity itself. Even with a heat pump delivering a coefficient of performance of 3- pumping 3 joules of heat for every joule of electricity we feed it- it’s still cheaper to skip the electrical middleman and use the fuel directly, saving all that capital and all those energy losses.
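If you want to check that arithmetic yourself, here’s a quick back-of-envelope sketch. All the prices here are my own assumed round numbers for illustration- not market data- but the conclusion is robust across a wide range of realistic prices:

```python
# Illustrative cost comparison: heating directly with fuel vs. with a
# heat pump, in a world where the electricity is itself made by burning
# fuel. Prices are assumed round numbers, not market data.

gas_price = 0.04        # $/kWh of fuel energy (assumed)
elec_price = 0.15       # $/kWh of delivered electricity (assumed; it
                        # embeds generation losses, grid and capital)
furnace_eff = 0.95      # condensing furnace efficiency
heat_pump_cop = 3.0     # joules of heat pumped per joule of electricity

cost_per_kwh_heat_furnace = gas_price / furnace_eff
cost_per_kwh_heat_heatpump = elec_price / heat_pump_cop

print(f"furnace:   ${cost_per_kwh_heat_furnace:.3f}/kWh of heat")
print(f"heat pump: ${cost_per_kwh_heat_heatpump:.3f}/kWh of heat")
```

Even with the heat pump’s 3:1 leverage, the fuel wins on cost as long as delivered electricity costs several times more per kWh than the fuel it was made from- which it must, given the conversion losses and capital in between.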
Accordingly, hydrogen- which is made from a fuel (methane)- is not itself used as a fuel. Methane is the cheaper option, obviously!
In the future, we’re going to start with electricity made from wind, solar, geothermal etc. And thence, it will be cheaper to use electricity directly to make heat, rather than losing 30% bare minimum of our electricity to make a fuel (hydrogen) from it first. By cutting out the molecular middleman, we’ll save energy and capital. It will be cheaper to heat using electricity.
I know it’s backwards to the way you’re thinking now. But it’s not wrong.
Replacing comfort heating use of natural gas with hydrogen is fraught with difficulties.
Hydrogen takes 3x as much energy to move as natural gas, which takes about as much energy to move as electricity. But per unit of exergy moved, electricity wins, hands down. Those thinking it’s easier to move hydrogen than electricity are fooling themselves. And those who think that re-using the natural gas grid just makes sense, despite the problems mentioned in my article above, are suffering from the sunk cost fallacy- and are buying a bill of goods from the fossil fuel industry. When the alternative is to go out of business, people imagine all sorts of things might make sense if it allows them to stay in business.
Hydrogen as Energy Storage
We’re going to need to store electricity from wind and solar- that is obvious.
We’re also going to need to store some energy in molecules, for those weeks in the winter when the solar panels are covered in snow, and a high pressure area has set in and wind has dropped to nothing.
It is, however, a non-sequitur to conclude that therefore we must make those molecules from electricity! It’s possible, but it is by no means the only option nor the most sensible one.
The reality is, black hydrogen is much cheaper. And if you don’t carbon tax the hell out of black hydrogen, that’s what you’re going to get.
Replacing black hydrogen has to be our focus- our priority- for any green hydrogen we make. But sadly, blue (CCS) hydrogen is likely to be cheaper. Increasing carbon taxes are going to turn black hydrogen into muddy black-blue hydrogen, as the existing users of steam methane reformers (SMRs) gradually start to capture and bury the easy portion of the CO2 coming from their gas purification trains- the portion they’re simply dumping into the atmosphere for free at the moment.
There is no green hydrogen to speak of right now. Why not? Because nobody can afford it. It costs a multiple of the cost of blue hydrogen, which costs a multiple of the cost of black hydrogen.
The reality is, you can’t afford either the electricity, or the capital, to make green hydrogen. The limit cases are instructive: imagine you can get electricity for 2 cents per kWh- sounds great, right? H2 production all in is about 55 kWh/kg. That’s $1.10 per kg just to buy the electricity- nothing left for capital or other operating costs. And yet, that’s the current price in the US gulf coast, for wholesale hydrogen internal to an ammonia plant like this one- brand new, being constructed in Texas City- using Air Products’ largest black hydrogen SMR.
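The electricity floor price is a one-line calculation, using the article’s figure of ~55 kWh of electricity per kg of H2, all in:

```python
# Floor price of green hydrogen set by the electricity bill alone.
kwh_per_kg_h2 = 55          # all-in electricity demand, kWh per kg H2
elec_price = 0.02           # $/kWh - the "sounds great" cheap case

electricity_cost_per_kg = kwh_per_kg_h2 * elec_price
print(f"${electricity_cost_per_kg:.2f}/kg before any capital or O&M")
# 55 kWh/kg x $0.02/kWh = $1.10/kg: the electricity alone already
# matches the all-in wholesale price of black hydrogen.
```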
At the other end, let’s imagine you get your electricity for free! But you only get it for free at 45% capacity factor- which by the way would be the entire output of an offshore wind park- about as good as you can possibly get for renewable electricity (solar here in Ontario for instance is only 16% capacity factor…)
If you had 1 MW worth of electrolyzer, you could make about 200 kg of H2 per day at 45% capacity factor. If you could sell it all for $1.50/kg, and you could do that for 20 yrs, and whoever gave you the money didn’t care about earning a return on their investment, you could pay about $2.1 million for your electrolyzer set-up- the electrolyzer, water treatment, storage tanks, buildings etc.- assuming you didn’t have any other operating costs (you will have). And…sadly…that’s about what an electrolyzer costs right now, installed. And no, your electrolyzer will not last more than 20 yrs either.
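The other limit case- free electricity, with all the revenue going to capital- works out like this (no discounting, no other operating costs, exactly as in the text):

```python
# The "free electricity" limit case: how much capital a 1 MW
# electrolyzer project could justify if the electricity cost nothing.
power_mw = 1.0
capacity_factor = 0.45      # offshore-wind-class, per the text
kwh_per_kg_h2 = 55          # all-in electricity demand, kWh per kg H2
h2_price = 1.50             # $/kg assumed selling price
project_years = 20          # plant life; no discounting, no return

kg_per_day = power_mw * 1000 * 24 * capacity_factor / kwh_per_kg_h2
lifetime_revenue = kg_per_day * 365 * project_years * h2_price

print(f"{kg_per_day:.0f} kg/day")              # ~196 kg/day: "about 200"
print(f"${lifetime_revenue / 1e6:.1f} million") # ~$2.1M ceiling on capital
```

That ~$2.1 million is the absolute ceiling on what the whole installed plant- electrolyzer, water treatment, storage, buildings- could cost, with zero return on investment. Which is roughly what an installed electrolyzer costs today.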
Will the capital costs get better? Sure! With scale, the electrolyzer will get cheaper per MW, as people start mass producing them. And as you make your project bigger, the cost of the associated stuff as a proportion of the total project cost will drop too- to an extent, not infinitely.
But the fundamental problem here is that a) electricity is never free b) cheap electricity is never available 24/7, so it always has a poor capacity factor and c) electrolyzers are not only not free, they are very expensive and only part of the cost of a hydrogen production facility.
Can you improve the capacity factor by using batteries? If you do, your cost per kWh increases a lot- and that dispatchable electricity in the battery is worth a lot more to the grid than you could possibly make by making hydrogen from it.
Can you improve the capacity factor by making your electrolyzer smaller than the capacity of your wind/solar park? Yes, but then the cost per kWh of your feed electricity increases because you’re using your wind/solar facility less efficiently, throwing away a bunch of its kWh. And I thought that concern over wasting that surplus electricity was the whole reason we were making hydrogen from it!?!?
John Poljak has done a good job running the numbers. And the numbers don’t lie. Getting green hydrogen to the scale necessary to compete with blue, much less black, hydrogen is going to take tens to hundreds of billions of dollars- money that is better spent doing something which would actually decarbonize our economy.
UPDATE: John’s most recent paper makes it even clearer- the claims being made by green hydrogen proponents of ultra-low costs per kg of H2 are “aspirational” and very hard to justify in the near term. They require a sequence of miracles to come true.
We’ve known these things for a long time. Nothing has changed, really. Renewable electricity is more available, popular, and cheaper than ever. But nothing about hydrogen has changed. 120 megatonnes of the stuff was made last year, and 98.5% of it was made from fossils, without carbon capture. It’s a technical gas, used as a chemical reagent. It is not used as a fuel or energy carrier right now, at all. And that’s for good reasons associated with economics that come right from the basic thermodynamics.
What we have is interested parties muddying the waters, selling governments a bill of goods- and believe me, those parties intend to issue an invoice when that bill of goods has been sold! And that’s leading us toward an end that I think is absolutely the wrong way to go: it’s leading us toward a re-creation of the fossil fuel paradigm, selling us a fossil fuel with a thick obscuring coat of greenwash. That’s not in the interest of solving the crushing problem of anthropogenic global warming:
We need to solve the decarbonization problem OF hydrogen, first. Hydrogen is a valuable (120 million tonne per year) commodity CHEMICAL – a valuable reducing agent and feedstock to innumerable processes- most notably ammonia as already mentioned. That’s a 40 million tonne market, essential for human life, almost entirely supplied by BLACK hydrogen right now. Fix those problems FIRST, before dreaming of having any excess to waste as an inefficient, ineffective heating or comfort fuel!!!
Here’s my version of @Michael Liebreich’s hydrogen merit order ladder. I’ve added coloured circles to the applications where I think there are better solutions THAN hydrogen. Only the ones in black make sense to me in terms of long-term decarbonization, assuming we solve the problem OF hydrogen by finding ways to afford to not make it from methane or coal with CO2 emissions to the atmosphere- virtually the only way we actually make hydrogen today.
If Not Hydrogen, Then What?
Here’s my suite of solutions. The only use I have for green hydrogen is as a replacement for black hydrogen- very important so we can keep eating.
There are a few uses for H2 to replace difficult industrial applications too. Reducing iron ore to iron metal is one example- it is already a significant user of hydrogen and more projects are being planned and piloted as we speak. But there, hydrogen is not being used as a fuel per se- it is being used as a chemical reducing agent to replace carbon monoxide made from coal coke. The reaction between iron oxide and hydrogen is actually slightly endothermic. The heat can be supplied with electricity- in fact arc furnaces are already widely used to make steel from steel scrap.
In summary: the hydrogen economy is a bill of goods, being sold to you. You may not see the invoice for that bill of goods, but the fossil fuel industry has it ready and waiting for you, or your government, to pay it- once you’ve taken the green hydrogen bait.
DISCLAIMER: everything I say here, and in each of these articles, is my own opinion. I come by it honestly, after having worked with and made hydrogen and syngas for 30 yrs. If I’ve said something in error, please by all means correct me! Point out why what I’ve said is wrong, with references, and I’ll happily correct it. If you disagree with me, disagree with me in the comments and we’ll have a lively discussion- but go ad hominem and I’ll block you.
Scaling Object Lesson #2: Water Electrolyzers For Hydrogen Production
We learned about vertical scaling in the 1st article in this series:
…and about horizontal scaling or “numbering up” in the 2nd:
Now we’ll use these tools to examine the scaling future of an extremely important decarbonization technology: electrolyzers for producing hydrogen from renewable electricity.
My readers will know that I think hydrogen is a massive decarbonization problem, and that I think we must focus our efforts on making green (electrolytic) hydrogen to replace the 98.7% of the stuff that is made from fossils without carbon capture- the ultra-black hydrogen that dominates the market today.
In a decarbonized future, we’ll need a lot of green hydrogen, even if we aren’t stupid enough to waste any as an inefficient vehicle or heating fuel! I estimate that 90 of the current ~120 million tonnes/yr of hydrogen we use today is durable demand in a decarbonized future. And in reality we’ll need more than that, because there are some uses for hydrogen as a molecule which are also a very good idea: for instance, using H2 to replace the carbon monoxide used in the direct reduction of iron ore to iron metal.
Replacing that 90 million tonnes of H2 per year would take a monumental effort. Optimistically, it would take 4500 TWh of green electricity- more than twice as much as all the wind and solar electricity generated on earth in 2019. Given that today we’re making less than 0.03% of world H2 production by the on-purpose electrolysis of water, we’re really at ground zero on the important task of decarbonizing hydrogen itself.
If we had renewable electricity available to us at 100% capacity factor, we’d need only ~ 513 GW worth of electrolyzers to make that hydrogen- but since renewable electricity isn’t available with such high capacity factors, we’ll really need 1000-1500 GW of electrolyzers.
How much on-purpose electrolyzer capacity is there on earth today? Less than 1 GW. We need to increase electrolyzer capacity by at least 3 orders of magnitude and probably more.
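Those TWh and GW figures follow directly from the numbers already given- 90 million tonnes/yr of durable demand at an optimistic 50 kWh/kg:

```python
# Back-of-envelope for the "1000-1500 GW of electrolyzers" claim.
h2_tonnes_per_year = 90e6   # durable H2 demand, tonnes/yr
kwh_per_kg = 50             # optimistic electrolysis energy, kWh/kg
hours_per_year = 8760

energy_twh = h2_tonnes_per_year * 1000 * kwh_per_kg / 1e9   # TWh/yr
gw_at_100pct_cf = energy_twh * 1000 / hours_per_year        # GW
print(f"{energy_twh:.0f} TWh/yr")                # 4500 TWh
print(f"{gw_at_100pct_cf:.0f} GW at 100% CF")    # ~514 GW

# At realistic renewable capacity factors, the fleet grows accordingly:
for cf in (0.5, 0.35):
    print(f"{gw_at_100pct_cf / cf:.0f} GW at {cf:.0%} capacity factor")
```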
Hmm: how can we apply what we learned about vertical and horizontal scaling to this most important decarbonization problem?
There are two main water electrolysis technologies which are front-runners today: the alkaline electrolyzer, and the proton exchange membrane (PEM) electrolyzer. They differ in details, features, benefits, disadvantages etc in important ways, but for our purposes we don’t need to worry about that. Let’s look at the basic components of an electrolysis system.
The electrolyzer itself consists of a “stack” of electrolysis cells, each with an anode and a cathode (and many other parts, depending on which technology). Cells are arranged in physical parallel, meaning that each cell is fed water/electrolyte and the products (hydrogen and oxygen gas) are collected from each cell.
(schematic of an electrolyzer stack- image credit, IRENA)
The cells may be arranged electrically in series, parallel, or series/parallel arrangements to allow us to select DC current and voltage inputs of a manageable level. High currents mean big conductors, expensive power controls, and high ohmic losses- high voltages mean lower currents, but also the potential for currents flowing where we don’t want them to- so we are involved in a balancing act.
Electrolysis is an area-based process. The key parameter for electrolyzer design is current density, i.e. the current, in amperes, which flows through each unit of electrode/cell area. Lower current density means lower voltage per cell and hence higher efficiency, but also means more capital cost per unit of H2 production (or power input) because each unit of electrode area produces less value (less hydrogen) per unit time.
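That efficiency-versus-capital trade-off can be sketched with a toy model. The numbers below are illustrative assumptions, not vendor data: I’ve lumped all the kinetic and ohmic losses into a single linear slope on cell voltage, and taken capital per unit of H2 as simply proportional to electrode area:

```python
# Toy model of the current-density trade-off in an electrolysis cell.
# Assumed numbers for illustration only - not any vendor's data.
V_TN = 1.48     # thermoneutral voltage, V (HHV basis)
V_REV = 1.23    # reversible voltage, V
SLOPE = 0.5     # assumed lumped overpotential slope, V per (A/cm^2)

def cell_metrics(current_density):   # current_density in A/cm^2
    v_cell = V_REV + SLOPE * current_density
    efficiency_hhv = V_TN / v_cell          # energy in gas / energy in
    relative_capex = 1.0 / current_density  # area (capital) per unit H2
    return v_cell, efficiency_hhv, relative_capex

for i in (0.6, 1.0, 2.0):
    v, eff, capex = cell_metrics(i)
    print(f"{i:.1f} A/cm^2: {v:.2f} V, {eff:.0%} HHV, capex x{capex:.2f}")
```

Run the density down and efficiency climbs toward the thermodynamic limit- but the capital cost per kilogram climbs right along with it.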
While we could build complete cells individually, and connect each one up to the others with individual tubes and wires or bus bars, that would be very expensive- and we’re smarter than that. Designs vary, but you won’t go far wrong as a basic mental model to think of a stack of plates held together with draw bolts, with internal manifolds and process connections at each end, arranged rather like a plate and frame heat exchanger.
It is advantageous to produce the hydrogen product under pressure, so that less mechanical compression is needed prior to transport or storage of the bulky gas. While pressure forces the equilibrium the wrong way (back from H2 toward water), which costs us some voltage, it also makes the gas bubbles smaller and hence leaves less electrode area blocked by nonconductive gas. Smaller bubbles mean lower effective current density and hence higher efficiency- to a limit. That means we get a certain amount of gas compression out of an electrolyzer basically “for free” in energy terms- very desirable! Unfortunately, pressure acting on a unit of area generates a force that wants to separate our plates and make the electrolyzer leak- so the bigger we make each plate, the stiffer it must be and the larger and more numerous the bolts that draw the plates together. Ultimately, this basic physics puts practical limits on how big we can make each plate.
Balance of Plant
The electrolyzer stack or stacks are only part of an electrolysis plant. Everything outside the stack which supports it, is called the “balance of plant” or BoP.
(balance of plant for an alkaline electrolyzer: image credit, IRENA)
Each electrolyzer needs a supply of pure water- at least 9 kg per kg of H2 produced. You can use freshwater or seawater, but in either case you must use reverse osmosis to purify it first- a process which takes a trivial amount of electricity (only 0.035 kWh/kg of H2 even starting with seawater) relative to the 50 kWh/kg it takes to make 1 kg of H2 from water. Impurities in the water can (significantly) sap efficiency by carrying stray currents, can make products like chlorine which contaminate the product gases and destroy materials of construction, deactivate cell catalysts etc. Water purification is a no brainer here- and trying to electrolyze dirty or saline water as a way to make hydrogen, is a fool’s errand.
Electrolyzers need DC electricity of controlled current and voltage. Large variable output DC power supplies referred to as “rectifiers” (but much more complex than a bunch of diodes!) are therefore required, along with all their safety gear, measurement instrumentation and controls, and heat removal systems (because these power supplies are not 100% efficient).
Electrolyzers themselves are not 100% efficient, which means that they convert some of the electricity they are fed into heat. While a little heat is good (it improves efficiency), too much wrecks materials of construction or boils the feed water. Accordingly, heat removal systems are required- and the heat generated at scale a) is not trivial and b) is generally produced in places which already have excess energy available, and which hence have little use for low-grade heat. That means we need pumps and heat exchangers to manage this heat, and yet more heat exchangers to reject it to the atmosphere.
The product hydrogen is saturated with water vapour and also contains some oxygen. Generally the hydrogen is passed over a catalyst which burns out the oxygen, forming more water. Drying is accomplished in stages with compression and cooling, and for some processes, more complex drying (based on regenerable adsorbents) may be required. So here too, we’re talking about catalyst beds, heat exchangers, compressors, refrigeration equipment, adsorbent beds etc.
The product oxygen is usually vented, but if it is to be compressed and monetized, it too needs drying, hydrogen removal and compression.
Finally, the product(s) need to be compressed for transport or storage. Whereas electrolyzers might operate at 30-70 bar, storage is generally at 250 bar or higher. That’s more compressors, piping and storage tanks.
“Outside Battery Limits” (OSBL) Equipment and Infrastructure
An electrolysis plant is like any chemical plant making any other chemical product, in that there are support systems outside the “battery limit” of the chemical plant itself. Some examples include:
electrical substations, switchgear etc.
Control and data acquisition system and human/machine interface
pipeline connections, trailer loading facilities
water supply and wastewater management
emergency relief systems, flares, vent stacks etc.
Buildings or weather enclosures
facilities for operators or maintenance staff
roads, parking, civil works
That’s just a partial list- but every such project has to take the OSBL into account, and pay for it.
Scaling of Electrolysis Systems
Assuming that what we want is to make hydrogen as cheaply as possible per kilogram, how should we proceed with scaling up an electrolysis system?
From the discussion about the basic physics of a cell and cell stack, a few things are clear:
cells are repeating identical units consisting of parts which we might mass produce
making each cell larger in area will allow each cell to consume more current and hence produce more H2
making each cell larger in area makes it more likely to leak and requires stronger materials
arranging cells in fluidic parallel in stacks makes more sense than building innumerable tiny cells and connecting each one
The first point gives us some hope that Wright’s Law can come to the rescue. It is entirely possible to imagine cell components being made in automated factories, and then robotically assembled into cells, and cells into stacks, in a true mass-production environment. And as we do that, with every doubling of production, we should expect the units to get cheaper.
It’s fairly clear that cells of larger electrode area are going to be desirable, but that larger cells will require stronger materials and will be more likely to leak. An optimum size should exist, but that size will vary depending on many things, including the limitations of our mass production scheme.
It’s also fairly clear that it would be desirable to stack up as many such cells into a “stack” as practical, so we have as few stacks as possible- but that there will likely be a practical physical maximum based on mechanical properties, fluid mechanics, and dimensional limitations for transport etc.
And so, the electrolysis industry has concluded. While stack sizes are pushed upward yearly, right now the biggest single stack has a capacity on the order of about 10 MW of input power. A 1 GW electrolyzer therefore would consist of 100 such stacks, arranged in physical parallel, likely in a number of “trains”.
So: what’s our scaling conclusion about electrolyzer stacks? I think we can conclude:
a) Wright’s Law will likely be applicable to the stack
b) since just to replace black hydrogen, we’ll need to increase the capacity of electrolyzers on earth by at least 1000x, there’s lots of room for Wright’s Law to run down the cost of each stack
c) both the cells and the stacks of cells will likely have an optimum size, but people will be motivated to increase that optimum size by clever design
d) the optimum size of stack will require that multiple stacks be “numbered up” to achieve plants of sufficient scale
a) and b) are good, hopeful signs for future electrolyzer stacks to become cheaper with time- assuming somebody will pay for the initial stacks which are expensive per unit of production and not mind doing so.
d) however means that projects of substantial size will involve lots of stacks in physical parallel, and that means that there will be limited economy of vertical scale for the electrolyzer portion of the project
What about the balance of plant?
That consists of “tanks and pumps and sh*t”, as my old boss used to say whenever he was confronted with something that looked complicated and scary, but wasn’t really. Conventional stuff, nothing magical. Nothing we shouldn’t be able to scale vertically to kingdom come. And as we scale that stuff- as an electrolysis project gets bigger in capacity- we should expect the marginal cost of the balance of plant to drop per kg of H2 for the good reasons given in my first article in this series. However, we should also expect nearly zero Wright’s Law benefit in relation to the balance of plant. The same for the OSBL: as a proportion of cost per kg of H2, it should drop as the scale of the project increases, but there should be little to no Wright’s Law learning curve to save our bacon. It’s not like pumps, tanks, heat exchangers, buildings, parking lots, electrical substations etc get cheaper as we make more of them- we’ve made too many of them already, and getting to the next “doubling” takes decades.
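The two cost-reduction laws at work here behave very differently, and that’s the crux of the argument. A sketch, using an assumed 15% Wright’s Law learning rate for the stacks and the classic six-tenths power rule for the balance of plant (both are illustrative textbook values, not measured data for electrolyzers):

```python
# Two different cost-reduction mechanisms, sketched side by side.
# Stacks: Wright's Law - unit cost falls a fixed fraction per doubling
# of cumulative production (15% learning rate assumed here).
# BoP/OSBL: economy of vertical scale only (~0.6 power rule assumed).

def wrights_law(cost0, doublings, learning_rate=0.15):
    """Unit cost after N doublings of cumulative production."""
    return cost0 * (1 - learning_rate) ** doublings

def vertical_scale(cost0, size_ratio, exponent=0.6):
    """Total cost of a plant size_ratio times bigger (six-tenths rule)."""
    return cost0 * size_ratio ** exponent

# Ten doublings = ~1000x cumulative stack production:
print(f"stack unit cost: x{wrights_law(1.0, 10):.2f}")   # ~0.20
# A 10x bigger plant's BoP costs ~10^0.6 = ~4x in total, i.e. ~0.4x
# per unit of capacity - and that's all the reduction you get:
print(f"BoP cost per unit capacity: x{vertical_scale(1.0, 10) / 10:.2f}")
```

The stack can, in principle, keep getting cheaper with every doubling. The balance of plant gets one helping of scale economy per project, and then it’s done.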
The Current State of Electrolyzer Scaling
The major players in electrolyzer manufacture have grown up making fairly small units. Prior to the most recent hydrogen #hopium pandemic, the whole role for electrolysis was either to make high purity hydrogen for specialist applications, or to make quantities of hydrogen to rescue users of small quantities of the gas from the high prices being charged by gas suppliers for tube trailer deliveries from a commercial hydrogen plant. Anybody who needed hydrogen at a meaningful scale, simply bought their own small SMR and made it themselves instead from much cheaper fossil gas.
Often, electrolysis supported pilot projects or provided hydrogen as a utility to an existing facility. Accordingly, many of the electrolysis suppliers designed modular products, often based around “seacans” (shipping containers) which served as both environmental enclosure and support for the equipment. Some units were self contained in a single container, while others required several containers for a complete unit.
What do you think about the idea of mass producing complete small containerized electrolyzer systems? What are the economic prospects for such a design?
In my opinion, based on the type of analysis we’ve used in the past two articles, the prospects for such an approach are very poor indeed. I see this design as a hangover from the industry’s history.
While the stacks themselves would still enjoy Wright’s Law cost learning, complete containerized packaged systems would gain little beyond factory fabrication- there is minimal to no Wright’s Law learning left in the balance of plant. The economy of vertical scale for the balance of plant would also be abandoned, due to the very small maximum size of complete units stuffed into containers of limited dimensions, so the resulting marginal capital cost per kg of H2 would be higher than necessary. Furthermore, shipping containers- though themselves mass produced, for use AS shipping containers- are an inefficient use of steel, an inefficient way to enclose space relative to constructing a building, offer poor access for operation and maintenance, and are greatly suboptimal in size relative to considerably larger modular frameworks which can ALSO be moved by road and sea to most locations. In modular systems, the optimal module is the biggest piece you can move down the road without excessive “heroics”, so that as few modules as practical need to be re-assembled on site. This maximizes one of the key benefits of modularization: minimizing expensive site work by doing as much in the factory as practical.
The ridiculous extension of this approach is the proposed path to scale of Enapter, a company developing a technology called anion exchange membrane (AEM) electrolysis. Though the AEM technology has an interesting combination of desirable properties of both alkaline and PEM units, with some of the downsides of each removed, Enapter’s publicly announced strategy is to mass produce complete 2.3 kW electrolyzers, each including its own complete balance of plant. Thousands of such factory mass-produced units would be physically paralleled to produce an electrolysis project at scale. Sadly, I think this is an object lesson in how not to scale up an otherwise potentially promising technology. To me it shows a lack of understanding of the fundamentals of engineering economics.
Fortunately, there are smarter people in the electrolyzer space, both among the market leaders and a number of their rivals. They have a clear view of what it would take to make electrolysis cheap enough to make cheap green hydrogen at scale, and are pursuing development projects for equipment that are consistent with good engineering economics. Some of them are even smart enough to be Spitfire Research customers! But in the interest of maintaining confidentiality, I won’t mention their names, unless they want to “out” themselves in the comments.
Insights for the Future Cost of Green Hydrogen
To become truly inexpensive, truly green hydrogen requires the following things:
a) very cheap renewable electricity available at high capacity factor, which means hybrids of wind and solar in places which have sunny days and windy nights
b) projects of very large vertical scale
c) electrolysis plants that are inexpensive at scale
d) to make green hydrogen production into a business, you also need a way to get the green hydrogen to market.
(Sharp readers will notice that I never mentioned efficiency- and that might make some people suspect I’ve been smoking some #hopium myself. Why is efficiency not on that list? Because the existing state of the art of electrolysis is already 83% HHV efficient, which sadly is only 70% LHV efficient- though hard to afford at that low current density. The incremental benefit from 83% to 100% on an HHV basis- the limit for water electrolysis set by thermodynamics- is, in my view, much less important than the other factors!)
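(You can verify that HHV/LHV conversion from the heating values of hydrogen- roughly 39.4 and 33.3 kWh/kg respectively:

```python
# Checking the "83% HHV is only 70% LHV" claim from heating values.
HHV_H2 = 39.4   # kWh/kg, higher heating value of hydrogen
LHV_H2 = 33.3   # kWh/kg, lower heating value of hydrogen

eff_hhv = 0.83
energy_in = HHV_H2 / eff_hhv        # ~47.5 kWh of electricity per kg H2
eff_lhv = LHV_H2 / energy_in
print(f"{eff_lhv:.1%} LHV")         # ~70% - the two figures agree
```

The gap between the two is the latent heat of the water vapour formed when hydrogen burns- energy the electrolyzer must put in, but which most end uses never get back.)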
To achieve a), you need large hybrids of wind and solar, located far away from electricity markets (or else you’ll simply make and sell electricity instead!)- locations such as western Australia, Chile etc. That unfortunately collides with d)- locations far away from electricity markets are also far away from places which can use hydrogen, and moving hydrogen across transoceanic distances is a bear of a problem. Accordingly, those projects will need to make something you can move, such as ammonia, direct reduced iron, methanol or the like. And, hopefully, we’ll be smart enough not to waste those materials as ways to make hydrogen again…
To achieve c), you need somebody willing to buy expensive electrolyzers- and the expensive hydrogen they produce- sufficiently to get electrolyzer production far enough along the Wright’s Law learning curve to make the cells and stacks cheap enough. That’s possible but it will take deep pockets- our public pockets. So it’s my hope that we aren’t stupid enough to waste any of that precious, expensive but truly green hydrogen on dumb uses that are better served by electrification directly or via batteries, or which can be eliminated, or which can be solved more effectively by other means (biofuels etc.)
What you also need is b), ie for the balance of plant and OSBL to be done at large vertical scale, because it won’t be subject to Wright’s Law and hence won’t get cheaper as a result of learnings. And sadly, there’s only so much that vertical scale can do to reduce the costs of this part of the cost of a total hydrogen plant.
Will green H2 get cheaper? Absolutely! But only because it is insanely expensive at the moment.
Will it get cheap enough? Depends on what you mean by cheap enough, and for what purpose.
Will it get cheap enough to replace black hydrogen? That depends more on what we do in relation to carbon taxes and emission bans than it does in relation to the scaling of electrolysis technology. I dearly hope so- our lives are depending on it, if we want to keep eating that is. Dumping CO2 to the atmosphere for free, or nearly free, is a giant subsidy on the backs of our children’s futures that we must END, TODAY.
Will it get cheap enough to go head to head with the electricity it is made from? No. That should be obvious.
Will it get cheap enough to use as a fuel, for transport or heating? I doubt it. I think there are better options that will achieve decarbonization of these functions, at far lower cost to society. That goes even more so for so-called e-fuels derived from hydrogen. Fighting thermodynamics all the way back from water, CO2 and electricity is a fool’s errand- one that should be undertaken only if we are both rich and desperate.
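Inefficiency compounds multiplicatively down a conversion chain, which is exactly why e-fuels cost so much. Here’s a minimal sketch of that arithmetic; the individual step efficiencies (electrolysis, fuel synthesis, engine) are round illustrative assumptions of mine, not figures from this article:

```python
def chain_efficiency(*steps: float) -> float:
    """Overall efficiency of a series of conversion steps (their product)."""
    result = 1.0
    for eff in steps:
        result *= eff
    return result

# electricity -> H2 (electrolysis) -> e-fuel (synthesis) -> work (jet engine)
# Step values are illustrative round numbers only.
overall = chain_efficiency(0.70, 0.70, 0.35)
print(f"{overall:.0%} of the original electricity ends up as useful work")  # 17%
```

With even generous round numbers, you throw away roughly five of every six kWh of green electricity you started with- and that waste shows up directly in the price per litre of e-fuel.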
Now, let’s use the tools we’ve learned to look at some examples from the effort to decarbonize our economy.
The first example to take a swing at with our new understanding of vertical and horizontal scaling is the small modular nuclear reactor, or SMNR for short. SMR means something else to me- steam methane reformer- a technology which pre-dates the entire nuclear industry.
First, a disclaimer. I am not a nuclear engineer. I have no nuclear design experience, and I claim no special expertise in relation to nuclear power. I am however a chemical engineer with decades of experience helping people scale up- and down- chemical process technology. I have also spent decades designing and building small modular chemical plants in a factory environment, and have more than a passing familiarity with their engineering economics.
Let’s also be clear: I think nuclear power can be made adequately safe, and that its use in the 1970s and 80s undoubtedly saved countless premature deaths in my home province of Ontario relative to its true market competitor at the time- burning coal to make electricity. Nuclear is also very clearly a dispatchable, or at least reliably available, source of electricity which also has very low GHG emissions. While I acknowledge that both the decommissioning of nuclear plants and the storage of nuclear waste are major political and public relations problems, I think that they both have quite practical technical solutions- though the cost of those solutions is unclear to me. That’s a long list of pluses for nuclear power- rather big ones. Some argue that this list of pluses justifies whatever nuclear might cost us, but I’m not of that opinion. Nuclear is just one option- and there are others. We need a societal discussion about what those options are, and what their costs and impacts are.
Nuclear Power’s Present: Giant Vertical Scale
The modern nuclear reactor power plant is the epitome of “go big or go home”- of maximizing vertical scale in an (often seemingly vain) effort to keep the cost per kWh low for consumers. Initially, nuclear reactors were small as we learned how to use them. Engineers understood the economy of vertical scale just as well in the 1950s as they do today, and so it was quite clear that if nuclear power were to become cheap, it would do so by building nuclear reactors at considerable (vertical) scale. And as the industry grew, and project experience was gained, reactors got bigger- not smaller.
It is important to realize what a nuclear reactor is, at its essence. It is a steam power plant with a nuclear fission heat source, and associated safety and controls equipment to operate and keep that heat source safe. Given that there’s nothing magical about a steam power plant, surely the steam power plant portion of the project should benefit from ordinary economy of vertical scale.
What about the fission reactor itself?
There have been plenty of examples in the recent past of cost and schedule over-runs in nuclear power projects in numerous locations around the world- in fact, it’s much harder to find examples of nuclear power projects which are anything close to on time and under budget (though nuclear apologists always have a few of those to trot out as examples). The preponderance of cost/schedule over-runs has led people to conclude, perhaps not unreasonably, that nuclear went too far into a giant, megaproject vertical scale that was larger than truly practical.
The current scale of reference is, round numbers, 3 GW of thermal output, or about 1 GW of electrical output per reactor. The cost/schedule overruns would imply that an optimal scale for nuclear power deployment might be at some scale smaller than the current scale- a scale requiring fewer “heroics”.
A nuclear power plant generally has several reactors of around 1 GW electrical capacity, sited together. An example is the Darlington nuclear power station located east of Toronto. It consists of four units with a total generation capacity of 3.5 GW of electricity- roughly 20% of Ontario’s electricity demand. The units are operated independently but share common infrastructure, again to save capital cost.
Darlington’s construction started in 1982 with unit 1, and ended in 1993 when unit 4 was completed. The original budget of $4 billion was exceeded, considerably- the plant cost $14.4 billion, or roughly $23 billion in 2020 dollars. Some of that overage resulted from financing costs arising from project delays caused by government “interference” etc. – the story is a long and sordid one, which likely is an object lesson in why projects like this should not be allowed to become political footballs. In 2021, the 30 yr refit project for the plant was started, at a cost of another $13 billion. If we’re honest, that refurbishment cost was “baked in” when we decided to build the project in the first place.
In 2013, Ontario went out to tender on a “twinning” of Darlington. Prices came back so eye-wateringly high that nuclear ambitions in Ontario came to a screeching halt.
Until 2021, that is…
A new project, referred to as “Darlington New Nuclear”, hit the headlines. The plan is for a new 300 MW (and it looks like that means 300 MW electrical capacity) Hitachi boiling water reactor for the Darlington site.
The moniker “small modular nuclear reactor” has been attached to the project- but the plant, at 1/3 the size of the existing units, is not small in objective terms- it’s in fact bigger than the 200 MWe Douglas Point plant, the prototype for the CANDU reactors later built at Darlington and elsewhere, constructed in the 1960s.
It’s also not modular in the sense that most people understand. It will not be completely assembled in a factory and delivered in sections that are easy to put back together, on the back of a truck or trucks. The plant, if ever built, will be substantially site constructed, just as the larger Darlington units were.
But I digress…
Here’s the SMNR pitch, being made by many firms today:
1) The reason nuclear power plants are so expensive is that they’re always a brand new, 1st of kind design. There’s no steady crew of people with the specialist skills to efficiently build them, because we never build the same design twice nor do we build them one after another. No common design means people spend more time engineering, and less time building.
2) A “new” nuclear fission reactor technology will be used. Sometimes, that’s just a new twist on the existing “boiling water” reactor. Sometimes, they’re talking about a totally different technology, such as a molten salt fuel cycle.
3) The units will be built at much, much smaller scale than the existing state of the art. NuScale, for instance, the project which seems to be farthest along in the USA, has a capacity per unit of about 77 MW electrical per reactor. Twelve (12) units operating in physical parallel would be required to replace a single Darlington-scale unit.
4) The smaller scale is claimed to be small enough to be “intrinsically safe”, or something near enough to that, so the hope is they will be simpler to build, and easier and quicker to permit.
5) The units are small enough that they can be built (and apparently, also fuelled) in a factory, and shipped to site “largely assembled”- in NuScale’s case, in three truck-shippable pieces per reactor, totalling 700 tonnes per reactor. The claim is that will make projects faster to begin producing some power, and hence much cheaper.
6) Because the factory will make the same unit again and again, what the unit lacks in economy of vertical scaling, will be more than compensated for by a) factory fabrication by a trained team b) “mass production” and c) “simplicity” arising from the smaller scale
7) The small units will be perfect for use on remote sites like mines, small remote communities etc.
Let’s examine the claims one by one, using NuScale as a reference case because its information is quite widely available to the public, not because it is especially worthy of either praise or criticism:
1) To me, this is a popular myth, not a reflection of the real reason nuclear power plants are expensive. Like all myths, there’s a grain of truth there: nuclear is a specialist industry with a heavy certification burden. The real reason they’re expensive is that they are massive capital projects with extremely long design life, a high risk profile, and accordingly a long permitting, approval and construction process. Projects which must be done at positively massive scale to deliver sufficient economy of vertical scale to make each kWh seem cheap enough for ratepayers to afford, for reasons made obvious in my 1st article in this series. And the regulatory attention is inescapable, because the risk profile means that only the public has deep enough pockets to insure these projects against accidents.
2) To my non-nuclear eye, NuScale isn’t really a new nuclear technology- it’s just a small boiling water reactor with a different cooling scheme involving a thermosiphon and water immersion rather than active pumping. And, shockingly, from a brief review of NuScale’s website, it seems that the plan is to connect each unit to its own (tiny) “skid mounted” (modular) steam plant, such that even the steam plant part of the job will lack economy of vertical scale.
3) At 77 MW electrical output per unit, the unit definitely qualifies as “small”. Therefore, numerous individual units will need to be installed, either in physical parallel on the same site with common infrastructure, or on numerous sites, to supply equivalent amounts of power to the units they seek to replace. Which of these two options will be cheaper? Obviously the former, and by a lot!
4) “Intrinsic safety” is obviously something which is easy to claim, but quite hard to demonstrate to the satisfaction of a regulatory body who knows that the public, not a private entity, will be providing insurance against an accident. By “hard”, I mean “will cost a lot and take a long time to achieve”
5) Certainly the pieces of each NuScale reactor look to be small enough to be shipped by a number of different means, including by heavy logistical truck/trailer units- but by no means would those be “routine” shipments given the diameter and weight, even if they didn’t also contain active nuclear material. The project is, however, still modular in the way people typically understand that term in the industry.
6) Until orders of such reactors are so common that maintaining a dedicated factory full time for their fabrication is a practical option, each unit will be built more or less by hand, albeit in a factory environment. Subcomponents and sub-assemblies will be made in other dedicated factories, just as all plants are built today, whether they’re modular or “stick built”. But the project would have access to the benefits of modular fabrication. Calling that “mass production”, however, is more than a small stretch of the definition of that term! Such a factory would have almost nothing in common with, for instance, a factory making cars. People, not robots, will be doing most of the work. The notion that sufficient savings in labour and schedule would be possible to overcome the rather obvious lack of economy of vertical scale of each unit is therefore very questionable.
7) It is clear that the lack of economy of vertical scale will not be compensated for adequately by modular fabrication even if the units are ganged in parallel on a common site with common infrastructure. The notion that putting tiny units alone or in small groups on numerous different sites could yield affordable kWh for consumers is just preposterous.
Let’s look closely at claim 6) – that mass production in a factory environment would overcome the conventional economy of vertical scale.
Let’s take the unit cost of one NuScale 77 MW unit as x units of capital cost. What would we expect 12 such units, factory modular, to cost if they were all ordered at the same time? I’d guess 12^0.9 x at best, to be generous, or about 9.3x. It could easily be higher.
What should, in comparison, one unit of 12*77 = 924 MW electrical output, cost? About x * 12^0.6, or 4.4x…
For a project which hinges on capital cost per unit of value production, that’s a death sentence. On the basis of decades of experience doing it for a living, there’s no way that factory modular fabrication is going to drop the price per unit sufficiently to make up for that ocean of a difference.
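For the sceptical, that arithmetic is easy to check yourself. Here’s a minimal sketch, using the rough exponents assumed above (0.9 for a fleet of identical factory-built units, 0.6 for one vertically scaled-up unit- the classic “six-tenths rule”):

```python
# Rough comparison of "numbering up" vs vertical scale-up, using the
# assumed cost exponents from the text. These exponents are rules of
# thumb, not measured values for any particular reactor.

def numbered_up_cost(n_units: int, exponent: float = 0.9) -> float:
    """Relative cost of n identical units, in multiples of one unit's cost x."""
    return n_units ** exponent

def scaled_up_cost(scale_factor: float, exponent: float = 0.6) -> float:
    """Relative cost of one unit scaled up by scale_factor (six-tenths rule)."""
    return scale_factor ** exponent

n = 12  # twelve 77 MW NuScale units vs one 924 MW unit
modular = numbered_up_cost(n)   # 12**0.9, about 9.4x
vertical = scaled_up_cost(n)    # 12**0.6, about 4.4x
print(f"12 modular units: {modular:.2f}x; one 12x-larger unit: {vertical:.2f}x")
print(f"modular cost penalty: {modular / vertical:.2f}x")
```

The penalty works out to better than a factor of two in capital cost per kWh of capacity- before a single kWh has been generated.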
Now let’s look at a few other issues which seem obvious even to me as someone who absolutely makes no claim to be a nuclear power expert:
From a nuclear proliferation, security, terrorism etc perspective, distributing nuclear reactors on numerous sites, particularly remote/rural ones, is far riskier than larger centralized sites which can be better planned and protected. Power distribution costs won’t be reduced unless we also decide to site these numerous little nukers much closer to population centres- something that is unlikely to go over well with the people who would be living next door. You can claim that such fears are unjustified, but that doesn’t mean they won’t present themselves with pitchforks and torches at every public meeting.
Because fuelling costs are low, and capital costs are (very) high, nuclear power is generally operated as close to 100% capacity factor as physically possible, generally being given preferential access to serve loads on the grid. The issue isn’t that nuclear power plants can’t be “turned down” in output- the issue is that you can’t afford to operate them that way. And that fact means that nuclear doesn’t play well with intermittent wind and solar power, which are cheaper when they are available and simply not available when they aren’t. Making the plants smaller won’t change that, at all. The putative benefit of having 12 units you can individually control, really adds not very much to that economic equation.
Could Wright’s Law really be counted on to make each subsequent reactor cheaper than the last? Can SMNRs become like solar panels or Li ion batteries? That depends on how applicable you think Wright’s Law is to fairly conventional equipment- heat exchangers, welded pipe systems, steam power plants etc. My bet is that it’s not very applicable because manufacturing processes for such equipment are already very well understood- we make enormous numbers of pieces of such equipment in the world yearly already. The potential for Wright’s Law “doublings” which lead to learning-based cost reductions, seems small, though I don’t doubt there would be a learning rate if there were sufficient doublings.
For Wright’s Law to kick in, we’d need to have a single design which is the obvious favourite, and to build that one only. Does such a design exist? No- rather, there are many designs being proposed, for both conventional and new fuel cycles, with no clear winner.
If a particular future fuel cycle (molten salt, thorium, what have you!) is somehow limited to a small maximum scale due to its nuclear physics, to me that’s a flaw, not a feature. It means that the technology will have challenges to achieve a low cost per unit of production, ie. per kWh it makes for consumers. It also means that each technology will make its cheapest kWh when built out to its largest practical scale- just like all other technologies which produce commodity products.
Can the smaller units be refurbished? What’s their design life, and how would one extend that to maximize the number of kWh each unit generates before it becomes a pile of (low level) radioactive waste? I sincerely don’t know the answer to these questions, but I’m sure others might.
Is a nuclear reactor really a great tool to site at a remote location like a mine or remote community? Are such locations ideal in terms of emergency response, skilled and trained maintenance staff etc.? And does the power use of such sites, and the resulting GHG emissions, really make a big difference to total world GHG emissions? Is this a “hard to decarbonize sector” or just an excuse to sell units to places already accustomed to paying high prices for power from diesel generators and the like?
It is sometimes claimed that SMNRs provide a greater opportunity for combined heat and power than conventional nuclear power plants, given that heat can’t be shipped over distances as great as electricity can in economic terms. However, that’s only true if we put them on numerous sites which are each closer to populated centres, and then we’re willing to spend the money to build district heating etc. That is, as already noted, not a recipe for low capital cost.
From this analysis, and based on long discussions with nuclear advocates and nuclear critics, I can say that I consider the small modular nuclear reactor to be nearly pure nuclear #hopium. It’s a concept that fails a basic economic “sniff test”- a proposed solution that seems incapable of solving nuclear’s really big problem, which is its enormous capital intensity- not its tendency to draw out “no nukes” protesters.
It also seems to run quite contrary to the learnings of the past. And you know what they say about that: doing the same thing over again and expecting a different outcome is a fairly accurate definition of delusion.
Why are SMNRs So Popular, Then?
Lots of smart people, and entire companies with world class pedigrees such as Rolls Royce, are lined up in opposition to what I’m telling you in this piece. Why am I so sure that they’re wrong and I’m not?
Simple. It’s a concept known as “moral hazard”.
When I was in the business of designing pilot plants for new processes for clients, on occasion the client or their investors might ask me whether I thought the process was “worth piloting”, i.e., did it have a likelihood of economic success? I would (rightly) refuse to answer such questions, and if pressed, I would simply repeat the client’s own claims to them and say, “If A, then B”. Why did I give such a cagey answer? Because, as a designer/builder of pilot plants, I benefited financially from designing and building the pilot plant, whether the process had any chance of economic success or not! Any opinion I offered to such questions was therefore offered from a position of an actual conflict of interest- and I was in a position of “moral hazard”. As an aside, I love the fact that as an independent consultant, I can now tell clients straight up about every strength and every weakness I see in their plans- with no moral hazard.
Clients who can’t take the truth as I see it, I’m quite happy to part ways with- I even advertise this as a feature of my consultancy on my website.
Now put yourself in the shoes of a nuclear engineer: you’re coming to the end of your career in what is basically otherwise a dying industry. Very few at-scale nuclear projects are being built, so maybe you’re working on a refurbishment project- the last one on that plant. To you, the chance to work on a SMNR project, probably one lasting many years, especially one funded by governments or by people who will pay your salary whether the project achieves its goals or not, is likely a very pleasant one relative to trying to find a new industry to work in late in your career. You’re an expert in nuclear power, certainly- but do I give your opinion about the potential for success of SMNRs any real weight? Or do I consider that opinion to be one offered from a position of moral hazard?
That’s certainly not an accurate description of everyone who supports SMNRs, by a long shot. There are many people of such high personal integrity that they will tell the truth, when asked, and the whole truth, even if that truth is contrary to their personal economic interest. But it describes a lot of them, especially many of the ones advocating the concept most loudly in public.
There are some people who are absolutely not in a position of moral hazard who also think SMNRs are the bee’s knees. They may genuinely believe that SMNRs have a real chance to make cheap kWh one day, if we only finally standardize on one design and then build them by the thousand every year. I just don’t think those people are thinking clearly. I think they’ve smoked a little too much #hopium for their own good. Doesn’t make them bad people- doesn’t make them right, either.
Finally, there’s the even more cynical group of people. People like Doug Ford, recently re-elected premier of the province of Ontario. Doug is many things- aside from being the older brother of the infamous late crack-smoking former mayor of Toronto Rob Ford, Doug is quite likely also a closeted climate change denialist, although he is far too cagey to ever admit that publicly.
Let’s say you’re our dear leader DoFo. You have lots of people clamouring at you about climate change, and since you fancy yourself to be a populist, you don’t want to appear to be doing nothing. But you don’t believe in it, so you don’t want to spend any real money dealing with it. Especially not on “green power” projects which you ran against as costly boondoggles and a blot on your rural voters’ landscape- projects which you cancelled, then passed legislation preventing the projects’ proponents from seeking the cancellation fees former governments had agreed to. You also know that the Pickering nuclear power plant is scheduled to close down in 2024, for good, because it’s finally too many years past its best-before date to extend any further.
So: what do you do?
How about planning a 300 MW “small modular nuclear reactor”? Are you troubled that it’s not really small, nor modular? Nope. You talk about all those Ontario jobs. And you know that it will be quite happily studied- by people largely in a moral hazard position- until you’ve retired from office. The money spent on this gives you plausible deniability about the whole AGW issue as you see the province build more fossil gas-fired power plants to replace Pickering. “Just wait”, you say- “the SMNRs are coming to save the day!” Doug can add #hopium dealer to his long resume…
Recommended Reading: you can’t go far wrong in reading @Michael Barnard ‘s treatment of the same topic, which was informed in part by discussions we had about the topic but which also contains Michael’s usual, top notch research and analysis.
Once we reach a certain maximum practical vertical scale for a particular piece of equipment, it becomes impractical to build a bigger unit, or to transport and erect it once it’s built, as noted in the previous article. At that point, we shed a few tears, because we know that the continually decreasing capital cost per unit of value created is now more or less over for that unit. But generally, this is not encountered in every unit on a plant at the same scale. When we’ve got the largest pump, filter, reactor etc. that can be made in practical terms, the next step isn’t to build two complete twin plants on the same site, much less two complete plants on two different sites to save on distribution. Rather, we will usually “number up” the largest practical thing, running several of them in parallel.
Sometimes we do that even though we’re below the maximum practical scale. Some plants have “trains” of duplicated units which run in parallel, such that one train can be shut down for maintenance, or because the market dries up. Common infrastructure is maintained, and that gives us some of the benefit of vertical scale- just not to the same extent as if we made each unit bigger.
An example is the Shell Pearl GTL project pictured in the 1st article.
(Shell Pearl GTL- a mammoth gas to liquids plant installed in Qatar- photo credit, Shell)
That project produces liquid hydrocarbons ranging from LPG to waxes, starting with fossil gas. The process, known as Fischer-Tropsch, takes CH4 apart to CO and H2 (and lots of CO2) and then back-hydrogenates CO to -CH2- and H2O. It is so inefficient, as a result of wasting all that hydrogen “un-burning” CO to make water, that the only way it can make money is:
You must do it at positively mammoth scale to drop the marginal capital cost of the overall plant to the lowest practical level
You must pair it with a gas source which is both enormous and basically free
You must be able to dispose of fossil CO2 to the atmosphere, again basically for free
Accordingly Pearl GTL is positively mammoth- it must be to make money for its owners, at product prices the market is willing to pay. Capital cost was on the order of $20 billion.
It is in fact so huge that it “numbers up” its reactors.
Each reactor is a giant pressure vessel- weighing 1,200 tonnes- containing 29,000 catalyst tubes in a common shell from which the heat is removed. Each reactor is as big as Shell could make them, without what Shell considered to be excessive “heroics”.
There are two trains of reactors. And in each train, there are 12 reactors operating in physical parallel.
This is basically an object lesson in “numbering up”. Each catalyst tube is as big as it can be without making the wrong products. As many such tubes are put into a single pressure vessel as practical, to make the cost of physical paralleling of these tubes as low as practical. And then, large numbers of these pressure vessels are installed again in physical parallel, arranged in trains.
The rest of the plant is similarly divided up into single or multiple units in accordance with the maximum practical scale at which the process itself can be carried out. If I recall correctly, it has two of the largest air separation plants ever built, a huge autothermal reformer etc.
Shell didn’t pursue this giant scale for fun and games. Apparently they looked at this project numerous times over more than a decade, before deciding to pursue it. They ran the numbers and convinced themselves, quite correctly, that the only way to make money from Fischer-Tropsch, even with a nearly free gas supply, is to go big- so big in fact that numbering up the reactors was the only practical option.
“Numbering Up” to Minimize Scale-Up Risk
There’s another reason some people give for pursuing a “numbering up” rather than scaling up strategy. Sometimes, making the bigger unit with higher production capacity is easy- but sometimes, it’s very risky indeed. The larger unit we design might not work at all, or might make a different product, or undesired byproducts etc.- even if you retain experts to help you make the best stab at building the larger unit that you can. The risk here isn’t just money, it’s time. A development project for a larger unit, can take considerable time. And if customers are knocking down your door already today, you may not want to wait. Under those circumstances, numbering up small units which have already been proven to work, seems a tempting option.
The Downsides of Numbering Up
Unfortunately, for every unit we have to run in physical parallel (because we had to number up rather than scaling up), we now have multiple devices to procure, install, connect, control and test. That means more valves, more switches, more wires, more instruments and controls, more installation labour, quality control, testing etc.- and more cost. It also means more likelihood of an individual failure of some kind, even though the failure of one unit might only reduce production by a small amount of the total (that’s an advantage of numbering up in “trains” as noted above).
If making a larger unit is possible, even with some risk, it’s likely worth doing- if the market can support that much product. What we are unlikely to get away with is to instead build multiple identical smaller units and operate them in physical parallel- especially if the individual units are a small fraction of the total production that the larger plant could produce.
Of course if we take “numbering up” or “horizontal scale” to its logical conclusion, we see some of the many things we use on a daily basis- articles that are mass produced in plants which themselves take advantage of as much vertical scale as possible, so that each commodity product item (computer, solar panel, car etc.) itself is as cheap as it can be.
This is where the proponents of certain schemes fall into the ditch. They frequently confuse the apparatus making the commodity goods with the commodity goods themselves!
Why “Mass Production” – of the Means of Production- Can’t Win
But surely if we mass-produce entire plants to make our commodity, those plants will get cheaper and their capital cost per unit of production will drop?
No, they won’t. S^0.6 is a power law, and it positively destroys any benefit which mass production of the plant itself could possibly generate.
Furthermore, the sorts of things we’re talking about here aren’t suitable to true mass production.
Sure, you can build a complete chemical plant in a factory, in pieces of a size and weight suitable to be shipped by whatever means you like. Such construction is referred to as “modular”, and I was in the business of designing and building small modular chemical plants used as pilot and demonstration and small commercial units for over two decades. Modular construction offers many advantages- faster schedule, better build quality, and higher labour productivity among others, when it’s done right.
And despite this, I can say, unequivocally, on the basis of that considerable experience, that nobody would ever achieve lower cost per unit of production by getting ten modular plants of identical design, built at the same time as modular projects, if building a 10x larger plant on site (referred to as a “stick built” rather than modular plant) was a practical alternative. Whereas a modular design/build operation might be able to offer ten plants of unit cost 1 for 10^0.9 = ~ 8.1x the cost of the first one, the 10x larger plant- including the extra cost associated with “stick building” the parts of it that were too big to modularize, would cost only 10^0.6 = 4x. These figures are, of course, very rough, but they give you the basic idea.
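Another way to see the size of that gap: with those same rough exponents, you can work out what per-unit discount factory fabrication would have to deliver just to break even with one big stick-built plant. A quick sketch (again, the 0.9 and 0.6 exponents are the rough rules of thumb used above, not measured values):

```python
# With rough exponents of 0.9 (modular fleet) and 0.6 (vertical scale-up),
# what must each factory-built unit cost, relative to a one-off small plant,
# for n units to merely break even against one n-times-larger plant?

def required_unit_cost_fraction(n: int, vertical_exp: float = 0.6) -> float:
    """Per-unit cost budget, as a fraction of a one-off unit's cost, for n
    modular units to match the cost of one n-times-larger plant."""
    return n ** vertical_exp / n

for n in (10, 12, 100):
    f = required_unit_cost_fraction(n)
    print(f"n={n:3d}: each unit must cost {f:.0%} of a one-off unit "
          f"(a {1 - f:.0%} discount)")
```

For ten plants, each unit would need to be delivered for roughly 40% of a one-off unit’s cost- about a 60% discount- just to break even. And the required discount keeps growing as n grows, which is exactly why “make it tiny and make lots of them” loses.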
If you want real mass production, order 10,000 of them at the same time…but how likely are you to do that?
Where Horizontal Scale is Your Only Choice
The first article in this series examined the conditions under which the economy of vertical scale was valid. If any of those conditions are violated, horizontal scale may be your only choice.
If your product is unstable, e.g. ozone, you have no choice but to put an ozone generator on every site that needs ozone. That those ozone generators are mass produced, however, does not make ozone a cheap chemical! Its cost is high not just because it is energy inefficient to make, but also because the need to make it on site in tiny ozone plants makes inefficient use of capital, even though the ozone units themselves are built in factories. That inefficiently used capital cost makes every kg of ozone that much more expensive.
If your feed or product can’t be distributed readily, you may also have no choice but to go for horizontal scaling, on separate sites. Of course that is no guarantee whatsoever that the resulting product, made using equipment with poor capital utilization efficiency (high marginal capital cost), is worth enough to make the enterprise into a business.
And no, “mass production” of the necessary plant equipment, absolutely won’t save you.
In the next articles we’ll use these concepts to evaluate a number of claims in the renewable/alternative energy world, to see whether or not they make sense.
There are lots of proposals emerging seemingly every day, based around the notion that we will mass produce some device, plant or process, and then use those mass-produced devices to produce some commodity product- frequently a product made by devices, plants or processes already operated commercially at much larger scale. A few examples seemingly popular at the moment include:
small modular nuclear reactors for power generation
distributed hydrogen generation (particularly for refuelling vehicles)
small units to generate value from fossil gas that would otherwise be flared, by converting it to fuels or chemicals
distributed units to process a distributed resource- waste products from agriculture, municipal solid waste, batteries- you name it
The idea is simple enough: we all know that when things are made in large numbers, a couple of things happen. One is that we get better at making them, and that learning drives down the cost of each unit produced. The first one costs a lot because it’s a prototype. The second, if it is identical, is easier and hence cheaper because we’ve already proven the concept. And so it goes, with capital cost falling by a certain percentage with each doubling of cumulative production- a principle known as Wright’s Law.
Another is that when we increase the scale of the manufacturing plant (made possible by increased numbers of units being sold), we can benefit from the savings associated with automation etc. This is actually one of the features which enables Wright’s Law for the manufacture of certain types of devices.
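Wright's Law can be written as a one-liner. The 15% cost reduction per doubling below is a purely hypothetical learning rate, chosen for illustration only:

```python
import math

def wrights_law_unit_cost(n: int, first_unit_cost: float, drop_per_doubling: float) -> float:
    """Cost of the nth unit when each doubling of cumulative production
    cuts unit cost by a fixed fraction (Wright's Law)."""
    b = math.log2(1 - drop_per_doubling)  # learning exponent (negative)
    return first_unit_cost * n ** b

# Hypothetical 15% cost reduction per doubling, first unit at $100:
for n in (1, 2, 4, 8):
    print(n, round(wrights_law_unit_cost(n, 100.0, 0.15), 2))
# Each doubling (1->2, 2->4, 4->8) multiplies the unit cost by 0.85.
```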
The fundamental thesis of the sorts of schemes I’m going to take on in this article series can be stated more or less as follows: building big plants is hard. It takes time and lots of capital. So instead, we’ll make a very small plant, do it very well, and then mass produce the very small plant and operate many of them in physical parallel, either on the same site or on a plenitude of sites, the latter to save the costs of distributing the product (or to eliminate the need to build infrastructure to distribute the product). And it’s my job in this series to explain to you why this idea has a rather tall stack of engineering economics- arising from basic physics- in the way of its success.
It’s important to provide a little context here, so that people can make sense of where such approaches are necessary, where they make sense, and where they’re just somebody playing around with your lack of knowledge of engineering economics and hoping you won’t notice.
Economy of Vertical Scale
You’ve probably noticed that we make many things in large, centralized plants. We distribute feeds of matter and energy and labour to those plants, and we distribute products from those plants to the people/businesses who need them. Why do we do that?
The answer comes from very basic physics, which leads in a very direct way to engineering economics.
Take the simplest example: a piece of pipe to carry a fluid from point A to point B.
Let’s say we’re moving a commodity with that pipe- doesn’t much matter what commodity. Let’s compare two pipes: one has a diameter of X, and the next has a diameter of 2X.
The first pipe can carry a given amount of product per unit time, at a particular rate of energy loss to friction. The correct size of pipe is determined based on what’s referred to as an “economic velocity”- the flowrate which gives a linear velocity that optimally balances the cost of pumps/compressors and the energy lost to pressure drop in the pipe (higher for smaller pipes) against the capital cost to build, test and maintain the pipe (higher for larger pipes). A different optimal velocity exists for a chemical plant’s piping, for instance, than for a pipeline carrying fluids across a country (with the latter favouring lower velocities).
When we compare pipes with diameter X and 2X, we find right away that we can move four times as much material per unit time in the larger pipe, because the cross sectional area varies as D^2. Indeed it’s even more than 4x, because we get a benefit from an improved ratio between wetted perimeter (where wall friction happens), which varies with D, and cross sectional area which varies as D^2.
But the real benefit is this: the pipe capital cost doesn’t increase by anything near four times.
We’ve just discovered the physical basis for the economy of vertical scale, or “economy of scale” for short. It arises because relationships such as the surface area to volume ratio become more favourable with increasing scale.
Similar physics are active for all things on a project- every pump, valve, tank, heat exchanger, transformer, motor- you name it. The bigger you make it, the cheaper it gets (in capital cost) to produce a unit of value from that device.
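The pipe comparison can be checked with a few lines of geometry:

```python
import math

def pipe_metrics(diameter: float) -> tuple[float, float]:
    """Return (cross-sectional area, wetted-perimeter-to-area ratio) for a circular pipe."""
    area = math.pi * diameter ** 2 / 4
    perimeter = math.pi * diameter
    return area, perimeter / area

a1, r1 = pipe_metrics(1.0)  # diameter X
a2, r2 = pipe_metrics(2.0)  # diameter 2X

print(f"flow area ratio (2X vs X): {a2 / a1:.0f}x")                # 4x the flow area
print(f"friction surface per unit of flow area: {r2 / r1:.1f}x")   # 0.5x- halved
```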
Capital Cost Versus Scale
Let’s say we have two plants: the first plant produces 1 unit of production (doesn’t matter if that’s tonnes per day of a chemical, MW of electricity etc.), and the second one produces 10 units of the same undifferentiated thing. We say that plant 2 is 10/1 = 10 times the scale of plant 1, i.e. we have a scale factor of 10.
To a first approximation, because of relationships like the one for the pipe example, it can be shown that:
C2 ≈ C1 × S^0.6
Where C2 is the capital cost of the larger plant, C1 is the capital cost of the smaller one, S is the scale factor (the ratio of production throughput of plant 2 to plant 1), and 0.6 is an exponent which is the average for a typical plant. In fact, each thing in the plant has a similar relationship, with an exponent which ranges from about 0.3 (for centrifugal pumps) to 1 (for things like reciprocating compressors above a certain minimum size). Normalized over the cost of a typical plant, the exponent of 0.6 gives the best fit.
Let’s say that 1 unit of production generates 1 unit of revenue per day. Ten units would generate 10 units of revenue per day. But let’s say that 1 unit of production rate costs us $1 million in capital. Ten units of production rate would therefore cost us 10^0.6 x $1 million = ~ $4 million in capital. The capital cost per unit of production rate is therefore $1 million per unit/day for the first plant, and $4 million/10 = $0.4 million per unit/day for the second plant.
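That arithmetic, as a sketch:

```python
def scaled_capital_cost(base_cost: float, scale_factor: float, exponent: float = 0.6) -> float:
    """Six-tenths rule: C2 ~ C1 * S^0.6."""
    return base_cost * scale_factor ** exponent

c_small = 1.0                               # $ million, for 1 unit/day of capacity
c_large = scaled_capital_cost(c_small, 10)  # ~$4 million for 10 units/day

print(f"large plant capital: ~${c_large:.1f} million")
print(f"capital per unit/day: small ${c_small:.2f}M vs large ${c_large / 10:.2f}M")
# The larger plant needs ~60% less capital per unit of daily production.
```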
The marginal capital cost per unit of production is dramatically lower for the larger plant- assuming:
there’s a market big enough to consume all the production of the larger plant
there’s feedstock sufficient to feed the larger plant
the product and its feedstocks are both legal and possible to transport by practical means
we’re within the limits of the scaling equation, meaning that each thing we’re using in the plant, simply gets bigger
we’re making a commodity which is fungible, meaning that it’s interchangeable with the same product made elsewhere
We’ve just discovered the reason we do stuff at large scale! It doesn’t matter what undifferentiated fungible commodity product we’re making, as long as it meets our assumptions above (or only bends them a little), every unit of production (every tonne of product, or kWh of electricity etc.) becomes cheaper if we make it in a plant of larger scale.
Limits of Vertical Scaling
Of course there ultimately will be an optimization here too. We rarely think it’s a good idea to make all the world’s supply of any one thing of value in a single plant at one location on earth. That’s putting too many eggs in one basket. Distribution isn’t free of charge, much less free of risk, and logistics limits how far you can move a particular product before the cost of distributing it overwhelms the capital savings. Similarly, the feedstocks are often distributed and their logistics matters too.
Some products- and some starting materials – are too voluminous or unstable or dangerous to make the trip. Doesn’t matter how badly we want to make ozone, for instance, in one centralized plant to make it cheaper, because in 90 minutes, even under ideal conditions, it’s gone- it falls back apart to oxygen again. If you want ozone, you must make it on site and use it as it’s made.
With hydrogen as a feed, unless we’re using a tiny amount, it is generally better (in economic terms) to either set up production near an existing hydrogen plant, or to transport something else to make hydrogen from and then build a small to medium sized plant of our own- because the infrastructure to economically move more than very small quantities of a bulky gas like hydrogen doesn’t exist beyond a few “chemical valley” type situations where large numbers of plants are co-located in the same geography. Bespoke new infrastructure suitable for moving pure hydrogen is very costly and slow to build.
The same with hazardous wastes: we may find it very efficient to process them in one giant plant, but there are often rules about transporting wastes across borders etc that make it impossible to do so.
Vertical scale, within those limits, is king. It’s the reason we have centralized power plants, oil refineries, chemical plants, car manufacturing plants etc., rather than having one in every town, or every home. The resulting economy of scale can pay for considerable distribution infrastructure too- within limits.
Additional Advantages to Vertical Scale
Many other factors tend to generate lower capital costs for larger projects than for smaller ones. The proportions of a project’s cost spent on engineering, permitting, controls and instrumentation, accessory facilities, civil/structural work, utilities etc. all tend to be lower per unit of production rate for larger plants than for smaller ones, with exceptions of course.
When capital cost intensity decreases, so does the incremental cost of improvements to save energy such as heat integration. Whereas small projects often heat using fuel and cool using a cooling utility, heat integration becomes economically possible as projects become larger. And when plants are integrated into even larger facilities, energy integration from one plant to another becomes possible. Plants can share utilities such as steam, such that surplus steam from one plant is used for motive power or heating by another.
Is There Such A Thing As “Too Big”?
Absolutely. At a certain point, things are just too big to build in practical terms. With some pieces of equipment, you get to the point where there’s only one company in the world who would even try, and they get to name their price and delivery schedule. Sometimes, the issue is shipping the finished article to the site. Sometimes it’s a matter of not being able to afford to build the thing in place, because doing so requires basically building a factory with specialized equipment only for the purpose of building the one unit, squandering much of the benefit of greater scale.
All of these factors lead to the conclusion that there is a maximum practical scale for most things. And beyond that maximum practical scale, you’re pioneering- you’re going one larger, and taking onboard all the learnings of doing so on just your project. Future projects might look at your ruins and laugh, or they may benefit from your suffering, but you’re going to suffer either way.
A certain amount of “heroics” in terms of specialized logistics, heavy cranes, special crawler trailers, or site construction, is necessary in any big project. But when a project goes too far, the result can be a higher cost than if you’d simply built two or even four smaller units which didn’t require heroics to the same extent. You can bet that major project teams agonize over these details, in an effort not to become a signpost on the road of project development which says, “go no further”.
In the next article in this series, we’ll discuss what you do when you reach the limits of vertical scaling.
Recommended Reading: “Capital Costs Quickly Calculated” – Chemical Engineering magazine, April, 2009
A study carried out by Open Grid Europe GmbH with the assistance of the University of Stuttgart, paid for by DVGW (Deutscher Verein des Gas- und Wasserfaches- the German Association for Gas and Water), did rather careful, extensive and thorough testing of a wide and characteristic variety of pipeline steels in hydrogen atmospheres at various pressures.
The report draws a shocking conclusion that has been parroted on high by the #hopium dealers at Hydrogen Europe and various other pro-hydrogen lobby groups:
“Hence, all pipeline steel grades investigated in this project are fundamentally suitable for hydrogen transmission.”
Well that’s it- case closed then! All gas transmission pipelines are fundamentally suitable to transmit pure hydrogen! The fossil gas distribution industry is saved! The “sunk cost” of all that infrastructure is rescued! And all those worry-warts like myself who were pointing out the hazards of such a conversion were just wrong!
While I’m totally happy to find out when I’m wrong, so I can change my opinion to be consistent with the measured facts, I’m afraid that in this case, the answer is rather more complex than just “Paul Martin is wrong- gas pipelines are safe for use with hydrogen”.
TL&DR Summary: extensive materials testing in this study proves that molecular hydrogen does cause pipeline materials to fatigue crack faster (up to 30 times faster than they would in natural gas) and to lose as much as 1/2 their fracture toughness (making them more likely to break). But if you reduce the design pressure of the pipeline substantially- to 1/2 to 1/3 of its original design pressure- the gas industry would consider that “safe enough” under the rules intended for designing new hydrogen pipelines. That would of course drop the capacity of the existing gas pipeline by a lot, requiring that either the lower capacity be accepted or the line be “twinned” or replaced if it were switched to hydrogen. And a host of other problems per my previous article on this topic, are also unresolved.
What Was Studied
Modern gas transmission pipelines are generally made of low alloy, high yield strength carbon steels typified by API 5L grades X42 through X100. The study examined steels commonly used in pipeline service in Germany, ranging from mild steels of low yield strength such as historical grade St35 (35,000 psi yield), through API 5L X80 (80,000 psi yield strength), including some steels used in the manufacture of pipeline components such as valve bodies. In many cases, specimens were prepared in such a way that the bulk material of the pipeline, a typical weld deposit and the heat affected zone of the parent metal were all tested. Thorough, careful work.
The specimens were tested in a cyclic (fatigue) testing apparatus which could be filled with hydrogen atmospheres of varying pressures. The major factors examined were fatigue crack growth rate and fracture toughness, because these parameters are known, not merely suspected, to be affected in a detrimental way in these steels by the presence of hydrogen.
What They Found
To hopefully nobody’s surprise, the testing found that the presence of hydrogen does greatly accelerate fatigue crack growth, and significantly negatively affects fracture toughness in the tested steels.
Specifically, they were able to build a good model of the fatigue cracking behaviour of these materials. They found, to quote p. 169 of the study:
At lower stress intensities and hydrogen pressure, crack growth is comparable with crack growth in air or natural gas
At higher hydrogen pressures, crack growth very rapidly approaches the behaviour at a partial pressure of H2 = 100 bar (~ 1500 psi), even at lower stress intensities
The position of the transitional area from “slow” crack growth to H2-typical rapid crack growth (my emphasis) depends on the hydrogen pressure, although it cannot be predicted exactly
They also found that fracture toughness KIC was negatively affected by the presence of hydrogen. Fracture toughness was, as expected, reduced even in low yield strength steels like St35, even when small amounts of hydrogen were added. Fracture toughness was strongly reduced in higher yield strength steels such as L485 (a common modern pipeline steel used in Germany). Even 0.2 atm of H2 dropped fracture toughness greatly, and fracture toughness continued to drop steeply as pH2 was increased.
Hmm…so how did they draw the conclusion that these steels are “fundamentally suitable for hydrogen transmission”?
By comparison against the requirements of the hydrogen pipeline design/fabrication code/standard, ASME B31.12.
The study found that the crack growth rate was consistent with the assumptions used in the hydrogen design de-rating method of B31.12. They also found that in all the steels tested at pH2 = 100 bar, the minimum required KIC value of 55 MPa·m^½ was exceeded.
The TL&DR conclusion here is as follows: yes, hydrogen causes pipeline steels to fatigue crack faster and to lose fracture toughness to a considerable extent, relative to the same steels used in air or natural gas. But that’s okay…because it doesn’t crack faster or lose more fracture resistance than expected in a design code used for dedicated hydrogen pipelines.
A design code that fossil gas pipelines are not designed and fabricated to, by the way!
What Does This Mean? Hydrogen’s Impact on Pipeline Design Pressure
Transmission pipelines are designed, fabricated and inspected in accordance with codes and standards which vary from nation to nation. The common standards in use in the USA, which serve as a reference standard in many other nations, are ASME B31.8 for fossil gas and other fuel pipelines, and ASME B31.12 for bespoke hydrogen pipelines. While the latter do exist (some 3000 km of dedicated hydrogen pipelines in the USA alone), the former are much more extensive (some 3,000,000 km of them in the USA). And if you a) own such a pipeline or b) depend on it to supply the gas distribution network you own, and c) know that without hydrogen, you’ll be out of business post decarbonization, you will be very motivated to conclude that you can re-use your gas pipeline to carry hydrogen in the future. Hmm, sounds like a bit of a potential conflict of interest, no?
In both ASME standards, the design pressure of the pipeline is determined via a modification of Barlow’s hoop stress equation, involving the specified minimum yield strength of the piping (S), the pipe nominal wall thickness (t), the pipe nominal outer diameter (D), a longitudinal joint factor (E), a temperature de-rating factor (T), and a design safety factor (F) which depends on service class/severity and location. For hydrogen per B31.12, a new factor (Hf), a “material performance factor”, is applied to effectively de-rate carbon steel pipeline material design pressure to an extent rendering it (arguably) safe for use with hydrogen:
P = (2 S t / D) × F × E × T × Hf
These helpful tables excerpted from ASME B31.8 and B31.12 were borrowed from Wang, B. et al, I.J. Hydrogen Energy, 43 (2018) 16141-14153
Design factor F, used in both codes, varies between 0.8 and 0.4 in ASME B31.8 based on “location class”, which is based on factors including proximity to occupied buildings.
B31.12 for hydrogen has two design factor tables: one for new, purpose-built hydrogen pipelines, with F values matching those in B31.8 for fossil gas (option B), and one for re-use of pipelines not originally designed to B31.12, which uses a lower (more conservative) table of F values ranging from 0.5 to 0.4 (option A). The latter, option A, would apply to any fossil gas pipeline repurposed to carry hydrogen.
For many existing gas pipelines, repurposing the line to carry hydrogen would require de-rating of the design pressure from the current level, which is often 72% or 80% of specified minimum yield strength, to perhaps 40-50%.
For hydrogen piping, the material de-rating factor Hf ranges from 1 for low yield stress piping materials used at low pressures, to 0.542 for high tensile, high yield strength materials operating at high system design pressures. No such material de-rating factor is required in ASME B31.8 for the design of fossil gas pipelines.
In the extreme case, a pipeline designed and fabricated for fossil gas per ASME B31.8 in a low criticality (class 1 division 1) location far away from occupied buildings, made of a high yield strength steel, would have its design factor reduced from 0.8 to 0.5, and an Hf applied of 0.542. The result would be a reduction in design pressure to 34% of the original value, i.e. a reduction of almost three-fold.
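The extreme case above can be worked through numerically. The pipe geometry and steel grade below are hypothetical placeholders- only the ratio of the two design pressures matters, and the factors (0.8, 0.5, 0.542) are the code values cited above:

```python
def design_pressure(S, t, D, F, E=1.0, T=1.0, Hf=1.0):
    """Barlow-style design pressure per ASME B31.8/B31.12:
    P = (2 S t / D) * F * E * T * Hf."""
    return 2 * S * t / D * F * E * T * Hf

# Hypothetical geometry (36" OD, 0.5" wall, X80-class yield strength):
S, t, D = 80_000, 0.5, 36.0  # psi, inch, inch

p_gas = design_pressure(S, t, D, F=0.8)            # B31.8, class 1 division 1, fossil gas
p_h2 = design_pressure(S, t, D, F=0.5, Hf=0.542)   # B31.12 option A re-use, high yield steel

print(f"H2 design pressure / gas design pressure: {p_h2 / p_gas:.2f}")  # ~0.34
```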
A reduction in design pressure represents a very significant reduction in pipeline energy carrying capacity and would require either “twinning” of the line with new pipe, or replacement with new pipe.
So: Can We Use Existing Gas Transmission Pipelines for Pure Hydrogen?
The answer is much more complicated than a simple yes or no!
Can they be re-used? Maybe- but the pipe material isn’t the only issue. There are many others, covered in my paper here:
(which I will shortly update with this new information in relation to piping materials- that’s why I love LinkedIn as a publishing medium, because it makes updates easy!)
Can they be re-used at their existing design pressure and hence at their existing energy carrying capacity? The answer to that is almost certainly NO. At bare minimum, de-rating of the design pressure would be required, likely to a significant extent. This would necessitate either twinning the line with new pipe to carry the same amount of energy, replacing the existing pipe, or accepting the reduced capacity.
Will they blow up and kill people if used for hydrogen? Well…they will crack much faster, even at reduced stress, and will be much more likely to break, than if they carried fossil gas without hydrogen in it. Gas pipelines are often operated at a pressure which varies with respect to time, cycling frequently, whereas dedicated hydrogen pipelines tend to be run at more constant pressures, resulting in less rapid fatigue. But if the design criteria of a code (B31.12) not used in the design and construction and testing of the original pipe are retroactively applied to the existing pipeline, the industry might consider that to be “safe enough”. The DVGW testing demonstrates that the design assumptions used in the hydrogen pipeline design code to set its “hydrogen design de-rating factor” are met, in metallurgical terms.
Let’s just say, that’s far from a ringing endorsement for the concept. If I were a regulatory body in charge of ensuring that gas utilities keep their pipelines safe, I’d be paying very close attention to any pipeline being re-purposed for hydrogen. The gas industry itself is in at very least a potential conflict of interest in regard to this matter, and the regulatory bodies will need to step up and ensure that if any pipeline is converted to carry hydrogen- even hydrogen blends- that this is done in a way that is truly safe.
There is a popular myth in the marketplace of ideas at the moment: the notion that hydrogen will become a way to export renewable electricity in a decarbonized future, from places with an excess of renewable electricity, to places with a shortage of supply and a large energy demand. It seems that the hydrogen #hopium purveyors are rarely satisfied with the notion that any particular place- my home and native land of Canada for instance- might make enough green hydrogen to satisfy its own needs for hydrogen, but rather, push on to sell the idea that we will become a hydrogen exporter too!
And like all myths, the notion of hydrogen as an export commodity for energy is separated from an outright lie by a couple grains of truth.
The Lands of Renewable Riches
There are places in the world which have huge potential to generate high capacity factor renewable electricity, and which have no significant local use for that electricity. (Hint: that’s not Canada, folks! Any hydroelectricity we have in excess has a ready market in the USA.) This is particularly true of special locations- deserts with oceans to the west- which are also so distant from electricity markets that the option of transporting electricity via high voltage DC (HVDC) is costly and challenging to imagine. Places like Chile, Western Australia, Namibia and other points on the west coast of Africa come to mind. Remember that high capacity factor renewables are essential if green hydrogen production is ever to become affordable- electrolyzers and their balance of plant are unlikely to get cheap enough to ever make cheap hydrogen from just the fraction of renewable electricity that would otherwise be curtailed.
The Energy Beggars
There are also places in the world with large, energy-hungry populations on small landmasses, who aren’t particularly fond of their nearest land neighbours: South Korea and Japan come immediately to mind. The option of importing HVDC electricity via a cable which can be “stepped on” by an unfriendly neighbour every time they’re irritated with you is clearly not appealing, if the lessons of the Ukraine war and Russian gas supply are any guide! And there are numerous other places in the world which don’t want the cost and inconvenience of building out huge infrastructure for renewables with poor capacity factors, which would require broader grids, storage and overbuilding.
These places also have a long history of importing fossil fuels by ship or by pipeline from distant countries- and, usually, a long history of trying unsuccessfully to get un-stuck from that situation for strategic reasons.
The simpleminded approach to decarbonizing their economies is to import chemical energy, just in another form, this time without the fossil carbon- assuming that is both technically possible and affordable- as long as it’s by ship, so they can switch suppliers in an emergency.
Hydrogen Exports to the Rescue!
Matching that obvious source of supply with that obviously thirsty demand seems a no-brainer. And at first glance, hydrogen seems to fit the bill as a way to connect the two. It is already produced at scale in the world: we make 120 million tonnes of the stuff per year as pure H2 and as syngas- though almost all of it is produced from fossil fuels, without carbon capture, right next to where it is consumed.
We do know how to move and store it, though we don’t do much of either. Only about 8% of world H2 production is moved any distance at all, and most hydrogen is consumed immediately without meaningful intermediate storage. And whereas there are about 3,000 miles of hydrogen pipeline in the USA, which sounds like a lot, that compares with 3,000,000 miles of natural gas pipeline in the USA. Most hydrogen pipelines are used for outage prevention among refineries and chemical plants, and to serve smaller chemical users, in “chemical valley” type settings such as the US gulf coast, where you can’t throw a stone without hitting a distillation column. The long distance transmission of hydrogen is, with very few exceptions, basically just not done. It’s not impossible- we do know how to design and build hydrogen pipelines and compressor stations- it just doesn’t make sense to do it, relative to moving something else (natural gas, for instance), and then making the low density, bulky hydrogen product where and when it’s needed.
If you have energy already in the form of a chemical- particularly a liquid- moving that liquid by pipeline is the way to move it long distances with the lowest energy loss, lowest hazard and lowest cost per unit energy delivered. When your energy is already in the form of a gas, it’s almost, but not quite, as good. So at first glance, pipelines look appealing as a way to move hydrogen around- assuming that you already have hydrogen, that is!
The re-use of existing natural gas pipelines for transporting hydrogen, either as mixtures with natural gas or as the pure gas, has been dealt with in another of my papers:
…and so we won’t re-hash the argument here. But I concluded, with good evidence:
The re-use of natural gas long distance transmission pipelines for hydrogen beyond a limit of about 20% by volume H2, is not feasible in most pipelines due to incompatible metallurgy.
20% H2 in natural gas represents about 7% of the energy in the gas mixture, and hence isn’t as significant as it sounds in energy or decarbonization terms.
Hydrogen, having a lower energy density per unit volume than natural gas, consumes about 3x as much energy in transmission as natural gas does in a pipeline, and would require that all the compressors in the pipeline be replaced with compressors of 3x the suction capacity and 3x the power.
We are therefore really talking about using new long distance transmission infrastructure to move hydrogen around. We won’t be able to simply repurpose the old natural gas transmission network, as desperately as the fossil fuel industry wants us to believe we can. We can’t, even if we were to manage to take care of all the problems with the distribution network and all the end-use devices for natural gas that are also not compatible with pure hydrogen.
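The roughly 3x volumetric flow penalty mentioned above falls straight out of textbook lower heating values per normal cubic metre (~10.8 MJ/Nm³ for H2, ~35.8 MJ/Nm³ for methane as a stand-in for natural gas):

```python
# Approximate lower heating values per normal cubic metre (textbook figures):
LHV_H2 = 10.8   # MJ/Nm3
LHV_CH4 = 35.8  # MJ/Nm3, methane as a stand-in for natural gas

volume_ratio = LHV_CH4 / LHV_H2
print(f"H2 needs ~{volume_ratio:.1f}x the volumetric flow to deliver the same energy")
# ~3.3x- hence the roughly 3x larger compressors and 3x the power noted above.
```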
I had a careful look at a recent academic paper, which compared the shipment of hydrogen and other fuels by pipeline, against the shipment of similar energy via high voltage DC (HVDC):
However, the paper commits what I have been calling the 2nd Sin of Thermodynamics: it confuses electrical energy (which is pure exergy, i.e. can be converted with high efficiency to mechanical energy or thermodynamic work) with chemical energy (i.e. heat, which cannot), just because they are both forms of energy with the same units. They’re not equivalent, any more than American dollars and Jamaican dollars are equivalent simply because they’re both money, measured in units of dollars! There’s an exchange rate missing… Note in the figure below that electricity and fuels are compared per unit of LHV (lower heating value). Convert that back to equivalent units of exergy and you’ll see that hydrogen, at $2-4/kg, is vastly more expensive as a commodity than the on-shore wind electricity it would presumably be made from.
The paper’s authors make other confusing choices, such as running the hydrogen at a considerably lower velocity in the line than indicated by normal pipeline design methods, and these choices affect the conclusions considerably. So whereas the energy loss for H2 versus natural gas should be three times as high per unit of energy delivered, they conclude it is actually lower than for natural gas. The losses stated for HVDC, of 12.9% per 1000 miles, are also considerably over-stated relative to the industry’s metrics of performance (see JRC97720 as just one example).
When you consider that the energy loss involved in just making hydrogen from electricity is on the order of 30% best case (relative to H2’s LHV of 33.3 kWh/kg), and that this energy needs to be fed as electricity (work), it soon becomes quite clear that comparing the cost of transmission by pipeline versus HVDC per unit of fuel heating value is quite foolish if what you’re really interested in is the cost to move exergy (the potential to do work) from one place to another. If you start with electricity, the cost of using hydrogen as a transmission medium for that electricity includes an electrolyzer, and a turbine or fuelcell at the discharge end of the pipeline. The pipeline itself isn’t actually the controlling variable!
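To put rough numbers on that electricity-to-electricity chain: the efficiencies below are illustrative assumptions (an optimistic electrolyzer, a good fuelcell or turbine), consistent with the ~30% electrolysis loss just mentioned:

```python
# Illustrative electricity -> H2 -> electricity round trip; efficiencies are assumptions.
electrolyzer_eff = 0.70   # electricity to H2 LHV, optimistic best case
reconversion_eff = 0.55   # fuel cell or turbine, H2 LHV back to electricity

round_trip = electrolyzer_eff * reconversion_eff
print(f"round-trip efficiency: ~{round_trip * 100:.0f}%")
# In the neighbourhood of the ~37% best-case figure- before any pipeline losses.
```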
Another paper I recently reviewed; d’Amore-Domenech et al, Applied Energy, Feb. 2021
This paper looked at both subsea pipelines for carrying 2 GW of energy to distant locations, and at 0.6 GW delivery from offshore to onshore locations. This is getting closer to the sort of thing which might be considered to move hydrogen from North Africa to Europe, or perhaps one day from Australia to anywhere else.
It turns out that both subsea pipelines and HVDC cables on the order of 1000 km, already exist. In fact, much longer HVDC lines are currently under study, including one proposed from Darwin, northern Australia, to Singapore, and another from Morocco to the UK.
The paper’s authors assume that HDPE pipe would be used to transmit the hydrogen at electrolyzer discharge pressures of ~ 50 bar(g), to avoid subsea compressor stations ($$$$$). The pipeline loses hydrogen by permeation through the HDPE pipe (losses of hydrogen- itself a gas with high indirect global warming potential- to the ocean and hence the atmosphere), and the pipe is increased in diameter along its length as the gas expands due to frictional pressure loss.
Sadly, the paper’s authors also commit the 2nd Sin of Thermodynamics, comparing a MWh of delivered electricity (pure exergy) as if it were worth the same as a MWh of hydrogen higher heating value (HHV). This is a rather glaring error that seems to have passed right through peer review without comment, and it affects the conclusions significantly.
The authors include an 80% (state of the art best case) efficiency for converting electricity to hydrogen HHV at 50 bar(g), and look at this over a 30 yr lifetime.
The energy lost over 30 yrs for HVDC is 1.2×10^4 TJ.
The energy lost over 30 yrs for the H2 electrolyzer and pipeline is 1.2×10^5 TJ, i.e. ten times higher.
Despite this, they conclude that the lifecycle cost of transmitting energy in the form of hydrogen is a little lower for a pipeline than for HVDC at > 1000 km in length. That advantage is, of course, entirely cancelled out by the ~50% conversion efficiency and the cost of the device at the end of the pipe required to convert hydrogen HHV back to electricity again- both of which were ignored in the paper entirely. In other words, contrary to their own conclusion, the paper leads us to conclude that HVDC is actually considerably cheaper on a lifecycle basis.
For distances longer than 1000 km, the paper concludes that liquid H2 transport is the better option. We’ll deal with that one next…
We won’t even discuss the shipment of compressed gas in cylinders. A US DOT regulated tube trailer carrying hydrogen at 180 bar(g) (2600 psig), i.e. the biggest tank of hydrogen gas currently permitted on US roads, contains a whopping 380 kg of H2. While one day US DOT may permit pressures to increase to 250 or even 500 bar(g), it should be clear that shipping BILLIONS of kilograms of hydrogen as a compressed gas in cylinders across transoceanic distances is simply a non-starter.
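A quick back-of-envelope makes the point. The 380 kg/trailer figure is from the text above; the billion-kilogram annual quantity is my own illustrative stand-in for a national-scale import market:

```python
# How many tube-trailer loads would it take to move 1 billion kg of H2?
# 380 kg/trailer is the US DOT tube trailer capacity quoted in the text.
KG_PER_TRAILER = 380
annual_h2_kg = 1_000_000_000          # illustrative national-scale demand

loads = annual_h2_kg / KG_PER_TRAILER  # ~2.6 million trailer loads
print(f"{loads:,.0f} trailer loads per year")
```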
Liquid Hydrogen (LH2)
Michael Barnard’s article on the subject is well worth a read.
Here’s my stab at evaluating the export of hydrogen as a cryogenic liquid.
Hydrogen becomes a liquid at atmospheric pressure only at a temperature of about -253 C, or 20 kelvin, i.e. 20 degrees above absolute zero. At that mind-bogglingly low temperature, it is still not very dense. Whereas compressed hydrogen at 10,000 psig (700 bar(g)) is about 41 kg/m3, liquid hydrogen is only 71 kg/m3- the improvement in energy density per unit volume is not spectacular. And whereas compressing hydrogen from the 30-70 bar pressure at the output of an electrolyzer up to 700 bar(g) can be accomplished for about 10% of the energy in the hydrogen (in the form of work, i.e. electricity, mind you!), liquefying hydrogen takes a mind-boggling 25-35% of the LHV energy in the product hydrogen- again in the form of electricity to run the compressors. That compares to ~ 10% for liquid methane (LNG).
Take the exergy of the hydrogen itself into account by applying a 50% conversion efficiency at the destination to turn the hydrogen back into electricity, and- even before counting the energy involved in transporting the liquid hydrogen (i.e. whatever energy it takes to move the ship etc.)- you get a loss on the order of 50-60%. You are making very poor use of the source electricity from which you are making, and then liquefying, hydrogen.
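Here’s a hedged sketch of that arithmetic, per unit of H2 LHV at the source. The 25-35% liquefaction work and 50% reconversion efficiency are the figures from the text; boil-off and ship propulsion are deliberately excluded:

```python
# Loss across the LH2 chain, per unit of H2 LHV made at source.
# energy in  = H2 LHV + electricity to run the liquefier
# energy out = electricity recovered at destination (50% of LHV)

def lh2_chain_loss(liquefaction_frac: float, reconv_eff: float = 0.50) -> float:
    """Fraction of total input energy lost, ignoring boil-off and transport."""
    energy_in = 1.0 + liquefaction_frac   # H2 LHV plus liquefier electricity
    electricity_out = 1.0 * reconv_eff
    return 1 - electricity_out / energy_in

for frac in (0.25, 0.35):
    print(f"liquefier at {frac:.0%} of LHV -> {lh2_chain_loss(frac):.0%} lost")
```

This lands at roughly 60% lost- the top of the range quoted above- and that’s before boil-off or the energy to move the ship.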
Today, we use liquid H2 as a hydrogen transport medium only very rarely. The major use for liquid hydrogen is fuelling the upper stages of rockets (superconducting magnets such as those in NMR machines, for their part, are cooled with liquid helium, not hydrogen). That’s about it- there’s no other meaningful use which justifies the extreme complexity and cost of handling a 20 kelvin cryogenic liquid.
The problems of hydrogen liquefaction are considerable, and very technical. First, hydrogen heats up when you expand it any time you start at a temperature above about -73 C (200 K)- this behavior arises from hydrogen’s unusual negative Joule-Thomson coefficient above its roughly 200 K inversion temperature. That means that if you want to liquefy hydrogen, you first have to cool it down considerably as a gas. Generally liquid nitrogen precooling is used for this purpose, necessitating an air liquefaction plant as part of the works. After precooling, the hydrogen can be liquefied by either a helium refrigeration cycle or a hydrogen Claude cycle (where hydrogen itself is the refrigeration fluid).
(image source: Linde)
The energy input required is considerable as a result of the difficulty of rejecting heat to the ambient world when starting at such a low temperature. And although that would be bad enough, hydrogen has another wrinkle: spin isomerization. The nuclear spins of the two protons in a hydrogen molecule can be either aligned (ortho) or opposite (para). When you condense gaseous hydrogen, you get a mixture of about 75% ortho- and 25% para-hydrogen. As the liquid sits in storage, ortho gradually converts to para, releasing heat. And that released heat escapes the only way it can- by boiling hydrogen you’ve spent so much energy to cool and condense. A catalyst is required to carry out the conversion more quickly so the heat can be removed prior to storage, rather than causing excessive boil-off while the H2 is being stored.
Keeping heat out of liquid hydrogen at 20 kelvin, however, is easier said than done. Vacuum insulated “dewar” type tanks can be constructed, and for applications like this, spherical containers are the optimal shape, with the lowest surface area per unit volume. A land-based LH2 dewar tank about as big as you can make it reportedly has excellent performance, with only 0.2% of the hydrogen in the tank boiling off each day. Any tank smaller than that, or of a less optimal cylindrical shape, allows even MORE than 0.2% of the hydrogen to boil off per day. And in transit, on a ship or truck, recapture and re-condensation of the boil-off gas is not possible. The best you can do is to burn it, hopefully as a fuel, or if in port, to flare it to prevent it from becoming a greenhouse gas- H2’s global warming potential (GWP) is at least 11x that of CO2 on the 100 yr time horizon, and even higher on the more relevant 20 yr time horizon.
Once you get to the size of tank possible to put on a truck, 1% boil-off per day is about the best you can do. Want to make it worse? Just use a smaller tank!
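To see what compounding boil-off means in practice: the 1%/day figure is from the text above, while the 20-day transoceanic voyage is my own illustrative assumption:

```python
# Compounded boil-off over a voyage: each day you lose a fixed fraction
# of whatever liquid hydrogen remains.
daily_boiloff = 0.01   # 1%/day, truck/ship-scale tank (from the text)
days = 20              # assumed transoceanic voyage length (illustrative)

remaining = (1 - daily_boiloff) ** days
print(f"{1 - remaining:.1%} of the LH2 cargo lost en route")
```

Nearly a fifth of the cargo is gone before it arrives, and the best you can hope for is to burn the boil-off usefully as ship fuel.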
Hydrogen’s low density, even as a liquid, is another problem. Liquid hydrogen, at 2800 kWh/m3 HHV, contains only about 44% of the HHV energy per unit volume of liquid methane (6300 kWh/m3), i.e. LNG. On an LHV basis, i.e. if we need work or electricity at the destination instead of heat, it’s even worse- 2364 kWh/m3 for hydrogen versus 9132 kWh/m3 for LNG, i.e. about ¼ the energy density per unit volume. That means either larger energy cargo ships, or several ships, to carry the same amount of energy- even if boil-off is managed.
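Restating those volumetric figures as ratios (the numbers are the ones quoted in the text; the arithmetic is the only addition):

```python
# Volumetric energy density ratios, LNG vs liquid hydrogen, using the
# kWh/m3 figures quoted in the text.
LH2_HHV, LNG_HHV = 2800, 6300   # kWh/m3, higher heating value basis
LH2_LHV, LNG_LHV = 2364, 9132   # kWh/m3, lower heating value basis (text figures)

print(f"HHV basis: LNG carries {LNG_HHV / LH2_HHV:.2f}x the energy per m3")
print(f"LHV basis: LNG carries {LNG_LHV / LH2_LHV:.2f}x the energy per m3")
```

So per cubic metre of cryogenic cargo tank, you move two to four times as much energy as LNG as you would as LH2.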
Converting Hydrogen to Other Molecules for Shipment
Confronted with these obvious difficulties, which make hydrogen rather a square wheel for the transport of energy across transoceanic distances, hydrogen proponents don’t give up! Naturally, they try to shave the corners off hydrogen’s square wheel by converting it to another molecule with more favourable transport properties. The four main candidates are ammonia, methanol, liquid organic hydrogen carriers (LOHCs), and metal hydrides. We’ll take these one at a time.
While making green ammonia to replace the black ammonia we rely on to feed about half the humans on earth is inarguably a high merit order use for any green hydrogen we can afford to make in the future, some have gone further, suggesting ammonia as a vector by which hydrogen itself may be transported.
Ammonia is discussed in some detail in my paper here:
The advantage is that it is made from nitrogen which can be collected anywhere from the air. The downsides are many:
Heat is released at the point of manufacture, where energy is already in excess, hence it is likely this energy will be wasted
The Haber-Bosch process, while efficient after ~ 110 yrs of optimization, must be operated continuously to have any hope of being economic. It is high pressure and high temperature, and hence not suitable to cyclic operation as energy supply rises and falls. This necessitates considerable hydrogen storage if the feed source is renewable electrolysis
Breaking ammonia apart again to make hydrogen takes heat, at the place where you’re short of energy, and at fairly high temperature (so waste heat from fuelcells isn’t likely to be useful)
Ammonia is a poison to fuelcell catalysts
When burned in air, ammonia generates copious NOx, requiring yet more ammonia to reduce these toxic and GWP-intensive gases back to nitrogen again (NOx consists of N2O- a 300x CO2 GWP gas which is persistent in the atmosphere, NO- a transient species, and NO2, the toxic one which is water soluble and not persistent in the atmosphere but a precursor of photochemical smog etc. Burn ANYTHING- hydrogen, ammonia, gasoline, your old boss’s photograph etc., and you get all three)
Ammonia itself is dangerously toxic, especially in aquatic environments
Large shipments of ammonia would be insidious targets for terrorism
Cycle efficiencies, starting and ending with electricity, for processes involving ammonia, are on the order of 11-19%, meaning that you get 1 kWh back for every 5-9 kWh you feed
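That 11-19% round-trip range can be restated as kWh fed per kWh delivered- pure arithmetic on the text’s numbers:

```python
# Ammonia-based electricity-to-electricity schemes: restate round-trip
# efficiency as input electricity required per kWh delivered.
for eff in (0.11, 0.19):
    print(f"{eff:.0%} round trip -> {1 / eff:.1f} kWh fed per kWh delivered")
```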
Because substantially all ammonia used in the world today is of fossil origin- made from black hydrogen, itself made from fossils with methane leakage and without carbon capture- and because its use literally feeds the people of the earth, I see any use of ammonia as a fuel before black ammonia is replaced with green ammonia as basically energetic vandalism. It has an objective clearly different from decarbonization, in my view.
Methanol is currently made exclusively from syngas (mixtures of H2 and carbon monoxide) produced by reforming natural gas or gasifying coal. It can also be made from an artificial syngas, produced by running the reforming reactions backward: starting with CO2 and H2 and catalytically producing CO and H2O. While that energy loss- generating water by basically “un-burning” CO2- is substantial, as long as a CO2 source of biological or atmospheric origin can be used, methanol has a series of attractive properties:
It is a liquid at room temperature, not just a liquefied gas, so its cost of storage is very low per unit energy (though tanks do need inerting, which is unnecessary for gasoline or diesel)
It is toxic, but nothing even close to the toxicity of ammonia
Its energy density is lower than that of gasoline and diesel, but once made, it is considerably more favourable as an energy transport or storage medium than ammonia or hydrogen
It may be reformed at modest conditions back to synthesis gas again
It is a versatile chemical used to make many other molecules, including durable goods such as plastics, and if we are not foolish enough to burn those materials at end of life, it can be a mechanism for carbon sequestration
The big challenge for methanol is that source of CO2. Direct air capture wastes too much energy in a needless fight against entropy, so forget about it as a source of CO2 to make methanol, in my opinion. Unless a concentrated source of non-fossil CO2 (a brewery, anaerobic digester or biomass combustor) is co-located with the source of electricity and hence hydrogen, the shipment of liquid CO2 by sea as a methanol feedstock replicates many of the economic challenges of LNG and liquid hydrogen.
While making green methanol is also a clearly no-regrets use of any green hydrogen we may happen to make, methanol as an “e-fuel” is a challenging issue for the above-noted reasons. Obtaining decent economics per delivered joule would seem very challenging indeed. Therefore, the hopes of companies like Maersk that they will be able to fuel their ships on fossil-free methanol in the near future, seem perhaps decades premature at best.
The use of methanol as a vector for the transmission of hydrogen for use as hydrogen, makes no sense to me at all. Reforming the resulting CO back to CO2 and more H2 again using water is possible, but too costly and lossy to make energetic sense to me.
Liquid Organic Hydrogen Carriers (LOHCs)
These are liquid organic molecules like methylcyclohexane, which can be dehydrogenated to produce hydrogen and toluene. The toluene, also a gasoline-like liquid, can be shipped back to wherever hydrogen is in excess, and hydrogenated to produce methylcyclohexane again. Numerous molecule pairs are candidates, each with its suite of benefits and disadvantages.
The big disadvantages of LOHCs are similar to those of ammonia:
Parasitic mass is considerable – for MCH/toluene, only 6% of the mass of MCH is converted into hydrogen at destination, and the other 94% of the mass has to be shipped in both directions. On this basis alone, LOHCs are not good candidates as transportation fuels (i.e. fuels for use to move ships, trucks etc.) in my view
Like with ammonia, heat is produced at the place where you have energy in excess, and energy is required (again at high temperature) to supply the endothermic heat of dehydrogenation at destination. The temperatures required are too high for waste heat to be used
There will inevitably be some loss of the molecules in each step. Yields will never be 100%
Considerable capital and operating/maintenance cost will be required at both ends, for the hydrogenation/dehydrogenation equipment. These are chemical plants, not simple devices like fuelcells or batteries, and hence they will be economical only at very large scale if ever
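The parasitic-mass point in the first bullet can be checked with simple stoichiometry- methylcyclohexane (C7H14) dehydrogenates to toluene (C7H8) plus 3 H2. Standard atomic masses; the worked check is mine:

```python
# H2 mass fraction delivered by the MCH/toluene LOHC pair:
# C7H14 -> C7H8 + 3 H2
M_C, M_H = 12.011, 1.008
m_mch = 7 * M_C + 14 * M_H        # ~98.2 g/mol of MCH shipped outbound
m_h2_released = 3 * 2 * M_H       # ~6.0 g/mol released as 3 H2

print(f"H2 is {m_h2_released / m_mch:.1%} of the shipped MCH mass")
```

The other ~94% of the mass- the toluene skeleton- rides the ship in both directions, delivering nothing.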
LOHCs don’t seem to have a good niche in my view. They are useless as sources of hydrogen for transport, below the size of perhaps a ship. While some, such as Roland Berger in a recent report:
…tend to conclude that LOHCs are a better way to do “last mile” transport of hydrogen under certain circumstances than some of the other options, that is again really a desperate reaction to the impracticality of hydrogen itself as an energy distribution vector, rather than a vote of confidence in the technology itself.
Solid Metal Hydrides
Hydrogen reacts with the alkali metals (Li, Na) and alkaline earth metals (Ca, Mg), as well as with aluminum and other elements, to form hydrides, i.e. compounds in which hydrogen is present as the H- ion. These hydrides can form at the surface of the metals, providing a means of “chemisorption” for storing hydrogen at lower pressures than required for pure compressed gas. However, the price of the lower storage pressure is much higher parasitic mass- useless for transport applications- and the need for heat (generally provided by electric heating) to desorb the hydrogen when required.
The hydrides themselves can also be made as pure solid substances, such as “alane” (AlH3), magnesium hydride (MgH2) or NaBH4 (sodium borohydride). These metal hydrides react with water, producing twice as much hydrogen as is found in the original hydride molecule. For instance:
MgH2 + 2 H2O ⇒ 2 H2 + Mg(OH)2
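A little stoichiometry on that reaction (standard atomic masses; the worked check is my own addition) shows both the doubling and the parasitic mass problem:

```python
# MgH2 + 2 H2O -> 2 H2 + Mg(OH)2
# Half of the delivered hydrogen comes from the water, not the hydride.
M_Mg, M_H = 24.305, 1.008
m_mgh2 = M_Mg + 2 * M_H           # ~26.3 g/mol of hydride shipped
h_in_hydride = 2 * M_H            # hydrogen carried in the hydride itself
h_delivered = 2 * 2 * M_H         # 2 mol H2 released per mol MgH2

print(f"MgH2 carries {h_in_hydride / m_mgh2:.1%} H by mass,")
print(f"but delivers {h_delivered / m_mgh2:.1%} of its shipped mass as H2")
```

Even with the water’s contribution, you ship roughly 6.5 kg of hydride per kg of hydrogen delivered- and then you have to ship the hydroxide somewhere to be regenerated.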
Sadly, there’s the rub: aside from the considerable problem of parasitic mass, in each case, the re-formation of the original hydride involves two steps:
Production of the metal again from its hydroxide, and
Production of the hydride by reaction with hydrogen at high temperature and pressure
The energy cycle efficiency of all such schemes involving metal hydride reactions with water is therefore dismal, tending to be in the single digits, because the process of re-making the metal and then the hydride is so energy-intensive. Wasting 10 joules merely to deliver 1 joule at destination is not something we’re going to do at scale- or at least that’s my hope!
The export of hydrogen- either as hydrogen itself, or as molecules derived from hydrogen for use directly as fuels or as sources of hydrogen to feed engines or fuelcells- is an idea which, although technically possible, is extremely difficult to imagine becoming economic. Given the energy losses, capital costs and other practical matters standing in the way, the enthusiasm for hydrogen or hydrogen-derived chemicals as vectors for the transoceanic shipment of energy seems rather more a result of #hopium addiction being spread by interested parties than the product of sound techno-economic analysis.
What Should We Do Instead?
It’s clear to me that the opportunity of high capacity factor renewables from hybrid wind/solar installations along the coasts in places like Chile, Western Australia etc. is considerable, and so is the potential for these green energy resources to decarbonize our society.
In my view, however, we’re thinking about it wrong.
We should be thinking about Chile, western Australia etc., becoming hubs for the production of green, energy-intensive molecules and materials- things that we need at scale, which represent large GHG emissions because we currently make them using fossil energy or fossil chemical inputs. The list includes:
Ammonia, and thence nitrate and urea, for use as fertilisers (NOT as fuels!)
Methanol, for use as a chemical feedstock, again not as a fuel
Iron- hydrogen being used to reduce iron ore to iron metal by direct reduction (DRI)- which can then be made into steel at electric arc mini-melt mills wherever the steel is needed
Aluminum, and perhaps one day soon, magnesium too- neither of which involve hydrogen really, but both of which will need electricity in a big way if we want to decarbonize them
Cementitious/pozzolanic materials- though these are such bulky and low value materials that shipment across transoceanic distances is hard to imagine we’ll be able to afford
Who knows- maybe diamonds and oxygen! (Just kidding!)
For locations such as north Africa, the obvious solution is to skip the hydrogen, and indeed the molecular middleman entirely, and simply export electricity via HVDC directly to Europe. Although that doesn’t address the need for energy storage, the renewable resources proposed for the manufacture of economical green hydrogen already imply high capacity factors, and proximity to the equator makes their seasonal variation considerably lower as well. Clearly, in my view, making hydrogen simply to permit electricity to be stored for later use is very hard to justify, given that the best case cycle efficiency of hydrogen itself- before long distance transport and distribution are taken into account- is on the order of 37%. That is far too lossy a battery to be worth major investment. Drop that even further by adding lossy steps like hydrogen liquefaction or interconversion to yet other molecules, and it looks just too bad to take seriously.
What About Fossil Energy Importers?
Countries like Japan and South Korea, frankly, are in big trouble in a decarbonized future, especially if they make themselves dependent on importing energy in the form of hydrogen or hydrogen-derived molecules. What kind of cars they drive is really irrelevant: the energy-intensive industry that is the basis of their economies, will simply need to move offshore, given that their economic competitors would be using energy which costs 1/10th as much per joule, and using that energy directly rather than through a lossy middleman. Either that, or they’ll need to switch to a service economy and focus on extreme energy conservation- which might be best.
However, what concerns me is that neither the Japanese nor the Koreans are ignorant in these matters. If I saw both countries building out offshore wind generation like mad, or even going nuts building new nuclear plants, perhaps I’d believe that their interest in decarbonization via hydrogen was truly in earnest- intended to sop up, even at great cost, the residual that they can’t manage to supply locally as electricity. Rather, the focus on hydrogen looks more like an attempt to put off the energy transition until some future date when hydrogen becomes “economic” as an option, burning fossils and fooling around with meaningless pilot projects in the meantime (JERA burning ammonia in 30% efficient coal-fired power plants, anyone? Or worse still, this brown coal gasification with liquid hydrogen shipment nonsense?). Because, frankly, looking at the various importation options, the future in which hydrogen as an energy transport vector becomes “economic” across transoceanic distances is likely “never”, relative to more sensible options.
Disclaimer: whereas I always try to be accurate, I’m human and therefore fallible. If you find anything wrong in my article, which you can demonstrate to be wrong via good, reliable references, I’ll be happy to correct it. That’s why I publish on a vehicle like LinkedIn, rather than in journals that remain unedited and therefore preserve my errors in amber!
Oh, and if you don’t like my opinion on these matters, by all means feel free to contact my employer, Spitfire Research Inc.
The president (i.e. myself) will be happy to tell you to get lost and write your own article, with even better references, if you disagree.
As we found in Part 1, conventional hydrogen production from natural gas using steam methane reforming (SMR) coupled with carbon capture and storage (CCS) is easily written off as a waste of everyone’s time and money. It’s fooling nobody. Because nearly half the CO2 emissions come from combustion in the tube furnace and other combustion equipment, there’s no more benefit to going after high CO2 captures off this equipment than from fossil gas power plant flues or the like. We’d need something new such as SMR run with renewable electric heating to give SMR a shot at survival into a decarbonized future.
Undaunted, hydrogen-as-a-fuel advocates reach for another technology: autothermal reforming (ATR). ATR isn’t something new or unknown: it is a process used at very large scale today when high carbon monoxide (CO) to hydrogen syngas mixtures are required for processes like Fischer Tropsch or methanol synthesis.
Autothermal reforming works the same way, in thermodynamics terms, that SMR works: you heat up a mixture of methane and steam to high temperatures and pass it over a catalyst, so that endothermic reactions can transform it into a mixture of CO and H2. But whereas SMRs combust fuel and transfer the heat through catalyst tubes in a tube furnace to the syngas mixture inside them, in an ATR pure oxygen is fed with the steam and methane feed in a special burner or partial oxidation catalyst inside a refractory lined pressure vessel. The heat is produced by in situ partial combustion, and the feed stream is massively overheated before being passed over the catalyst rather than supplying heat as the reactions proceed. The result is that some of the product hydrogen and CO are also combusted and hence wasted, but all the combustion products (CO2 and water) are contained in the syngas product and hence are easier to capture and remove. The downstream steps: heat recovery, water/gas shift reactor(s) and gas purification are otherwise identical to the SMR, with the exception that there’s now no fuel-hungry tube furnace to dump the gas purification system’s waste hydrogen into.
The process is, as already mentioned, less efficient than SMR if the target is pure hydrogen. Actual “efficiency” figures given are usually polluted with values being given to “export steam” that make comparisons challenging, but that ATR is less energy efficient at making H2 than SMR, assuming CO2 can be disposed of the good old way, to the atmosphere, is not in doubt. The process is currently optimized for the production of higher CO/H2 ratio syngas mixtures which are then blended with SMR output to produce the desired ratio for F-T, methanol or other syngas uses, because ATR can do that and SMR can’t without a risk of soot generation and other problems. Air-blown ATR is also used in a multi step reformer package to make H2/N2 mixtures as a feed for the Haber Bosch ammonia process.
Of course ATR is reached for by blue hydrogen advocates as soon as somebody realizes that the best possible capture from SMR, without major changes, is going to be about 50%- even ignoring methane emissions, due to SMR’s horrible burner box. So: how “blue” can we make an ATR?
If you a) take a few hits from the #hopium pipe, b) ignore methane emissions and c) restrain the project to using only renewable/nonemitting energy sources to run the CCS equipment, the answer is “as blue as you can afford!”.
In purely technical terms, there is no problem at all to remove 99.999% of the CO2 from the resulting hydrogen stream. If that were NOT possible, the product hydrogen would never meet fuelcell specifications which require the total of CO + CO2 to be below 10 ppm to avoid killing the fuelcell’s catalysts. Of course “removal” is just the first and easiest of the steps- you next need to capture, purify and store the CO2 away permanently.
Recall that the Shell Quest project targeted an 80% capture of the CO2 in the syngas mixture from the SMR, which it achieves, except when it doesn’t. That’s a fairly easy capture target from a stream with a high partial pressure of CO2 at a modest absolute pressure, and of course it’s only 80% of about 1/2 the CO2 emitted by the SMR. But as you increase the fraction of the CO2 you want to capture, two things increase: 1) energy use and 2) hydrogen wasting. The former is true in any capture process, arising from an increasingly fraught battle against entropy. While the thermodynamics and energetics are complex and vary greatly from flowsheet to flowsheet, to a first approximation, the energy to remove each order of magnitude of mass is about equal, i.e. 99% capture costs twice as much energy as 90%- for capture, not for capture and storage.
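That “each order of magnitude costs about the same energy” rule of thumb can be sketched as follows- this is just the first approximation described above, not a process simulation:

```python
# Relative capture energy modeled as proportional to the number of
# "decades" of CO2 removed: -log10(1 - capture_fraction).
# First-approximation scaling only, per the rule of thumb in the text.
import math

def relative_capture_energy(capture_fraction: float) -> float:
    """Energy relative to a 90% capture (one decade of removal)."""
    return -math.log10(1 - capture_fraction)

for f in (0.90, 0.99, 0.999):
    print(f"{f:.1%} capture -> {relative_capture_energy(f):.0f}x the energy of 90% capture")
```

Each extra “nine” of capture costs as much again as everything before it- entropy does not give discounts.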
The hydrogen mass loss is likely the more challenging problem, and not just because of the need to throw away valuable product. Removing hydrogen from the captured CO2 would require yet more equipment, energy and flowsheet complexity. Unlike in the “good old days”, when hydrogen was treated as something you could vent without consequence, we now know that hydrogen itself is a GHG with GWP between 11 and 60x that of CO2. And whatever CO2 you don’t capture in your sequestration train is going to come out in a swamp of H2 and other gases from the gas purification pressure swing adsorption (PSA) unit. Without combustion equipment to easily use this material, and with the option of recompressing and feeding this stuff into the ATR feed being unpleasant for a number of reasons, it is likely that one would be tempted to just burn and vent the flue gas in some unabated combustion unit somewhere. That of course means you will have unabated CO2 emissions coming from that combustion stack.
The real killer to this idea however is those pesky initial assumptions we made when we were high on #hopium.
We might, because of foolish or ignorant regulation, be permitted to just ignore or discount the methane emissions, as is clearly the case with the Shell Quest project which doesn’t even mention them in its public reporting. Remember that methane emissions take Quest’s net CO2 capture from 35% to a real CO2e capture around 21%- from poor to positively craptastic. But the atmosphere won’t give us the luxury of accepting the design philosophy of Mediocrates, whose response to criticism is, “Meh- good enough!” Global warming potential (GWP) is GWP, whether it’s from CO2 or methane emissions. You can’t just pretend that methane doesn’t matter!
If you remember from Part 1, methane leakage at the world average rate of 1.5% adds about 4 kg of CO2e for every kg of H2 produced, i.e. methane leakage adds about 40% to the ~ 10 kg of actual CO2 released per kg of H2 from an unabated SMR. So if you’re looking for CO2e capture (the total of CO2, methane, N2O and other GHGs from the process) to be very good, you basically would be forced to use the very best, lowest leakage fossil gas sources on earth, i.e. Norwegian gas- the gold standard as far as leakage goes, with leakage below 0.05%. My suggestion would be to forget about any blue hydrogen production from LNG, irrespective of its source- you need access to excellent gas via pipeline, because the leakage/venting from the extra steps to liquefy, transport and revapourize LNG will take you over the top.
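Here’s a rough reconstruction of that ~4 kg CO2e figure. The 1.5% leakage rate and ~70% LHV efficiency come from the article; the fuel heating values and the ~83x 20-yr GWP for methane are standard values I’ve assumed:

```python
# CO2e from upstream methane leakage, per kg of H2 made by SMR/ATR.
H2_LHV, CH4_LHV = 120.0, 50.0   # MJ/kg, standard lower heating values
EFF = 0.70                      # gas LHV -> H2 LHV efficiency (from Part 1)
GWP20_CH4 = 83                  # 20-yr GWP of methane, assumed standard value
leak_rate = 0.015               # 1.5% world-average leakage (from the text)

ch4_consumed = H2_LHV / EFF / CH4_LHV               # ~3.4 kg CH4 per kg H2
ch4_leaked = ch4_consumed * leak_rate / (1 - leak_rate)
print(f"{ch4_leaked * GWP20_CH4:.1f} kg CO2e per kg H2 from leakage alone")
```

On the 20-yr horizon this reproduces the ~4 kg CO2e/kg H2 quoted above, before a single molecule of process CO2 is counted.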
Remember also that you need green electricity, and lots of it, to provide the capture and sequestration energy. ATR will generate the needed surplus heat as a result of its reduced efficiency relative to SMR, but you’ll need lots of green electricity- backed up with storage- to be able to run your ATR continuously (the only way hot units like this can run). You’ll need it to run the air separation plant, and the capture equipment, and the storage and CO2 transport compressors. Of course if you were a moron, you’d run this equipment off your product hydrogen, and watch your economics swirl down the toilet.
Finally, we can’t forget about carbon sequestration, i.e. storage. Remember that Shell Quest has an ideal storage reservoir only 60 km away from the plant, 2 km below ground. Because of the economics of CO2 sequestration, we’ll need to be near a sequestration reservoir with enough capacity to take the effluent from our plant over its entire 30+ yrs of design life. And NO, we can’t accept enhanced oil recovery use for this CO2, because if we do, the atmosphere gets not only the CO2 released when the recovered oil is burned, but also about 40% of the injected CO2 back again when the oil comes to the surface. EOR is not CCS!
So: to make ATR capable of making truly blue hydrogen, any project has to line up just exactly right. It must have:
1) pipeline access to ultralow leakage fossil gas of adequate capacity
2) pipeline access to a nearby disposal reservoir of adequate capacity (forget about liquefying and shipping the CO2, that’s just going to break the bank and blow the CO2 emissions budget by time you’re done)
3) access to high capacity renewable electricity, backed up with storage to permit continuous operation (which raises the obvious question- why not just make green hydrogen!?!?!)
When I ran the numbers assuming 95% CO2 capture, 0.05% methane leakage, a world class ATR efficiency, and renewable electricity running the entire carbon capture and storage infrastructure, I was able to get the emissions down to about 1.2 kg of CO2e per kg of hydrogen produced- only about 20% more CO2e-emissive than what green hydrogen made by electrolysis from renewable electricity can already achieve today.
As to the cost of doing this, let’s just set that aside and dream the dream for a moment!
Hmm, are we seeing any ideal locations on earth yet- locations which meet all these criteria- to make truly blue hydrogen?
Are we seeing enough of them to pump up the deflated dream of wasting hydrogen as a natural gas replacement fuel?
I am however seeing lots of places which meet a couple of those criteria, who are hoping that nobody notices that the resulting hydrogen is still very blackish blue and bruise coloured, and hoping that a credulous government somewhere will turn a blind eye and write a blank cheque.
As to the notion of this kind of project being “transitional” while we wait for green hydrogen to get cheaper, please remember that this is nothing like bolting a CCS unit onto an existing SMR plant already feeding black hydrogen to a refinery or upgrader somewhere. We’re talking about bespoke new equipment, at scale so it’s cheap enough per kg, with a design life of at least 30 yrs. Equipment that doesn’t make economical hydrogen relative to today’s standard, but which is built this way simply because it allows the energy of fossil gas to be made into fossil hydrogen with CO2e emissions that some government regulator somewhere finds tolerable. There’s no way anybody is going to build this stuff unless they’re guaranteed to be able to run it for at least those 30 yrs- unless somebody else is paying all the capital.
Finally, a reminder that even black hydrogen is not a cheap fuel. Wholesale Henry Hub gas on the US gulf coast has averaged around $3.50/MMBTU over the past 10 yrs or so. Such gas can be made into saleable wholesale black hydrogen for about $1.50/kg- the aspirational target price for green hydrogen in, say, 2040 if you’ve had a few hits from the #hopium pipe, and 2050 or beyond (or never) if you’re skeptical like me. That hydrogen already costs $11/MMBTU. While I know full well that North Americans have access to incredibly cheap gas, and by world standards $11/MMBTU doesn’t sound like a very high price, I remind you that this is a wholesale cost and includes zero cost for storage or distribution. Those costs are going to be high, are going to be higher than for fossil gas for reasons of basic physics, and will be much higher than many imagine because the existing fossil gas infrastructure cannot be re-used for its distribution.
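Checking that price conversion: the $1.50/kg figure is from the text; the hydrogen HHV of 141.8 MJ/kg and the MJ-per-MMBTU conversion are standard values I’ve assumed:

```python
# Convert a hydrogen price in $/kg to $/MMBTU on an HHV basis.
H2_HHV_MJ_KG = 141.8        # MJ/kg, standard HHV of hydrogen
MJ_PER_MMBTU = 1055.06      # standard unit conversion
price_per_kg = 1.50         # $/kg, wholesale black H2 (from the text)

price_per_mmbtu = price_per_kg / (H2_HHV_MJ_KG / MJ_PER_MMBTU)
print(f"${price_per_mmbtu:.0f}/MMBTU")
```

That lands at the ~$11/MMBTU quoted above- three times the fuel cost of the gas it was made from, before storage and distribution.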
Personally I think that blue hydrogen should be taken off the barbeque because it’s been charred on all sides.
And while some governments are waking up to the same conclusion, as a result of the efforts of the @Hydrogen Science Coalition and numerous other groups, we can’t forget that blue hydrogen is of existential importance to the fossil fuel industry. Blue hydrogen is their “get out of jail” card in the energy Monopoly game. Even as a mere idea, rather than an actual technology, blue hydrogen allows fossil gas producers and distributors to pretend that they have a future in the energy supply market post decarbonization. They will not give up on this idea easily, and economics will drive them to put makeup over the bruises rather than making hydrogen truly blue, because the latter is very geographically limited and will also be very costly. It’s my hope that you won’t be fooled.
Disclaimer: everything you’ve just read was written by an ordinary human being, fallible and capable of error in the most mundane ways. If you find something that I’ve done wrong, and can provide references or calculations which demonstrate where I’ve gone wrong, I’ll be grateful and happy to correct my work.
If what I’ve written makes you angry merely because it puts your future ride on the fossil fuelled gravy train in doubt, then be sure to take it up with Spitfire Research.
My readers will know that I have never liked the “colours of hydrogen” meme that has been spread by the hydrogen-as-a-fuel lobby.
There is only really one kind of hydrogen in the world right now.
Hydrogen- 98.7% of it by generous estimate- is made from fossils, without meaningful carbon capture. It is, at barest minimum, 30% blacker per joule, in CO2 and methane emissions, than the source fuel it was made from. That 30% best case corresponds to about 10 kg of fossil CO2 emitted per kg of hydrogen produced, i.e. a 70% efficiency of converting natural gas lower heating value (LHV) to hydrogen LHV. Start with lignite (brown coal) and it can be much worse- 30 kg CO2 per kg H2.
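The 10 kg figure is easy to reconstruct from the 70% efficiency. A minimal sketch, using standard heating values and the stoichiometric CO2 yield of methane combustion:

```python
# Sketch of the ~10 kg CO2 per kg H2 figure for SMR at 70% LHV efficiency.
H2_LHV = 120.0               # MJ/kg, lower heating value of hydrogen
CH4_LHV = 50.0               # MJ/kg, lower heating value of methane
CO2_PER_CH4 = 44.0 / 16.0    # kg CO2 per kg CH4 fully oxidized (= 2.75)

efficiency = 0.70            # natural gas LHV -> hydrogen LHV

gas_fed_MJ = H2_LHV / efficiency       # MJ of methane fed per kg H2
gas_fed_kg = gas_fed_MJ / CH4_LHV      # kg of methane fed per kg H2
co2_per_kg_h2 = gas_fed_kg * CO2_PER_CH4
print(f"{co2_per_kg_h2:.1f} kg CO2 per kg H2")   # → about 9.4, i.e. ~10 in round numbers
```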
By definition, that’s not brown, or gray. That’s black. In fact, it’s ultra-black. It’s black-hole black.
We make up for that by not wasting hydrogen as a fuel, of course. Very little hydrogen is wasted as a fuel today. It is made, and used, as a chemical- sometimes to desulphurize or deoxygenate other fuels, and sometimes as a component in making molecules like methanol, which are themselves sometimes used as fuels or to make fuels (e.g. biodiesel).
I love memes- they can be an extremely effective communication tool, and as my friend Alex Grant says, “money follows memes”- people invest in memetic ideas, sometimes for good, sometimes carelessly, and often using taxpayer money.
When people use memes, especially when they use them carelessly or illegitimately, it’s fun to riff on them, as I’ve done with my headline. Sometimes it is a very useful way to get the opposing point of view across.
The hydrogen-as-a-fuel lobby have a pretty colour euphemism for another way to make hydrogen. If you make it from a fossil energy resource, but capture (some of) the CO2 released and dispose of it in some durable way, they call that “blue” hydrogen. And remember what “blue” hydrogen is, after all: it’s the last grand scam of the fossil fuel industry. It’s the only way the fossil fuel industry can pretend to have a future in the energy system post-decarbonization, aside from as a materials and chemicals supplier (about 15-25% of their current business value). Not a happy prospect, if you’re in that business, seeing your revenue shrink by 75-85%! So, any port in a storm as they say, and “blue hydrogen” to the rescue! Since carbon capture at every point source user of fossil fuels- or even at a multitude of the larger ones- is an economic and practical absurdity, the simpleminded idea is as follows:
convert natural gas to hydrogen centrally
capture the CO2, disposing of it (hopefully by enhanced oil recovery so we get paid twice)
sell hydrogen as a fuel
party like it’s 1989, before we really thought AGW was worth bothering with
Truly rare for an academic paper, this one had the heads of certain “hydrogen is fossil fuel’s great salvation” advocates spinning like that of the child in the Exorcist…
(warning: language and religion are both scary…)
But I’m asking a more finessed version of the same question: how blue is blue hydrogen? How blue can it really ever be? And the way I’m going to do that is a little different than what Bob and Mark did in their paper. I’m going to look at a “real”-ish “blue hydrogen” project. Not a pilot project- one done at considerable scale, which buries 1 million tonnes per year of CO2 and doesn’t try to pretend that using CO2 for enhanced oil recovery is real carbon storage, either- that’s the #1 play in the fossil fuel playbook where CCS is concerned.
And there is one to look at, but only one in the whole world. Here it is:
Quest, originally built by Shell, largely (or perhaps entirely) using money from the Canadian Federal and Alberta provincial governments, is a carbon capture and storage project on the Scotford Upgrader. Hydrogen, made by the conventional method by which most hydrogen is made in the world today (steam methane reforming (SMR)), is used to desulphurize and partially upgrade bitumen (aka “tar sands” heavy oil), for sale into the (largely USA) fossil transport fuels market. That’s a use we hope sincerely won’t be needed soon, because we need to stop burning fossils as fuels in a decarbonized future.
The great thing about Quest is that because it was largely (nearly completely) publicly funded, its data is available as a precondition of public money being spent on the project. So all you non-Canadians, please feel free to send cheques to us for all the learnings we funded on your behalf…I’ll be setting up the GoFundMe page shortly! (grin!)
Let’s look at a simplified flowsheet of a SMR so we can understand what Quest does, and doesn’t do.
The Steam Methane Reformer, Redux -(C) Spitfire Research Inc. 2021
A steam methane reformer takes natural gas (mostly methane), purifies it, mixes it with steam, preheats it, then sends it to a number of reformer tubes in parallel, suspended in a tube furnace. Each tube is filled with solid catalyst and is heated on the outside using flue gas produced by burning a mixture of fuel gas, heating the tubes to a very high temperature (well above 800 C). The reforming catalyst allows methane to react with water to form a synthesis gas mixture consisting of carbon monoxide, carbon dioxide, unreacted water vapour, a little unreacted methane, and hydrogen. The overall reactions are endothermic, i.e. requiring heat input, and that heat is supplied by the burning of fuel gas outside the tubes. That means ultimately we have to feed at least 30% of the energy we get out of the product hydrogen, into the tube furnace to supply it with heat to drive those reactions.
CH4 + H2O + heat <==> CO + 3 H2 (with some CO2 also formed by the shift reaction- proportions depending on conditions)
(image credit: Air Science Technologies Inc.)
If we want syngas, we’re done- we can separate out the water and maybe the CO2 and feed the gas on to do something useful such as making methanol or reducing iron ore to iron metal (called direct iron reduction or DRI). But we want hydrogen, so the next thing we do is cool down the hot gas in a heat recovery steam generator (HRSG) which produces the steam we need to feed the process plus perhaps a little excess. We then feed the cooled syngas mixture to one or more water-gas shift reactors. These perform the magical water-gas shift (WGS) reaction:
CO + H2O <==> CO2 + H2 + heat
…which, because it produces heat, generates more H2 the colder we run the reaction. Sometimes two steps of WGS, with heat removal between them, are used. We’re now left with a stream which consists of H2, CO2, some water vapour, a little unreacted CO, and some unreacted methane, all under some pressure (about 30 bar or so).
Since we want H2, we need to remove the CO2. And here, Shell Quest becomes relevant.
In a normal SMR, the CO2 is usually just vented to the atmosphere because that’s the very cheapest thing we can do with it. And let’s face it, CO2 is not a commercial product; it’s a low Gibbs free energy waste molecule, valuable as a product of energy-producing reactions like combustion principally because, until recently, it was easy and cheap to dump it to the atmosphere.
Instead, Quest captures the CO2 using conventional amine absorber/stripper technology (something routinely used in chemical engineering, done at giant scale even before AGW was a “thing”). It generates a nearly pure CO2 stream which it then compresses, dries and dumps into a pipeline for disposal in a nearly perfect disposal reservoir, 2 km underground, some 60 km away from the plant. The absorber takes some pumping energy, and the stripping step takes some heat, at a reasonably high temperature, and CO2 compression takes (considerable) electricity, so we have to find that energy somewhere. We might get some from waste heat from the SMR by heat recovery, but that heat could generally be used elsewhere in the upgrader so it’s not really “free”.
…but even then, we’re not done. We still have CO, some CO2, and some unreacted methane to contend with. Generally, a pressure swing adsorption (PSA) system is used to capture and remove these contaminants. The PSA adsorbs the contaminants at pressure onto a solid adsorbent, which is then periodically depressurized to vent off a gas mixture which is, sadly, mostly hydrogen contaminated with these left-over materials. No matter, as we have a huge, hungry firebox ready to soak up all that otherwise wasted energy- the PSA’s tail gas is sent back to the tube furnace for combustion, and all the CO and CO2 leaves the flue as CO2.
Shell Quest captures about 80% of the CO2 in the syngas stream. That results in about 45% capture of the CO2 from hydrogen production at the plant, per the government website. That also means that only about 56% of the CO2 emitted by hydrogen production in the Quest SMRs is available for capture in the syngas. The other 44% comes mostly out that tube furnace flue, at atmospheric pressure, in a giant swamp of nitrogen and water vapour (some of it also comes out of a natural gas power plant somewhere). The low partial pressure (total pressure times % CO2 in the stream) means there’s a higher entropic hill to climb to capture that CO2, and that costs us energy; such a high hill in fact that it is no easier to capture the CO2 from the tube furnace flue than it would be to just skip hydrogen-as-a-fuel entirely, burn ALL the methane in, say, a gas turbine power plant, and capture the CO2 from its flue. So Quest, wisely, doesn’t even try. It makes itself quite satisfied with capturing 80% of 56%, i.e. 45% of the CO2. The easy 45%. And it does this pretty consistently, except when it doesn’t. Then, it just vents the CO2, like every other SMR on earth.
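The capture arithmetic here is only two numbers, but it’s worth seeing explicitly, working backward from the government’s 45% figure:

```python
# Quest capture arithmetic: 80% of the syngas CO2 = 45% of the total,
# so the syngas can only contain 56% of the SMR's CO2 to begin with.
syngas_capture = 0.80     # fraction of syngas CO2 captured (Quest figure)
overall_capture = 0.45    # fraction of total SMR CO2 captured (gov't figure)

syngas_share = overall_capture / syngas_capture   # share of total CO2 in the syngas
flue_share = 1.0 - syngas_share                   # share out the furnace flue etc.
print(f"{syngas_share:.0%} of the CO2 is in the syngas; {flue_share:.0%} is not")
# → 56% in the syngas, 44% elsewhere
```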
Source: Government of Alberta website, Shell Quest capture figures, 2019
(note: this 80% of the CO2 in the syngas is only 45% of the CO2 produced by the SMR unit)
Quest has been in operation for over six years, since we generous Canadians built it. And in each of those years, it has captured about 1 million tonnes of CO2 and buried it deep down from whence it will hopefully never return.
Of course Shell, on its various websites, crows about how wonderful this all is- how it’s equivalent to the emissions of about 1.25 million cars each year etc. etc. Why shouldn’t they crow? They’re not paying for it- we Canadian taxpayers are!
Cost and Schedule
How much does all this cost? And how long did it take?
Engineering started in 2009, and the CCS system was operational in 2015. Not quick…but it worked, so I guess Fluor did a good job!
The financial figures are a bit muddier. While the public money dumped into the project is very clear- $120 million from Canada and $745 million from Alberta (all Canadian dollars)- the NRCan website talks about “total project costs” of $1.31 billion. The operating costs are on the order of $50 million per year, and the project life is 20 yrs, so the $1.31 billion figure isn’t the “total project cost” including operation and maintenance. The actual total capital cost is quite unclear, and Shell itself provides no other figures that I can find easily.
But frankly it doesn’t matter that much to me. What’s half a billion between friends? Here’s why: let’s assume it cost “only” the public’s $865 million ($120M plus $745M) and neither Shell nor anybody else put a cent into it. Let’s assume, for fun, that capital is “free”, so we don’t have to argue about discounting rates. And let’s assume a steady $50 million a year, and the 20 yr operating life. Let’s ignore inflation, like we all did until recently… Roll that together at an average of 1 million tonnes per year of CO2 captured and buried, and we get ~$93/tonne of CO2. Sounds great, right? Canada’s carbon tax is heading to $170/tonne by 2030- very soon the project will be self-funding! Shell further crows, on its Quest 5th anniversary website, that capital costs would drop by 30% next time.
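Here’s that back-of-envelope in a few lines, using only the public contributions stated above, zero discounting, and a flat 1 Mt/yr:

```python
# Undiscounted cost per tonne of CO2 at Quest, from the public funding figures.
capital = 120e6 + 745e6       # CAD: federal + Alberta contributions
opex_per_year = 50e6          # CAD/yr operating cost
years = 20                    # project life
tonnes_per_year = 1e6         # t CO2 captured and buried per year

total_cost = capital + opex_per_year * years
cost_per_tonne = total_cost / (tonnes_per_year * years)
print(f"~${cost_per_tonne:.0f}/tonne CO2")   # → ~$93/tonne
```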
Sadly, we’ve forgotten or ignored a few things. Isn’t that how my papers usually go? It looks so simple, until we get into the nasty details?!
Here’s a better, more accurate version of that first slide I showed you- it represents what’s really going on in an SMR with or without carbon capture.
For every kg of H2 we get out of Quest, by my estimates (based on a Shell presentation whose slides I do not have permission to share with you), we need to feed about 47.5 kWh of methane LHV to the SMR itself. At world average methane leakage rates of 1.5%, that would result in about 0.055 kg of methane being leaked for every kg of H2 produced at Quest. Before abatement, Quest generates about 9.7 kg of CO2 per kg of H2 produced, round numbers. Adding 0.055 kg of methane at the 20 yr, 84x global warming potential of methane per the IPCC, we’re looking at unabated emissions of 9.7 (direct) plus 4.5 (methane) = 14.2 kg of CO2e per kg of H2, before we do any CCS. Ignoring the methane leakage makes the CO2e emissions look much nicer, doesn’t it?!
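A simplified version of that estimate is below. It lands within rounding of the figures above- exactly how the leakage rate is applied (against delivered vs. withdrawn gas) moves the methane term slightly:

```python
# Unabated CO2e per kg H2 at Quest: direct CO2 plus upstream methane leakage.
# 47.5 kWh/kg and 9.7 kg CO2/kg are the author's estimates; 1.5% leakage and
# the 84x 20-yr GWP are as stated in the text.
CH4_LHV_MJ = 50.0                    # MJ/kg methane, LHV
gas_fed_MJ = 47.5 * 3.6              # 47.5 kWh of methane LHV, in MJ
gas_fed_kg = gas_fed_MJ / CH4_LHV_MJ # kg methane fed per kg H2

leak_rate = 0.015                    # world average leakage
gwp20 = 84                           # 20-yr GWP of methane
direct_co2 = 9.7                     # kg CO2 per kg H2, unabated

ch4_leaked = gas_fed_kg * leak_rate          # kg CH4 leaked per kg H2
co2e = direct_co2 + ch4_leaked * gwp20       # kg CO2e per kg H2
print(f"{ch4_leaked:.3f} kg CH4 leaked, {co2e:.1f} kg CO2e per kg H2")
```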
Of course if you’re a fossil fuel apologist, you’ll use the 30x CO2 GWP figure for methane from IPCC instead, and knock that 4.5 extra kg of CO2e/kg H2 down to 1.7 kg- only about 18% of the gross CO2 emissions, so not that bad. But remember- we’re considering H2 production from fossils with CCS to be a transitional strategy, because we all know it’s second (or third, or fifth…) best relative to GREEN hydrogen made by electrolysis of water using pure renewable electricity. You can, as some Exorcist head-spinners have done, write off Howarth and Jacobson’s paper (and my analysis) as mere hyperbole because we take the hydrogen-as-a-fuel lobby at its word about “blue” hydrogen being a stop-gap measure. Frankly, anybody using the 100 yr GWP for methane needs a smack upside the head- forgive me, that’s just me being ornery because I’m writing about this, rather than relaxing with a glass of bourbon watching Netflix.
…the CCS system on Quest takes 0.65 MJ of electricity and 2.1 MJ of heat per kg of CO2 captured. Most of the electricity (0.55 of the 0.65 MJ) is used to run the compressors. The plant also consumes 10 T/month of amine and 1 T/month of triethylene glycol used for dehydration. Forgetting about the reagents and focusing only on the energy inputs, we discover how much it takes to capture 80% of the CO2 in the syngas- which, remember, is only 45% of the CO2 emitted by the SMR. Taking the (unabated, post combustion) CO2 emissions of that energy into account, but again forgetting about the methane leakage, the net capture of CO2 by Quest drops from 45% to 35%. Just capturing the easy 45% of emissions requires extra emissions equal to 10/35, i.e. about 1/3 of the net amount captured- in a form which is post combustion and hence uneconomical to capture.
“Blue” H2 by SMR, not quite so “redux”- (C) 2021, Spitfire Research
Witnesseth that CCS, even from a high partial pressure stream, takes sh*tloads of energy. Imagine the illegitimi, talking about doing this from 416 ppm CO2 in the atmosphere with direct air capture…
Add in the methane emissions associated with that CCS energy- another 0.44 kg CO2e (20 yr basis) on top of the 0.96 kg (direct) CO2 emissions unabated to run CCS, and Quest’s capture drops to about 21% in net CO2e terms.
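Pulling the pieces together, here’s how the 35% and 21% net capture figures fall out of the numbers already quoted:

```python
# Net capture at Quest once the CCS system's own emissions are counted.
# Inputs are the figures quoted in the text.
direct_co2 = 9.7          # kg CO2 per kg H2, unabated SMR emissions
gross_capture = 0.45      # fraction of direct CO2 captured
ccs_direct_co2 = 0.96     # kg extra direct CO2 to run the CCS, per kg H2
ccs_methane_co2e = 0.44   # kg CO2e of methane leakage from CCS energy
unabated_co2e = 14.2      # kg CO2e per kg H2, 20-yr GWP basis incl. leakage

captured = gross_capture * direct_co2                 # ~4.4 kg CO2 per kg H2

net_capture_co2 = (captured - ccs_direct_co2) / direct_co2
net_capture_co2e = (captured - ccs_direct_co2 - ccs_methane_co2e) / unabated_co2e
print(f"net CO2 capture: {net_capture_co2:.0%}, net CO2e capture: {net_capture_co2e:.0%}")
# → ~35% and ~21%
```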
$0.85 to $1.3 billion, plus $50 million per year, to capture 21% of net CO2e emissions from hydrogen production.
I’m calling that rather blackish-blue, bruise coloured hydrogen at very best. Because that’s what Quest really produces.
It’s also no model, whatsoever, for the use of fossil hydrogen, especially as a fuel, in a decarbonized future.
What Else Could We Do?
Aside from the obvious, i.e. make green hydrogen from renewable electricity and water, to replace all that black hydrogen we’ll need post-decarbonization?
You may have read previously that my former client Monolith Materials is doing an exciting project in Nebraska and has received a tentative offer of $1 billion USD in loans from USDOE to expand the project to full commercial scale. The project takes natural gas, or perhaps biogas methane in future plans, and pyrolyzes it into carbon black and hydrogen using electricity from a disused nuclear power plant (with future wind/solar plans). Carbon black is a valuable product in itself, normally made from heavy oil by a filthy, emissive partial combustion process. They are making the hydrogen into ammonia, to serve the ~40% of US ammonia consumers who are within a 100 mile radius of their plant.
A brilliant project, of great decarbonization benefit- but sadly, not scalable enough to ever be a major source of hydrogen in terms of world H2 consumption. Replacing the 90 million tonnes of H2 we’ll need post decarbonization, at least initially, by such a process would make 270 million tonnes of carbon- more than 10x the world market for carbon black and graphite combined. While some are betting that throwing away 1/2 the energy and 3/4 of the mass of the feedstock will be paid for by the greater ease with which solid carbon may be buried relative to burying CO2, the jury’s still out on that. This process has the euphemistic tag of “turquoise hydrogen”. How black that turquoise is depends on many very project-specific factors, including methane leakage and the carbon intensity of the electricity used to run pyrolysis.
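The 270 million tonne figure is pure stoichiometry- methane pyrolysis is CH4 → C + 2 H2, so every 4 kg of hydrogen drags 12 kg of solid carbon along with it:

```python
# Carbon co-product of methane pyrolysis: CH4 -> C + 2 H2
M_C = 12.0    # g/mol, carbon
M_H2 = 2.0    # g/mol, hydrogen

carbon_per_kg_h2 = M_C / (2 * M_H2)     # 12 kg C per 4 kg H2 = 3

world_h2_Mt = 90                        # Mt H2/yr needed post-decarbonization
carbon_Mt = world_h2_Mt * carbon_per_kg_h2
print(f"{carbon_Mt:.0f} Mt of carbon per year")   # → 270 Mt
```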
We could also switch from SMR to another process- oxy-blown autothermal reforming. That process is commercial already, being used in methanol and Fischer-Tropsch plants to make high CO:H2 syngas mixtures. It is less efficient than SMR if the target is pure hydrogen, but (nearly) all the CO2 ends up at high partial pressure in the syngas stream, making higher % captures much easier than with SMR. I will write about this in part 2.
We could also use electric heating, or burn product hydrogen, in the tube furnace of the SMR. The former, called E-SMR, is already under development by several companies, but again has limitations to the ultimate CO2e achievable due to methane leakage and CO2 generated from gas purification. The latter is just a way to burn up money in my opinion.
Finally, I will leave you with the single most authoritative, comprehensive and accurate review of the objectives, practicality and real motivations of carbon capture and storage that I’ve seen. Warning, it may make you laugh, and then cry, and the language might make even a sailor blush a bit.
Disclaimer: I’m human, and hence can easily misinterpret things, make mistakes, push the wrong button on my calculator etc. If you find errors in what I’ve written, and can show me with references or calculations where I’ve gone wrong, I will correct the text with gratitude.
If you can’t find anything wrong with what I’ve said other than that it makes you unhappy, or worried about your continued employment, then perhaps you need to reconsider things a bit. If it makes you angry enough to yell at my employer, Spitfire encourages you to try!
Most of us find waste to be viscerally repugnant. The smell and even the appearance of garbage revolts us. And yet we all generate it, in seemingly endless quantity.
In “developed” (rich) nations, we therefore have built extensive systems to get this repugnant material out of sight, and hence out of our minds, as quickly as possible. Sure, many of us dutifully separate our waste at source into the compartments required by the local waste authority, and sometimes that makes us feel good. But we’re left with a lot of questions:
1) Is the effort to separate waste at source, worth the bother? Not just the effort, but the energy to collect physically separated materials at source- is that a net environmental improvement?
2) What happens to the waste after we dispose of it and it disappears from our consciousness?
3) We hear about waste from developed countries being “shipped overseas”. Is this a disposal strategy? Are we paying others to improperly dispose of our waste?
4) Wouldn’t we be better off to just burn the whole works? Or if that doesn’t sound appealing, couldn’t we do something “smarter” than burning, to harvest the energy contained in that waste for beneficial uses?
I’ve learned that “deep dives” into complex and important topics like this tend to bore people and don’t get read. So I won’t be diving deep into this one- I’m leaving most of these complex questions unanswered! I just need to make a few quick points, based on decades of experience working on “waste to energy” schemes of a bewildering variety. Because frankly, “waste to energy” and “waste to fuels” projects and proposals are popping up now like dog strangling vine on my farm. Whereas the dog strangling vine is just a pure green menace, waste to energy proposals are seen as a chance to kill two birds with one stone: to deal with the smelly, unsightly, land-consumptive and potentially GHG emissive problem of waste landfills, while at the same time making some energy that we need.
The problem here is that it sounds too good to be true. Because it’s not true. Or, more accurately, it is true in such a limited set of circumstances that it’s basically the exception rather than the rule.
Municipal solid waste (MSW) is an extremely heterogeneous mixture which varies in composition from place to place, from time to time, and as a result of the policies (or lack thereof) of the municipalities responsible for providing waste disposal as a public service. In places where waste source separation isn’t practiced at all, the waste stream contains a lot of materials that can be, should be, and in most other places, ARE recycled. In some cases, some of these materials aren’t source separated, but rather are separated manually or by machinery at the waste handling facility or transfer station. Let’s talk about groups of materials in general terms and consider them one by one.
Metals are sorted out because all metals are highly recyclable, and recycling them reduces GHG and toxic emissions dramatically relative to making fresh pure metals from their native ores. Of course only reduced metals in solid form (i.e. cans) are readily recovered by sorting.
To give you an idea: a typical aluminum can in 2014 weighed 15 grams. Aluminum has an embodied energy on the order of 200 MJ/kg, which means that a can has about 3 MJ of embodied energy associated with it. That’s about the same as the same can, 1/3 full of gasoline. Even with recycling, aluminum represents about 11.5 kg of CO2 emissions per kg. Anybody wasting aluminum cans needs to have their head examined if they also claim to care about the environment. Those thinking that it makes sense to turn aluminum, made from aluminum oxide by electrolysis, into hydrogen, are really energetic vandals.
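The can comparison checks out in a couple of lines. The gasoline energy density and the 355 mL can volume are my figures, not from the original claim:

```python
# Embodied energy of an aluminum can vs the same can partly full of gasoline.
can_mass_kg = 0.015            # typical 2014-era can, 15 g
embodied_MJ_per_kg = 200.0     # embodied energy of aluminum, order of magnitude
can_energy = can_mass_kg * embodied_MJ_per_kg     # MJ in the can itself

gasoline_MJ_per_L = 34.0       # volumetric LHV of gasoline (assumed figure)
can_volume_L = 0.355           # standard can volume (assumed figure)
third_full = (can_volume_L / 3) * gasoline_MJ_per_L
print(f"can: {can_energy:.1f} MJ; 1/3 can of gasoline: {third_full:.1f} MJ")
```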
There’s a lot of non-degradable, non combustible stuff in the average MSW stream. Lots of dirt, concrete, brick, rock, gypsum etc. from demolition.
There’s also a lot of glass and ceramic materials. The former are recyclable- the latter aren’t. The best you can do with waste ceramics is grind them up and use them as a replacement for sand or gravel in concrete.
Wet Organics
This is food and yard waste, diapers, pet waste and the like. Frankly, the only thing that makes sense to do with this stuff is to remove it at source and not landfill it. When you landfill wet organics, you encourage anaerobic degradation, which converts the waste into biogas- a nearly equal mixture of CO2 and methane, the latter being 86x worse than CO2 on the 20 yr horizon in terms of global warming potential (GWP).
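To see why that matters so much, here’s an illustrative sketch. It assumes, for simplicity, that the degraded carbon splits 50/50 on a molar basis into CH4 and CO2 and that all of it degrades- real landfills are messier, but the scale of the penalty is the point:

```python
# CO2e impact of anaerobically degraded carbon in landfill, 20-yr GWP basis.
# Assumes a 50/50 molar CH4/CO2 biogas split, for illustration only.
M_C, M_CH4, M_CO2 = 12.0, 16.0, 44.0   # g/mol
gwp20 = 86                              # 20-yr GWP of methane, as in the text

kg_c = 1.0                              # per kg of carbon degraded
ch4 = 0.5 * kg_c * M_CH4 / M_C          # kg CH4 produced
co2 = 0.5 * kg_c * M_CO2 / M_C          # kg CO2 produced
landfill_co2e = ch4 * gwp20 + co2       # kg CO2e, landfilled
aerobic_co2 = kg_c * M_CO2 / M_C        # kg CO2 if fully composted aerobically
print(f"{landfill_co2e:.0f} kg CO2e landfilled vs {aerobic_co2:.1f} kg composted")
```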
Paper and Wood
Same deal- paper and wood need to be source separated, though not primarily because of the risk of biodegradation. While paper is compostable, it’s not readily degradable in a typical anaerobic landfill. I’ve personally seen newspapers which were buried in landfill 100 yrs ago that were still perfectly legible. While some biodegradation does happen in landfill, landfills are not designed to be bioreactors- quite the opposite in fact.
Paper is, however, of fairly high value for recycling, particularly cardboard and paperboard. Recycling corrugated cardboard is an environmental no brainer. Cardboard is a high value material, and even businesses which really don’t care about the environment at all collect cardboard separately for recycling, because it reduces their waste disposal costs. And wood- meaning lumber and the like rather than yard waste- can always be made into paper. Wood demolition waste is already harvested for this purpose in some locales, if it can be properly source separated. And of course if there’s a surplus, both can be burned as a solid biofuel.
Plastics are nearly 100% of fossil origin. While they can be recycled, how much they are actually recycled depends on the nature of the plastic, the nature of the source collection (which determines how clean and how hard it is to sort), whether it’s a thermoplastic or a thermoset such as those used in composite materials and rubber (thermosets are basically not recycled, because recycling them requires the breaking of chemical bonds which don’t come so easily unstuck), and of course, what purposes the waste can be put to. Recent reports put the average of plastic recycling around 9%, which is far from stellar.
PET, the material used in beverage bottles, is very easily recycled mechanically and is fairly easy to source separate. However, just as with metals, we don’t generally recycle PET bottles back into PET bottles. Rather, we make PET bottles into things like carpet fibre, which don’t care about things like leachable content, colour and transparency quite as much as a clear food grade beverage bottle does.
However, the largest volume plastics are polyethylene (PE), polypropylene (PP) and polystyrene (PS). These materials, though readily mechanically recycled, are often used in the form of films, foams or thin-walled goods which can come back from consumers very mixed, dirty, coloured and otherwise hard to separate. And while you can make SOME goods out of PE/PP blends, most uses require quite pure material. Accordingly MOST PE and PP and most PS foam are not in fact recycled, but rather are landfilled.
There are myriad other plastics, used either alone or in layers with other materials to provide the desired properties. Some, like polyvinylchloride (PVC), are at once extremely valuable and useful, and also basically a bomb waiting to go off if you do the wrong thing at the end of that plastic material’s life. And there are a myriad of uses for all those plastics, varying from inarguably dumb single uses like tie hangers or individual wrappers for plastic cutlery, to convenient but questionable uses like plastic grocery bags, to life-saving uses like IV bags, catheters, oxygen tubing, disposable syringes and the like.
When you compare the LCA data for plastics against materials they compete against in the marketplace for similar uses (such as paper, glass, aluminum or natural fabrics), plastics tend to come out on top. They use less energy and water, weigh less, have superior properties, and can be recycled. They can also sometimes save giant amounts of other waste- the thin PE wrapping on English cucumbers comes instantly to mind. This wrapping reduces the waste of cucumbers from field to table by at least 50%- a reduction in the mass and impact of waste of around a thousand fold. My friend Chris DeArmitt is a great source of the research into this topic- he knows more about it than just about anyone.
But of course, we all love to hate plastics. With some good reasons, and some bad ones. They are over-exploited in packaging. They have become so extraordinarily inexpensive that they have come to exemplify “cheap”- non-durable, consumptive and wasteful- by virtue of the many dumb uses we’ve come up with for them, which sadly people in the marketplace have rewarded by buying. Many plastic products are optimized for cost and aesthetic function, not for recyclability. And they enable convenience that suddenly seems mandatory- something you can’t opt out of without trying very hard indeed.
Energy From Waste to the Rescue!
The usual pitch for a “waste to energy” scheme is as follows:
get rid of the cost and inconvenience of source separation
eliminate the problem of methane generation in landfill
offset some fossil burning
save precious land by reducing landfilling
Who couldn’t love those things!
The Devil in the Details
As usual the devil lurks in the details.
First of all, we really need source separation, for a couple of reasons. First, getting people involved in source separation helps them focus on minimizing waste generation. Second, it improves recoveries (dramatically) of the highest value, most energetically favourable materials to recycle, i.e. metals, cardboard/paperboard etc.
Secondly, waste to energy isn’t the highest value use of the wet organic content. That material contains some energy, but it also contains a lot of water. Burning, gasifying or pyrolyzing waste (heating it up in the absence of oxygen) requires us to boil off all that water, and that takes a lot of energy. Most of the energy in the wet organic fraction is needed just to provide the energy to boil off the water it contains. In net terms therefore, MSW which has been source separated for recyclables in even a perfunctory way doesn’t contain ANY net energy of biological origin.
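A quick sensitivity sketch makes the water penalty vivid. The dry-matter heating value and evaporation energy below are round illustrative figures of my own choosing, not measured MSW data:

```python
# Net recoverable energy from wet organics vs moisture content (illustrative).
# Assumed figures: dry organics ~16 MJ/kg LHV; ~2.6 MJ to heat and
# evaporate each kg of water.
def net_energy_MJ_per_kg(moisture, dry_lhv=16.0, evap=2.6):
    """Net energy per kg of wet organics: chemical energy minus water penalty."""
    return (1 - moisture) * dry_lhv - moisture * evap

for m in (0.70, 0.80, 0.90):
    print(f"{m:.0%} moisture: net {net_energy_MJ_per_kg(m):.1f} MJ/kg wet")
```

By 90% moisture- not unusual for food waste- the net goes negative: burning it consumes more energy than it yields.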
There’s an alternative which doesn’t mind the water. It’s anaerobic digestion, to produce biogas. That’s what we do in Toronto with our green bin waste.
But there’s still lots of energy in that MSW, right?
Yes. Even moreso if you don’t source separate the waste…
Sadly, that energy is waste plastic. All of which is of fossil origin.
MSW is therefore, in net calorific (heat content) terms, almost entirely a fossil fuel.
Finally, when you heat up waste materials to burn, gasify or pyrolyze them, you carry out chemical reactions which can produce emissions of significant toxicity, carcinogenicity, and leachate toxicity. It is not uncommon for a waste incinerator to dramatically reduce the volume, but much less dramatically reduce the mass, of the feed waste materials- while also rendering the residue leachate toxic when the feed material wasn’t. That means the process of burning “mobilizes” species which can leave in the (nearly inevitable) leachate that must be collected from the bottom of the landfill and treated prior to being disposed of. That adds both cost and environmental impact that could be avoided. Incineration also has to be made less energy efficient to cope with these toxic materials, both by manipulating combustion conditions to avoid generating the worst species, and by virtue of the energy used to scrub out or adsorb/desorb the toxic materials that are produced.
Waste to Fuels- Incineration in a Sexy Green Dress
Of course people have pictures in their heads of incineration that are very unpleasant, and some of that is unfair to incineration. Modern incinerators can generate quite clean exhausts, well scrubbed of toxic chemicals- if at the cost of generating only a fraction of the energy contained in the feedstock as a result. But that’s not the problem.
The problem is the plastic.
The problem is the energy derived from the fossil origin materials in the waste stream.
The problem is the needless dumping of fossil CO2 into the atmosphere.
Since the net energetic value in the waste is of fossil origin, the waste itself is, in net terms, a fossil fuel.
Accordingly, some greenwashing is needed to recondition the image of waste to energy schemes, by dressing incineration up in a sexy green dress called “waste to fuels”.
By converting part of the energy, and maybe even some of the embodied carbon, in the MSW feed into a new fuel, proponents hope you won’t see the incineration underneath its new clothes.
These schemes are varied, but they usually involve converting waste to simpler chemicals by means of endothermic (heat-consuming) reforming reactions, by processes called pyrolysis or gasification, which differ only in degree, energy input and desired suite of products. In both cases, heat, usually generated by burning part of the waste feedstock or part of the products or byproducts, is used to break big molecules into smaller ones. Sometimes, oxygen or air is added to the feed to produce some of that energy right in the reactor, and sometimes it is produced outside the reactor and transferred into it via heat exchangers. Sometimes solid materials, often inorganic constituents of the waste itself, are used to help transfer this heat.
The typical strategy is to make either a very light gaseous material called synthesis gas, which is typically a mixture of carbon monoxide, carbon dioxide, hydrogen, sometimes methane, along with water vapour, nitrogen, and acid gases like hydrogen chloride, hydrogen sulphide etc. (remember, waste is very heterogeneous- and it’s going to contain some PVC, some brominated fire retardants, some fluoropolymers…a host of nasty molecules can result when you break these bigger molecules down). The syngas can then either be burned for energy in a turbine after some basic cleanup, or it can be cleaned up much more thoroughly and converted over catalysts to molecules like hydrogen, methanol or Fischer-Tropsch liquids (waxes, diesel etc.). These materials, including the hydrogen, are generally intended for use as fuels. Yields vary depending on the feedstock and process and conditions, but let’s be clear: a considerably smaller fraction of the energy in the feed material is converted to these secondary fuels than would have been obtained if you simply burned the waste in an incinerator.
What happens to the fossil CO2? It all ends up in the atmosphere. When hydrogen is the product, all that CO2 goes to the atmosphere directly, and more CO2 is released per kg of hydrogen produced than if you started with fossil methane instead. That we can say with certainty just from looking at the nature of the feeds: plastics, the major energetic content of MSW, have a typical formula of (-CH2-), whereas methane has a formula of CH4. The higher the C:H ratio of the feed, the higher the CO2:H2 ratio in the products. The result might be better than coal, but is definitely worse than methane. The only difference between the two is that fossil methane comes with a burden of production and distribution methane leakage that the plastic waste doesn’t. Depending on where you’re making the “black” hydrogen from fossil methane, that leakage can vary between a significant and a very significant CO2e impact, given methane’s GWP of 86x CO2 on the 20 yr timeframe. Sorry folks, but that alone isn’t going to get you my sympathy for making hydrogen from garbage. It’s going to be close to a wash, at best.
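The C:H ratio argument above can be checked with simple stoichiometry. This is my own back-of-envelope sketch, not figures from any real plant: it assumes idealized reforming plus water-gas shift, with all feed carbon ending up as CO2.

```python
# Illustrative stoichiometry: kg of CO2 co-produced per kg of H2 when
# reforming methane (CH4) versus a polyolefin repeat unit (-CH2-).
# Idealized reactions (reforming + full water-gas shift):
#   CH4    + 2 H2O -> CO2 + 4 H2
#   (-CH2-) + 2 H2O -> CO2 + 3 H2

M_CO2 = 44.01  # g/mol
M_H2 = 2.016   # g/mol

def co2_per_h2(mol_co2: float, mol_h2: float) -> float:
    """kg CO2 emitted per kg H2 produced, for a given stoichiometry."""
    return (mol_co2 * M_CO2) / (mol_h2 * M_H2)

methane = co2_per_h2(1, 4)  # methane feed
plastic = co2_per_h2(1, 3)  # plastic (-CH2-) feed

print(f"CH4 feed:     {methane:.1f} kg CO2 per kg H2")
print(f"(-CH2-) feed: {plastic:.1f} kg CO2 per kg H2")
```

Roughly 5.5 kg CO2 per kg H2 for methane versus about 7.3 kg for the plastic: the higher C:H feed really does emit more CO2 per unit of hydrogen, before any leakage accounting.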
Can You Make it Worse?
Sure. You can do the source separation, separate out the waste plastic, and then gasify it. That’s even worse!
Why is it worse?
Because the alternative would be to simply landfill the waste plastic.
Waste plastic degrades when left in the environment, losing mechanical properties and fragmenting into smaller and smaller pieces. Those degradation processes, however, are driven by two things: oxygen and sunlight. Sunlight provides the high energy needed to break the bonds in these synthetic materials- bonds that natural processes like biodegradation did not evolve to break apart. And oxygen can react with the polymers both in the dark and in the light.
What happens when we bury plastics in a landfill? Those degradation mechanisms simply stop. There’s no more driving force for the molecules to fall apart, so they don’t. Waste plastics in a proper anaerobic (covered) landfill are stable for millennia. They don’t break down into microplastics. They don’t leach into the groundwater. They just stay there. Their environmental impact simply ends.
As does their fossil carbon content. It stays there. Sequestered. Durably.
What We Should Do Instead
We should stop believing in ideological fantasies of “circular economy”. We should instead begin to think about optimal recycle. And we should focus on making lots and lots of truly renewable and low emissions energy, because as we do that, not only will we feel less need for fuels made from garbage and waste plastic, we will also increase the optimal amount of recycle- because we will reduce the environmental impact arising from the energy used to drive recycling.
What does an optimal recycling system look like for plastic waste?
Here’s my view:
1) You start with good public policy. Policy which looks holistically at the use and end of life of products, using reliable, disinterested 3rd party LCAs as a guide. Stop making decisions on the basis of the “natural is better” fallacy, like the totally idiotic decision to replace polypropylene drinking straws with waxed paper ones. The waxed paper ones are inferior in function, alter the taste of the beverage, aren’t durable, can’t be re-used, use more water and create more emissions in manufacture and transport than their PP cousins, and yet neither PP nor paper degrade in a landfill. The only, minor benefit of mandating paper straws is that the paper ones degrade a little faster when they’re deposited as litter. Litter makes up perhaps 1% of our disposed waste packaging materials in developed nations. Doing worse with 99% of a product to partially mitigate the impact of 1% of its end of life disposal is not sensible.
Better still: don’t offer a straw of any kind unless it is asked for.
2) Mandate “deposit return” for goods which otherwise don’t end up recycled well. This would apply to goods ranging from beverage bottles to cellphones. Users are quite willing to return goods for cash if there’s cash to be had. This generates cleaner, better sorted waste streams for either re-use or recycling.
3) Maximize mechanical recycling of plastics. And don’t concern yourself about the fact that most recycling- of plastics, metals and other materials- is really “down-cycling”. Just as we don’t recycle pure copper wire into copper wire again, we don’t recycle PET bottles into bottles again. Why not? Because down-cycling copper wire to copper pipe, and PET bottles to carpet fibre, makes more sense energetically.
4) When you have a stream of mixed plastics that are partially degraded, you can do a limited amount of chemical down-cycling to make materials such as waxes, asphalt extenders, printing inks and the like. Doing this makes sense, but will only be a limited endpoint for the plastics.
5) Most of the degraded, mixed and dirty plastics will end up being useless after they’ve been optimally recycled. How should we deal with them? We should bale them and then bury them in properly constructed landfills. They represent the cheapest, lowest impact, lowest risk post-consumer fossil carbon sequestration strategy imaginable. All you have to do is not burn them.
Finally: if you don’t have space for landfill, you have two choices: work harder at steps 1-4, or pay somebody else to landfill the waste plastic for you.
Disclaimer: this is not a thorough examination of the topic, it has been kept brief so that people might read it. This is an enormously complex topic involving the interrelation of society, technology and values, environmental impact, decarbonization and economics.
I’m not at all saying that there can never be a “waste to X” or even a “waste to energy” scheme that makes environmental sense. I’m specifically attacking waste plastic or MSW to energy or fuels schemes, because I think it’s quite clear they aren’t good waste management practice and aren’t in the interest of decarbonization either.
If you don’t like what I’ve said, that’s OK. If you think I’ve materially erred, that’s entirely possible as I’m human and make mistakes like anyone. Provide good references demonstrating where I’ve gone wrong and I’ll correct my piece with gratitude for your input.
Anthropogenic global warming (AGW) is a real risk for future generations including my own children. It’s a risk I’ve personally taken seriously, and have taken personal action against, since the late 1980s when I was in university. And while we’ve seen some extremely positive developments in the past 30 years such as the creation of new industries to generate wind power, solar power, electric vehicles, biofuels, LED lighting etc., this has barely moved the needle on the root causes of AGW: fossil greenhouse gas (GHG) emissions and land use changes made by humans.
Why have we not taken more action? We knew about AGW thirty years ago- the science was quite solid even back then. The reality is, we ignored the science because many people- ordinary voters AND the people in power who report to them- refused to believe it. Many continue to do so to this day. And why is that? Human motivations are complicated, but I see two key root causes. One is that the worst harms from AGW aren’t likely to be experienced by the generation making the emissions, but rather by future generations, i.e. people may love their children, but not enough to avoid spending their inheritance in this sense. The other is that they’ve been fed a series of lies, in part by parties interested in profiting from the status quo as long as possible, which allow people to cling to a shred of doubt about the underlying science which is, frankly, not supportable by the facts.
The risk of AGW hinges ultimately on three facts. These are indeed facts- things we know, based on measurements- generally multiple measurements which compare favourably with one another. Each of the three facts also has sound theoretical underpinning, meaning that we not only know them to be facts, but we know both why they’re facts and also why they’re important. And these three facts are not the subject of credible dispute in the scientific community. They are not the topic of active discussion in the peer-reviewed journals on the subject, which has another name- the repository of the current state of human knowledge on the topic.
Here are the three facts, one by one, along with peer-reviewed scientific references, or more accessible references which themselves refer to the underlying scientific papers, which will allow you to assure yourself that I’m not just making this up.
Fact #1: Atmospheric CO2 Concentrations Have Increased
We started burning fossil fuels in earnest in the 1720s when the first, highly inefficient steam engines were invented. These engines were in part used to power pumps- in coal mines. Steam engines were in that sense a recursive technology, i.e. one that enables and magnifies its own success. The burning of fossils freed us in a sense from what was at the time, the terrible burden of energy sustainability without modern technology. We no longer needed to balance our need to stay warm against the rate at which trees grew to make firewood for us as just one example.
For the millennium before that, atmospheric carbon dioxide (CO2) concentrations were stable, bouncing around near 280 ppm. They did change a bit as we de-forested much of Europe to provide firewood, through the so-called Little Ice Age and Medieval Warm Period, and then as we hewed down North America’s forests and burned them too. But for 1000 years, the concentrations remained more or less stable.
This means that the carbon cycle was more or less in balance. Flows of CO2 and methane up into the atmosphere from respiration of animals and plants, desorption of CO2 from the oceans, decay of organic matter, emissions from methane seeps and volcanoes etc., were in balance with flows of CO2 out of the atmosphere due to photosynthesis, dissolution into the oceans, soil organic carbon generation, oxidation of methane to carbon monoxide (CO) and CO2 in the upper atmosphere, weathering of silicate rocks and the big final sinks- oceanic sequestration, i.e. the conversion of CO2 into carbonate rocks and the permanent burial of oceanic sediments containing biomass. That both the natural up- and down-flows of CO2 and methane are positively massive in fact doesn’t matter- what matters is that they were in balance.
But when we look at the concentration of CO2 in the atmosphere, as measured primarily in bubbles of air trapped in ice cores, what we see is that CO2 concentration was surprisingly consistent: from the Law Dome ice core data, the precision of the CO2 concentrations is estimated at +/- 1.2 ppm, and the observations over the pre-industrial period back to about 1006 AD ranged from 275 to 284 ppm. The concentrations started to rise as we started to burn fossils in earnest.
Since the 1960s, independent groups have been continuously monitoring CO2 concentrations in the atmosphere, most notably at the Mauna Loa observatory in Hawaii. The concentrations show a continual increase year after year, with a “sawtooth” of small seasonal changes up and down each year. The “sawtooth” arises from changes in seasons on Earth- there is more photosynthetic plant life in the northern hemisphere than the southern, so when the north is in summer, CO2 concentrations drop a little- only to rise again in winter.
CO2 concentrations have recently reached 415 ppm- a concentration not encountered in at least the past 800,000 years, the span of our ice core records. CO2 has never been this high since there was anything recognizable as a human on the planet.
CO2 has gone up, rapidly, from a stable level, and continues to increase as I write this. So, sadly, have the concentrations of methane, N2O, and other so-called “greenhouse gases” (GHGs).
This is a fact, not something a credible person can argue with.
Fact #2: We Caused The CO2 Increase, Primarily By Burning Fossils
It isn’t sufficient to say that CO2 went up “suspiciously” as our emissions of fossil fuels went up- that is an argument from correlation which does not prove causation. What we can say is that the increase in CO2 concentration is consistent with the theory that fossil fuel burning caused this rise, but though it looks suspicious, this alone absolutely isn’t sufficient proof.
(Aside: if you care at all about AGW, or renewable energy, or both- reading the late David Mackay’s brilliant work at www.withouthotair.com from beginning to end is your first minimally necessary step in educating yourself about the issues we’re up against in my opinion. It’s very accessible and its conclusions very clear: dealing with AGW is absolutely necessary but it will be a very challenging problem because we use a lot of energy and hence burn a lot of fossil fuels right now)
There are however two measured facts which prove conclusively that the new CO2 in the atmosphere is primarily there as a result of our burning of fossil fuels.
The first is simple carbon mass balance accounting. We can fairly accurately estimate how much fossil fuel we’ve burned, since fossil fuels a) cost money and b) are taxed. A scientific accounting of the amount of fossil fuel burning does demonstrate that not only did we produce enough CO2 to cause atmospheric concentrations to rise by the amount they did (proven by measurement above), but we actually emitted TWICE THAT MUCH:
Where did the other half go? Some of it went into the oceans, as would be expected by anyone who understands a little physical chemistry. As CO2 dissolves in water, the pH of the water decreases. Acidification (decreasing pH) of the surface oceans has indeed been measured, and has occurred because of the increased CO2 concentration in the atmosphere. This too is of concern to ocean life such as corals and shelled sealife which rely on carbonate fixation as part of their lifecycle.
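The mass balance argument can be sketched with rough, widely published round numbers (mine, for illustration- not figures from this article): about 2.13 GtC of emissions corresponds to a 1 ppm rise in atmospheric CO2, and cumulative human emissions (fossil plus land use) are on the order of 600 GtC.

```python
# Rough carbon mass balance: does human emission account for the observed
# CO2 rise? All figures are approximate, order-of-magnitude values.
PPM_PER_GTC = 1 / 2.13           # ~2.13 GtC raises atmospheric CO2 by 1 ppm

cumulative_emissions_gtc = 600   # approx. cumulative fossil + land-use carbon
observed_rise_ppm = 415 - 280    # pre-industrial ~280 ppm to recent ~415 ppm

implied_rise_ppm = cumulative_emissions_gtc * PPM_PER_GTC
airborne_fraction = observed_rise_ppm / implied_rise_ppm

print(f"Rise if all emissions stayed airborne: {implied_rise_ppm:.0f} ppm")
print(f"Observed rise: {observed_rise_ppm} ppm")
print(f"Airborne fraction: {airborne_fraction:.0%}")  # roughly half
```

The point of the arithmetic: we emitted roughly twice as much carbon as the atmospheric increase accounts for, consistent with about half ending up in the oceans and biosphere.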
This animated visualization of the carbon cycle, showing how carbon moved from fossil reservoirs into the atmosphere and then down again into the oceans and biosphere, is most helpful in demonstrating what happened:
The 2nd proof of the anthropogenic origin of the new CO2 in the atmosphere is isotopic measurement of the carbon in atmospheric CO2. The ratios between stable ^12C and ^13C and radioactive ^14C (continuously generated by cosmic rays and continuously decaying to nitrogen) have long been known to have been affected by the addition of ancient carbon to the atmosphere. Living things, while living, have roughly the same ratio of ^14C to ^12C as the atmosphere. But fossil fuels have been dead and separated from the atmosphere for a long time- many, many half-lives of ^14C- and hence are nearly free of ^14C. The result of our fossil burning has been a gradual decrease of the ^14C:^12C ratio in the atmosphere. The ^13C:^12C ratios demonstrate the same thing: the new CO2 is of fossil origin- it hasn’t desorbed from the oceans, been released by rotting biomass etc. The following references are provided if you want to learn more.
It should be noted that the CO2 emitted from volcanoes is also low in ^14C- but the amount emitted by volcanoes is actually quite small relative to the amount emitted by humans as a result of burning fossil fuels. Two gigantic volcanic eruptions in the recent past- Mt. Pinatubo and Mount St. Helens, as examples- produced barely a blip in global CO2 concentrations measured at Mauna Loa. Let that sink in for a moment: we fossil fuel-burning humans are by far the biggest volcano on earth, in terms of emissions.
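Why fossil carbon is “14C-dead” follows from simple decay arithmetic, sketched here for illustration using the ~5730-year half-life of 14C:

```python
# Fraction of original 14C remaining after a given age, from radioactive
# decay: fraction = 0.5 ** (age / half_life).
HALF_LIFE_14C = 5730.0  # years

def fraction_remaining(age_years: float) -> float:
    return 0.5 ** (age_years / HALF_LIFE_14C)

# Fossil fuels are millions of years old- many, many half-lives.
print(f"After 50,000 years:    {fraction_remaining(5e4):.4f}")
print(f"After 1 million years: {fraction_remaining(1e6):.3e}")
```

Even at 50,000 years, less than 1% of the original 14C remains; at fossil-fuel ages the remainder is vanishingly small, which is why adding fossil CO2 measurably dilutes the atmosphere’s 14C:12C ratio.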
Again, that the new CO2 in the atmosphere is primarily a result of us burning fossil fuels and dumping the resulting CO2 into the atmosphere, is not in credible scientific dispute. It’s a fact, based on multiple replicate measurements which agree with one another. It’s a fact that we’ve known about for a long time too.
Fact #3: Extra CO2 (and other GHGs) Causes Climatic Forcing
This one again is not a supposition- it is a fact, arising from both an understanding of basic physics known since the late 1800s, and direct measurements.
Most of the atmosphere is NOT CO2. The atmosphere consists mostly of nitrogen, oxygen, argon and water vapour. CO2 is a minor constituent at only 415 ppm or 0.04%. But everyone should realize that a small concentration of something can have an out-sized effect. If you don’t believe this, breathe some air containing a very small amount – say 400 ppm or 0.04%- of carbon MONoxide…but please don’t do that! You should already know the outcome- and hopefully have a CO detector in your home too, to make sure you don’t do so by mistake.
(Note: CO2 is also toxic- CO2 intoxication symptoms start at concentrations above about 5,000 ppm or 0.5%. That’s a far higher concentration than the atmosphere will ever get to, but it certainly can get that high in spaces with poor ventilation, particularly underground.)
That is NOT to say, however, that CO2 is an unimportant constituent! It, along with water and solar energy, is one of the three fundamental building blocks of life on earth. It isn’t “plant food”, in exactly the same way that cement blocks aren’t food to construction workers. It contains zero useful chemical potential energy- a fact which makes it the desired product of processes intended to liberate chemical energy such as combustion or respiration. It’s merely a material that plants can collect and use solar photochemical energy to convert, along with water, into biomass. Having more building blocks at hand when CO2 concentrations are higher simply means the plant has to expend less energy to build a given amount of biomass.
CO2 is also a strong absorber in the infrared. The earth receives solar energy at a range of wavelengths ranging from ultra long radio waves to high energy X rays- with the peak of the energy emitted in the visible wavelength band between 400 and 700 nm.
The magnetosphere and the upper atmosphere fortunately filter out a lot of the nastier, most damaging short wavelength radiation from the big fusion reactor in the sky before it gets to us on the surface.
Some is reflected, but the earth absorbs the remaining solar energy, with plants capturing only a small amount of it to store in the form of chemical potential energy. The rest, per the 1st law of thermodynamics, has to go somewhere. And that’s what happens- it ultimately goes back into outer space from whence it came. Because the earth is quite cold relative to the ball of fusion plasma in the sky, it re-radiates the energy absorbed from the sun not as short-wavelength UV or visible light, but as infrared (IR) light. The average solar energy input, minus whatever is stored in the form of biomass and hidden away from being eaten and respired again to CO2 and water, raises the average temperature of the earth until the amount of heat leaving through the atmosphere equals the amount falling on the earth.
Of course if this were all that was happening, the earth would be a snowball and humans wouldn’t exist…
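The “snowball” point can be quantified with the standard textbook radiative balance (a first-order sketch with round published numbers, not a climate model): absorbed sunlight S(1-albedo)·πR² must equal the blackbody emission σT⁴·4πR².

```python
# Earth's equilibrium temperature with NO greenhouse effect, from a
# simple energy balance: S*(1-albedo)/4 = sigma * T^4.
SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W/m^2/K^4
S = 1361.0         # solar constant, W/m^2
ALBEDO = 0.30      # approximate fraction of sunlight reflected

T_no_greenhouse = (S * (1 - ALBEDO) / (4 * SIGMA)) ** 0.25
print(f"Equilibrium T without greenhouse: {T_no_greenhouse:.0f} K "
      f"({T_no_greenhouse - 273.15:.0f} C)")
```

The answer comes out around 255 K, roughly -18 °C: a frozen planet. The ~33 °C difference between that and the observed mean surface temperature is the natural greenhouse effect.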
Instead, the atmosphere absorbs some of the infrared energy radiated by the earth, and re-emits it back to the earth, giving us a 2nd exposure to some IR light that would otherwise be transmitted again to the blackness and cold of space. This partial absorption and re-emission of the radiative emissions of heat from the earth, back to the earth, by gases in the atmosphere, occurs mostly as a result not of CO2, but of water vapour. Water is also a strong IR absorber- and is the earth’s predominant “greenhouse gas” (GHG).
…but before you start to worry about water vapour emissions from your shower or teakettle warming the planet, remember that water vapour is in rapid physical equilibrium with liquid water in the oceans, soils and biosphere. The mean water vapour content of the atmosphere therefore depends on the mean global temperature, and is not meaningfully affected by human emissions of water vapour to the atmosphere.
The absorption spectrum of water vapour and the other “permanent” gases in the atmosphere has a “notch” in it- a range of wavelengths through which IR light can escape unimpeded.
This is in part the reason that frost and condensation (such as dew) can happen when the air is at a bulk temperature higher than the dew point or the frost point. On a clear cloudless night, surfaces such as your car windshield have a narrow wavelength window through which they “see” the blackness of space at about 3 kelvin above absolute zero- and hence can lose heat through this window, surprisingly dropping to a temperature lower than the ambient temperature. Special surfaces which enhance this re-emission are an interesting area of study for reducing energy consumption from cooling systems:
This notch in the IR absorption spectra of the normal constituents of the atmosphere is called the IR re-radiative “window”. And it turns out that CO2, methane, N2O and a number of other gases, absorb IR strongly in this range of wavelengths along with others. They always have done. And we’ve known this for a long time- ever since we were able to measure the IR absorption spectra of these molecules- since around the late 1800s. These GHGs narrow the IR re-radiative wavelength window into outer space, making the earth “dimmer” as an IR emitter and requiring the earth’s mean temperature to rise until it can shove enough IR through the remaining portion of the window.
Of course this means that if we add EXTRA CO2, methane, N2O etc. to the atmosphere, this will narrow the IR re-radiative window even further. And that, obviously, forces the climate – it requires earth to warm to satisfy the new balance between the in-flow and out-flow of energy. Input minus output equals accumulation, and if we restrict the out-flow of IR from the earth to space, the earth MUST warm to satisfy the new balance point.
It is also true that a doubling of CO2 does not result in a doubling of the resulting climatic forcing- it’s much more complex than that. Extra CO2 does have somewhat diminishing returns as a greenhouse gas. Contrary to the claims of many denialists, however, the effect of extra CO2 is not “saturated”- as one easily found peer-reviewed reference states:
“We conclude that as the concentration of CO2 in the Earth’s atmosphere continues to rise there will be no saturation in its absorption of radiation and thus there can be no complacency with regards to its potential to further warm the climate.”
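The “diminishing but not saturated” behaviour is often summarized with the widely cited simplified forcing expression ΔF = 5.35·ln(C/C₀) W/m² (the Myhre et al. 1998 form- a first-order approximation, used here for illustration only):

```python
# Logarithmic radiative forcing of CO2 relative to a pre-industrial
# baseline of ~280 ppm, using the simplified expression
# dF = 5.35 * ln(C / C0) W/m^2.
import math

def forcing_w_m2(c_ppm: float, c0_ppm: float = 280.0) -> float:
    return 5.35 * math.log(c_ppm / c0_ppm)

print(f"280 -> 415 ppm:            {forcing_w_m2(415):.2f} W/m^2")
print(f"280 -> 560 ppm (doubling): {forcing_w_m2(560):.2f} W/m^2")
```

Each successive increment of CO2 adds less forcing than the last (logarithmic, not linear)- but the forcing keeps growing without bound as concentration rises, which is exactly the "no saturation" point of the quoted reference.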
Warming also increases the amount of water vapour in the atmosphere- remember, water is still the primary GHG. That is a powerful positive feedback.
Note that so far we’ve discussed only CO2- but in reality, humans have also increased the amount of methane and of N2O and other GHGs too through our actions and inaction.
Three Facts- Not In Scientific Dispute
These three facts: CO2 went up, we caused it by burning fossils, and extra CO2 narrows the IR re-radiative wavelength window, forcing the climate – these are not in credible scientific dispute. Nobody is arguing about the fundamental validity of any one of the three of them in the scientific literature. It’s not that they’re scared to, or worried about losing their funding if they did- it’s simply not worth arguing over because it is so well demonstrated as fact, consistent with all the data we know of. And if you hear anyone denying any of these three facts, the conclusion is clear: they’re either ignorant, or they’re lying – to you, to themselves, or both. It’s that simple.
This is not a matter of orthodoxy, it’s a matter of simple measured fact.
Three Facts = RISK of AGW
The three facts I’ve pointed out lead inescapably to the conclusion that there is a very real, fact-based RISK of global warming caused by human activities (i.e. AGW). There can be no other conclusion.
It is of course perfectly valid to then say, “so what?” Let’s say that we accept that CO2 went up, we caused it, and the extra CO2 forces the climate. The last bit of wiggle-room for either denial or skepticism remaining is to claim that the resulting forcing will be minor, and hence not a problem for humans or the rest of the biosphere.
How does one translate the risk of AGW, which is certain, to a particular amount of warming of the earth? We do know the temperature must go up as a result of the facts we know, but how do we know by how much it is likely to increase? And how quickly that will happen?
The answer is that the earth is a complex system, with many inter-related factors, all of which can affect the climate in one direction or another, to a greater or a lesser degree. Some of these effects occur instantaneously, like the changes in the IR absorption. Some happen on a timescale of days, others years, others centuries, and others, millennia or even longer. And many of them are very much predictable.
Science therefore must have recourse to models, to estimate the effect of the additional radiative forcing on the earth’s mean temperature. Those models have to be very complex to tell us anything meaningful and reliable. And unfortunately, there are no replicate Earths available for us to do tests on, with an accelerated timescale so we can see results soon enough to understand our model’s validity.
And that’s where some skeptics hang their hats- they make the claim that climate models are fundamentally untestable, and hence not reliable enough to use to draw any conclusions from.
That statement however, fails on two basic points:
1) There IS a system on which we can test the validity of our models. That system is our earth in its recent PAST.
2) Merely being unable to precisely estimate a risk in no way absolves us from the responsibility to act to mitigate that imprecisely-quantified but otherwise certain risk. Any engineer who has ever attended a HAZOP review understands this fact intimately (or shouldn’t be involved in HAZOP reviews!)
Some throw up their hands and say that it’s impossible to make decisions such as bringing to an end the burning of fossil materials such as coal, petroleum and natural gas, with the discharge of the effluent to the atmosphere, on the basis of something as unreliable as a model of the earth’s climate. But these people do not understand how we engineers make decisions related to risk as we practice our profession. We have a duty to hold the public safety as paramount- and that duty does not allow us to simply throw up our hands and say, “prove it!” before we will take a mitigating action.
We engineers see risk as the probability of a bad outcome, multiplied by the severity of the bad outcome. Even merely bad things, if they are likely to happen very frequently, are a high risk. And truly terrible things- like raising the mean temperature of the earth by, say, four degrees Celsius in a period of a century or so? Those things don’t need to have a very high probability of happening before we MUST take action, because the risk is too high.
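That risk arithmetic can be made concrete with a toy sketch (my own illustrative numbers, not a real risk assessment):

```python
# Engineering risk = probability x severity. A rare but catastrophic
# outcome can outrank a frequent nuisance on the same scale.
def risk(probability: float, severity: float) -> float:
    return probability * severity

nuisance = risk(probability=0.9, severity=1.0)        # frequent, minor harm
catastrophe = risk(probability=0.05, severity=100.0)  # rare, severe harm

print(f"nuisance risk:    {nuisance}")
print(f"catastrophe risk: {catastrophe}")  # dominates despite low probability
```

Even at only a 5% probability, the severe outcome carries several times the risk of the near-certain nuisance- which is why engineers act on low-probability, high-severity hazards rather than waiting for proof.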
Others argue that global temperature measurements are difficult, or have been “manipulated”, and hence the data against which the models’ predictions are measured are suspect. Global mean temperature is a very hard thing to measure, and a very noisy signal. It’s also slow to respond- something as large as the accessible portion of the Earth has an enormous heat capacity. But those people are either conspiracy theorists, who think scientists are deliberately lying about climate change in order to further their careers (rather than being the one scientist who blows open such a conspiracy and gets a Nobel prize…), or they’re ignorant of just how many different measures of, and proxies for, global mean temperature have been used to check the models.
While my fundamental opinion is that we should defer to the knowledge and experience of the people who actually study the climate as their principal area of study in their area of expertise, I have seen a few compelling things that demonstrate to me that the models are in fact on the right track.
Here’s the first one: a clever animated infographic which shows that alternative explanations of the rise in global mean temperature that HAS in fact been observed, do not explain the increase. This uses, as I mentioned, the recent past of the earth as a means to test these various hypotheses of why the temperatures we’ve observed have in fact increased by the amount they have done:
No, it’s not the sun- the sun’s output DOES affect the climate, as do orbital and earth-tilt cycles. But those things don’t explain the increases we’ve seen this time around either.
It’s not aerosol emissions from pollution or volcanoes, ground-level ozone, deforestation/land use changes etc. It’s clearly a result of the increased concentration of GHGs in the atmosphere. The risk is real- it’s having results on the earth’s climate.
The 2nd is this graphic, for which I can thank @Mark Tingay for posting repeatedly in response to AGW denialists’ comments here on LinkedIn. It shows the consistency of the models with the measured temperature data, expressed as the “anomaly”, i.e. the measured excess over the 1980 reference temperature:
This is the most up-to-date version, from Gavin Schmidt’s Twitter feed. (He’s the director of NASA’s Goddard Institute for Space Studies.)
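To make the “anomaly” concept concrete, here is a minimal sketch of how one is computed: each year’s global mean temperature minus the mean over a chosen reference baseline. The temperatures below are made-up illustrative numbers, not real data, and the single-year 1980 baseline is just for illustration.

```python
def anomalies(temps, baseline_years):
    """Compute temperature anomalies: each year's value minus the mean
    over the baseline (reference) period.

    temps: dict of year -> global mean temperature (deg C)
    baseline_years: list of years defining the reference period
    """
    baseline = sum(temps[y] for y in baseline_years) / len(baseline_years)
    return {year: round(t - baseline, 2) for year, t in temps.items()}

# Hypothetical, illustrative global mean temperatures (deg C):
temps = {1980: 14.18, 1990: 14.30, 2000: 14.39, 2010: 14.56, 2020: 14.88}
print(anomalies(temps, [1980]))
```

The point of expressing data this way is that a trend in the anomaly is much easier to detect reliably than a trend in the absolute temperature, because systematic offsets between measurement methods largely cancel out.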
Both the models and the measurements of temperature have error bars on them, as does the risk resulting from the three facts I’ve discussed above. But here’s the key point: the error bars on these measurements and calculations do not extend to giving us hope of there being no effect, i.e. an effective risk of zero. And they certainly do not extend to giving us hope that we can continue to burn fossil fuels in the profligate way we’ve been doing for the past century or so, without the negative consequences of AGW.
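The “error bars don’t extend to zero” argument can be illustrated with a trivial interval check. The numbers below are hypothetical, purely to show the logic: if the uncertainty interval around a measured anomaly doesn’t include zero, then “no effect” is ruled out at that confidence level.

```python
def interval_excludes_zero(estimate, half_width):
    """Return True if the interval [estimate - half_width, estimate + half_width]
    does not contain zero, i.e. a zero effect is outside the error bars."""
    low, high = estimate - half_width, estimate + half_width
    return low > 0 or high < 0

# Hypothetical anomaly of +1.0 deg C with +/- 0.1 deg C uncertainty:
print(interval_excludes_zero(1.0, 0.1))   # zero effect ruled out
# Hypothetical tiny anomaly swamped by its uncertainty:
print(interval_excludes_zero(0.05, 0.1))  # zero effect NOT ruled out
```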
We’re seeing rather clear, obvious evidence of those consequences even now- the temperatures have absolutely risen in a significant way. And it stands to reason, and to common sense, that adding significantly more energy to a system like the earth’s climate is going to have some serious negative outcomes- some of which, such as the melting of Arctic permafrost, would themselves act as gigantic positive feedbacks on AGW, making it even worse. I’m not going to get into listing those impacts here- if you care to hear about the potential nightmare scenarios we could be generating for our progeny and theirs unless we smarten up and curtail our direct burning of fossils as fuels, you can find lots of that stuff elsewhere. Just ask that little Swedish girl that so many seem to be terrified and angered by. She’ll give you an earful- and yes, we all deserve it.
Many people who accept, perhaps grudgingly, the reality of the risk of AGW, are turned off the whole thing as a result of what they see as hyperbole used by those who want to convince us that AGW risk is a very serious issue which needs our prompt and serious attention. They react badly in particular to what they think is “alarmism”- for instance when someone says that we have less than 20 years to avert “global catastrophe”. I sympathize- the shrill rantings of uninformed people make me angry too, and I spend a lot of my time combatting the untruths, half-truths and distortions they are spreading on the Internet.
But it is important to clarify this one point: we have limited time to avoid locking in potentially catastrophic warming. That point is actually consistent with the facts- but what isn’t being said, or perhaps isn’t being emphasized, is that the catastrophic warming itself will not be encountered in the next 20 years. The earth has a lot of heat capacity, so it heats slowly. It should therefore be no surprise that young people are the most concerned about AGW and its effects- because they and their children will be the ones who live long enough to encounter them. We, their parents and grandparents, are not likely to be alive to experience the worst of those effects.
Final Thoughts About AGW and Conservatism
Sadly, some people are motivated to deny AGW and pretend it’s not a problem out of a misguided notion of what it means to be “conservative” or “skeptical”.
Fossil fuels are a precious, finite resource- on the human timescale rather than the geological one. They are used to make ten thousand molecules and materials that are every bit as essential to modern life as energy is. And as someone who has spent a couple of decades helping people try to do exactly this, I can say from first-hand professional experience that replacing some (many) of those molecules and materials with alternatives or substitutes derived from renewable resources is very difficult indeed- far more difficult than it is to make energy by renewable or non-emitting/low-emission means.
Being “conservative” means valuing conservation, not being stodgy and unwilling to change or adapt, and certainly not being willfully blind toward new information when it comes to light. There is plenty of reason to conserve those precious, finite fossil resources for uses of highest value to humankind rather than squandering them as fuels. This would be true even if AGW were a total crock of horse effluent. Future generations will scold us not just for AGW, but also because we squandered their birthright in such a wasteful way and made their lives more difficult as a result.
Finally, being skeptical doesn’t mean rejecting anything you don’t understand. It certainly doesn’t mean relying entirely on your current worldview to decide what information you should accept as true and what you should reject as false. Being skeptical merely means being able to say “I don’t know” until you have sufficient proof to actually know. And there is no room for skepticism, whatsoever, in relation to the three facts which underpin the RISK of AGW. Your choice is to be either in denial of reality, or not. Please choose wisely!
Acknowledgements: thanks to @Mark Tingay and many others active here on LinkedIn for tirelessly chopping mutually inconsistent heads off the nine-headed hydra of climate change denial. Many give up, leaving the discussion floor to the denialists unchallenged and leaving the general public with the view that there is room for doubt where there is none. It is a hard fight, but a worthwhile one.
Thanks also to Brian Dunning and this particular Skeptoid episode, which was the inspiration for this line of reasoning on my part. Skeptoid articles are fun to listen to, and give you the option to read instead if you, like me, read faster and more accurately than you listen:
DISCLAIMER: everything I say here on LinkedIn is my own opinion. And my opinion is not infallible- and is subject to change when presented with new data with good references. If I’ve made any errors here- and I likely have- then by all means message me or comment to my article here and let me know where I’ve gone wrong. Do it respectfully though- if you go ad hominem, I’ll block you- life’s too short for that kind of horseshit.
Finally, my employer, Zeton Inc., takes no opinion in these matters, does not endorse my statements, and loves what it does- designing and building pilot plants for the whole breadth of the chemical process industry. If you take issue with anything I say, please take it up with me and leave Zeton out of it.