Practical Engineering

Why Are Rails Shaped Like That?

October 03, 2023 by Wesley Crump

[Note that this article is a transcript of the video embedded above.]

Maybe more than any other type of infrastructure, railways have a contingent of devoted enthusiasts. “Railfans,” as they call themselves; or should I say “ourselves”? Maybe it's the nostalgia of an earlier era or the simple appeal of seeing enormous machinery up close. But railroads and the trains that ride along them are just plain fascinating. Train drivers are often known as engineers, but operating a locomotive is far from the only engineering involved in railways. In fact, building and maintaining a railroad is a big feat full of complexity. And I’d like to share some of that complexity with you, starting where the rubber meets the road, or in this case, where the steel meets the… other steel? It might sound like a simple topic, but don’t say that to the attendees of the annual Wheel Rail Interaction Conference. This stuff is complicated, so this is the first in a series of videos I’m doing on the engineering behind railways. Why do the rails of railroads have such a weird shape? The answer is pretty ingenious. I’m Grady and this is Practical Engineering. In today’s episode, we’re talking about train wheels and rails.

Why do we build railroads anyway? They might seem self-evident now and even kind of elementary. But modern railroads are the result of hundreds of years of innovation. And like many kinds of innovation, the development of railroads was really just a long series of problems getting solved. For example, how can we move upwards of 100 tons per vehicle without tearing up the road in the process? Well, instead of compacted gravel, asphalt, or concrete, we can build the road out of steel. But steel is expensive, so rather than a full ribbon of it, we can save cost by using two narrow steel rails directly below the wheels. But wooden or rubber tires have a lot of rolling resistance because they deform under load, and that resistance adds up with each individual train car. So, we use steel for the wheels too. I built this model to show exactly how this works. My wheels are plastic and my rails are aluminum, but I think you’ll still get the point. Steel wheels on steel rails are just so much more efficient than…[wheel falls off track]

Well, there is the problem of turning, too. Just because you put a rail below a wheel doesn’t mean it will follow the same path. You have to have some way for the rail to correct the direction of the wheel and keep it on track, literally. And, if you look at railway wheels, the answer is obvious: flanges. The wheels on railway vehicles all have them: a lip that projects below the top of the rail to guide the wheel as it rolls along, keeping its position side to side. You could put flanges on the outside of the wheels like this, but if a horizontal force like a hard turn caused one of the wheels to lift, the flange wouldn’t help keep the wheel on track. We put flanges on the insides of wheels so they can keep a train from derailing even if one wheel lifts off the track. Let’s put some flanges on my wheels and try that demo again. [wheels bind up on track].

You can see we haven’t fully solved the problem. Unlike a wheel that has a tiny contact point with the rail, a flange is a big surface that creates a lot of friction around every curve. If you’ve heard that characteristic squeal of a train going around a corner, that’s the sound of flanges rubbing and grinding along the side of a rail. Rails on tight curves are often made of higher-grade hardened steel compared to straight portions of the track, and sometimes they’re even greased up to minimize friction between flanges and the edges of rails. But, there’s a bigger problem at play in this demonstration than simple friction.

Instead of independent wheels, most railway cars use a solid axle rigidly attached to both wheels, called a wheelset. They need that design to withstand the incredible loads each axle carries, but it poses a problem around bends. A solid axle means both wheels turn at the same rate, but the outer rail in any given curve is longer than the inner rail. Two wheels of the same diameter spinning at the same rate will, kind of obviously, have to roll the same distance. Since there’s a mismatch between the distances the wheels need to travel, solid-axled wheelsets with cylindrical wheels would always experience some degree of slipping around a turn. That would not only create a bunch of additional friction, but also keep the wheels from following the curved path, and a flange can only do so much.
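
Just to put a rough number on that mismatch, here's a quick back-of-the-envelope sketch in Python. The 90-degree bend is only an example, and standard gauge is an assumption on my part; none of these numbers come from the video itself.

```python
import math

def path_length_difference(gauge_m, turn_angle_deg):
    """Extra distance the outer rail covers versus the inner rail through a curve.

    The difference depends only on the track gauge and the total angle of the turn,
    not on the curve radius: (R + g/2)*theta - (R - g/2)*theta = g*theta.
    """
    return gauge_m * math.radians(turn_angle_deg)

# Example: standard gauge (1.435 m) through a 90-degree bend.
# (The wheel contact points sit a touch wider than the gauge, but this is close enough.)
extra = path_length_difference(1.435, 90)
print(f"The outer wheel has to cover about {extra:.2f} m more than the inner wheel")
```

That couple of meters has to come from somewhere, and with cylindrical wheels on a rigid axle, it can only come from slipping.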

The trick to railway wheels is something that’s not so obvious at first glance. The wheels are actually conical. The profile of the wheel is wider on the inside next to the flange, and gently narrows toward the outside of the wheel. A wheelset with conical wheels will naturally tend to center itself between the two rails. On a straight section of track, a wheel that rides up higher on one rail will naturally fall back down, keeping the wheelset roughly centered on the track. In a sense, conical wheels want to stay on the tracks. There’s always a little bit of wobble (exaggerated here), so trains actually move down tracks in a sinusoidal side-to-side pattern that you can sometimes feel if you’re paying attention. Incidentally, that helps the wheels wear evenly. But where it really counts is on a curve.

The turning forces on a train cause it to tend toward the outside track. This shifts the wheels over as well. The outer wheel will ride on the thicker part of its tread nearest to the flange, while the inner wheel will ride toward its edge, which has a smaller circumference. This way, the effective diameter of each wheel changes in a curve and solves the slip problem that cylindrical wheels would face. Take a look at the way these conical wheels that I 3D printed behave as they make this corner. You can see the outside wheel rolling on the wider part, effectively increasing its diameter and thus distance traveled per rotation. Conversely, the inside wheel rides on the narrower part of the cone, and so it has a smaller diameter and travels a shorter distance per rotation.

It really is kind of ingenious. Most vehicles have a differential gearbox to deal with this challenge of navigating curves; train cars just use some clever geometry. But that’s not the end of the story. You might even be thinking, “Richard Feynman already taught me this in the 80s… It’s nothing new.” But there’s more engineering involved in how train wheels and rails interact, including the interesting shape of modern rails. Think about that taper angle first. One standard in the US uses a 1:20 ratio. For the main part of the wheel, that means the outside diameter is roughly a quarter inch or 6 millimeters less than the inside diameter, and that difference has a big effect on the allowable radius of curves in a railroad. A steeper cone can navigate sharper curves, since there’s a bigger difference in the circumference from the inside to outside. You can see my wheelset can’t navigate this s-curve, despite the exaggerated conicity.
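
Here's a rough sketch of how far a 1:20 taper actually gets you. To be clear, none of these specific numbers are from the video; the wheel radius, contact spacing, and flange clearance are just typical illustrative values, and the formula is the simple pure-rolling geometry with everything else at the contact ignored.

```python
def lateral_shift_for_pure_rolling(curve_radius_m, wheel_radius_m=0.46,
                                   half_contact_spacing_m=0.75, conicity=1 / 20):
    """Sideways shift of the wheelset needed so the taper alone makes up the
    difference in path length. Setting
    (r0 + conicity*y) / (r0 - conicity*y) = (R + e) / (R - e)
    and solving gives y = r0 * e / (conicity * R)."""
    return wheel_radius_m * half_contact_spacing_m / (conicity * curve_radius_m)

flange_clearance_m = 0.008  # roughly how far a wheelset can shift before the flange touches
for radius_m in (2000, 1000, 600):
    shift = lateral_shift_for_pure_rolling(radius_m)
    verdict = "fits within" if shift <= flange_clearance_m else "exceeds"
    print(f"{radius_m} m curve: needs {shift * 1000:.1f} mm of shift, which {verdict} the clearance")
```

In other words, with a modest taper and only millimeters of room to shift, pure rolling runs out on tight curves, which is where the flanges, the squealing, and the grease come back into the picture.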

This challenge is partly solved with trucks, called bogies in the UK. You can kind of think of trucks as big roller skates under each end of a train car. The trucks can rotate relative to the rest of the car, and they usually have some pretty serious springs and suspension systems to keep a smooth ride rolling. Most trucks keep the wheelsets parallel, but some even allow them to steer radially through each curve.

However, even with trucks or bogies, wheels can overshoot their optimal orientation on the tracks. When the simple sinusoidal motion created by the tapered wheels is amplified by the speed of the car, the oscillation can violently slam the trucks side-to-side on the rails. This is called hunting behavior. The violent motion can even cause a train to derail. It’s worst with empty cars, and usually only happens at higher speeds, so a lot of engineering goes into developing wheel profiles and truck designs that raise the hunting onset speed so that it doesn’t limit how fast a train can go. That’s a lot of innovation on the wheel side, but what about the rails?
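
For a sense of scale, the wavelength of that side-to-side sway is often estimated with Klingel's classic kinematic formula. Here's a minimal sketch using assumed, typical values; the wheel radius, contact spacing, and conicity are illustrative, not numbers from the video.

```python
import math

def klingel_wavelength_m(wheel_radius_m, half_contact_spacing_m, conicity):
    """Kinematic wavelength of the wheelset's side-to-side motion (Klingel's formula)."""
    return 2 * math.pi * math.sqrt(wheel_radius_m * half_contact_spacing_m / conicity)

wavelength = klingel_wavelength_m(0.46, 0.75, 1 / 20)   # illustrative values only
print(f"Wavelength of the sway: about {wavelength:.0f} m")

# The wavelength is fixed by geometry, so a faster car sways back and forth more times per second.
for speed_kmh in (40, 80, 120):
    cycles_per_second = (speed_kmh / 3.6) / wavelength
    print(f"{speed_kmh} km/h -> about {cycles_per_second:.1f} sways per second")
```

The geometry fixes the wavelength, so the faster the car goes, the faster it sways, which is one way to see why hunting is a high-speed problem.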

Just like all parts of a railroad, the rails themselves have evolved over time. It turns out there are a lot of shapes they can take and still serve the same basic function, but modern railway rails are shaped that way for a reason. Weight is equivalent to cost for big steel structures, so there’s nothing on these rails that isn’t absolutely necessary. In a sense, rails are I-beams, a shape that is well-known for its strength and something we see in plenty of other heavy, load-bearing steel structures. But there’s more to it than that. The bottom part of the rail, called the foot, distributes enormous loads, converting the extreme contact pressure of a steel wheel into something that can be withstood by a wooden or concrete tie. The web elevates the train above the ground, giving clearance for the flanges of the wheels and keeping everything clear of small debris that might end up on the tracks.

The head of the rail is where the action happens. This thick, rounded section of steel takes an awful lot of abuse over its life, and thus experiences the bulk of the wear. An old rail section, especially on the high side of a curve, looks remarkably different than a newly forged rail. Here’s why: Theoretically, the speed of a spinning wheel exactly matches the speed of the rail at a mathematically precise point. But trains don’t care about math. For one, even steel wheels on steel rails deform a little bit as they roll. Rather than a single point, there is a small contact patch between the two. That tiny area, roughly the size of a small coin, carries all the weight of the train into the rail. But, because the contact patch is spread across the tapered wheel, the wheel is turning at many different speeds on the same piece of rail. Only the center of the contact patch actually moves at the exact speed of the train. This results in a small amount of grinding as the train moves along, slowly wearing down both the wheel and the rail. Eventually they start to conform to each other, and that’s mostly a bad thing.
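
To get a feel for how much micro-slip that spread of speeds implies, here's a small sketch. The patch width, wheel radius, and train speed are assumed values purely for illustration.

```python
def edge_slip_speed(train_speed_ms, wheel_radius_m=0.46, patch_width_m=0.015, conicity=1 / 20):
    """Rough sliding speed at the edges of the contact patch on a tapered wheel.

    Only one circle of the wheel rolls at exactly the rail speed; a point half a
    patch-width away sits at a slightly different radius (conicity * offset), so
    its surface speed differs from the rail's by that same fraction.
    """
    delta_radius = conicity * (patch_width_m / 2)
    return train_speed_ms * delta_radius / wheel_radius_m

speed_ms = 30.0  # about 110 km/h, just an example
print(f"Patch edges slide at roughly {edge_slip_speed(speed_ms) * 100:.1f} cm/s")
```

A couple of centimeters per second of sliding doesn't sound like much, but it happens under enormous contact pressure, every rotation, for the life of the wheel and the rail.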

Wheels can wear down to get a vertical face that wants to climb up the rail or a hollow profile with a quote-unquote “second flange” that takes the wrong direction at a switch. Most rail wheels have some amount of hollow to them, which changes how conical they actually are. Some wheels are even designed to be taken off and machined back into spec to extend their life. The best way to reduce this wear is to use hardened materials and reduce the size of the contact patch by curving the top of the rail so that the wheel only touches a tiny part of it as it rolls by. After that, it’s just a decision about how much wear you want before needing to replace the rail. The more metal you include in the rail head, the more it will cost, but the longer it will last. In fact, not all rails are equal. The lightest rails are used on straight sections and small commuter service lines. The largest rails are used on curves and heavy-haul freight tracks. Once they get worn down on the main line, they often get reinstalled for a second life in a yard or a siding where they can still bear train cars and locomotives at slow speeds.

So, rails are shaped that funny way for a reason: they’re bulbous both to reduce the size of the contact patch and to provide enough steel to wear away before needing to be replaced. And the shape of rails and wheels is still a topic of research and innovation. Just in the past few years, the standard profile of North American freight train wheels was updated to the new AAR-2A standard. In testing, that tiny change in the shape of the wheel showed 40% less wear than the previous spec. That means trains will start seeing better steering, lower friction, lower fuel consumption, and longer lasting infrastructure.

In many ways, railroads might seem like old technology, a solved problem that doesn’t need more engineering. But it’s just not true. Modern railroad companies use sophisticated software, like the Train Energy and Dynamics Simulator, to keep track of all the complexities involved in how wheels and rails interact. Simulators can let you adjust factors like train makeup, different track conditions, operating conditions, suspensions, and more to characterize how trains will handle and how much energy they’ll use. That’s the topic of the next video in this series, so stay tuned if you want to learn more.

In the 19th century, railway engineering was all about how to build railroads, finding routes through difficult terrain and efficient forms of construction. Modern rail engineering is all about getting the most out of the system. It might not look like much when you see a train passing by, but a huge amount of research, testing, and engineering went into the shape of those rails and wheels and we’re still improving them today.

HEAVY CONSTRUCTION of a Sewage Pump Station - Ep 3

September 26, 2023 by Wesley Crump

This is the third episode of a five-part pilot series to gauge your interest in "How It's Made"-esque heavy construction videos. Drop a comment or send me an email to let me know what you think! Watch on YouTube above or ad-free on Nebula here.

Every Type of Railcar Explained in 15 Minutes

September 19, 2023 by Wesley Crump

[Note that this article is a transcript of the video embedded above.]

A train is a simple thing at first glance: a locomotive (or several) pulls a string of cars along a railroad. But not all those railcars are equal, and there are some fascinating details if you take a minute to notice their differences. I’m about to start a deep dive series on railway engineering, but I thought, before I do that, we should cover some of the basics first. How many of these cars have you spotted before? I’m Grady, and this is Practical Engineering. Let’s get started.

All trains have at least one locomotive that provides the power. They can pull from the front (called the head) of the train, push from the tail, or act as so-called distributed power somewhere in between. There’s a ton of types of locomotives, but they deserve their own video, so today I’ll focus mainly on the unpowered cars they push or pull. We’ll start with passenger cars, move on to freight, and then talk about a few of the more unusual cars you might be lucky enough to spot on the rails.

Unless you work in the railroad industry, passenger trains are the only ones you’ll ever get a chance to interact with. The standard passenger car or coach is what you’ve probably seen the most of: aisle in the center with rows of seats on either side. Some coach cars can be disconnected and rearranged, but most modern passenger cars come in “train sets” that are rarely split up in normal operation.

Some passenger cars are bilevel, also called double-decker. This can double the capacity of a car, but it’s kind of rare. That’s not only because of height and weight restrictions on railroads, but also because the added time it takes to load and unload the cars can cause congestion at busy stations.

Long-haul passenger trains may include a baggage car for checked luggage like the cargo hold of an airliner. In most cases, they’re designed to look like the rest of the passenger cars, although often with fewer windows since bags rarely enjoy the view. Combine cars have a section for passengers and one for luggage or freight.

Although tricky to identify from the outside, a common sight on passenger trains is a diner car, essentially a rolling restaurant. These cars gave rise to the quintessential American restaurant of the same name, many of which are converted railcars themselves. Some passenger routes even include a lounge car, a bar on rails, that sometimes even has live music.

If you’re sleepy after dinner, you might find yourself in a sleeper car. Open section cars have the beds in bunks with only a curtain for privacy. Most modern sleeping cars have private rooms and bathrooms akin to rolling hotels.

These days, especially in the US, passenger rail is used by people who find the journey itself to be the destination. Some passenger trains include dome cars for better sightseeing along the trip. A bulbous glass dome provides a panoramic view from the side of the car. Similarly, observation cars are sometimes included at the end of a train to give passengers a view out the back.

Of course, we can’t forget crew cars. All trains have a team of people who work aboard for operation, maintenance, and other tasks, and they sometimes need their own quarters for breaks or sleep. Especially in areas like Australia where there are huge stretches of rail without stops at cities, a whole second crew might wait in the crew car, ready to swap when the working time limits of the first crew are reached.

Passenger trains are cool, of course, but I’m more of a freight train railfan myself. There’s just something awesome about seeing a single car weighing sometimes more than 100 tons move almost effortlessly down the steel rails. And with the huge variety of types of freight that move overland come a huge variety of railcars.

Boxcars are a common sight with their huge sliding doors. They can be loaded by hand or forklift and accommodate a wide range of sizes and types of cargo that require protection from the elements. And they have a few variations too. A refrigerated boxcar is exactly what it sounds like: a giant insulated fridge or freezer on rails. They usually feature a diesel-powered refrigeration system that’s easy to spot from the outside.

If the goods being transported in a boxcar are relatively light, you end up completely filling the car before coming close to its weight capacity, sometimes called “cubing out” the car. To maximize the use of a boxcar for lightweight cargo, there are taller versions called High Cubes. Not all railroads can fit such a tall car because of tunnels or bridges, so you might see the excess height portion of the car marked in white to make sure it doesn’t inadvertently end up on a route without the necessary clearance.

If you want a train car full of cars, then you’re looking for an autorack, designed to carry consumer cars and trucks. Many have three levels and carry dozens of vehicles at once. Freight rail moves automobiles more cheaply and with better protection than driving each one individually from factories to distribution centers. A few passenger trains pull autoracks as well, like the Auto Train between Washington DC and Orlando. You can take your car on your rail trip and have it at your destination.

When it comes to freight cars, it doesn’t get much more straightforward than a flat car. A simple name for a simple function: just a rolling platform that can be used for all sorts of cargo, especially big stuff that needs to be loaded with a hoist or crane and cargo that can handle a little rain or snow. You might see flat cars used to transport heavy equipment and machinery, pipes or steel beams, or even see multiple flat cars outfitted to transport enormous wind turbine blades. Some flatcars feature bulkheads at the front and rear. These help keep loads like steel plates, pipes, and wood products from shifting forwards or backwards when the train accelerates or brakes.

Another flat car variant is the centerbeam car, used to haul lumber, plywood, wallboard and fencing. The central beam helps stiffen the car, making it possible to stack products higher. It also provides a place to secure the loads from either side of the car. Some centerbeam railcars hold enough lumber to frame out half a dozen houses!

Flatcars are also used for intermodal shipping, or using more than one mode of transportation like trucks, trains, and ships. Trailer-on-flatcar, or TOFC, isn’t exactly a distinct type of railcar, but it is a distinct use of one. A semi-trailer is lifted or driven onto a flatcar at one terminal, and it’s ready to connect back to a truck once it reaches the next intermodal facility to be driven to its final destination. This is sometimes called piggy-backing and it can be a cheaper alternative than trucking the trailer for its entire route.

Most intermodal freight these days comes in containers, standardized steel boxes that fit on trucks, trains, and ships. Container-on-flatcar, or COFC, again isn’t a different kind of car but simply a specialized use. The cast corners of steel containers have holes that make them easy to secure with latches or twist lock devices so they can be quickly loaded and unloaded.

One of the great advantages of containerization is that modern intermodal containers can be stacked. An interbox connector slots between the corner castings and holds each box together. But, you don’t see double-stacked containers on flatcars very often, because of height restrictions and issues with center of gravity. Instead, well cars recess the bottom of a container between the wheels, lowering the top of a double-stack and making it safer at speed. Not every line has the clearance, but well cars have made it possible to double-stack intermodal freight on a lot more routes than before.

Coils of sheet metal are used in countless manufacturing processes, so you can see them on freight railroads fairly frequently in coil cars. Steel coils are challenging to load and unload, and challenging to secure as well, so that’s why they get their own specialized cars. Many are covered with a hood to protect the steel or other metal cargo from the elements.

Gondola (GON-dola) cars, or gon-DO-la, depending on where you live, are used for bulk materials like scrap metal, sand, ore, and coal. They’re basically enormous wagons. Gondolas have to be loaded and unloaded from the top with a crane or bucket. Some can be turned upside down and unloaded using a rotary dumper. Look for the different color of paint on the side with the rotary coupler.

Hopper cars are like gondolas in that they’re loaded from the top, but they have sloped sides and bottoms that funnel material so they can be unloaded through hatches at the bottom. Hoppers can have open tops when carrying loads that aren’t sensitive to the weather, but covered hoppers are used for cargo that needs protection from the elements like sugar and grains.

Another option for unloading bulk goods is to tip them sideways. This is a side dump car, which is not very common to see. They’re mostly used to maintain the railroad itself, rather than move and deliver bulk goods to customers.

This next car is very rare, but it’s so cool I just had to include it. Behold the behemoth that is a Schnabel car. There are actually two cars with far more axles than normal, each sporting a heavy lift arm for truly enormous cargo, such as power transformers used in substations. One of the largest of these is used in the US to transport nuclear reactor containment vessels on 36 axles.

Tank cars are used to carry liquids and gases on rails. Like all railcars, there are plenty of variations, but in general, they’re split up into two types. Non-pressurized tank cars handle all kinds of liquids from milk to oil. They may have specialized coatings that match their specific cargo needs, can be insulated or even refrigerated, and they usually have a bottom outlet so that they can unload by gravity.

Pressurized cars are designed to transport liquids and gases under pressure. These tanks have thicker walls and higher standards for containment of cargo. Pressurized cars always have protective housings covering the fittings on top of the tank. But, some non-pressurized cars have them too, so you'll have to look for other subtle clues (or memorize the DOT classification numbers) to know which type each one is for sure. Tank cars designed for hazardous cargo are heavily regulated and have special features like reinforced ends called head shields, specialized couplings that reduce the impacts of a derailment, and pressure relief valves to minimize the chances of an explosion.

I can’t be totally comprehensive in this short video. If you can dream it, there’s probably a freight railcar for it somewhere, but that should be all of what you’re likely to see in the wild, plus a few that you’d be really lucky to spot. But passenger and freight cars aren't the only things you'll see on the tracks. Non-revenue cars are those used by the railroad companies themselves. After all, building and maintaining railroads is a complicated and expensive endeavor, and it takes a lot of interesting equipment to do it well. I’ll rattle some of these off, but every railroad is different in the type of equipment they use to keep things running smoothly.

Ballast is the name for the gravel bedding that railroad ties sit on. It distributes the enormous pressure of trains to the subgrade, provides lateral support to keep tracks from sliding side-to-side, and facilitates drainage to keep the subgrade from getting soggy. Ballast tampers shake and pack the ballast under the tracks, restoring the support if the ballast has settled and sometimes correcting the rail alignment too. Ballast regulators use blades and brushes to distribute the ballast material evenly around the tracks and keep excess ballast from covering the ties. A ballast cleaner picks up all the rock, separates it from any dirt, and replaces it on the tracks to improve its ability to drain water and lock together to support the railroad.

Rail Grinders do just that: grind the rails to restore their shape and remove irregularities that show up as rails wear down. A tie exchanger takes out the old ties and inserts new ones without having to remove the rails. A spiker drives the spikes that hold the rails tightly to the ties. A railroad crane is used for heavy lifting along the rails where it might be difficult to access with an overland crane. Some railways in the north use a rotary snowplow during severe winter weather to keep the tracks clear.

Sometimes you might see a work truck driving around on regular old paved roads with an extra set of flanged metal wheels. This is a road-rail vehicle also called hi-rail (since they can run both on the highway and the railroad). There’s a whole host of hi-rail vehicles out there, really any kind of work truck setup you can imagine on the highway could find itself doing work on the railroad. And this is probably the only rail vehicle you’ll have a chance of seeing without also seeing a railroad itself!

Railroads depend on large scales to measure the weight of equipment and cargo. And of course, if you’ve got a scale, you need a way to calibrate it, which is where the scale test car comes into play. These cars are basically rolling hunks of metal with very precisely known weights, kind of like a huge railroad version of the little weights you might have used in school science classes.

A particularly rare car that you’d be lucky to see is a track geometry car. They carefully measure the gauge, position, curvature, and alignment of the railroad, helping to ensure the safety and smoothness of tracks without interrupting service. Unlike manual measurements of rail geometry, the measurements of track geometry cars account for loading conditions since the car itself is a full-scale railroad car.

And finally, bringing up the rear, a train car we’ve all heard of, but one you won’t really see too much of anymore: the caboose. Historically, cabooses housed crewmembers who had a host of jobs, from helping with switching and shunting cars around, to looking for damaged cars and dangling equipment, monitoring brake line air pressure, and spotting overheating bearings and axles. With the advent of roller bearings and wayside defect detectors, the role of the caboose was diminished and eventually the laws requiring them on trains were relaxed. Today the last car of a freight train is often just a regular cargo car, but with a small device on the back called an End-Of-Train Device. The most sophisticated versions monitor brake line pressure and movement of the back of the train, relaying the information to the engineer at the head. And a flashing red light lets anyone know that that’s the whole train and there aren’t any cars inadvertently left behind on the tracks.

Trains are one of the most fascinating engineered systems in the world, and they’re out there, right in the open for anyone to have a look! Once you start paying attention, it's pretty satisfying to look for all the different types of railcars that show up on the tracks, and in future videos, I’m going to show you a lot more. If you’ve been inspired to keep your eye out, we put together a checklist that you can use to keep track of the cars you’ve seen. It’s linked below in the description, but that’s not all.

If you’ve watched my channel for any length of time, you know that almost every video I make is connected to something you can see in your own surroundings. You might even know I released a book about it: Engineering in Plain Sight: An Illustrated Field Guide to the Constructed Environment. And now, I’m launching a companion game too. This is Infrastructure Road Trip Bingo. Our brains have a stupendous capacity to ignore all the fascinating details that are hidden in plain sight, and road trips are the perfect opportunity to open your mind’s eye.

Infrastructure Road Trip Bingo is just what it sounds like: a spotting game to play with your fellow passengers. Each sheet has 24 engineered structures that you might see on a typical road trip. Some you’re sure to spot. Some you might need to try and influence the driver to take a special detour. Get a line of 5 before anyone else, and you win. All the icons were designed by the illustrator for my book, and there’s a cross reference table inside the cover if you want to learn more about a particular square. 100 tear-off sheets mean you’ll have plenty of chances to play and win, and the squares are randomized so that no game ends the same.

Is this a silly idea? Of course it is. But, what I’ve learned from you over all these years is that you’re enthusiastic about the built environment just like me. Engineering In Plain Sight hit the Publisher’s Weekly best seller list, and it’s still topping out categories on Amazon nearly a year later. So I wanted to give you a chance to put those observation skills to the test. Infrastructure Road Trip Bingo goes on pre-sale today, only on my website, and they’ll start shipping later this year. And if you still don’t have my book, you can get a copy bundled with your game for a huge discount as well. You can get it from any retailer, but if you buy from my website, I signed every single copy in our warehouse. These are awesome gifts, or treat yourself with something fun and cool, and support what we’re doing on Practical Engineering while you’re at it. That link’s in the description. Thank you for watching, and let me know what you think!

HEAVY CONSTRUCTION of a Sewage Pump Station - Ep 2

September 12, 2023 by Wesley Crump

This is the second episode of a five-part pilot series to gauge your interest in "How It's Made"-esque heavy construction videos. Drop a comment or send me an email to let me know what you think! Watch on YouTube above or ad-free on Nebula here.

Do Droughts Make Floods Worse?

September 05, 2023 by Wesley Crump

[Note that this article is a transcript of the video embedded above.]

Do you remember the summer of 2022 when a record drought had gripped not only a large part of the United States, but most of Europe too? Reservoirs were empty, wildfires spread, crop yields dropped, and rivers ran dry. It seemed like practically the whole world was facing heatwaves and water shortages. But there was one video that warned against hoping for rain, at least not for a big storm right at first. Rob Thompson, a meteorologist and professor at the University of Reading, shared a little backyard experiment: cups of water being inverted on top of grass with varying moisture levels in the soil. The results seemed to show that the dry soil absorbed the water much more slowly than the wet grass or normal summer conditions. This video was shared across the internet as a viral reminder that, contrary to what you might think, droughts can increase the impact of flooding. But is that actually true? Does dry soil absorb moisture more slowly than wet soil, and could a storm after a drought cause more runoff and worse damage than if the ground was already wet? No matter what your intuitions say, the answer’s a little more complicated than you might think. And of course, I built some garage demonstrations to show why. I’m Grady, and this is Practical Engineering. Today we’re exploring the relationship between droughts and floods.

Of all of the natural disasters we face, floods are among the worst. There have been more than 30 floods in the US since 1980 that caused over a billion dollars in damages each! And that’s not including hurricanes. In fact, floods are so impactful that I’ve already made a whole series of videos about how dangerous they are and many of the ways that engineers work to reduce the risk of flooding or at least reduce the damage they cause. Many of those flood infrastructure projects are based on a “design storm,” essentially a made up flood used to set the capacity or height of a structure. For example, the storm gutters on your street might be designed to carry the 25-year storm. Many spillways for dams are designed for a flood that is unlikely to ever occur called the Probable Maximum Flood. Of course, we just can’t run full-scale tests on flood infrastructure. Despite architects and contractors saying we always rain on the parade, civil engineers can’t call down a flood of a particular magnitude and duration from the heavens. And even if they could, it would be an ethical gray area. So, engineers who design water infrastructure instead use models to help estimate various magnitudes of flooding and predict how the built environment will respond.

There are all kinds of hydrologic models that can simulate just about every aspect of the water cycle you could imagine, but modeling basic storm hydrology is actually pretty simple. It’s usually broken into three steps. Precipitation is exactly what you would expect: how much rain actually falls from the sky and hits the surface of the earth? Transformation describes what actually happens to those raindrops as they run along the ground and the timing of how they combine and concentrate. But in between precipitation and transformation, there’s a third step. Because not all those rain drops run off and reach a river or stream. Some of them get stuck in puddles and ponds (called abstractions), some evaporate, and some soak into the ground.

I say all this to point out that the engineers and scientists who study flooding have put a great deal of thought and research into the how, where, how much, and why rainfall soaks into the ground. It’s the third leg of the “estimating how bad floods can be” stool (a stool, by the way, I spent a good part of my education and professional experience sitting on). And of course, there’s a litany of factors that affect how much precipitation is lost to infiltration into the earth versus how much runs off into rivers and creeks: temperature, vegetation, season, land use, soil type, and more. But one of the factors is more important than any other: soil moisture. And it shouldn’t be that surprising. How much water is being held between those tiny grains of silt, sand, or clay plays a pretty big role in how much more water can flow in.

Maybe you’re starting to see what I’m getting at here (and I promise the demos are coming but I think it’s important to know the theory first). One of the most beloved mathematical expressions of hydrologists everywhere is Horton’s equation. It looks a little intimidating, but it’s much simpler as a graph. This decaying exponential curve shows the infiltration rate we can expect during a rain event of a given magnitude over time. At first, when the soil is driest, the rate of infiltration is highest. As rainfall continues to soak the soil, less water is absorbed, and the infiltration rate slowly approaches a steady state.
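
For reference, the usual published form of Horton's equation is f(t) = fc + (f0 - fc) * e^(-k*t): the infiltration capacity starts at f0 in dry soil and decays toward a steady-state rate fc. Here's a minimal sketch of that curve; the parameter values are made up purely for illustration.

```python
import math

def horton_infiltration(t_hr, f0, fc, k):
    """Horton's equation: infiltration capacity decays from f0 toward fc over time."""
    return fc + (f0 - fc) * math.exp(-k * t_hr)

# Made-up parameters: 50 mm/hr capacity in dry soil, 5 mm/hr once saturated, decay rate of 2 per hour.
for t in (0, 0.5, 1, 2, 4):
    rate = horton_infiltration(t, f0=50, fc=5, k=2)
    print(f"t = {t:>3} hr -> infiltration capacity about {rate:4.1f} mm/hr")
```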

The inputs to Horton’s equation are fine for a laboratory, but they’re not really easy to estimate in a real world scenario, so most hydrologic models don’t use it. One of the simpler infiltration models actually used in engineering is the Curve Number method, originally developed by the Soil Conservation Service in the 1950s. Here, instead of esoteric laboratory variables, infiltration rates are tied to actual soil types and land uses we can estimate in the field, and this is meant to be dead simple. You too can be a civil engineer by simply picking the right number from a table and feeding it into a model. In fact, let’s try it out. My backyard is an open space, I would say in good condition, with mostly clay which is hydrologic soil group D. So my curve number should be 80.

I won't make you go through the calculations, because we can make the computer do them. This is the Hydrologic Modeling System, a free piece of software available from the US Army Corps of Engineers. I’ll plug in my backyard curve number, plug in a storm with a constant rainfall over a day, and push go. The bars show the total amount of precipitation for each time step. The red portion shows the losses and the blue portion shows the runoff. At first, all the precipitation goes toward losses as the rainfall gets caught in abstractions. But once the puddles fill up, some runoff starts to occur. You can see that, for a constant rate of precipitation, runoff increases over time, and infiltration goes down, just like we saw with Horton’s equation. I know we’re in the weeds just a bit, but I think it’s important to know that we have technically rigorous ways to describe our intuitions of how floods work. The Curve Number method (along with many others) is used across the world by engineers to characterize floods and even to calibrate hydrological models to actual floods. Of course models are never perfect, but at least they’re based on real science. Water fits into the spaces, the interstices, between soil particles. The more water that’s already there, the harder it is for more water to flow in. But you don’t need a graph. You can see it for yourself.
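
If you're curious what the software is doing under the hood, here's a minimal sketch of the basic curve number runoff equation, applied to a whole storm at once rather than time step by time step the way the modeling software does it. The 4-inch storm depth is just an example; the curve number of 80 is the one I picked for my backyard.

```python
def scs_runoff_inches(precip_in, curve_number):
    """SCS (NRCS) curve number method: storm-total runoff from storm-total rainfall."""
    s = 1000 / curve_number - 10      # potential maximum retention (inches)
    ia = 0.2 * s                      # initial abstraction: puddles, interception, etc.
    if precip_in <= ia:
        return 0.0                    # the storm never overcomes the initial losses
    return (precip_in - ia) ** 2 / (precip_in - ia + s)

precip, cn = 4.0, 80                  # a 4-inch storm on my curve-number-80 backyard
runoff = scs_runoff_inches(precip, cn)
print(f"Runoff: {runoff:.2f} in, losses: {precip - runoff:.2f} in")
```

Try a few different curve numbers and you'll see how much weight that single number from the table carries in a flood study.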

I hammered a clear tube into my Curve Number 80 backyard, and we can watch the water flow into that clay soil with grass of quote-unquote “good condition.” This is a crude version of a real scientific test apparatus called an infiltrometer, but it isn’t strictly scientific. The real test involves hammering the tube deeper to prevent lateral spread and maintaining a constant level to remove water pressure as a variable. But, hey, this is just a YouTube demo, and I wanted to push my kid on the swing instead of babysitting the water level in a clear tube for 45 minutes.

I did take the time to graph the water level for the duration of the experiment so you can see the results more clearly. The level drops quickly at first and slows down to roughly a constant rate, just as the theory predicted. Some of that slowdown is because of the decreased water pressure over time, the variable I didn’t control, but it’s mostly because the soil became saturated, making it harder for water to infiltrate.

Just for fun, I ran another experiment in the garage with a tube full of sand. FYI, that’s roughly equivalent to “Natural Desert Landscaping” with an associated curve number of 63. Are you feeling like an expert at this yet? It’s a little harder to tell in the sand because the water flows so quickly, but it does in fact flow more quickly at the beginning before the sand is saturated. Once it saturates, the infiltration is more or less constant, just like we would expect. The reason for the sand demo is this: we’ve left out a key consideration so far which is the initial conditions. How much water is in the soil at the start of the event? If it’s a lot, you would assume there would be less infiltration. If the soil is dry, you would assume infiltration would be greater. Is it true? Let’s try it out!

Again, the sand is maybe a little bit too porous for this demonstration, and my method for adding the water isn’t so precise either. But, just paying attention to how quickly the tube fills up with water with the valve fully opened, the dry sand takes longer. That’s because more of the water is infiltrating into the soil. The wet sand is like starting halfway down the Horton curve. But that wasn’t a super satisfying result, so I put some potting soil into the tube next (Curve Number 86). I ran it once dry, then ran it again after the soil was saturated, and lined the shots up side by side. This time you can clearly see the difference. Water infiltrates into the unsaturated soil much more quickly, but once it does, it infiltrates at about the same rate as the already wet example.

We have a word for this: antecedent conditions. Most of the factors we talked about that affect infiltration rates don’t stay the same over time. They change. Many hydrologic models use average conditions as a starting point, but the real world isn’t very average. Vegetation is seasonal; temperatures fluctuate; watersheds experience fires and droughts (hint hint). How wet a watershed was before the storm is an important factor in determining how much runoff will occur. According to all the theory and practical examples I’ve shown, a wet watershed will absorb less precipitation, so flooding will be worse. And the opposite is true for a dry one. More water will soak in, making flooding less impactful. But, that seems contrary to the video I showed in the introduction, and do you really think I would make a video called “Do Droughts Make Floods Worse” if the answer was just, “no”?

It turns out that certain kinds of soil, when they become very dry, also become hydrophobic. They actually repel water. This is not a super-well-understood phenomenon, but it seems that under very dry conditions, waxes, plant root excretions, and the action of bacteria and fungi create a layer at the surface that reduces a soil’s affinity to water. If you’ve ever forgotten to water a houseplant for a while, you may have experienced this yourself. It’s hard to get the water to soak in at first, and many gardeners will actually fully submerge a potted plant to properly water it.

Because it’s a finicky phenomenon, I had a little trouble creating water repellent soil in the garage, but luckily, hydrophobicity is interesting enough to be a fun kids’ toy. I bought some hydrophobic sand and put a layer of it on top of my regular sand to simulate this effect of soil water repellency. You can see clearly that the repellent layer slows down the infiltration of the water. It still gets through, but it happens a lot more slowly compared to if it weren’t there. So, why doesn’t this effect show up in the theory (or at least the theory of flood modeling)?

There are a few reasons: number 1 is that most hydrophobic soil effects disappear pretty quickly after the soil gets wet. It just doesn’t last that long, as you know if you’ve dealt with it in your potted plants. Number 2 is that it’s a phenomenon that hasn’t been well-characterized in terms of what soils experience repellency and under what conditions. There’s no nice table for an engineer to look up values. But number 3 is the biggest one: there are other antecedent factors that just end up being more important. Very high soil moisture before a flood is much more likely to lead to severe flooding than very low soil moisture in most cases. The extreme example of this is rain-on-snow flooding, which contributed to the 2022 flooding in Yellowstone National Park. But there is one big exception to this rule: fires.

When organic stuff burns, some of that volatile material creates hydrophobic properties in the underlying soil, reducing its ability to absorb rainfall. That effect plus the loss of vegetation on the surface means that the potential for flooding after a fire increases dramatically. Storms after wildfires are known to create massive floods, mudslides, and erosion, so there is a lot of research into understanding this phenomenon.

So what’s the answer? Are floods worse after a drought? Dry conditions do kill plants and grasses that slow down runoff, and they create hydrophobic soils that briefly keep water from soaking into the ground. And, they often make fire conditions worse, which, in turn, can lead to more impactful floods. But droughts also leave the soil drier than average, increasing its ability to soak up rainfall. In many cases, a flood after a good soaking rainfall is going to generate far more runoff than a flood after a drought.

Rob told me he was completely surprised by the response to his video, especially since he only spent a few hours making it. His goal was to show that, under certain conditions, flash floods can be worse when the underlying soil is very dry. But I suspect if his demo lasted a little bit longer (and his setup was a little more rigorous), the results may have looked a little different. And on the other hand, most models used by engineers to estimate floods assume that infiltration always goes up as soil moisture goes down, completely neglecting the fact that some soils lose their affinity for water at very low moisture levels. One statistician famously said that, “All models are wrong, but some are useful.” And even something as simple as the flow of water into the soil has so many complexities to keep track of. Like most simple questions in engineering and in life, the answer is that it’s complicated.

HEAVY CONSTRUCTION of a Sewage Pump Station - Ep 1

August 30, 2023 by Wesley Crump

Check out our new series! This is the first episode of a five-part pilot series to gauge your interest in "How It's Made"-esque heavy construction videos. Drop a comment or send me an email to let me know what you think! Watch on YouTube above or ad-free on Nebula here.

Every Construction Machine Explained in 15 Minutes

August 15, 2023 by Wesley Crump

[Note that this article is a transcript of the video embedded above.]

We talk about a lot of big structures on this channel. But, it takes a lot of big tools to build the roads, dams, sewage lift stations, and every other part of the constructed environment. To me, there’s almost nothing more fun than watching something get built, and that’s made all the better when you know what all those machines do. So, in this episode, we’re going to try something a little bit different. I’m Grady, and this is Practical Engineering. Let’s get started!

A big part of construction is just shifting around soil and rock. If you’ve ever had to dig a hole, you know how limited human effort is in moving earth. Almost no major job site is complete without at least one excavator because they’re just so versatile. Depending on size, the heavy steel bucket of an excavator can match an entire day’s digging of one guy or girl with a single scoop. But excavators get used for more than just digging. They are a lifter, pusher, crane, and hammer all in one.

A skid steer is second only to an excavator when it comes to versatility. These little machines are often equipped with a bucket, but you can attach almost any type of tool as well. While there are often purpose built machines that can do the same job, none of them can convert from loader to mower to forklift to drill rig quite so quickly, and in tight confined spaces, a skid steer is the perfect tool.

A loader is one of many machines meant to carry soil and rock across a distance. They’re often articulated in the center for tighter turns and use a large bucket on the front for lifting and dumping. They’re meant to carry materials over short distances, like the length of a construction site.

Longer hauls use a dump truck. These trucks feature a large open-topped tub meant to withstand repeated loading with various heavy materials. A typical dump truck features a hydraulic cylinder that can lift the bed, tilting it at a steep angle and allowing material to dump out of the back. Since dump trucks carry heavy loads, lots of them have auxiliary axles that can be lowered to distribute the weight over more tires and keep the truck in compliance with roadway and bridge weight limits. Articulated haulers are dump trucks used in off-road and difficult terrain.

If you want to move a lot of soil around a large construction site, another option is a scraper. Rather than loading from the ground into a dump truck, these machines do it all in one. A huge blade scrapes directly from the ground into a hopper. It’s carried directly to where it’s needed and unloaded with a hydraulic ejector, and these are often used on large embankments like for highways and dams.

Another Swiss army knife of the construction yard is the backhoe, which is kind of a combination excavator and loader. It’s great for small sites where it doesn’t make sense to have two pieces of equipment.

And don’t forget the bulldozer that specializes in moving material at ground level. They can’t move material over large distances, but they can spread out literal tons with their tank-like tracks.

The last stop on the digging train is the trencher. There are a huge variety of styles and sizes, but ultimately they all specialize in digging long holes for pipes and utilities. Many use a toothed chain, like a giant chainsaw for the earth!

By the way, there are about a hundred different colloquial names for almost every piece of large equipment. Different sites, suppliers, regions, and countries use different words for the same machine; it’s part of the fun. One easy tip to sound like a pro is just to add the drive style to the front of the name. It’s not a loader, it’s a wheel loader, or a tracked excavator and so on.

Now let’s hit the road. Roadwork is something we’ve all seen, and while it can be a bit frustrating if you’re stuck in a traffic jam from it, roads might be the largest engineered structures on earth. Our modern lives depend on them, and it takes some pretty cool tools to get them built.

A grader is technically an earthwork tool, but it’s used mostly on roadways. The extra long wheelbase makes it well suited for precisely leveling surfaces and evening out bumps, leaving a nice even grade.

Once all that soil is in the right place, it needs to be solidified so it doesn’t settle over time. A roller compactor is the main tool for this job. There are a few varieties of these depending on the material being compacted. Smooth drums are used for most soils and asphalt. Sheep’s foot and padded drums have protrusions that work best on clay and silt. Pneumatic tire rollers are best to knead and seal the surface. And a lot of roller compactors have a vibration feature to shake the soil into place.

An asphalt paver is the machine where the road meets the road. Hot asphalt is loaded into the machine, which spreads it into an even layer onto the subgrade using a screed. Many paving machines have a wand that follows a stringline as a reference to the exact elevation required for the roadway.

If we’re talking about making a road out of concrete, then the tool for the job is a slip former. It’s usually more efficient and produces better quality work when pavement, curbs, and highway barriers are installed continuously rather than building forms and casting them in batches. Careful control of the mix makes it possible for a slip form machine to create long concrete structures without any formwork at all.

If we just added another layer of pavement to the road every time it started to wear out, pretty soon, we’d have walls! Roads are designed to be extraordinarily tough, so removing the top layer isn’t easy. That’s a job for an asphalt mill or planer. These specialized tools grind and remove the surface with a large rotating drum. The material is routed up a conveyor system and can be loaded into a following dump truck.

It’s actually fairly common to see multiple vehicles following one another in roadwork like this. An interesting example is the so-called paving train. On one end, we have a dump truck full of asphalt fresh from the plant. This is loaded into the asphalt paver, which continuously lays a layer of asphalt that is then compacted by one or more rollers. Workers on the ground also continuously monitor the process to ensure a nice even road surface.

Not everything at a construction site is a machine with wheels or tracks. A lot of equipment gets hauled in on a trailer, or is a trailer itself. A light tower lets you work outside of daylight hours, illuminating the site so you can work at night or underground. An air compressor enables the use of lots of tools on a job site, like jackhammers, sandblasters, and painting rigs. If you need electric power instead of compressed air, diesel generators offer access to power when grid service isn’t available.

So far, the actual material we’ve seen has been in bulk, like earth or asphalt. Often in construction, the materials we need to lift or move are objects like girders or concrete pipes. For that you need a crane or similar material-handling equipment.

This is a pipe layer. The name is a bit confusing since the workers that operate them are also often called pipe layers. And it's no surprise what kind of jobs they do. They specialize in handling large sections of pipe and precisely lowering them and placing them into trenches.

A telescopic handler, also called a telehandler or teleporter, is like an all-terrain forklift. The boom can have attachments like a bucket, pallet forks, or a winch, and it telescopes to make it easy to deliver materials and equipment exactly where you need it.

If you happen to be the load that needs elevating, then you’ll need a boom lift or its cousin, the scissor lift. The operator controls the platform while standing on it, allowing positioning of people that’s much more precise, and usually safer, than a ladder. Another relative of the boom lift is a bucket truck, which has a boom lift in the back and is used for a lot of electric and utility work on poles.

Stepping up in size, we have road-rated all-terrain cranes. If you’ve passed a giant crane driving down the highway, it was one of these, since most other types of cranes have to be hauled to a site in pieces and assembled.

As the name implies, all-terrain cranes don’t require perfectly level, paved surfaces to get to work. However, if your job site is particularly rough, you need a rough-terrain crane. The giant rubber tires on these mean you’ll need to have them transported, but once rolling, they can go where highway-rated vehicles might struggle.

If the crane you’re looking at is mounted on tracks, you’ve got a crawler crane. These heavy-duty cranes, while slower and bulkier than all-terrain cranes and also requiring modular transport to job sites, can carry immense loads and extend to even greater heights than any of the cranes we’ve seen so far. Most crawler cranes can be configured according to the job with different lengths of booms, amounts of counterweight, and extensions called jibs. A particularly fun configuration is for demolition where a crawler crane might be fitted with a wrecking ball.

Most can move from place to place, but not all. Tower cranes use large counterbalanced horizontal booms with an integrated operator cab on top of a large, well… tower. Like most of the cranes we’ve seen so far, these come in a wide range of sizes but can be absolutely enormous, almost a construction project themselves requiring other cranes for assembly.

One way to build bridges uses a specialized crane called a launching gantry. You may have heard the term gantry before for a bridgelike overhead crane. These are in all kinds of industries. A launching gantry uses the existing structure of the bridge as a base and often lifts whole pre-built sections of the bridge.

Turning from the sky and looking underground, let’s talk about a few foundation-specific machines.

The biggest and heaviest structures are supported on bedrock or some deeper geological layer. Even if the usable soil is just clay for hundreds of feet, sinking deep subterranean columns or piles below a heavy structure can keep it from settling too much over time. One way to install a pile is to dig a very deep hole, place a reinforcing steel cage in the hole, then fill the whole thing with concrete. This is the exact job that a pile drill rig is designed to do. These large-scale drills are pretty closely related to the machines used for oil and gas exploration.

Another way to install piles is to drive them into the earth, the job of a pile driver. Just like the name implies, they repeatedly strike wooden, steel, or concrete piles to sink them to the required depth.

Speaking of concrete, there’s a whole subset of construction machines that are specifically designed to handle, transport, and place this important material. You’ve probably seen a mixer truck before, and I’ll forgive you for calling them cement trucks, even though cement is just one of the ingredients of a concrete mix. The truck can be loaded with dry materials and water, and the mixing occurs en route to the job site, since concrete generally has a limited time before it begins to cure.

Concrete is often placed directly from the truck using a chute, but that’s not always the easiest way. Concrete pumps are used to pump concrete to job site locations that are hard to access with a truck, often with a huge overhead boom. Since concrete is more than twice as dense as water, these pumps operate at extremely high pressures, sometimes over 100 times atmospheric pressure!
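
If you want a rough feel for where those pressures come from, here’s a quick back-of-the-envelope sketch. The mix density and the 40-meter boom height are just assumed, typical values, and this only counts the static head, not the friction in the delivery line, which can add a lot more:

```python
# Rough static pressure needed to push concrete to the top of a placing boom.
# The density and boom height are assumed, typical values.
RHO_CONCRETE = 2400     # kg/m^3, roughly 2.4 times the density of water
G = 9.81                # m/s^2
BOOM_HEIGHT = 40        # m, an assumed boom height
ATMOSPHERE = 101_325    # Pa

static_pressure = RHO_CONCRETE * G * BOOM_HEIGHT
print(f"Static head alone: {static_pressure/1e6:.1f} MPa, "
      f"or about {static_pressure/ATMOSPHERE:.0f} atmospheres")
# Friction in the delivery line adds a lot more, which is how working
# pressures can climb past 100 atmospheres.
```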

Finishing concrete is mostly a hand-tool job, but there are some machines for big pours, like ride-on trowels, that speed up the work of floating a slab smooth once it has started to set up.

Big jobs with lots of concrete might just mix it onsite with a mobile batching plant. This is helpful if you need to produce vast volumes of concrete over a long period in a way that would be too inconvenient or maybe even impossible for mixer trucks to handle.

Sometimes concrete needs to be placed on a sloped or vertical surface to stabilize a rock face, shore up a tunnel, or even just install a pool! The catch-all term for the various types of sprayed concrete is shotcrete (although some pool installers might disagree). Shotcrete machines use compressed air to apply concrete to all kinds of surfaces in the construction world.

When projects require the installation of new or additional utility lines in areas that are already built up, the traditional method of digging trenches isn’t feasible. This kind of job calls for a directional drilling machine. While these are technically boring tools, they are anything but uninteresting. I actually have a dedicated video just to talk about how they work, and specifically how they steer that bit below the ground. Go check that out after this if you want to learn more.

Hopefully there have been a few machines in the list so far that are new to you, but if not, I have a few more specialized machines you might be lucky enough to spot on a site:

Fans of the channel might recognize a soil nail rig, a specialized machine that drills out more or less horizontal shafts in an earthen slope and then adds soil nails to greatly enhance stability.

Jobs that require grout often use mobile batch plants, called grout plants. You can even inject grout into the ground at high pressures using a hydraulic pump to fill voids and stabilize soils.

A wick drain machine installs prefabricated vertical drains into the soil at regular intervals to speed up drainage of water in clay soils, which accelerates the inevitable settling of the soil so construction can get started faster.

One option for repairing existing pipelines in place without trenching is cured-in-place pipe lining. Inverting a liner impregnated with epoxy-resin into an existing pipeline using air pressure essentially puts a brand new pipe inside an old or damaged line.

One of the least boring machines that you’d be really lucky to see above ground is a tunnel boring machine. These behemoths use a complicated face of various cutting tools followed by a material removal and shoring installation apparatus to efficiently bore full scale tunnels!

Obviously I can’t be exhaustive here. The construction industry is just full of machines. There is such a variety in the type and scale of projects that manufacturers are always coming up with new and improved equipment that can get a particular job done better. And lots of industries outside of construction use heavy machinery, including mines, oil and gas, and railroads. Let me know what you think I missed or if you want a similar list within a different industry. But I think this is a good starting point for any burgeoning construction spotter, and I hope it’s comprehensive enough that if you see something that didn’t make the list, you can puzzle out its purpose on your own. That’s part of the satisfaction of construction spotting anyway, so get out there and see what kinds of machines you can find.

August 15, 2023 /Wesley Crump

Where Does Grounded Electricity Actually Go?

August 01, 2023 by Wesley Crump

[Note that this article is a transcript of the video embedded above.]

Imagine this scenario: You have a diesel-powered generator on a stand that is electrically isolated from the ground. Run a wire from the energized slot of an outlet to an electrode driven into the ground. Don’t connect anything to the ground or neutral slots. Now imagine starting the generator. What happens? Does current flow from the energized wire into the ground or not? Your answer depends completely on your mental model of what the earth represents in an electrical circuit. After all, the idea of a circuit is just an abstraction of some really complicated electromagnetic processes, and that’s even more true on the grand scale of the power grid. Grounding is one of the most confusing and misunderstood aspects of the grid, so you can be pardoned for being a little perplexed.

For example, if I run a wire from the positive side of a battery into the ground, nothing happens. But, when an energized power line falls from a pole, there’s definitely current flowing into the ground then. Cloud-to-ground lightning strikes move huge electrical currents into or out of the earth, but my little thought experiment of a generator connected to a grounding electrode won’t create any current at all. I’ll explain why in a minute. Even on an electrical diagram, ground is just this magical symbol that hangs off the circuit willy-nilly. But, connections between an electrical circuit and the ground serve quite a few different and critical purposes. And I have some demonstrations set up in the studio to help explain. I think you’re going to look at the power grid in a whole new way after this, but just don’t try these experiments at home. I’m Grady and this is Practical Engineering. In today’s episode, we’re talking about electrical grounding.

Why do we ground electrical circuits in the first place? Maybe the easiest way to answer that question is to show you what happens when we don’t. For as much importance as it gets in the electrical code, it might surprise you that it’s not always such a big deal, and in some cases, can even be beneficial. After all, lots of small electrical circuits lack a connection to the ground, even if part of the circuit is literally called, “ground.” In that case, that term really just refers to a common reference point from which voltages are measured. That’s one thing that can be confusing about voltage: it doesn’t actually refer to a single wire or trace or location, but the difference in electrical potentials between two points. By convention, we pick a common reference point, assume it has zero potential to make the math simple, and call it ground, even if there’s no reference to the actual ground below our feet. On small, low voltage devices (like battery powered toys), the difference in potential between components on the circuit board and the actual earth isn’t all that important, but that’s not true for high voltage systems connected to the grid. Let me show you why:

This is a diagram of a typical power system on the grid. The coils of a generator are shown on the left. When a magnetic field rotates past these coils, it generates electric current on the conductors, and (very generally) this is how we get the three phase AC power that is the backbone of most electric grids today. Look at nearly any transmission line, and you’ll see three main conductors that (again, very generally) correspond to this diagram. But what you don’t see here is a connection to ground. Let me put another diagram underneath where distance is equal to voltage. You can see our three conductors all have the same phase-to-phase voltage, and they have the same phase-to-ground voltage too. Everything is balanced. But, in this example, that connection to the ground isn’t very strong, resulting just from the electromagnetic fields of the alternating current (called capacitive coupling).

Watch what happens during a ground fault. This could be a tree branch knocking down a power line or a conductor being blown into contact with a steel tower or any other number of problems that lead to a short between one phase and ground. Now, all of the sudden, that weak coupling force keeping the phase-to-ground voltages balanced is overpowered, and all the phases experience a voltage shift with respect to the ground. But, the phase-to-phase voltages don’t change. In fact, a ground fault on an ungrounded power system usually doesn’t cause any immediate problems. The motors and transformers and other loads on the system don’t really care about the phase-to-ground voltage because they’re hooked up between phases. This is one benefit of an ungrounded power system: in many cases it can keep working even during a ground fault. But, of course, there are some downsides too.

In the example I showed, the phase-to-ground voltages of the two unfaulted conductors rise to almost twice what they would be in a balanced condition. Here’s why that matters: Higher voltage requires more insulation which means more cost. Especially on large transmission lines where insulation means literally holding the conductors great distances away from each other and the ground, those costs can add up quick. It might seem like an esoteric problem for an electrical engineer, but in practice, it just means that ungrounded power systems can be a lot more expensive (a problem anyone can understand). But that’s just the start.
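
Before moving on, here’s a quick numerical check of that voltage rise. It’s a minimal sketch, assuming an example 138 kV phase-to-phase system (an assumed value, not one from the diagram) and treating the fault as simply pinning one phase to ground potential:

```python
import cmath, math

V_LL = 138_000               # volts, an assumed phase-to-phase (line) voltage
V_LN = V_LL / math.sqrt(3)   # balanced phase-to-ground voltage

# Balanced phasors, referenced to ground (neutral at ground potential)
phases = {name: cmath.rect(V_LN, math.radians(angle))
          for name, angle in [("A", 0), ("B", -120), ("C", 120)]}

# Solid ground fault on phase A in an UNGROUNDED system: the whole set of
# phasors shifts together so phase A sits at ground potential, while the
# phase-to-phase voltages stay exactly the same.
shift = -phases["A"]
faulted = {name: v + shift for name, v in phases.items()}

for name in "ABC":
    print(f"Phase {name}: {abs(phases[name])/1e3:5.1f} kV to ground balanced, "
          f"{abs(faulted[name])/1e3:5.1f} kV to ground during the fault")
# The unfaulted phases rise from V_LL/sqrt(3) to V_LL -- a factor of about
# 1.73, the "almost twice" described above.
```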

Look back at our diagram and you can see the faulted phase potential is equal to ground potential. In other words, their difference is zero. There’s no voltage, and when you have zero voltage, you also have zero current. No electricity is flowing from the conductor into the ground. Or at least not very much is. You still have the capacitive coupling between the unfaulted conductors that allows a little bit of current to flow, but it’s not much. And that matters because nearly all the devices that would protect a system from a problem (like a ground fault) need some current to flow.

If you know much about wiring in buildings, you might be familiar with the classic example of a toaster with a metal case. It could be any appliance, but let’s use a toaster. Under normal conditions, current flows from the live or hot wire through a heating element and into the neutral wire to return to the grid, completing the circuit. But, if something comes loose inside the toaster, the live or energized side of your electrical supply could come into contact with that metal case, making it energized too. This could start a fire, or in the worst case, shock someone who touches the case. So, many appliances are required to have another conductor attached to the housing, giving the current a parallel, low-resistance return path. That low resistance means lots of current will flow, triggering a breaker to shut off the circuit.
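
Here’s a minimal Ohm’s-law sketch of why that parallel path matters. The resistances are made-up, illustrative values, not measurements of any real appliance or wiring:

```python
# Why a dedicated grounding conductor trips the breaker: plain Ohm's law.
# The resistances are made-up, illustrative values.
SUPPLY_VOLTAGE = 120        # volts, a typical North American branch circuit
BREAKER_RATING = 15         # amps

R_GROUNDING_PATH = 0.2      # ohms, assumed low-resistance grounding conductor
R_HUMAN_BODY = 1000         # ohms, a rough order of magnitude for a person

fault_current = SUPPLY_VOLTAGE / R_GROUNDING_PATH     # 600 A
body_current = SUPPLY_VOLTAGE / R_HUMAN_BODY          # 0.12 A

print(f"Through the grounding wire: {fault_current:.0f} A, far above the "
      f"{BREAKER_RATING} A breaker rating, so it trips almost instantly")
print(f"Through a person touching the case: {body_current*1000:.0f} mA, not "
      f"enough to trip the breaker but plenty to be dangerous")
```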

And, it’s not just the breakers in your house that work this way. Nearly all the protective devices, called relays, that monitor parts of the power grid for problems rely on fault current to tell the difference between normal electrical loads and short circuits. The simplest way to do that is to make sure the fault current is much higher than the normal loads. In the case of the damaged toaster, that fault current flowed through a conductor that is called “ground” (but is actually just a parallel wire that connects to the neutral in your electrical panel). But, in the case of substations and transmission lines, the fault current path is the actual ground.

Let’s look back at the diagram and convert it to a grounded system. If I add a strong bond to ground at the generator, things don’t look much different in the unfaulted condition. But as soon as you add a phase-to-ground short circuit, the diagram looks much different. First, the other phases don’t experience a shift in their phase-to-ground potential. But secondly, there’s now a path for fault current to flow through the ground back to the source. And that’s the answer to the question in the title of this video: electrical current (in nearly all cases) doesn’t flow into the earth; it flows through the earth. The ground is really just another wire. Although not a great one. Let me show you an example.

I have a narrow acrylic box full of dry sand. I put a copper rod into the sand on either side of the box and connected a circuit with a lightbulb so that the current has to flow across the sand from one electrode to the other. When I turn on the switch, nothing happens. It turns out that dry sand is a pretty good insulator. In fact, soil and rock vary widely in how well they conduct electrical current. The resistivity changes with soil type, seasons, weather, temperature, and moisture content. For example, let’s try to wet this sand and see if it makes a difference. Still nothing. Even completely saturating the sand with tap water, only a tiny current flows. You can barely see anything in the lightbulb, but the current meter shows a tenth of an amp now.
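
For a rough idea of why the bulb stays dark, you can estimate the bulk resistance of the sand between the electrodes with R = ρ × L / A. The box dimensions and resistivity values below are assumptions for illustration, not measurements from my demo:

```python
# Bulk resistance of the sand between two electrodes: R = rho * L / A.
# All dimensions and resistivities here are assumed, illustrative values.
LENGTH = 0.3          # m, assumed spacing between the copper electrodes
AREA = 0.20 * 0.25    # m^2, assumed cross-section of sand between them

resistivities_ohm_m = {        # rough order-of-magnitude values
    "dry sand": 1e5,
    "sand saturated with tap water": 1e3,
    "sand soaked in salt water": 1,
}

for soil, rho in resistivities_ohm_m.items():
    resistance = rho * LENGTH / AREA
    print(f"{soil:31s} -> roughly {resistance:12,.0f} ohms")
# A light bulb filament is only tens to hundreds of ohms, so with dry sand
# adding hundreds of thousands of ohms in series, almost no current flows.
```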

Soil resistivity also changes with the chemical constituents in the soil, which is why I’m having trouble getting any current to flow through the sand. There just aren’t enough electrolytes. Even with a layer of standing water on top, the sand doesn’t conduct much current at all. If I add just a little bit of salt water to that standing water, immediately you see that the resistivity goes down and the lightbulb is able to light. And if I let that salt water soak into the soil, now the sand is able to conduct electricity too.

This resistance of the soil to conducting current is pretty important. Earth isn’t a great wire, but what it lacks in conductivity, it makes up for in size. You can kind of imagine current flowing from a ground electrode into the surrounding soil as a series of concentric shells, each representing a drop in voltage between the faulted conductor and the ground potential. Each shell has more surface area for current to flow and so has lower resistance until eventually there’s practically no resistance at all. But up close to the electrode, the shells are spaced tightly together toward a single point or line. That spacing is related to the resistance of the soil, and it can represent a pretty serious safety issue. Here’s a little demonstration I set up to show how this works.

This is a length of nichrome wire connected across mains voltage with a few power resistors in series to limit the current. When I flip the switch, electrical current flows through the wire, simulating a ground fault. This length of nichrome wire is resistive to the flow of current just like the soil would be in a ground fault condition. You can see it heat up when I flip the switch. That means the electric potential along this wire is different at every point. I can show that just by measuring the voltage with a meter at a few different locations.

Remember that voltage is the difference in potential between two points, or in the case of Zap McBodySlam here, between two feet. When Zap steps on the wire, his legs are at two different electric potentials, and unfortunately, human bodies are better conductors than the ground. That difference in electric potential creates a voltage that drives current up into one leg and down out of the other. In this case, I just have that voltage turning on a little light, but depending on how high that voltage is, and how well Zap is insulated from it, this step potential can be a matter of life or death. In fact, power line technicians are often encouraged to hop on one foot away from a ground fault to reduce the chance of a step potential. It sounds silly, but it might save their life.
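
To put some rough numbers on step potential, here’s a minimal sketch using the textbook idealization of a hemispherical ground electrode in uniform soil. The resistivity and fault current are assumed values, not anything measured from the demo:

```python
import math

RHO = 100          # ohm-meters, an assumed uniform soil resistivity
I_FAULT = 1000     # amps, an assumed fault current into the electrode
STEP = 1.0         # meters, roughly one pace

def surface_potential(r):
    """Potential (relative to remote earth) at distance r from an idealized
    hemispherical ground electrode: V(r) = rho * I / (2 * pi * r)."""
    return RHO * I_FAULT / (2 * math.pi * r)

for r in [1, 2, 5, 10, 50]:
    step_voltage = surface_potential(r) - surface_potential(r + STEP)
    print(f"{r:3d} m out: ground sits at {surface_potential(r):7.0f} V, "
          f"step potential across one pace is about {step_voltage:6.0f} V")
# Most of the voltage drop happens in the first few meters of soil, which is
# why the step potential close to a fault can be lethal while a few dozen
# meters away it's barely measurable.
```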

Similarly, power technicians regularly come into contact with the metal cases around equipment. So, if a ground fault happens on a piece of equipment, and the resistance of the grounding system is too high, there can be a voltage between the ground and the metal case, again creating the possibility of a voltage across a person’s body, called touch potential. The engineers who design power plants, substations, and transmission lines have to consider what touch potentials and step potentials can be safely withstood by a person and design grounding systems to make sure that they never exceed that level. For example, most substations are equipped not just with a single grounding electrode but a grid of buried conductors to minimize resistance in the earth connection. You might also notice that many substations use crushed rock as the ground surface. That’s not just because linemen don’t like to mow the grass. It’s because the crushed rock, like the dry sand in my demo, doesn’t conduct electricity well and minimizes the chance of standing water.

But, not all power systems use the ground just as a safety measure. There are systems where the earth is actually the primary return path for current to flow. The ground is essentially the neutral line. Electrical distribution systems called “Single Wire Earth Return” or SWER are used in a few places around the world to deliver electrical power in rural areas. Using the earth as a return path can save cost, since you only have to run a single wire, but of course there are safety and technical challenges too.

Similarly, there are some high voltage transmission lines across the world that use direct current (like a battery) instead of AC. We’ll save a detailed discussion of these systems for another day, because there is a lot of fascinating engineering involved. But, I did want to mention them here, because many of these lines are equipped with really elaborate grounding systems. Although most high voltage DC transmission lines use two conductors (positive and negative), some only use one with the return current flowing through the earth or the sea. And, even the bipolar lines often include grounding systems so that they can use ground return during an outage or emergency if one pole is out of service. For example, the Pacific DC Intertie that carries power from the Pacific Northwest into Los Angeles has elaborate grounding systems at both ends. In Oregon, over 1000 electrodes are buried in a ring with a circumference of 2 miles or 3.2 kilometers. In California, the grounding system consists of huge electrodes submerged in the Pacific Ocean a few miles off the shore.

Unlike AC return currents that generally follow a path that matches the transmission line, DC currents can flow through the entire earth. In essence, the electrodes are completely decoupled. That does mean they’re susceptible to some environmental issues though. They create magnetic fields that can affect compass readings and magneto-sensitive fish like salmon and eels. In ocean electrodes, the current can cause electrolysis, breaking down seawater into toxic chemicals like chloroform and bromoform. And, stray electrical currents in the ground can flow into pipelines and other buried structures, causing them to corrode. This is also a problem with some electric trains that use the rail as a return path. You may have heard that electricity takes the path of least resistance, but that’s not really true. Electricity takes all the paths it can in accordance with their relative conductivity. So, even though a big steel rail is a lot more conductive than the earth, return current from traction motors can and does flow into ground, sometimes corroding adjacent pipelines, and occasionally interfering with buried telecommunication lines too.
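
That “all the paths” idea is really just a current divider. Here’s a minimal sketch with made-up resistances (not measured values for any real railway) showing that even a much more conductive rail still leaves some current for the earth path:

```python
# A current divider: parallel paths share current in proportion to their
# conductance (1 / resistance). These resistances are made-up, illustrative
# values, not measurements of a real railway.
R_RAIL = 0.05      # ohms, assumed resistance of the running-rail return path
R_EARTH = 5.0      # ohms, assumed resistance of the parallel path through soil
I_TOTAL = 1000     # amps of traction return current (assumed)

g_rail, g_earth = 1 / R_RAIL, 1 / R_EARTH
i_rail = I_TOTAL * g_rail / (g_rail + g_earth)
i_earth = I_TOTAL * g_earth / (g_rail + g_earth)

print(f"Through the rail:  {i_rail:6.1f} A")
print(f"Through the earth: {i_earth:6.1f} A")
# Even a ~1 percent share of a large return current, flowing day after day,
# is enough stray current to corrode nearby buried pipelines over time.
```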

I’ve conveniently left out lightning from this discussion until now. Unlike a conventional circuit where current is always moving, lightning is a type of static electricity. It’s not flowing… until it is. And unlike fault current that only uses the ground as a conduit, the current from a lightning strike really does just flow into the ground, or most frequently, out of the ground and into the atmosphere, correcting an imbalance of charge created by the movement of air or water… or something else. We really don’t understand lightning that well. But an additional and vital reason we ground electrical systems is so that, if lightning strikes, that current has a direct path to the ground. If it didn’t, it might arc across gaps or build up charge in the system, creating a fire or damaging equipment.

It’s not just lightning, ground faults, and circuit return current that flow through the earth. Lots of other natural mechanisms cause current to flow below our feet, including solar wind, changes in earth’s magnetic field, and more. These are collectively known as telluric currents, and they intermingle below the surface with the currents that we send into the ground.

A common question I get about the electrical grid is how to know specifically which power plant serves a city or a building. It’s kind of like asking what tree or plant created the oxygen that you breathe. Technically, it’s more likely to be one close to you than very far away, but that’s not quite how it works. Power gets intermingled on the grid - that’s why it’s called the grid in the first place - and it just flows along the lines in accordance with the difference in potential. And the ground works in a similar way. You can’t necessarily draw lines of current flow between sources and loads, lightning strikes, and telluric phenomena. The truth of how current flows in the ground is a little more complicated than that; it all kind of mixes together down there to some extent. But above the surface, it really isn’t so complicated. Current doesn’t flow to the ground; it flows through the ground and back up. If there is electricity moving into the ground from an energized conductor, go back to the source of that conductor and see what’s happening. For the grid, it’s probably a transformer or electrical generator, in either case, a simple coil of wire. And, the electrical current flowing out of the coil has to be equal to the electrical current flowing into it, whether that current is coming from one of the other phases, a neutral line, or an electrode buried in the ground.

August 01, 2023 /Wesley Crump

Philadelphia I-95 Bridge Collapse Explained

July 18, 2023 by Wesley Crump

[Note that this article is a transcript of the video embedded above.]

On June 11, 2023, a fuel tanker truck caught fire on an exit underneath Interstate 95 in Northeast Philadelphia. The fire severely damaged the northbound bridge, eventually causing it to collapse. Sadly, the driver of the truck was killed in the crash, but fortunately there were no other injuries or deaths. Although it didn’t collapse, PennDOT officials said that the southbound bridge was also compromised in the fire and had to be demolished. All of I-95 through a major part of Philly was shut down for a couple of weeks, and (as of this writing) the off-ramp underneath it will likely be closed for the near future as the bridges are rebuilt. Fires at bridges haven’t really been a major concern for transportation engineers in the past, but increasingly, they’re becoming a more serious problem. The cost to rebuild I-95 may pale in comparison to the indirect costs of having the highway shut down for so long. Or maybe not - it’s hard to say. Let’s talk about what happened and how engineers think about fire hazards at bridges. I’m Grady and this is Practical Engineering. In today’s episode, we’re talking about the I-95 bridge collapse.

The details in the intro are really all the details we know at the moment. A tanker truck crashed below the bridge, eventually leading to its collapse. There are some wild videos taken by motorists on I-95 during the fire, probably only minutes before the bridge fell, with the road deck sagging significantly. Fortunately, emergency crews were able to shut down the highway before anyone was seriously injured. The National Transportation Safety Board has had a crew on site to begin their investigation, but knowing the meticulous pace at which they work, it will likely be a year or more before we get their report. But, the basics are pretty clear already. And in fact, even though we don’t know all the details of this particular event, we’ve seen similar collapses on several occasions. And the sequence of events is almost always the same.

In 2002, fire caused the main span of the I-20 interchange in Birmingham, Alabama to sag by 3 meters or 10 feet, necessitating replacement of the bridge. Cause of the fire? A crashed fuel tanker. In 2006, a temporary part of the Brooklyn Queens Expressway in New York collapsed during a fire. Again, the cause of the fire was a crashed tanker truck under the bridge. 2007: The MacArthur Maze Interchange in Oakland, California collapsed during a fire from a crashed fuel tanker. 2009: A bridge over I-75 in Detroit collapsed after a tanker truck crashed into the overpass. 2013: A diesel tanker crash damaged a bridge in Harrisburg, Pennsylvania that had to be demolished. 2014: A gasoline tanker exploded on I-65 in Tennessee, destroying two overpass bridges. Of course, this isn’t just a US phenomenon. In 2012, a tanker overturned in Rouen, France, damaging the Mathilde bridge over the Seine River and requiring part of it to be replaced. And of course, bridge fires don’t only come from tanker truck crashes. In 2017, a massive fire under I-85 in Atlanta, Georgia caused a collapse after someone set fire to construction materials stored below the bridge.

Incredibly, PennDOT was able to reopen this section of I-95 a mere two weeks after the collapse with a pretty clever solution. Rather than wait until the original bridges could be rebuilt to get I-95 back open, they decided to simply build a temporary embankment instead. After the demolition of the fire-damaged bridges was complete, the less-critical off-ramp below the bridges remained closed so that crews from PennDOT’s emergency contractor, Buckley & Company, could fill the area in and simply pave over the top. My friend Rob built a little model of this on his channel that you should check out after this.

This temporary highway wasn’t built using soil or crushed rock, the typical backfill material used in roadway embankments (at least not mostly). That stuff is heavy, so most roadway embankments have to be built slowly to allow time for the foundation to settle as each layer is added to the top, a process that can take months or even years. (Not an option in this case.) Plus there are sewer lines below the existing road that could have been overloaded by a mountain of backfill on top. Instead, the design specified a lightweight backfill called foamed glass aggregate. I have a whole video we produced earlier this year about different types of lightweight backfills and how they work, so check that out if you want to learn more. This foamed glass aggregate is not cheap, many times the cost of standard backfill. But, it’s strong enough in compression to support the overlying roadway without overloading the foundation below, which could lead to settlement over time or damage to underground pipes. I actually have some of it here in the studio. It feels kind of like floral foam, just a lot stronger.

The other innovative design aspect of the temporary embankment is that it leaves room on either side for the permanent repairs to the bridge. Eventually the City needs this off-ramp back open for travel, after all. The emergency embankment is sited in the center of the right-of-way to give as much space as possible for the next phase of the repairs that will replace the bridges. That also required that both sides of the embankment have a retaining wall, in this case mechanically stabilized earth walls that use reinforcing elements between each layer of backfill to keep the tall structure from collapsing. I’ve also done a few videos explaining MSE retaining walls if you want to learn more about them. The basics are easy to see in this drone footage. The reinforcement turns the backfill itself into a stable wall, making it able to both withstand vertical loads and hold back the rest of the embankment backfill. I built a little MSE cube many years back and put one of my car tires on top to show how strong it really is. Looks like the cube built by PennDOT will hold up even more cars than mine!

To their credit, PennDOT kept a live feed of construction going for most of the project. You can see the flurry of activity as workers and equipment build the embankment up to the level of the highway on either side. Traffic was rerouted onto the temporary embankment starting June 24th. But, why did a fire cause so much damage in the first place?

We, collectively, put a tremendous amount of research and engineering into the fire resistance of buildings and tunnels, but when it comes to fires at bridges, we know a lot less. In fact, most bridges in the world are designed with little, if any, consideration for fire resistance. Neither the Eurocode nor the US bridge design criteria address fires or have any guidelines or requirements for how or when to engineer against them. Of course, we think about thermo-mechanical behavior of bridges all the time. I have a video all about thermal expansion and contraction of large structures. But, when you get above a few hundred degrees, there just hasn’t been much consideration. And the reasons for that are kind of obvious, at least at first glance. Less than 3% of US bridge failures between 1980 and 2012 resulted from fire. Compare that to hydraulic damage from scour and flooding that makes up nearly 50% of all failures. That alone isn’t enough reason to ignore fires in the design codes. After all, earthquakes make up only 2% of those failures, and we spend considerable resources and engineering to design bridges against seismic loads. But, you also have to consider safety. Even when bridges collapse due to fire, people are rarely injured because most places have robust emergency response capabilities. Roads are closed well before a fire is able to significantly weaken a structure. The relative infrequency of serious fires at bridges and their unlikelihood of causing a public safety issue mean that we just don’t devote a lot of resources to the problem right now… at least not proactive resources.

The National Fire Protection Association does have some guidance for fires at bridges, but it’s nebulous. They don’t recommend what fire loads should be considered, how to protect a bridge against fire, or how to analyze a structure after a fire. And, the guidelines only apply to bridges longer than 1000 feet or 300 meters. When you think about bridges, you often think about these long-span structures over major valleys or waterbodies. They’re iconic, but they’re also just the tip of the iceberg when it comes to bridges. In the US alone, there are over half a million bridges in service today, and nearly all of them are short-span bridges used mainly for grade separation (to let streams of traffic cross each other without interruption). They’re overpasses, structures you traverse every day without even noticing. But you definitely notice when one of these bridges is taken out of service. Bridges used for grade separation are more vulnerable to fires because, unlike the ones over waterways, a tanker truck can crash underneath one where the fire is most likely to cause damage. But protecting them is not as easy as it might seem.

A robust engineering guideline for design of bridges against fires would actually be pretty complicated. There are so many different variables and scenarios, and we really don’t have any collective agreement about what level of protection is appropriate. What would be the fuel source, footprint, flame height, intensity, and duration of the fire? With that information, we can try to predict the response. How does the heat transfer from the fire to the structural elements through radiation and convection, and how much do the structural elements increase in temperature as a result? These are tough questions to answer on their own, but they still don’t get to the heart of the matter, because what we really care about is how those structural elements respond. What happens to the material properties of steel and concrete when they increase in temperature way beyond what they were designed to handle? And more importantly, how does the overall structure behave? You have thermal expansion, weakening of materials, loss of stiffness, load redistribution, and a lot more. This is an extremely complicated scenario just to characterize through engineering, let alone to design protections against.

And the biggest question right now seems to be “Should we?” Bridge fires are primarily economic problems. As I mentioned, they rarely result in injuries or life safety concerns because the roadway is closed ahead of failure. But that doesn’t mean there aren’t impacts, and if you regularly drive on I-95 in Philadelphia (or any of the other roadways I mentioned before), you definitely know what I’m talking about. Replacing a bridge is an expensive endeavor, but the indirect costs that come with having a major highway closed are often higher. When the MacArthur Maze in Oakland collapsed from a tanker fire, the indirect costs of having the bridge out were estimated at millions of dollars per day, way more than the cost of reconstruction. In fact, the rebuilding job was bid with a bonus to the contractor for each day ahead of schedule they were able to finish the job. SFGate has a great story about how they got that bridge reopened in just 26 days that I’ll link below.

Road construction often seems slow, and part of the reason is to keep the costs down. It’s not very efficient to dedicate expensive resources like equipment, engineers, and specialty construction crews to a single project. Instead, resources get spread across many jobs so that people, crews, vendors, and equipment can stay busy. Even if seemingly slow progress is often frustrating to see, it’s usually less a result of incompetence or corruption and more just government agencies trying to be good stewards of limited public resources. But a major bridge failure changes that math. Fabricators, equipment suppliers, painters, truckers, operators, and laborers are all willing to set aside their other obligations for the right price. And government agencies will happily devote their engineers and inspectors to sit and wait for questions or problems to arise on a single job if the politicians can deliver the funds for it. In the industry, they call it “accelerated construction.” It comes at a steep price, but sometimes that price is worth it.

Like the MacArthur Maze, I-95 is a busy stretch of roadway, carrying roughly 150,000 vehicle trips per day. Some of that traffic was redistributed to other routes, but some of the capacity was simply lost while the roadway was out. That means deliveries were cancelled, workers had trouble reaching their jobs, emergency response times went up, and more. The gridlock was not as apocalyptic as predicted, but there were still some major slowdowns. In most large American cities, unexpectedly closing a major highway has real economic consequences through lost commercial shipping, lost productivity, lost retail sales, more wear and tear on roadways not meant to accommodate the detour traffic, and a lot more. And those indirect costs play into the consideration of whether or not it’s worth it to include fire protection in the design of highway bridges.

But what’s on the other side of that equation? Of course it would have been worth the cost to protect this one bridge in Philly from a tanker fire if we knew it was going to happen, but would it have been worth the cost of protecting all the bridges just in case? Or is that gold-plating our infrastructure where it’s not really needed? We know adding highway capacity induces traffic demand, but we also know the corollary. Reducing capacity decreases traffic demand as people find alternatives to making trips in cars, and maybe a highway bridge outage isn’t quite as big a deal as the politicians and news coverage suggest. And maybe investing in some diversity in our transportation infrastructure and giving people better alternatives to driving can do more good than putting that money toward protecting bridges against the unlikely event of a fire.

Like a lot of things in engineering, the costs and risks and alternatives aren’t that easy to weigh out. Your answer might depend on how many fuel tanker trucks you see on your everyday commute. The International Association for Bridge and Structural Engineering has a group working on guidelines for designing bridges against fire hazards. That’s a long way from incorporating fire protection in the design codes, but it will at least give engineers some tools to include fire resistance in designs where the situation calls for it. That group is scheduled to finish their work later in 2023, but hopefully PennDOT is able to get I-95 fully repaired before then.

July 18, 2023 /Wesley Crump

Why Is Desalination So Difficult?

July 05, 2023 by Wesley Crump

[Note that this article is a transcript of the video embedded above.]

This is the Carlsbad Desalination Plant outside of San Diego, California. It produces roughly ten percent of the area’s fresh water, around 50 million gallons, or roughly 190,000 cubic meters, per day. Unlike most treatment plants that clean up water from rivers or lakes, the Carlsbad plant pulls its water directly from the ocean. Desalination, or the removal of salt from seawater, is one of those technologies that has always seemed right on the horizon. It might surprise you to learn that there are more than 18,000 desalination plants operating across the globe. But, those plants provide less than a percent of global water needs even though they consume a quarter of all the energy used by the water industry.

I live like 100 miles away from the nearest sea, so it’s easier for me to mix up my own batch of seawater right here in the studio. There are two main methods we use to desalinate water, and I’ve got some garage demonstrations to show you exactly how they work. Will the dubious chemistry set or the cheapest pressure washer I could find work better? Let’s track the energy use and other complications for both these demos so we can compare at the end of the video. Dumping that salt into a bucket of water may seem like no big deal, but reversing the process is a lot more complicated than you might think. I’m Grady, and this is Practical Engineering. In today’s episode, we’re talking about desalination.

Earth is a watery place. Zoom out and the stuff is practically everywhere. It doesn't seem fair that the word “drought” is even in our lexicon. And yet, the scarcity of water is one of the most widespread and serious challenges faced by people around the world. The oceans are a nearly unlimited resource of water with this seemingly trivial caveat, which is that the water is just a little bit salty. It’s totally understandable to wonder why that little bit of salt is such an enormous obstacle.

How much salt is in seawater anyway? You’ve heard of “percent,” but have you ever heard of “per mille”? Just add another circle below the slash and now, instead of parts per hundred, this symbol means parts per thousand, which is the perfect unit to talk about salinity. The salinity of the ocean actually varies a little bit geographically and through the seasons, but in general, every liter of seawater usually has around 35 grams of dissolved salt. In other words, 35 parts per thousand or 35 permille. That means, for this bucket, I need about this much salt to match the salinity of seawater.
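
If you want to mix up your own batch, the arithmetic is simple. Here’s a quick sketch; the 19-liter bucket is an assumed size, not necessarily the one in the video:

```python
# Mixing up synthetic seawater: about 35 grams of salt per liter of water.
SALINITY_G_PER_L = 35    # 35 per mille
BUCKET_VOLUME_L = 19     # an assumed 5-gallon bucket

salt_g = SALINITY_G_PER_L * BUCKET_VOLUME_L
print(f"{salt_g} g of salt (about {salt_g/453.6:.1f} lb) "
      f"for a {BUCKET_VOLUME_L} liter bucket")
```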

I didn’t get it dead on, but this is close enough for our demo. Looks like a lot of salt, but I could dissolve about 10 times that much in this water before the solution becomes saturated and won’t hold any more. So, compared to how salty it could be, seawater isn’t that far from freshwater. But, compared to how salty it should be (in order to be okay to drink and such), it has a ways to go. Normal saline solution used in medicine is 9 parts per thousand because it’s approximately isotonic to your blood. That means it won’t dehydrate or overhydrate your cells. But (unless it’s masked by a bunch of sugar) even that concentration of salt in water isn’t going to taste very good.

Most places don’t put legal limits on dissolved solids for drinking water, but the World Health Organization suggests anything more than 1 part per thousand is usually unacceptable to consumers. It doesn’t taste good. 500 parts per million (or half permille) is generally the upper limit for fresh water (and that includes all dissolved solids combined, not just salt). But that means seawater desalination has to remove (or in industry jargon, reject) more than 98 percent of the salt in the water. That’s the reason why there are really only two main technologies in desalination. But neither of them are particularly sophisticated, at least in their simplest form, so I’m going to try some do-it-yourself desalination to show you how this works.
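
As a quick aside, that rejection figure falls straight out of the concentrations, using the 35 permille seawater figure and the 500 ppm fresh water target from above:

```python
# What fraction of the dissolved salt has to be rejected to turn seawater
# into acceptable fresh water?
SEAWATER_TDS = 35_000      # mg per liter (35 parts per thousand)
FRESHWATER_LIMIT = 500     # mg per liter (0.5 parts per thousand)

rejection = 1 - FRESHWATER_LIMIT / SEAWATER_TDS
print(f"Required salt rejection: {rejection:.1%}")   # about 98.6 percent
```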

The oldest and most straightforward way to separate salt and water is distillation, and this is my very basic setup to do just that. All you chemists and laboratory professionals are probably shaking your heads right now, but this is just to illustrate the basics. On the left, I have a flask of my homemade seawater sitting in sand, in a pot, on a hot plate. Salt doesn’t like to be a gas, at least not under the conditions we normally live in on earth. Water, on the other hand, can be convinced into its gaseous state with some heat from a conventional hotplate. And that’s what I’m doing here, just adding some heat to the system. And I’m tracking exactly how much heat using this Kill A Watt meter.

Once the water is converted to steam, it is effectively separated from the salt. All I have to do is condense the vaporized water back into its liquid form. This pump moves ice water through the condenser to encourage that process… if the tube doesn’t slip out of the beaker and spill ice water all over the table.

In my receiving flask on the right, I should have distilled water that is nearly salt free. Testing it out with the meter, the dissolved solids are practically nil, just a few parts per million. But it took nearly 2 hours to get only 200 milliliters of water, and right about a kilowatt-hour of electricity too.
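
For reference, here’s what those rounded demo figures work out to in terms of energy intensity:

```python
# Energy intensity of the garage distillation demo, using rounded figures.
DEMO_ENERGY_KWH = 1.0    # roughly one kilowatt-hour on the meter
DEMO_WATER_L = 0.2       # for about 200 milliliters of distilled water

kwh_per_liter = DEMO_ENERGY_KWH / DEMO_WATER_L
print(f"{kwh_per_liter:.0f} kWh per liter, "
      f"or about {kwh_per_liter*1000:,.0f} kWh per cubic meter")
```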

Water usage in the US varies quite a bit, but a rough estimate is 300 gallons (or 1,100 liters) per day per household. To produce that much water using my distillation setup here, I would have to scale it up nearly 500 times this size, and it would consume nearly 6,000 kilowatt-hours in a day (assuming the same efficiency I got in the demo). At the average residential US electricity price, it’s roughly 800 dollars per day! That’s an expensive shower. Could this be made more efficient? I don’t think so.

No, obviously it can. My garage demo has very little going for it in terms of efficiency. It’s about as basic as distillation gets. There’s lost heat going everywhere. Modern distillation setups are much more efficient at separating liquids, especially because they can take advantage of waste heat. In fact they are often co-located with coal or gas-fired power plants for this exact reason. And there’s a lot of technology just in minimizing the energy consumption of distillation, including reuse of the heat released during condensation, using stages to evaporate liquids more efficiently, and using pumps to lower the pressure and encourage further evaporation through mechanical means. But the thermal efficiency isn’t the only challenge with distillation.

Take a look at the flask that held the seawater after all the water boiled away and you can see the salt deposits building up, even after distilling only a small amount of water. These scale deposits reduce the efficiency of boiling because heat doesn’t transfer through them very easily, which means they would have to be cleaned off regularly. One alternative is a flash evaporator that sends the liquid stream through an expansion valve to force it to evaporate at temperatures lower than boiling, which minimizes the buildup of scale. Flash evaporators are the workhorses of desalination plants that use distillation, and especially in the middle east, plants like this have been reliably producing fresh water for decades now, but they’re not the only way to get the job done.

The other primary type of desalination uses membranes. You may have heard of the phenomenon called osmosis, where water naturally moves through a semi-permeable barrier from the less concentrated solution toward the more concentrated one. But you can reverse the osmotic process, using pressure to push water out of a concentrated solution and back through the barrier… usually a lot of pressure. Let me show you what I mean. Luckily there are commercially available seawater membranes that don’t cost an arm and a leg. That’s because these systems are frequently used in boats and ships to make freshwater while at sea. But why spend thousands of dollars on a working watermaker when you have the rudimentary plumbing skills of a civil engineer?

Here’s the membrane I’m using for this demo. It’s wrapped in a spiral so you get lots of surface area in a small package. It is kind of like a filter that lets water pass through while holding back the dissolved solids, but at a much tinier scale. It’s generally a lot more efficient than thermal distillation, so most modern desalination plants use reverse osmosis (or RO) for primary separation. But, as you’ll see, it still uses a lot of energy, way more than a typical raw water treatment plant.

It takes a lot of pressure to force seawater through a membrane, in my case about 600 psi or 40 times normal atmospheric pressure. Even small RO systems use high-pressure pumps designed for continuous use, because this is not a fast process. Instead of springing for a nice pump well-suited for the application, I’m using the cheapest power washer I could find at the local hardware store. The instructions didn’t say not to run saltwater through it.

The membrane sits inside this high pressure housing that keeps it from unraveling under the immense forces inside. That’s if you hook everything up correctly… I had to redo a few connections when the housing sprung a leak during early testing.

A booster pump delivers the seawater from the bucket to the pressure washer, then the pressure washer sends it into the housing. Unlike a typical filter, not all the feed water flows through the membrane. Instead, most of it flows past the membranes and comes out on the other side just a little bit more concentrated with salt. This is called the brine and we’ll talk more about it in a minute. The water that does make it through the membrane, called the permeate, comes out in the center of the housing. You can see on my flow meters that, if I close the valve on the brine discharge line, it increases the pressure in the housing, forcing more of the water through the membrane. The meter on the left is brine discharge, and the one on the right is the permeate line. As I close the valve, the brine flow goes down and the permeate flow goes up. Of course I could close the brine flow all the way down, but you still need some water to carry the salt away or it will just foul up the membrane.

Typically you need to run water through these membranes for several hours before they settle into their best performance. My little power washer wasn’t quite up to the task of running for that long, but even after roughly half an hour, I was getting water with one to two parts per thousand of dissolved solids through this crude setup. That’s not high quality drinking water, but it’s definitely drinkable!

I ran this experiment a few times at different pressures, but the results didn’t vary too much. For this run, the combined power for the booster pump and the pressure washer was around 1200 watts, and it took about five minutes to produce a liter (or about a quarter of a gallon). Going back to our residential household, it would take four pressure washers running non-stop and consume more than 100 kilowatt-hours in a day. That’s a huge improvement over the distillation demo, even considering the water quality wasn’t quite as good, but it’s still 15 dollars a day or more than 5,000 dollars per year just to separate salt from water.
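
Here’s the arithmetic behind those numbers, using the rounded demo figures and an assumed average residential electricity price of about 14 cents per kilowatt-hour:

```python
# Scaling the pressure-washer RO demo up to one household's daily water use.
DEMO_POWER_W = 1200          # combined booster pump and pressure washer
MINUTES_PER_LITER = 5        # roughly how long the demo took per liter
HOUSEHOLD_L_PER_DAY = 1100   # rough US household use (about 300 gallons)
PRICE_PER_KWH = 0.14         # dollars, an assumed average residential rate

wh_per_liter = DEMO_POWER_W * MINUTES_PER_LITER / 60      # 100 Wh per liter
kwh_per_cubic_meter = wh_per_liter                        # same number: ~100
daily_energy_kwh = wh_per_liter / 1000 * HOUSEHOLD_L_PER_DAY   # ~110 kWh/day
washers_needed = HOUSEHOLD_L_PER_DAY / (24 * 60 / MINUTES_PER_LITER)

print(f"{wh_per_liter:.0f} Wh per liter ({kwh_per_cubic_meter:.0f} kWh per cubic meter)")
print(f"{daily_energy_kwh:.0f} kWh and about {washers_needed:.1f} pressure "
      f"washers running non-stop, roughly ${daily_energy_kwh*PRICE_PER_KWH:.0f} per day")
```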

It won’t surprise you to learn that, just like my crude distillation demo, my reverse osmosis via pressure washer demo is also not nearly as efficient as it could be on a larger scale. Modern RO plants use huge racks of high quality membrane units and high efficiency pumps. They also recover the energy from the brine stream before it leaves the system back out to sea, saving the precious kilowatt-hours already consumed by the pumps. To separate a cubic meter, or 264 gallons, of seawater from its salt, my power washer RO system would take about a hundred kilowatt-hours. The newest RO plants can do it with just one or two.

But, even though the separation step is energy intensive, it’s not the only energy requirement in a seawater desal plant, and it’s definitely not the only cost. I’m using tap water in my demonstration, but these plants don’t start with that. Raw seawater not only has salt, but also dirt, algae, organic matter, and other contaminants too. Those constituents can foul or damage evaporators or membranes, so all desal plants use a pretreatment process to remove them first. That takes energy and cost to keep up with the various chemical feeds and filters before the water even reaches the salt separation process. And, even with good pretreatment, the RO membranes or evaporators have to be taken out of service for cleaning regularly, and eventually they have to be replaced. Additionally, you usually can’t send RO permeate or distilled water directly to customers. It’s too clean! It normally goes through a post-treatment process to add minerals, since most people prefer the taste over just pure water. Plus it gets disinfectant so that it can’t be contaminated on its way through the distribution system. And don’t forget about that brine.

All that salt that didn’t come out of the product stream is now packed into a smaller volume of water, making it more concentrated than before. Modern desalination plants generally recover about half of the intake flow, which means their brine stream is about twice the concentration of normal seawater. It’s a waste product that is actually pretty tough to get rid of. You can’t just discharge that super-saline waste directly back into the sea because of the environmental impacts, particularly on the plants and animals near the sea floor (since the concentrated solution usually sinks). To avoid environmental impacts, most brine discharge lines either use diffusers to spread out the salty solution so it dilutes faster or they blend the brine with some other stream of water like power plant cooling lines or wastewater effluent so it’s diluted before being released. When that’s not possible, some plants have to inject the saltwater into the ground (an expensive endeavor that only adds to operational costs).
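
That “about twice the concentration” figure comes straight from a salt mass balance. Here’s a quick sketch, assuming 50 percent recovery and nearly complete salt rejection:

```python
# A simple salt mass balance around a desalination plant.
INTAKE_FLOW = 100          # arbitrary units of seawater coming in
INTAKE_SALINITY = 35       # parts per thousand
RECOVERY = 0.50            # fraction of intake that becomes product water
PERMEATE_SALINITY = 0.5    # parts per thousand left in the product water

permeate_flow = INTAKE_FLOW * RECOVERY
brine_flow = INTAKE_FLOW - permeate_flow

salt_in = INTAKE_FLOW * INTAKE_SALINITY
salt_in_product = permeate_flow * PERMEATE_SALINITY
brine_salinity = (salt_in - salt_in_product) / brine_flow

print(f"Brine salinity: about {brine_salinity:.0f} parts per thousand, "
      f"versus {INTAKE_SALINITY} for the seawater coming in")   # roughly 2x
```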

With all the complications of separating salt from seawater, it’s easy to let one’s mind drift toward alternatives like harnessing renewable sources of energy. Like, what if we could use solar power to not only distill seawater but also carry it inland toward major cities and release it onto the ground where it could easily be collected? But now we’ve just re-invented the water cycle, which is already how we humans get the vast majority of the water we use to drink, cook, and bathe. It’s not like dams, reservoirs, canals, pumping stations, and surface water intakes don’t have their own enormous costs and environmental impacts. But, if mother nature isn’t dropping enough water for your particular populated area, you could build and operate a pretty long pipeline for the same immense costs and energy required to desalinate seawater.

And that’s the problem with desalination. It’s kind of like the nuclear power of water supply. It seems so simple on the surface, but when you add up all the practical costs and complexities, it gets really hard to justify over other alternatives. It’s also harder to compare costs between those alternatives because of desal’s unique problems. It’s just a newer technology, so it’s harder to predict hidden technical, legal, political, and environmental challenges. For example, because of the high energy demands, desalination can strongly couple water costs with electricity costs. During a drought, the cost of hydropower goes up because there’s less water available, increasing overall energy costs and thus making desalination less viable right when you need it most.

Of course, desalination is a viable solution in many situations, especially in places with large populations and severe water scarcity. All the biggest plants are in middle eastern countries like Saudi Arabia and the UAE. That’s because they really have no choice. But it can also be viable in areas with a lot of variability in climate like California, Texas, and Florida. In these cases, a desalination plant is just one element in a diverse portfolio of resources, all with different risk profiles. Yes, the desalinated water is more expensive than other options like rivers, reservoirs, and groundwater supplies. But it can be more reliable too, providing water during drought conditions when the other sources are limited or completely unavailable. And, a lot of these costs and complexities get simpler when you’re not pulling salt out of seawater. There are sources of water that have some salt (but not as much as the ocean) like estuaries and brackish groundwater. In places where such a supply is available, desalination can be a much more cost effective source of fresh water.

Another way to make desal projects more viable is to let the private sector take on the risks. Many of the largest desalination plants are partnerships with private water companies rather than being financed, built, and operated by the utility like what’s done for a typical treatment plant. Partnering with a private company allows a utility to offload the financing costs and operational risks in return for the stability of a simple water purchase agreement. You pay for it, build it, and operate it, and we’ll just buy the water from you. This type of arrangement also keeps government boards from having to weigh in on complicated technical issues and innovations where there’s just not as much precedent to lean on as there is with more established types of water infrastructure projects.

The private company running the Carlsbad plant in San Diego County I mentioned earlier is working on a major project scheduled to finish in 2024: a new standalone seawater intake required after the power plant next door shut down in 2018. Bonds issued for the project were upgraded to a triple-B rating by Fitch, meaning the facility has a relatively stable outlook with a lower chance of defaulting. That’s just one rating agency’s assessment of just one project on just one membrane plant, but it gives some confidence that the technology of desalination is making progress, and that it might become a bigger and bigger part of the world’s limited supply of fresh water in the future.

July 05, 2023 /Wesley Crump

Was Starship’s Stage Zero a Bad Pad?

June 20, 2023 by Wesley Crump

[Note that this article is a transcript of the video embedded above.]

On April 20, 2023, SpaceX launched its first orbital test flight of its Starship spacecraft from Boca Chica on the gulf coast of Texas. You probably saw this, if not live, at least in the stunning videos that followed. Thanks to NASA Space Flight for giving me permission to use their footage in this video. Starship launched atop the Super Heavy first stage booster, and the combined vehicle was the tallest and most powerful rocket ever launched at the time. There was no payload; this was a test flight with the goal of gathering data not just on the rocket, but all the various systems involved. The rocket itself was exciting to watch: some of the engines failed to ignite, and a few more flamed out early during the launch. About 40 kilometers above the ground, the rocket lost steering control and the flight termination system was triggered, eventually blowing up the whole thing.

But a lot of the real excitement was on the ground. Those Raptor engines put out about twice the thrust of the Saturn V rocket used in the Apollo Program. And, all that thrust, for several moments, was directed straight into the concrete base of the launch pad, or as SpaceX calls it, Stage Zero. And that concrete base wasn’t really up to the challenge. Huge chunks of earth and concrete can be seen flying hundreds of meters through the air during the launch, peppering the Gulf more than 500 meters away. A fine rain of debris fell over the surrounding area, and the damage seen after the road opened back up was surprising. Tanks were bent up. Debris was strewn across the facility. And the launch pad itself now featured an enormous crater below it.

Although the FAA’s mishap report hasn’t been released yet, there’s plenty of information available to discuss. Rocket scientists and aeronautical engineers get a lot of well-deserved attention on YouTube and around the nerdy content-sphere, but when it comes to the design and construction of launch pads like Stage Zero, that’s when civil engineers get to shine! What happened with Stage Zero, and how do engineers design structures to withstand some of the most extreme conditions humans have ever created? I’m Grady, and this is Practical Engineering. Today we’re talking about launch pads.

Humans have been launching spacecraft for over 65 years now. And so far, pretty much the only way we have to propel a payload to the incredibly high speeds and altitudes that task requires is rockets. Rockets produce enormous amounts of thrust by burning fuel and oxidizer in what amounts to a carefully (or not so carefully) controlled explosion. By throwing all that mass out the back, they’re able to accelerate forwards. But what happens to that mass once it’s expelled? When the rocket is flying through the sky, the gases from its engines eventually slow and dissipate into the atmosphere. But, most rockets (especially the big ones) don’t get to start in the sky. Instead, they’re launched from the earth’s surface, and the small part of the earth’s surface directly below them can take a heck of a beating. Hot and corrosive gases move at incredible speeds and often carry abrasive particles along with them. To call a rocket launch “thunderous” is often an understatement, because the sound waves generated are louder than a lightning strike and they last longer, too.

Dealing with these extreme loading conditions isn’t your typical engineering task. It’s niche work. You’re not going to find a college course or textbook covering the basics of launch pad design. Instead, engineers who design these structures work from multiple directions. They use first principles to try and bracket the physics of a launch. They look at what’s worked and what hasn’t worked in the past. They use computational fluid dynamics (in other words, simulations) to help characterize the velocities, temperatures, and sound pressures so that they can design structures to withstand them. But eventually, you have to use tests to see if your intuitions and estimations hold up in the real world.

It’s no surprise that one of the world leaders in successful launch pad engineering is NASA. And their historic Launch Pad 39A in Cape Canaveral, Florida, is a perfect case study. This pad, and its sister 39B, were originally built for the enormous Saturn V rocket, the cornerstone of the Apollo Space Program that first sent astronauts to the moon. Just like the SpaceX facility in Boca Chica, 39A is situated on a coast with the water out to the east. Most rockets launch in that direction to take advantage of the earth’s rotation. The earth is already spinning to the east, so it makes sense to use that built-in momentum. But some rockets blow up before they make it into space. So it’s best to choose a launch location with a huge stretch of unpopulated area to the east, like an ocean!
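
To put a rough number on that head start, here’s a quick back-of-the-envelope estimate in Python. The equatorial surface speed is a standard figure, and the latitude below is approximately that of Launch Complex 39; this is only meant to show the scale of the boost, not to capture any of the finer points of orbital mechanics.

import math

EQUATORIAL_SPEED = 465.0   # m/s, Earth's surface speed at the equator
latitude_deg = 28.6        # approximate latitude of Launch Complex 39

# surface speed falls off with the cosine of latitude
boost = EQUATORIAL_SPEED * math.cos(math.radians(latitude_deg))
print(round(boost), "m/s of eastward speed before the engines even light")

That works out to around 400 meters per second of free eastward velocity, which is one reason nearly every launch site favors an eastward, over-water trajectory.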

Launch Complex 39 was constructed on Merritt Island, a barrier island east of Orlando. NASA decided early on that water was the best way to move the first and second stages of the Saturn V rocket, so several miles of canals were dredged out. Over three-quarters of a million cubic yards of sand and shells were produced by this dredging and used as fill for construction. Some of that material was used to build a special road called a crawlerway connecting the Vehicle Assembly Building to the launch pad. But a lot of it was used to construct a flat-topped pyramid 80 feet or 24 meters tall. This structure would ultimately become the launch pad. If you’re a fan of the channel, you might already be thinking what I’m thinking. Huge piles of material like this settle over time, and I have a video all about that you can check out after this! NASA engineers let this structure sit before the rest of the launch pad was built. It’s a good thing too, because it settled about 4 feet, well over a meter!

Why did NASA bother building such a massive hill when they could have simply built the pad on the existing ground? It was all about the flame deflector: a curved steel structure that would redirect the tremendous plume of rocket exhaust exiting the Saturn V during launch into a monumental concrete trench. This would keep the plume from damaging the sensitive support structures around the pad or undermining its foundation.

But why not put the trench into the existing ground rather than building a massive artificial hill? The answer is groundwater. Siting a launch pad so near to the coast comes with the challenge of being basically at sea level. If you’ve ever dug a hole at the beach, you know the exact problem the launch pad engineers were facing. Imagine trying to install expensive and delicate technology inside that hole. Of course, we build structures below the water table all the time, and I have a video about that topic too. But with the cost and complexity of dewatering the subsurface, especially considering the extreme environment in which pumps and pipes would be required to operate, it just made more sense to build up. On top of that gigantic hill, thousands of tons of concrete and steel were installed to bear the loads of the launch support structures, the weight of the rocket itself while filled with millions of pounds of fuel and oxidizer, and of course the dynamic forces during a full-scale launch.

But that’s not all. Along with the enormous flame trench and the associated flame diverters, which have gone through various upgrades throughout the years, NASA employed a water deluge system. This is a test of the current system on pad 39B. During a launch, huge volumes of water are released through sprayers to absorb the heat and acoustic energy of the blast, further reducing the damage it causes to the surrounding facility. Check out this incredible historical slow-motion footage of a Space Shuttle launch. You’ll notice a copious flow of water both under the main engines on the right, and under the enormous solid rocket boosters on the left. In fact, a lot of the billowing white clouds you see during launches are from the deluge system as water is rapidly boiled off by the extreme temperatures.

39A has seen a lot of launches over the years, more than 150. The first launch was the unmanned Apollo 4 in 1967, the first-ever launch of the Saturn V. The bulk of the moon missions and space shuttles launched from 39A, and more recently SpaceX themselves have launched dozens of their Falcon 9 and several Falcon Heavy rockets from the historic pad! But when you compare it to the Stage Zero structure in Boca Chica, at least its configuration during the first orbital test, the differences are obvious: no flame diverter, no water deluge system, just the world’s most powerful rocket pointed square at a concrete slab on the ground. And, I think the results came as a surprise to no one who pays attention to these things. Elon himself tweeted in 2020 that leaving out the flame diverter could turn out to be a mistake.

That concrete, by the way, isn’t just the ready-mix stuff you buy off the shelf at the hardware store. I have a whole video about refractory concrete that’s used to withstand the incredible heat of furnaces, kilns, and rocket launches. This concrete has to be strong, erosion-resistant, insulating, resistant to thermal shock, and unaffected by saltwater exposure, since launch pads are usually near the coast. NASA used a product called Fondu Fyre at 39A and SpaceX uses Fondag. But even that fancy concrete was no match for those Raptor engines. Even during the static test fire, there was some damage to the concrete pad, and that was only at about half power. The orbital test and the full force of the rocket completely disintegrated the protective pad and cratered the underlying soil, spraying debris particles for miles.

In a call after the launch, Elon said that, although things looked bad on the surface, the damage to the launch pad could be repaired pretty quickly, noting that the outcome of the test was about what he expected. And even though many might have expected the extensive damage to the pad and surrounding area, it sure wasn’t mentioned in the Environmental Assessment required before SpaceX could get a license to perform the test, a document whose sole purpose was to catalog all the environmental impacts that would be associated with building the facility and launching rockets there. Nowhere in the nearly 200-page report is a discussion of the enormous debris field that resulted from the test, and yet there are actually quite a few laws against stuff like this.

For just one example, there are federal rules about filling in wetlands, of which there are many surrounding the launch facility. If you can’t do it with a bulldozer, you probably can’t do it with a rocket, and spraying significant volumes of soil and concrete into the surrounding area likely has the regulator’s attention for that reason alone, not to mention the public safety aspects of the showering debris. The launch also caused a fire in the nearby state park. The FAA has effectively grounded Starship pending their mishap investigation, and several environmental groups have already sued the FAA over the fallout of the launchpad’s destruction.

Even if the FAA comes back with no required changes moving forward, SpaceX themselves aren’t planning to do that again, and they’ve already shared their plans for the future. An enormous, water-cooled steel plate design is already well under construction as of this writing. This design is, again, very different from what we see at other launch pads, basically an upside-down shower head directly below the vehicle. That’s the nature of SpaceX and why many find them so exciting. Unlike NASA, which spends years in planning and engineering, SpaceX uses rapid development cycles and full-scale tests to work toward their eventual goals. They push their hardware to the limit to learn as much as possible, and we get to follow along. They’re betting it will pay off to develop fast instead of carefully. But this wasn’t just a test of the hardware. It was also a test of federal regulations and the good graces of the people who live, work, play, and care about the Boca Chica area. And, SpaceX definitely pushed those limits as well with their first orbital test. It remains to be seen what they’ll learn from that.

June 20, 2023 /Wesley Crump

How Flood Tunnels Work

June 06, 2023 by Wesley Crump


[Note that this article is a transcript of the video embedded above.]

This is Waterloo Park in downtown Austin, Texas, just a couple of blocks away from the state capitol building. It’s got walking trails and an amphitheater, Waller Creek runs right through the center, and it has this strange semicircular structure right on the water. And this is Lady Bird Lake, formerly Town Lake, about a mile away. Right where Waller Creek flows into the lake, there’s another strange structure. You saw the title of this video, so you know what I’m getting at here. It turns out these two peculiar projects are linked, not just by the creek that runs through downtown Austin, but also by a tunnel, a big tunnel. The Waller Creek Tunnel is about 26 feet (or 8 meters) in diameter and runs about 70 feet or 21 meters below downtown Austin. It’s not meant for cars or trains or bikes or buses or even high-voltage, oil-filled cables, and it’s not even meant to carry fresh water or sewage. Its singular goal is to quickly get water out of this narrow downtown area during a flood. It’s designed with a peak flow rate of 8,500 cubic feet per second. That’s 240 cubic meters per second, or enough to fill an Olympic-sized swimming pool in about 10 seconds. And the way it works is pretty fascinating.
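
If you want to check that comparison yourself, here’s the arithmetic as a short Python sketch, assuming a nominal 2,500-cubic-meter Olympic pool (50 by 25 by 2 meters):

design_flow_cfs = 8_500                      # design peak flow of the tunnel
CUBIC_FEET_PER_CUBIC_METER = 35.3147         # standard conversion factor

flow_cms = design_flow_cfs / CUBIC_FEET_PER_CUBIC_METER   # about 240 m^3/s
pool_volume_m3 = 2_500                                     # 50 m x 25 m x 2 m

print(round(flow_cms), "cubic meters per second")
print(round(pool_volume_m3 / flow_cms, 1), "seconds to fill the pool")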

Most major cities use underground pipes as drains to get rid of stormwater runoff so it doesn’t flood streets and inundate populated areas. But, a storm drain only has so much capacity, and a lot of places across the world have taken the idea a few steps further in scale. As I always say, the only thing cooler than a huge tunnel is a huge tunnel that carries lots of water and protects us from floods. And I built a model flood tunnel from acrylic, so you can see how these structures work and learn just a few of the engineering challenges that come with a project like this. I’m Grady, and this is Practical Engineering. In today’s episode, we’re talking about flood tunnels.

Floods are natural occurrences on earth, and in fact, in many places they are beneficial to the environment by creating habitat and carrying nutrient-rich sediments into the floodplain, the area surrounding a creek or river that is most vulnerable to inundation. But, floods are not beneficial to cities. They are among the most disruptive and expensive natural disasters worldwide. If a flood swells a creek or river in a scattered residential neighborhood, it’s not ideal for the few homeowners who are impacted, but if a flood strikes the dense urban core of a major city, the consequences can be catastrophic, with millions of dollars of damage and entire systems shut down. What that means in practice is that we’re often willing to spend millions of dollars on flood infrastructure to protect densely populated areas, opening the door to more creative solutions. And heavily developed downtown areas demand resourceful thinking because they lack the space for traditional protection projects and they often predate modern urban drainage practices.

We can’t change the amount of water that falls during a flood, so we’re forced to develop ways to manage that water once it’s on the ground. The main way we mitigate flooding is just to avoid development within the floodplain. Don’t build in the areas of land most at risk of inundation during heavy storms. Seems simple, but it’s not an option for most downtown areas that were developed well before the advent of modern flood risk management. Another way we manage flooding is storing the water in large reservoirs behind dams, allowing it to be released slowly over time. Again, not an option in downtown areas where creating a reservoir could mean demolishing swaths of expensive property. A third flood management strategy is bypassing: sending the water around developed areas where it will cause fewer impacts. Once again, not an option in downtown areas where there is no alternative path for the water to go… unless you start thinking in the third dimension. Tunnels allow us to break free from the confines of the earth’s surface and use subterranean space to carry floodwaters past developed areas and release them further downstream. Let me show you how this works.

This is my model downtown business district. It’s got buildings, landscaping, and a beautiful river running right through the center. I have a flow meter and valve to control how much water is moving through that beautiful river, and here on the downstream side is a little dam to create some depth. Take a look at many major cities that have rivers running through them, and you’ll often see a weir or dam just like this to maintain some control over the upstream level, keeping water deep enough for boats or in some cases, just for beauty like the RiverWalk in downtown San Antonio. I put some blue dye and mica powder in the water to make it easier to visualize the flow.

I also have a big clear pipe with an inlet upstream of the developed area and an outlet just below the dam. Looking at this model, it might seem like a flood bypass tunnel is as simple as running a big pipe to wherever you want the floodwaters to go, but here’s the thing about floods: most of the time, they’re not happening. In fact, almost all of the time, there isn’t a flood. And if you’re the owner of a flood bypass tunnel, that means almost all of the time you’re responsible for a gigantic pipe full of water below your city that has no real job except to wait. Watch what happens when I turn down the flow rate in my model to something you might see on a typical day. If we just leave the city like it is, all the flow goes into the tunnel, draining the channel like a bathtub and leaving the water along the downtown corridor to stagnate.

Standing water creates an environmental hazard. Without motion, the water doesn’t mix, and so it loses dissolved oxygen that is needed for fish and bacteria that eat organic material. Without dissolved oxygen, rivers become dead zones with little aquatic life and full of smelly, rotting organic material. Stagnant water also creates a breeding ground for mosquitoes, and is just unpleasant to be around. It’s not something you want in an urban core. The answer to this issue is gates, a topic I have a whole other video about. I can show how this works in my model. If you equip your gigantic flood bypass tunnel with gates on the inlet, you can control how much water goes into the tunnel versus what continues in the river. I just used this piece of foam to close off most of the tunnel entrance. I still have some water moving through there, but most continues in the river, keeping it from getting stagnant. This is why, on many flood bypass tunnels, you’ll see interesting structures at the inlets. Here’s the one in Austin again, and here’s the one just down the road in San Antonio. In addition to screening for trash and debris (and keeping people out) the main purpose of these structures is to regulate how much water goes into the tunnel.

But, some creeks and rivers don’t just have low flows during dry times, they have no flows. Intermittent streams only flow at certain times of the year and ephemeral streams only flow after it rains. Take a look at the stream gage for Waller Creek in Austin. Except for the days with rain, the flow in the creek is essentially zero. But, if you’re worried about stagnant water and lack of habitat on the surface, you want more water running in the river. You definitely don’t want to divert any of the scarce flows available into the tunnel. But you can’t just close the tunnel off completely, because then the water inside the tunnel will stagnate instead. You might think, “So what? It’s down there below the ground where we don’t have to worry about it.” Well, as soon as the next big flood comes and you open the gates to your tunnel, you’re going to push a massive slug of disgusting stagnant water out the other end, creating an environmental hazard downstream. So, in addition to gates on the upstream end, some flood tunnels, including the one in Austin, are equipped with pumps to recirculate water back upstream. I put a little pump in the model to show how this works. The pump pulls water from the river downstream and delivers it back upstream of the tunnel entrance. This allows you to double dip on benefits during low flows: you keep water moving in the tunnel so it doesn’t stagnate and you actually increase the flow in the river, improving its quality.

That’s 99 percent of managing a flood bypass tunnel: maintaining the infrastructure during normal flows. But of course, all that trouble is worth it the moment a big flood comes. Let’s turn the model all the way up and see how it performs. You can see the tunnel collecting flows, moving them downstream, and delivering them below the dam away from the developed area. The tunnel is adding capacity to the river, allowing a good proportion of the flood flows to completely bypass the downtown area. Of course, the river still rose during the flood, but it didn’t overtop the banks, so the city was protected. Let’s plug the tunnel and see what would happen without it. Turning up the model to full blast causes the stream to go over the bank and flood downtown. In this case, it’s not a huge difference, but even a few inches of floodwater backing up into buildings is enough to create enormous damage and huge repair costs. Without any margin for increased flows, a big peak in rainfall can even wash buildings and cars away.

So, comparing flood levels between the two alternatives flowing at the same rate, it’s easy to see the benefits of a flood bypass tunnel. It resculpts the floodplain, lowering peak levels and pulling property and buildings out of the most vulnerable areas, making it possible to develop more densely in urban areas, not to mention creating habitat, improving water quality, and maintaining a constant flow in the river during dry times. Of course, a tunnel is an enormous project itself, and flood bypass tunnels are truly one of the most complicated and expensive ways to mitigate flood risks, but they’re also one of the only options in heavily populated areas.

I’ve been referencing projects in central Texas because that’s where I live, but despite their immense cost and complexity, flood bypass tunnels have been built across the world. One of the most famous is the Tokyo Metropolitan Area Outer Underground Discharge Channel, which features this enormous cathedral of a subsurface tank. Unlike my model, which works by gravity alone, the Tokyo tunnel needs huge pumps to get the water back out and into the Edogawa River. And some tunnels aren’t just for stormwater. Many older cities don’t have separated sewers for stormwater and wastewater, so everything flows to the treatment plants. That means when it rains, these plants see enormous influxes of water that must be treated before it can be released into rivers or the ocean. One of the largest civil engineering projects on earth has been in design and construction in Chicago since the 1970s and isn’t scheduled for completion until 2029. The Tunnel and Reservoir Plan (or TARP) includes four separate tunnel systems that combine with a number of storage reservoirs to keep Chicago’s sewers from overflowing into and polluting local waterways. And we keep finding value in tunnels where other projects wouldn’t be feasible. After record-breaking floods from Hurricane Harvey in 2017, Houston started looking into the viability of using tunnels to reduce the impacts from future downpours. A 2.5-million-dollar engineering study was finished in 2022 suggesting that a system of tunnels might be a feasible solution to remove tens of thousands of structures from the floodplain. If they do move forward with any of the eight tunnels evaluated, Houston will complete the superfecta of major Texas metropolitan areas with large flood bypass tunnels, and it will join the many cities across the world that have maximized the use of valuable land on earth’s surface by taking advantage of the space underneath.

June 06, 2023 /Wesley Crump

Merrimack Valley Gas Explosions: What Really Happened?

May 16, 2023 by Wesley Crump

[Note that this article is a transcript of the video embedded above.]

On September 13, 2018, a pipeline crew in the Merrimack Valley in Massachusetts was hard at work replacing an aging cast iron natural gas line with a new polyethylene pipe. Located just north of Boston, the original cast iron system was installed in the early 1900s and due for replacement. To maintain service during the project, the crew installed a small bypass line to deliver natural gas into the downstream pipe while it was cut and connected to the new plastic main line. By 4:00 pm, the new polyethylene main had been connected and the old cast iron pipe capped off. The last step of the job was to abandon the cast iron line. The valves on each end of the bypass were closed, the bypass line was cut, and the old cast iron pipe was completely isolated from the system. But it was immediately clear that something was wrong.

Within minutes of closing those valves, the pressure readings on the new natural gas line spiked. One of the fittings on the new line blew off into a worker's hand. And as they were trying to plug the leak, the crew heard emergency sirens in the distance. They looked up and saw plumes of smoke rising above the horizon. By the end of the day, over a hundred structures would be damaged by fire and explosions, several homes would be completely destroyed, 22 people (including three firefighters) would be injured, and one person would be dead in one of the worst natural gas disasters in American history. The NTSB did a detailed investigation of the event that lasted about a year. So let’s talk about what actually happened, and the ways this disaster changed pipeline engineering so that hopefully something like it never happens again. I’m Grady, and this is Practical Engineering. In today’s episode, we’re talking about the 2018 Merrimack Valley natural gas explosions.

As in many parts of the world, natural gas is an important source of energy for homes and businesses in the United States. It’s a fossil fuel composed mostly of methane gas extracted from geologic formations using drilled wells. The US has an enormous system of natural gas pipelines that essentially interconnect the entire lower 48 states. Very generally, gathering lines connect lots of individual wells to processing plants, transmission lines connect those plants to cities, and then the pipes spread back out again for distribution. Compressor stations and regulators control the pressure of the gas as needed throughout the system. Most cities in the US have distribution systems that can deliver natural gas directly to individual customers for heating, cooking, hot water, laundry, and more. It’s an energy system that is in many ways very similar to the power grid, but in many ways quite different, as we’ll see.

Just like a grid uses different voltages to balance the efficiency of transport with the complexity of the equipment, a natural gas network uses different pressures. In transmission lines, compressor stations boost the pressure to maximize flow within the pipes. That’s appropriate for individual pipelines where it’s worth the costs for higher pressure ratings and more frequent inspections, but it’s a bad idea for the walls of homes and businesses to contain pipes full of high-pressure explosive gas. So, where safety is critical, the pressure is lowered using regulators.

Just a quick note on units before we get too far. There are quite a few ways we talk about system pressures in natural gas lines. Low-pressure systems often use inches or millimeters of water column as a measure of pressure. For example, a typical residential natural gas pressure is around 12 inches (or 300 millimeters) of water, basically the pressure at which you would have to blow into a vertical tube to get water to rise that distance: roughly half a psi or 30 millibar. You also sometimes see pressure units with a “g” at the end, like “psig.” That “g” stands for gauge, and it just means that the measurement excludes atmospheric pressure. Most pressure readings you encounter in life are “gauge” values that ignore the pressure from earth’s atmosphere, but natural gas engineers prefer to be specific, since it can make a big difference in low-pressure systems.
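
If it helps to see those equivalents side by side, here they are as a quick Python sketch. The conversion factors are standard ones; the 12-inch water column figure is the residential pressure mentioned above.

inches_of_water = 12.0           # typical residential gas pressure
PSI_PER_INCH_OF_WATER = 0.0361   # approximate, near room temperature
MBAR_PER_PSI = 68.95

psi = inches_of_water * PSI_PER_INCH_OF_WATER
print(round(psi, 2), "psi")                        # about 0.43 psi, "roughly half a psi"
print(round(psi * MBAR_PER_PSI, 1), "millibar")    # about 30 millibar
print(round(inches_of_water * 25.4), "millimeters of water column")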

The natural gas main line in the Merrimack Valley being replaced had a nominal pressure of 75 psi or about 5 bar, although that pressure could vary depending on flows in the system. Just for comparison, that’s 173 feet or more than 50 meters of water column. But, the distribution system, the network of underground pipes feeding individual homes and businesses, needed a consistent half a psi or 30 millibar, no matter how many people were using the system. The device that made this possible was a regulator. There are lots of different types of regulators used in natural gas systems, but the ones in the Merrimack Valley used pilot-operated devices, which are pretty ingenious. It’s basically a thermostat, but for pressure instead of temperature. The pilot is a small pressure-regulating valve that supports the opening or closing of the larger primary valve. If the pilot senses an increase or decrease in pressure from the set point, it changes the pressure in the main valve diaphragm, causing it to open or close. This all works without any outside power source, using just the pressure of the main gas line.

Columbia Gas’s Winthrop station was just a short distance south of where the tie-in work was being done on the day of the event. Inside, a pair of regulators in series was used to control the pressure in the distribution system. One of these regulators, known as the worker, was the primary regulator that maintained gas pressure. A second device, called the monitor, added a layer of redundancy to the system. The monitor regulator was normally open with a setpoint a little higher than the worker so it could kick in if the worker ever failed, and, at least in theory, make sure that the low-pressure system never got above its maximum operating level of about 14 inches of water column or 35 millibar. But, in this worker/monitor configuration, the pilots on the two regulators can’t use the downstream pressure right at the main valve. For one, the reading at the worker would be affected by any changes in the downstream monitor. And for two, measuring pressure right at the valve can be inaccurate because of flow turbulence generated by the valve itself. It would be kind of like putting your thermostat right in front of a register; it wouldn’t be getting an accurate reading. So, the pilots were connected to sensing lines that could monitor the pressure in the distribution system a little ways downstream of the regulator station.

The worker and monitor regulators were both functioning as designed on September 13, and yet, they allowed high-pressure gas to flood the system, leading to a catastrophe. How could that happen? The NTSB’s report is pretty clear. Tying in a natural gas line while it’s still in service, called a hot tie-in, is a pretty tricky job that requires strict procedures. Here are the basic steps: First, a bypass line was installed across the upstream and downstream parts of the main line. Then balloons were inserted into the main to block gas from flowing into the section to be cut. Once the gas was purged from the central section, it was cut out and removed while the bypass line kept gas flowing from upstream to downstream. The line to be abandoned got a cap, and the new plastic tie-in was attached to the downstream main. Once the tie-in was complete, the crew switched the upstream gas service from the old cast iron line over to the new plastic line and deflated the last balloon so that gas could flow. The upstream cast iron line was still pressurized, since it was still connected to the in-service line through the bypass. But, as soon as the crew closed the valves on the bypass, the old cast iron line was fully isolated, and the pressure inside the line started to drop, as planned.

What that crew didn’t know was that when that plastic main line was installed two years earlier, a critical error had been made. The main discharge line at the regulator station had been attached to the new polyethylene pipe, but the sensing lines had been left on the old cast iron main. It hadn’t been an issue for the previous two years, since both lines were being used together, but this tie-in job was the first of the entire project that would abandon part of the original piping. Within minutes of isolating the old cast iron pipe, its pressure began to drop. To a regulator, there’s no difference between a pressure drop from high demands on the gas system and a pressure drop from an abandoned line, and they respond the same way in both cases: open the valves. In a normal situation, the increased gas flow would result in higher pressure in the sensing lines, creating a feedback loop. But this was not a normal situation. It’s the equivalent of putting your thermostat in the freezer. Even as pressure in the distribution system rose, the pressure in the sensing lines continued to drop with the abandoned line. The regulators, not knowing any better, kept opening wider and wider, eventually flooding the distribution system with gas at pressures well above its maximum rating.
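
If it helps to see that broken feedback loop written down, here’s a tiny toy simulation in Python. Every number in it (the supply pressure, the controller gain, the decay rate of the isolated line) is invented purely for illustration; this is a sketch of the general failure mode, not a model of the actual Winthrop Avenue equipment.

SUPPLY_PSI = 75.0     # nominal upstream pressure
SETPOINT_PSI = 0.5    # target distribution pressure, about 14 inches of water column
GAIN = 0.01           # how far the pilot opens the valve per psi of error, per step

def simulate(sensing_on_live_main, steps=300):
    valve = 0.01                 # valve position: 0 = closed, 1 = fully open
    abandoned = SETPOINT_PSI     # pressure in the isolated cast iron line
    distribution = 0.0
    for _ in range(steps):
        distribution = SUPPLY_PSI * valve     # more opening means more pressure downstream
        abandoned *= 0.8                      # the isolated line just bleeds down
        sensed = distribution if sensing_on_live_main else abandoned
        # the pilot opens the valve whenever the sensed pressure is below the setpoint
        valve = min(1.0, max(0.0, valve + GAIN * (SETPOINT_PSI - sensed)))
    return distribution

print(round(simulate(True), 2))    # sensing the live main: settles near the 0.5 psi setpoint
print(round(simulate(False), 2))   # sensing the abandoned main: valve ends up wide open, 75 psi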

By the time things went sideways, the crew at the tie-in had taken most of their equipment out of the excavation. But as one worker was removing the last valve, it blew off into his hand as gas erupted from the hole. The crew heard firefighters racing throughout the neighborhood and saw the smoke from fires across the horizon. The overpressure event had started a chain of explosions, mostly from home appliances that weren’t designed for such enormous pressures. The emergency response to the fires and explosions strained the resources of local officials. Within minutes, the fire departments of Lawrence, Andover, and North Andover had deployed well over 200 firefighters to the scenes of multiple explosions and fires, and help from neighboring districts in Massachusetts, New Hampshire, and Maine would quickly follow. The Massachusetts Emergency Management Agency activated the statewide fire mobilization plan, which brought in over a dozen task forces in the state, 180 fire departments, and 140 law enforcement agencies. The electricity was shut off to the area to limit sources of ignition and prevent further fires, and of course, natural gas service was shut off to just under 11,000 customers.

By the end of the day, one person was dead, 22 were injured, and over 50,000 people were evacuated from the area. And while residents were allowed back into their homes after three days, many of those homes were uninhabitable. Even those lucky enough to escape immediate fire damage were faced with a lack of gas service as miles of pipelines and appliances had to be replaced. That process ended up taking months, leaving residents without stoves, hot water, and heaters in the chilly late fall in New England.

Several recommendations stemmed from the NTSB’s investigation. At the time of the disaster, gas companies were exempt from state rules that required the stamp of a licensed professional engineer on project designs. Less than three months after NTSB recommended the exemption be lifted, a bill was passed requiring a PE stamp on all designs for natural gas systems, providing the public with better assurance that competent and qualified engineers would be taking responsibility for these inherently dangerous projects. And actually, NTSB issued the same recommendation and sent letters to the governors of 31 states with PE license exemptions, but most of those states still don’t require a PE stamp on natural gas projects today. There were recommendations about emergency response as well, since this event put the area’s firefighters through a stress test beyond what they had ever experienced.

NTSB also addressed the lack of robustness of low-pressure gas systems where the only protection against overpressurization is sensing lines on regulators. It’s easy to see in this disaster how the single action of isolating a gas line could get past the redundancy of having two regulators in series and quickly lead to an overpressure event. This situation, where multiple system components fail in the same way at the same time, is called a common mode failure, and you obviously never want that to happen on critical and dangerous infrastructure like natural gas lines. Interestingly and somewhat counterintuitively, one solution to this problem is to convert the low-pressure distribution system to one that uses high pressure, because in that kind of system, every customer has their own regulator, essentially eliminating the chance of a common mode failure and a widespread overpressure event.

Most importantly, the NTSB did not mince words on who they found at fault for the disaster. They were clear that the training and qualification of the construction crew and the condition of the equipment at the Winthrop Avenue regulator station were NOT factors in the event. Rather, they found that the probable cause was Columbia Gas of Massachusetts’ weak engineering management, which did not adequately plan, review, sequence, and oversee the project.

To put it simply, they just forgot to include moving the sensing lines when they were designing the pipeline replacement project, and the error wasn’t caught during quality control or constructability reviews. NiSource, the parent company of Columbia Gas of Massachusetts, estimated that claims related to the disaster exceeded $1 billion, an incredible cost for weak engineering management. Ultimately, Columbia Gas pleaded guilty to violating federal pipeline safety laws and sold their distribution operations in the state to another utility. They also did a complete overhaul of their engineering program and quality control methods.

All those customers hooked up to natural gas lines didn’t have a say in how their gas company was managed; they didn’t have a choice but to trust that those lines were safe; and they probably didn’t even understand the possibility that those lines could overpressurize and create a dangerous and deadly condition in the place where they should have felt most safe: their own homes. The event underscored the crucial responsibility of engineers and (more importantly) the catastrophic results when engineering systems lack rigorous standards for public safety.

May 16, 2023 /Wesley Crump

Why Bridges Need Sensors (and other structures too)

May 02, 2023 by Wesley Crump

[Note that this article is a transcript of the video embedded above.]

Almost immediately after I started making videos about engineering, people started asking me to play video games on the channel. Apparently there are roughly a billion people who watch online gaming these days, and some of them watch silly engineering videos too! And there’s one game that I get recommended even more than Minecraft: Poly Bridge. So I finally broke down one evening after the kids went to bed and gave it a try. I’m really not much of a gamer, but I have to admit that I got a little addicted to this game (hashtag not-an-ad). I admit too that there really is a lot of engineering involved. You have different materials that give your structure different properties. The physics are RELATIVELY accurate. You get a budget to spend on each project. And your score is based on the efficiency of your design. But there’s one way this game is not like real structural engineering at all: if your bridge collapses, you get to try again!

In the real world, we can’t design a dam, a building, a transmission line pylon, or a bridge, spend all that money to build it, watch how it performs, tear it down, and build it back better if we’re not happy with the first iteration. Structures have to work perfectly on the first try. Of course we have structural design software that can simulate different scenarios, but it’s only as powerful as your inputs, which are often just educated guesses. We don’t know all the loads, all the soil conditions, or all the ways materials and connections will change over time from corrosion, weathering, damage, or loading conditions. There are always going to be differences between what we expect a structure to do and what actually happens when it gets built. Hopefully engineers use factors of safety to account for all that uncertainty, but you don’t have to dig too deep into the history books to find examples where an engineer neglected something that turned out to matter a lot, sometimes to the detriment of public safety. So what do you do?

We can’t build a project then watch the cars and trucks drive over with the pretty green and red colors on each structural member to see how they’re performing in real time… except you kind of can, with sensors. It turns out that plenty of types of infrastructure, especially those that have serious implications for public safety, are equipped with instruments to track their performance over time and even save lives by providing an early warning if something is going wrong.

I love sensors. To me, it’s like a superpower to be able to measure something about the world that you can’t detect with just your human senses. Plus I’m always looking for an opportunity to exercise my inalienable right to take measurements of stuff and make cool graphs of the data. So I have a bunch of demonstrations set up to show you how engineers employ these sensors to compare the predicted and actual performance of structures, not just for the sake of delightful data visualization, but sometimes even to save lives. I’m Grady, and this is Practical Engineering. In today’s episode, we’re talking about infrastructure instrumentation.

And what better place to start than with a big steel beam? In fact, this is the biggest steel beam that my local metals distributor would willingly load on top of my tiny car. One of the biggest questions in Poly Bridge and real-world engineering is this: How much stress is each structural member experiencing? Of course, this is something we can estimate relatively quickly. So let’s do the engineer thing and predict it first. Beam deflection calculations are structural engineering 101, so we can do some quick recreational math to predict how much this thing flexes under different amounts of weight. And we can use my weight as an example: about 180 pounds or 82 kilograms. The calculation is relatively simple. You can choose your preferred unit system and pause here if you want to go through the numbers. Standing at the beam’s center, I should deflect it by about 2 thousandths of an inch or about 60 microns, around the diameter of the average human hair. In other words, I am a fly on the wall of this beam (or really a fly on the flange). I’m barely perceptible. In fact, it would take more than 100 of me to deflect this beam beyond what would normally be allowed in the structural code. And it would take a lot more than that to permanently bend it. But 2 thousandths of an inch isn’t nothing, so, let’s check our math.
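
The video doesn’t spell out the exact beam size or span, so here’s the same calculation as a small Python sketch with an assumed section and span (a W8x13 on a 7-foot span) chosen only to land in the same ballpark. The formula is the standard midspan deflection of a simply supported beam with a point load at the center.

# delta = P * L^3 / (48 * E * I)
P = 180.0          # point load in pounds (my weight)
L = 84.0           # span between supports in inches (assumed)
E = 29_000_000.0   # elastic modulus of steel, psi
I = 39.6           # moment of inertia of a W8x13 about its strong axis, in^4

delta_in = P * L**3 / (48 * E * I)
print(f"{delta_in:.4f} inches")           # about 0.002 inches
print(f"{delta_in * 25400:.0f} microns")  # about 50 microns

Swap in the real section properties and span, and those same two lines of math produce the kind of prediction that gets compared against the dial indicator readings in the demo.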

I put my dial indicator underneath the beam and added some weight. I started with 45-pound (20-kilogram) plates. Each time I add one, you see the beam deflect downward just a tiny bit. After three plates, I added myself, bringing the total up to around 315 pounds or 143 kilos of weight. And actually, the deflection measured by the dial indicator came pretty close to the theoretical predictions made with the simple formula. Here they are on a graph, and there’s the point at my weight, with a deflection of around 2 thousandths of an inch or 60 microns, just like we said. But, we can’t always use dial indicators in the real world because they need a reference point, in this case, the floor. Up on the superstructure of a bridge, there’s no immovable reference point like that. So an alternative is to use the beam itself as a reference. That’s how a strain gauge works, and that’s the cylindrical device that I’m epoxying to the bottom flange of my beam.

A strain gauge works by measuring the tiny change in distance between two parts of the steel. You might know that when you apply a downward load to a beam, it creates internal stress. At the top, the beam feels compression, and at the bottom it feels tension. But it doesn’t just feel the stress; it also reacts to it by changing shape. Let me show you what I mean. When I put one of the plates on top of the beam, we can see a change in the readout for the strain gauge. (Of course, I had the gauge set to the wrong unit, so let me overlay the proper one with the magic of video compositing.) For each plate I add to the beam, we see that the flange actually lengthens, in this case by about 3 microstrain. That’s probably not a unit of measure you’re familiar with, but it really just means the bottom of the beam increased in length by 0.0003%. When I add another weight, we make it 0.0003% longer again. Same with the third weight. And then when I stand on top of the whole stack, we get a total strain of about 0.002%, a completely imperceptible change in shape to the human eye, but the strain gauge picked it up, no problem.
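
The strain readings can be sanity-checked with the same kind of recreational math, since the strain at the bottom flange is the bending moment times the distance from the neutral axis, divided by E times I. Again, the section and span below are assumed for illustration, the same hypothetical W8x13 on an 84-inch span as before.

E = 29_000_000.0   # elastic modulus of steel, psi
I = 39.6           # moment of inertia, in^4 (assumed W8x13)
c = 4.0            # distance from neutral axis to the flange, inches (about half the depth)
L = 84.0           # span, inches (assumed)

def microstrain(load_lb):
    M = load_lb * L / 4            # midspan moment for a point load at the center, lb-in
    return M * c / (E * I) * 1e6   # strain in millionths

print(round(microstrain(45), 1))    # one 45-pound plate: about 3 microstrain
print(round(microstrain(315), 1))   # three plates plus me: about 23 microstrain, or 0.0023%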

Imagine how valuable it would be to an engineer to have many of these gauges attached to the myriad of structural members in a complicated bridge or building and be able to see how each one responds to changes in loading conditions in real time. You could quickly and easily check your design calculations to make sure the structure is behaving the way you expected. In my simple example in the studio, the gauge is measuring pretty much exactly what the predictions would show, but consider a structure far more complicated than a steel beam across two blocks, in other words, any other structure. What factors get neglected in that simple equation I showed earlier?

We didn’t consider the weight of the beam itself; I’m not actually a one-dimensional single point load, like the equation assumes, but rather my weight is spread out unevenly across the area of my sneakers; Is the length exactly what we entered into the equation? And, what about three-dimensional effects? For example, I put another strain gauge on the top flange of the beam. If you just follow the calculations, you would assume this flange would undergo compression, getting a tiny bit shorter with increased load. But really what happens in this flange depends entirely on how I shift my weight. I can make the strain go up or down simply by adjusting the way I stand on top, creating a twisting effect in the beam, something that would be much more challenging for an engineer to predict with simple calculations. Putting instruments on a structure not only helps validate the original design, but provides an easy way to identify if a member is overloaded. So it’s not unusual for critical structures to be equipped with instruments just like this one, with engineers regularly reviewing the data to make sure everything is working correctly.

Of course, we don’t only use steel in infrastructure projects, but lots of concrete too. And just like steel, concrete structures undergo strain when loaded. So I took a gauge and cast it into some concrete to measure the internal strain of the material. This is just a typical concrete beam mold and some ready-mix concrete from the hardware store. And even before we applied any load, the gauge could measure internal strain of the concrete from the temperature changes and chemical reactions of the curing process. Shrinkage during curing is one of the reasons that concrete cracks, after all. Luckily my beam stayed in one piece. Once the beam had cured and hardened for a few weeks, I broke it free from the mold. Compared to steel, concrete is a really stiff material, meaning it takes a lot of stress to cause any kind of measurable strain. So I got out my trusty hydraulic press for this one. I slowly started adding force from the jack, then letting the beam sit so the data logger could take a few readings from the strain gauge inside. After the fourth step, at just over 50 microstrain, the beam completely broke. Hopefully you can see how useful it might be to have an embedded sensor inside a concrete slab or beam, tracking strain over time, and especially when you know about the amount of strain that corresponds to the strength of the material. This is information that would be impossible to know without that sensor cast into the concrete, and there’s something almost magical about that. It’s like the civil engineering equivalent of x-ray vision.

One of the most amazing things about these sensors is their ability to measure tiny distances. One microstrain means one millionth of the original length, which, on the scale of most structures, is a practically impossible distance for a human to perceive. But in addition to tiny distances, they’re also excellent at measuring changes that happen over long periods of time. A perfect example is a crack in a concrete structure. You can look at grass, but you probably can’t perceive it growing, and you can watch paint, but you won’t perceive it drying. And, you can watch a crack in a concrete slab, like this one in my garage, but you’ll probably never see it grow or shrink over time. So how do you know if it’s changing? You could use a crack meter like this one, and take readings manually over the course of a month or year or decade. But in many cases, that’s not a good use of any person’s time, especially when the crack is somewhere difficult or dangerous to access. So, just like strain gauges measure distance, you can also get crack meters that measure distance electronically. I put this one across the crack in my garage slab and recorded the changes over the course of a few months.

And, I know why this crack exists. It’s because the soil under the slab is expansive clay that shrinks and swells according to its moisture content. I thought it would be fun to use some soil moisture sensors to see if I could correlate the two, but my sensors weren’t quite sensitive enough. However, just looking at the rainfall in my city, you can get a decent idea about what might be driving changes in the width of this crack, which grew by about half a millimeter over the course of this demonstration. Cracking concrete isn’t always something to be concerned about, but if cracks increase in size over time, it can be a real issue. So, using sensors to track the movement of cracks over long durations can help engineers assess whether to take remedial measures.

And, there are a lot of parameters in engineering that change slowly over time. Dams are among the most dangerous civil structures because of what can happen when one fails. Because of that, they’re often equipped with all kinds of instruments as a way to monitor performance and make sure they are stable over the long term. One parameter I’ve talked about before is subsurface water pressure. When water seeps into the soil and rock below a dam, it can cause erosion that leads to sinkholes and voids, and it also causes uplift pressure that adds a destabilizing force to a dam. Instruments used to measure groundwater pressure are called piezometers. They often resemble a water well with a long casing and a screen at the bottom, but instead of taking water out, we just measure the depth to the water level. That’s made a lot easier with electronic sensors, like this one, but I don’t have a piezometer in my backyard. So, to show you how this works, I’m just hooking my pressure transducer to the tap so we can see how the city’s water pressure changes over time. I hooked this up to a laptop and let it run for about a day and a half, and here are the results.

The graph is a little messy because of the water use in my house throwing off the readings every so often, but you can see a clear trend. The pressure is lowest when water demands are high, especially during the evenings when people are watering lawns, cooking, and showering. In the middle of the night, the pumps fill up the water towers, increasing the local pressure in the pipes. This information isn’t that useful, except that it gives you a new perspective of thinking about real-world measurements. Recently I had a plumber at my house who took a pressure reading at the tap, which seemed like a totally normal thing at the time. But now, seeing that the pressure changes by around half a bar (or nearly 10 psi) over the course of a day, it seems kind of silly to just take a single measurement. And that’s the value of sensors, giving engineers more information to make important decisions and keep people safe after a structure is built.

By the way, the engineering of these instruments is pretty interesting on its own. Most of the sensors I’ve used in the demos were sent to us by our friends at Geokon, not as a sponsorship but just because they enjoy the channel and wanted to help out. These devices rely on a wire inside the case whose tension is related to the force or strain on the sensor. The readout device sends an electrical pulse that plucks the wire and then listens to the frequency that comes back. You can see the pluck and the return signal on my oscilloscope here. Just like plucking a guitar string, the wire inside the instrument will vibrate at a different frequency depending on the tension, and you can even hear the sound of the vibration if you get close enough. Of course civil engineers use lots of different kinds of sensors, but vibrating wire instruments are particularly useful in long-term applications because they are incredibly reliable and they don’t drift much over time. They’re also less vulnerable to interference and issues with long cables, since they work in the frequency domain. In fact, there are vibrating wire instruments that have been installed and functioning for decades with no issues or drift.
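
For a sense of the physics behind those readings, the fundamental frequency of a taut wire fixed at both ends is f = (1/(2L)) times the square root of T over mu, so the tension on the wire can be backed out from the measured frequency. The wire length and linear density below are made-up values just to show the relationship; real instruments typically rely on a manufacturer-supplied calibration factor instead.

WIRE_LENGTH = 0.15      # wire length in meters (assumed)
LINEAR_DENSITY = 2e-4   # wire mass per meter in kg/m (assumed)

def tension_from_frequency(f_hz):
    # rearranged from f = (1 / (2 * L)) * sqrt(T / mu)
    return 4 * LINEAR_DENSITY * WIRE_LENGTH**2 * f_hz**2   # newtons

# a higher plucked frequency means the wire is under more tension
for f in (800, 900, 1000):
    print(f, "Hz ->", round(tension_from_frequency(f), 2), "N")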

And the demos I’ve shown in this video just scratch the surface. We’ve come up with creative ways to measure all kinds of things in civil engineering that don’t necessarily lend themselves to garage experiments, but are still critical in performance monitoring of structures. Borehole extensometers are used to measure settlement and heave at excavations, dams, and tunnels. Load cells measure the force in anchors to make sure they don’t lose tension over time. Inclinometers detect subtle shifts in embankments or slopes by measuring the angle of tilt in a borehole along its length. Engineers keep an eye on vibrations, temperature, pressure, tilt, flow rate, and more to make sure that structures are behaving like they were designed and to keep people safe from disaster.

May 02, 2023 /Wesley Crump

East Palestine Train Derailment Explained

April 18, 2023 by Wesley Crump

[Note that this article is a transcript of the video embedded above.]

On the evening of Friday, February 3, 2023, 38 of 149 cars of a Norfolk Southern Railway freight train derailed in East Palestine, Ohio. Five of the derailed cars were carrying vinyl chloride, a hazardous material that built up pressure in the resulting fires, eventually leading Norfolk Southern to vent and burn it in a bid to prevent an explosion. The ensuing fireball and cloud brought the normally unseen process of hazardous cargo transportation into a single chilling view, and the event became a lightning rod of controversy over rail industry regulations, federal involvement in chemical spills, and much more. I don’t know about you, but in the flurry of political headlines and finger pointing, I kind of lost the story of what actually happened. Freight trains, like the one that derailed in East Palestine, are fascinating feats of engineering, and the National Transportation Safety Board (or NTSB) and others have released preliminary reports that contain some really interesting details. I’m not the train kind of engineer, but I think I can help give some context and clarity to the story, now that some of the dust has settled. I’m Grady, and this is Practical Engineering. In today’s episode, we’re talking about the East Palestine Train Derailment.

Modern freight trains are integral to daily life for pretty much everybody. Look around you, and chances are nearly every human-made object you see has, either as bulk raw materials or even as finished goods, spent time on the high iron. One of the reasons trains are so integral to our lives is that there’s nothing else that comes even close to their efficiency in moving cargo over land at such a scale. Steel wheels on steel rails waste little energy to friction (especially compared to rubber tires on asphalt). Locomotives may look huge, but their engines are almost trivial compared to the enormous weight they move. If a car were so efficient, its engine could practically fit in your pocket. And yet, the trains those locomotives pull are not so much a vehicle as they are a moving location, larger and heavier than most buildings.

With this scale in mind, you can see why the crew in a locomotive can’t monitor the condition of all the cars behind them without some help. A rear-view mirror doesn’t do you much good when part of your vehicle is a half hour’s walk behind you. There was a time not too long ago when every freight train had a caboose. Part of its purpose was to have a crew at the end of the train who could help keep a lookout for problems with the equipment. Now modern railways have replaced that crew with wayside defect detectors. These are computerized systems that can monitor passing trains and transmit an automated message to the crew over the radio letting them know the condition of their train in real time. Defect detectors look for lots of issues that can lead to derailment or damage, including dragging equipment, over-height or over-width cars (a hazard if the train will be passing through tunnels or under bridges), and, important in this case, overheating axles and bearings. Depending on the railway operator and line, these detectors are often spaced every 10 or 20 miles (or 15 to 30 kilometers).

The freight train that derailed in East Palestine, designated 32N, passed several defect detectors along its way, and the NTSB collected the data from each one. The suspected wheel bearing responsible for the crash was located on the 23rd car of the train. At milepost 79.9, it registered a temperature of 38 degrees Fahrenheit above the ambient temperature. Ten miles later, the bearing’s recorded temperature was 103 degrees above ambient. That might seem kind of high, but it is still well below the threshold set by Norfolk Southern that would trigger the train to stop and inspect the bearing. Twenty miles later, the train passed another defect detector that recorded the bearing’s temperature at 253 degrees above ambient (greater than the 200-degree threshold), triggering an alarm instructing the crew to stop the train. But, it was too late.
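
To make that decision logic concrete, here is a minimal sketch of the kind of threshold check a wayside hot-bearing detector performs. This is illustrative only, not any railroad’s actual software; the 200-degree stop-and-inspect threshold is the figure cited above, while the lower advisory level is a made-up placeholder.

# Illustrative sketch only: a simplified hot-bearing check, not any railroad's real logic.
# The critical threshold is the 200 F-above-ambient figure cited from the NTSB report;
# the advisory level below it is a hypothetical placeholder.
readings = [("milepost 79.9", 38), ("ten miles later", 103), ("twenty miles after that", 253)]

CRITICAL_F = 200   # stop the train and inspect (from the report)
ADVISORY_F = 170   # hypothetical lower warning level

for location, delta_t in readings:
    if delta_t >= CRITICAL_F:
        print(f"{location}: {delta_t} F above ambient -> critical alarm, stop and inspect")
    elif delta_t >= ADVISORY_F:
        print(f"{location}: {delta_t} F above ambient -> advisory, keep monitoring")
    else:
        print(f"{location}: {delta_t} F above ambient -> no alarm")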

Freight trains are equipped with a fail-safe braking system powered by compressed air. There are two main connections between cars on a train: one is the coupler that mechanically joins each car, and the other is the air line that transmits braking control pressure. As long as this line is pressurized, the brakes are released, and the cars are free to move. But if one of these air lines is severed, like it would be during a derailment, the loss of pressure triggers the brakes to engage on every single car of the train. That’s what happened shortly after that defect detector recorded the over-temperature bearing. When the defect detector notified the crew of an issue, they immediately applied the brakes to slow the train. But before they could reach a controlled stop, the train’s emergency braking system activated.  A security camera nearby caught this footage showing significant sparks from what is presumably the failing car moments before the derailment. Understanding the severity of the situation, the crew immediately notified their dispatcher of the possible derailment. They applied handbrakes to the two railcars at the head of the train, uncoupled, and moved the two locomotives at the head end (and themselves) about a mile down the line away from the fire and damage, not knowing the events that would quickly follow.
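
The fail-safe nature of that design can be expressed in a few lines of code. This is a deliberately simplified sketch: real systems modulate braking in proportion to the pressure reduction and distinguish service applications from emergency ones, and the roughly 90 psi charged pressure is just a typical figure, not a number from the investigation.

# Greatly simplified illustration of the fail-safe principle: brakes are held off
# only while the brake pipe stays charged; any loss of pressure applies them.
def brake_effort(brake_pipe_psi, charged_psi=90.0):
    reduction = max(0.0, charged_psi - brake_pipe_psi)
    return min(1.0, reduction / charged_psi)  # 0.0 = fully released, 1.0 = full application

print(brake_effort(90.0))  # intact, pressurized line -> 0.0 (brakes released)
print(brake_effort(0.0))   # severed line, as in a derailment -> 1.0 (full application)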

A train’s “consist” defines the collection of cars that make it up. 32N’s consist included 2 locomotives at the head, a locomotive near the center of the train called distributed power, and 149 railcars. 38 of those 149 railcars had come off the tracks, forming a burning pile of steel and cargo. Of those 38 cars that derailed, 11 were carrying hazardous materials including isobutylene, benzene, and vinyl chloride. Local fire crews and emergency responders worked to put out the fires and address the immediate threats resulting from the derailed cars. But despite the firefighting efforts, five of the derailed cars transporting vinyl chloride continued to worry authorities due to rising temperatures. Norfolk Southern suspected that the chemical was undergoing a reaction that would continue to increase in temperature and pressure within the tanks, eventually leading to an uncontrolled explosion and making an already bad situation much worse.

The cars carrying vinyl chloride were DOT-105 tank cars. These are not just steel cylinders on wheels. The US Department of Transportation actually has very specific requirements for tank cars that carry hazardous materials. DOT-105 cars have puncture-resistant systems at either end to keep adjacent cars from punching a hole through the tank. They have a thermal protection system with insulation and an outer steel jacket to protect against fires. They are tested to pressures much higher than they would normally see, and they include pressure relief devices, or PRDs, that automatically open to keep the tank from reaching its bursting pressure. The PRDs on some of the vinyl chloride cars did operate to limit the pressure inside the tanks, but the temperature continued to increase.

As fires continued to burn, state and federal officials noted the temperature in one of the vinyl chloride cars was reaching a critical level. Rather than trust the PRDs to keep the tanks safe from bursting, they decided to perform a controlled release of the chemical to prevent an explosion. While they were still making the decision, the Ohio National Guard and the Federal Emergency Management Agency were running atmospheric models to estimate the extent of the resulting plume. Local emergency managers used these models to evacuate the area most likely to be affected by the release. On February 6, crews dug a large trench in the ground, vented the five vinyl chloride tanks into the trench and set the chemical on fire to burn it off. Despite being done on purpose to reduce the danger of the situation, the resulting fireball and pillar of smoke have become symbolic of the disaster itself.

You might be wondering, like I did, why the controlled burn was necessary if the tank cars were fitted with PRDs. While the NTSB’s full report hasn’t been released yet, they have released some details about their inspections of the vinyl chloride cars. Three of the cars were manufactured in the 1990s with aluminum hatches that cover the valves (as opposed to the more updated standard steel hatches). During the initial fires and “energetic pressure reliefs”, it seems that the aluminum may have melted and obstructed the relief valves, impacting their ability to reduce the building pressure.

You might also be wondering why a train passing through a populated area would be carrying so much vinyl chloride in the first place. Vinyl chloride might sound familiar to some of you as it is the ‘VC’ in PVC. This channel makes a lot of use of PVC demonstrations. It’s a material used in a lot of applications, so we produce it in vast quantities, and railways are usually how we move vast quantities of bulk materials and chemicals. But, vinyl chloride is a toxic, volatile, and flammable liquid, not something you want a big pool of near your city, so officials decided to burn it off. Flaring or burning chemicals is a pretty common practice for dealing with dangerous gases or liquids that can’t easily be stored. It’s essentially a lesser evil, a way to quickly convert a hazardous material to something less hazardous or at the very least, easier to dilute. 

While the byproducts of burning vinyl chloride are far from ideal, combusting it into the atmosphere was intended to be a way to quickly address the concern of it harming people on the ground or polluting a larger area. In fact, the US Environmental Protection Agency flew a specially equipped airplane after the burnoff to measure chemical constituents of the resulting plume. They detected only low levels of the chemicals of concern and concluded in their report that the controlled burn of the railcars was a success.

But “success” is a strong word for an event like this, and I might have chosen a different word. While there were no immediate fatalities resulting from the crash, the impacts are far-reaching. Chemical pollutants were not only released into the air, but also washed into local waterways during the firefighting efforts. Hazardous substances reached all the way to the Ohio River, and the Ohio Department of Natural Resources estimated that roughly 40,000 small fish and other aquatic life were killed in the local creek that flows away from East Palestine. Between the contamination of water and soil, it’s impossible to say what the long term impact on the local ecology will be.

As for the residents, both the state and federal EPAs have been heavily involved in all aspects of the cleanup, monitoring air quality and water samples from wells and the city’s fresh water supply. So far, they haven’t detected any pollutants in the air at levels of health concern since the derailment. As for the area’s groundwater, out of 126 wells tested, none have shown evidence of significant contamination. But as you’ve seen in some of my previous videos, it can take a while for contamination to move through groundwater.

The EPA has ordered Norfolk Southern to conduct all cleanup actions associated with the East Palestine train derailment. The company itself has pledged to “meet or exceed” regulatory requirements with regard to the cleanup. Cleaning up after such a disaster is no easy feat. From air, water, and soil testing to disposal of huge volumes of contaminated water and soil, the whole thing is a mess, literally. The cleanup is still underway as I’m releasing this video, but so far they’ve removed over 5,000 tons of contaminated soil and collected about 7 million gallons or 26 million liters of contaminated water from rain falling on the site and washing off trucks working on the cleanup. The response has been robust, but we know how these cleanups can go. The EPA’s list of almost 1,800 hazardous waste sites of highest priority only has 450 examples of sites cleaned up enough to be taken off the list!

The whole situation has also sparked policy discussions among several agencies. The NTSB is opening a special investigation into the safety culture and practices of Norfolk Southern. From congressional testimony, to public statements from the Department of Transportation, to political posturing from a huge variety of public officials, one thing seems clear to me: this disaster will have an impact on the way railroading is conducted in America for years to come.

The residents of East Palestine have a long road ahead of them. While all the preliminary testing so far paints a relatively safe and healthy picture of the town after the event, many residents have reported symptoms and effects they attribute to the spill. Even if there really are no residual compounds present at dangerous levels, the anxiety and unease of living near a high-profile chemical spill is hard to escape. The economic impact of just the perception of contamination is also very real, and things like home values and local agricultural businesses have already taken a direct hit. I live really close to a freight line myself, something that is a unique joy for my two-year-old. But now, when I see those tanker cars roll by, I can’t stop myself from just wondering what’s inside them and what might happen if they came off the rails in my neighborhood.

But I also recognize that much of the lifestyle I enjoy depends on those trains rolling by my house, and despite the tragedy of events like East Palestine, the DOT recognizes rail transportation to be the safest overland method of moving hazardous materials. Even with the bulk of hazardous materials being transported over rails, highway hazmat accidents result in more than 8 times as many fatalities! So, freight rail isn’t going away anytime soon. It’s the only feasible way to move the mountains of materials required for all of the industries in the US, and really, the world. And the fact that we rarely have to consider the incredible engineering details of tanker cars, defect detectors, and hazardous material cleanup operations is a testament to the hard work that goes into regulating and operating these lines.


But freight rail in the US is unlike any other industry. Only seven companies operate the Class I railroads that make up the vast majority of rail transportation in the country. The US rail market essentially consists of two duopolies: CSX and Norfolk Southern in the east and Union Pacific and BNSF in the west. That gives these companies enormous political power, as we’ve seen in recent news. So, we have to ask ourselves, are accidents like East Palestine, however rare they may be, just a part of doing business, or is there more that can be done? And I think the answer in this case is clear. I expect we’ll see some changes to safety regulations in the future to make sure something like this never happens again. And hopefully the next Practical Engineering video on railway engineering will cast things in a more positive light.

April 18, 2023 /Wesley Crump

Why Engineers Can't Control Rivers

April 04, 2023 by Wesley Crump

[Note that this article is a transcript of the video embedded above.]

This is the Old River Control Structure, a relatively innocuous complex of floodgates and levees off the Mississippi River in central Louisiana. It was built in the 1950s to solve a serious problem. Typically rivers only converge; tributaries combine and coalesce as they move downstream. But the Mississippi River is not a typical river. It actually has one place where it diverges into a second channel, a distributary, named the Atchafalaya. And in the early 1950s, more and more water from the Mississippi River was flowing not downstream to New Orleans in the main channel, but instead cutting over and into this alternate channel. 

The Army Corps of Engineers knew that if they didn’t act fast, a huge portion of America’s most significant river might change its path entirely. So they built the Old River Control Structure, which is basically a dam between the Mississippi and Atchafalaya Rivers with gates that control how much water flows into each channel on the way to the Gulf of Mexico. It was certainly an impressive feat, and now millions of people and billions of dollars in economic activity rely on the stability created by the project, the now-static nature of a Mississippi River that once meandered widely across the landscape. That’s why Dr. Jeff Masters called it America’s Achilles’ Heel in his excellent 3-part blog on the structure.

You see, the Atchafalaya River offers both a shorter and a steeper path to the gulf. That means, if the structure were to fail (and it nearly did during a flood in 1973), a major portion of the mighty Mississippi would be completely diverted, grinding freight traffic to a halt, robbing New Orleans and other populated areas of their water supply, and likely creating an economic crisis that would make the Suez Canal obstruction seem like a drop in the bucket. Mark Twain famously said that "ten thousand river commissions, with all the mines of the world at their back, cannot tame that lawless stream, cannot curb it or confine it, cannot say to it, Go here, or Go there, and make it obey;" And engineers have spent the better part of the last 140 years trying to prove him wrong.

In my previous video on rivers, we talked about the natural processes that cause them to shift and meander over time. Now I want to show you some examples of where humans try to control mother nature’s rivers and why those attempts often fail or at least cause some unanticipated consequences. We’ve teamed up with Emriver, maker of these awesome stream tables, to show you how this works in real life. And we’re here on location at their headquarters. I’m Grady, and this is Practical Engineering. On today’s episode, we’re talking about the intersection between engineering and rivers.

One of the most disruptive things that humans do to rivers is build dams across them, creating reservoirs that can be kept empty in anticipation of a flood or be used to store water for irrigation and municipal supplies. But rivers don’t just move water. They move sediment as well, and just like an impoundment across a river stores water, it also becomes a reservoir for the silt, sand, and gravel that a river carries along. That’s pretty easy to see in this flume model of a dam. Fast flowing water can carry more sediment suspended in it than slow water. The flow of water rapidly slows as it enters the pool, allowing sediment to fall out of suspension. Over time, the sediment in the reservoir builds and builds. This causes some major issues. First, the reservoir loses capacity over time as it fills up with silt and sand, making it less useful. Next, water leaving on the other side of the dam, whether through a spillway or outlet works, is mostly sediment-free, giving it more capability to cause erosion to the channel downstream. But there’s a third impact, maybe more important than the other two, that happens well away from the reservoir itself. Can you guess what it is? 

In the previous video of this series, we talked about the framework that engineers and the scientists who study rivers (called fluvial geomorphologists) use to understand the relationship between the flow of water and sediment in rivers. This diagram, called Lane’s Balance, simplifies the behavior of rivers into four parameters: sediment volume, sediment size, channel flow, and channel slope. You can see when we reduced the volume of sediment in a stream, like we would by building a dam, Lane’s Balance tips out of equilibrium into an erosive condition. In fact, according to Lane’s Balance, any time we change any of these four factors, it has a consequence on the rest of the river as the other three factors adjust to bring the stream back into equilibrium through erosion or deposition of sediments. And we humans make a lot of changes to rivers. We want them to stay in one place to allow for transportation and avoid encroaching on property; we want them to drain efficiently so that we don’t get floods; we want them to be straight so that the land on either side has a clean border; we want to cross over them with embankments, utilities, electrical lines, and bridges; we want to use them for power and for water supply. Oh, and rivers and streams also serve as critical habitat for wildlife that we both depend on and want to preserve. All those goals are important and worthwhile, but, as we’ll see (with the help of this awesome demonstration that can simulate river responses), they often come at a cost. And sometimes that cost is borne by someone or someplace much further upstream or downstream from where the changes actually take place.
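
If you like to think in code rather than diagrams, here is a minimal sketch of the bookkeeping behind Lane’s Balance. The quantities are unitless relative values I made up for illustration, since Lane’s relation is a proportionality rather than a true equation, and qualitatively all it can tell you is which way the scale tips.

# Qualitative reading of Lane's Balance: compare the sediment side (load x size)
# against the water side (flow x slope) and report which way the scale tips.
def lanes_balance(sediment_load, sediment_size, water_flow, slope):
    sediment_side = sediment_load * sediment_size
    water_side = water_flow * slope
    if water_side > sediment_side:
        return "erosive: excess stream power picks up bed and bank material"
    if water_side < sediment_side:
        return "depositional: excess sediment drops out of suspension"
    return "approximate equilibrium"

# The dam example from the text: sediment trapped upstream, everything else unchanged.
print(lanes_balance(sediment_load=0.5, sediment_size=1.0, water_flow=1.0, slope=1.0))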

One of the classic examples of this is channel straightening. In cities, we often disentangle streams to get water out faster, reduce the impacts of floods, and force the curvy lines of natural rivers to be neater so that we can make better use of valuable space. I can show it in the stream table by cutting a straight line that bypasses the river’s natural meanders.

The impact of straightening a river is a reduction in a channel’s length, necessarily creating an increase in its slope. Water flows faster in a steeper channel, making it more erosive, so the practical result of straightening a channel is that it scours and cuts down over time. It’s easy to see the results in the model. This is compounded by the fact that cities have lots of impermeable surfaces that send greater volumes of runoff into streams and rivers. That’s why you often see channels covered in concrete in urban areas - to protect against the erosion brought on by faster flows. And this works in the short term. But, making channels straight, steep, and concrete-covered ruins the stream or river as a habitat for fish, amphibians, birds, mammals, and plants. It also has the potential to exacerbate flooding downstream, because instead of floodwaters being stored and released slowly from the floodplain, it all comes rushing through as a torrent all at once. And it’s not just cities. Channels are straightened in rural areas, too, to reduce flooding impacts to crops and make fields more contiguous and easy to farm. But over the long term, channelizing streams reduces the influx of nutrients to the soils in the floodplain by reducing the frequency of a stream coming out of its banks, slowly making the farmland less productive.
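
To put some hypothetical numbers on that, imagine a reach that falls 5 meters from end to end along a 10-kilometer meandering channel, and then gets straightened into a 7-kilometer bypass:

\[
S = \frac{\Delta z}{L}, \qquad
S_{\text{meandering}} = \frac{5\ \text{m}}{10{,}000\ \text{m}} = 0.0005, \qquad
S_{\text{straightened}} = \frac{5\ \text{m}}{7{,}000\ \text{m}} \approx 0.0007
\]

The drop didn’t change, but shaving 3 kilometers out of the channel made it more than 40 percent steeper, and the extra velocity that comes with that steeper slope is what does the extra scouring.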

Stream restoration is big business right now as we have begun to recognize the long-term impacts of straightening and deepening natural channels and reap the consequences of the mistakes of yesteryear. In the US alone, communities and governments spend billions of dollars per year undoing the damage that channelization projects have caused. Even the most famous of the concrete channels, the Los Angeles River, is in the process of being restored to something more like its original state. The LA River Ecosystem Restoration project plans to improve 11 miles (18 km) of the well-known concrete behemoth featured in popular films like Grease and The Dark Knight Rises. The project will involve removing concrete structures to establish a soft-bottom channel, daylighting streams that currently run in underground culverts, terracing banks with native plants, and restoring the floodplain areas, giving the river space to overbank during floods. Thanks to fluvial geomorphologists, projects like this are happening all around the world. But, straightening channels isn’t the only way humans impact rivers and streams.

Road crossings are another place where we have a big impact. Bridges are often supported on intermediate piers or columns that extend up from a foundation in the river bed. Water flows faster around the obstruction created by these piers, making them susceptible to erosion and scour. Engineers have to estimate the magnitude of this scour to make sure the piers can handle it. You don’t have to scour the internet very hard to find examples where bridges met their demise because of the erosion that they brought on themselves. In fact, the majority of bridges that fail in the United States don’t collapse from structural problems or deterioration; they fail from scour and erosion of the river below.

But, it’s not just piers that create erosion. Both bridges and embankments equipped with culverts often create a constriction in the channel as well. Bridge abutments encroach on the channel, reducing the area through which water can flow, especially during a flood, causing it to contract on the upstream side and expand on the downstream side. Changes in the velocity of water flow lead to changes in how much sediment it can carry. Often you’ll see impacts on both sides of an improperly designed bridge or culvert; Sediment accumulates on the upstream side, just like for a dam, and the area downstream is eroded and scoured. Modern roadway designs consider the impacts that bridges and culverts might have on a stream to avoid disrupting the equilibrium of the sediment balance and reduce the negative effects on habitat too. Usually that means bridges with wider spans so that the abutments don’t intrude into the channel and culverts that are larger and set further down into the stream bed.

Just like bridges or culvert road crossings, dams slow down the flow of water upstream, allowing sediment to fall out of suspension as we saw in the flume earlier in the video. The consequences include sediment accumulation in the reservoir and potential erosion in the downstream channel, but there’s one more consequence. All that silt, sand, and gravel that a dam robs from the river has a natural destination: the delta. When a river terminates in an ocean, sea, estuary, or lake, it normally deposits all that sediment. Let’s watch that process happen in the river table. River deltas are incredibly important landscape features because they enable agricultural production, provide habitat for essential species, and they feed the sand engines to create beaches that act as a defensive buffer for coastal areas. Wind and waves create nearly constant erosion along the coastlines, and if that erosion is not balanced with a steady supply of sediment, beaches scour away, landscapes are claimed by the sea, habitat is degraded, and coastal areas have less protection against storms.

And hopefully you’re seeing now why it’s so difficult, and some might even say impossible, to control rivers. Because any change you make upsets the dynamic equilibrium between water and sediment. And even if you armor the areas subject to erosion and continually dredge out the areas subject to deposition, there’s always a bigger flood around the corner ready to unravel it all over again. So many human activities disrupt the natural equilibrium of streams and rivers, causing them to either erode or aggrade, or both, and often the impacts extend far upstream or downstream. It’s not just dams, bridges, and channel realignment projects either. We build levees and revetments, dredge channels deeper, mine gravel from banks, clear cut watersheds, and more. Historically we haven’t fully grasped the impacts those activities will have on the river in 10, 50, or 100 years.

In fact, the first iteration of the stream tables we’ve been filming was built by Emriver’s late founder, Steve Gough, in the 1980s. At the time, he was working with the state of Missouri trying to teach miners, loggers, and farmers about the impacts they could have on rivers by removing sediment or straightening channels. These people who had observed the behavior of rivers their entire lives were understandably reluctant to accept new ideas. But, seeing a model that could convey the complicated processes and responses of rivers was often enough to convince those landowners to be better stewards of the environment. Huge thanks to Steve’s wife, Katherine, and the whole team here at Emriver, who continue his incredible legacy of using physical models to shrink the enormous scale of river systems, and the lengthy time scales over which they respond to changes, down to something anyone can understand, helping people around the world learn more about the confluence of engineering and natural systems. Thank you for watching, and let me know what you think!

April 04, 2023 /Wesley Crump

Why Construction Projects Always Go Over Budget

March 21, 2023 by Wesley Crump

[Note that this article is a transcript of the video embedded above.]

Boston, Massachusetts is one of the oldest cities in America, founded in 1630, more than a few years before the advent of modern motor vehicles. In the 1980s, traffic in downtown Boston was nearly unbearable from the tangled streets laid out centuries ago, so city planners and state transportation officials came up with what they considered a grand plan. They would reroute the elevated highway and so-called “central artery” of Interstate 93 into a tunnel below downtown and extend Interstate 90 across the inner harbor to the airport in another tunnel. Construction started in 1991, and the project was given the nickname Big Dig because of the sheer volume of excavation required for the two tunnels. In terms of cost and complexity, the Big Dig was on the scale of the Panama Canal or Hoover Dam. It featured some of the most innovative construction methods of the time, and after 16 years of work, the project was finished on time and under budget…[Grady makes a skeptical face into the camera]

Actually, no. You might know this story already. Of course, the Big Dig did make a big dent in the traffic problem in Boston, but that came at a staggering price. The project was plagued with problems, design flaws, fraud, delays, and of course, cost overruns. When construction finished in 2007, the final price tag was around fifteen billion dollars, about twice the cost that was originally expected.

It’s a tale as old as civil engineering: A megaproject is sold to the public as a grand solution to a serious problem. Planning and design get underway, permits are issued, budgets are allocated (all of which takes a lot longer than we expect), construction starts, and then there are more problems! Work is delayed, expenses balloon, and when all the dust settles, it’s a lot less clear whether the project’s benefits were really worth the costs.

Not many jobs go quite as awry as the Big Dig, but it’s not just megaprojects that suffer from our inability to accurately anticipate the expense and complexity of construction. From tiny home renovations to the largest infrastructure projects in the world, it seems like we almost always underestimate the costs. And the consequences of missing the mark can be enormous. Well, I’ve been one of those engineers trying to come up with cost estimates for major infrastructure projects, and I’ve been one of those engineers who underestimated. So I have a few ideas about why we so consistently get this wrong.  I’m Grady, and this is Practical Engineering. In today’s episode, we’re trying to answer the question of why construction projects always seem to go over budget.

Major projects are often paid for with public funds, so it’s important (it’s vital) that the benefits we derive from them are worth the costs. And the only way we can judge if any project is worth starting is to have an accurate estimate of the costs first. And, of course, this is not just a problem with civil infrastructure but with all types of large projects paid for with public funds like space programs and defense projects. They have to be justified. Most projects have benefits, and you do get those benefits at the end, no matter the cost, but if they aren’t worth the costs, you’d rather not go through with the project at all. This is especially true for projects like streets and highways where not only costs get underestimated but the benefits are often overestimated too. Check out my friend Jason’s videos on the Not Just Bikes channel for more information about that.

One of the biggest issues we face with large projects is a chicken-and-egg problem: you don’t know how much they’ll cost until you go through the design, but you don’t want to go through a lengthy and expensive design phase and end up with a project you can’t afford. Budgeting and securing funds are usually slow processes, plus you need to know if the job is even worth doing in the first place, so you can’t just wait until the bids come in to find out how much a project is going to cost. You need to know sooner than that, which usually means you need your design professional to estimate the cost. For an infrastructure project, that’s the engineer, and engineers are notoriously not good at estimating costs. 

We don’t know which contractors are busy and which ones aren’t, what machinery they have, or whether or not they’ll bid on your project. We don’t know the sales reps at the concrete and asphalt plants or keep track of the prices of steel, aggregates, pumps, and piping. We don’t have a professional network full of subcontractors, material suppliers, and equipment rental companies. We didn’t study construction cost estimating in college, and most of us have never built anything in the field. And the people who have, those who are most qualified to do this job (the contractors that will actually bid on the project), usually aren’t allowed to participate in the cost estimating during design because it would spoil the fair and transparent procurement process. It would give one or more contractors a leg up on their competition. Because, (here’s a little secret), they aren’t always so good at estimating costs either. When those bids come in, there’s often a huge spread between them, meaning one of the most significant uncertainties of an entire project is sometimes simply which contractors will decide to bid the job.

Of course, there are some alternatives to the normal bidding process that many infrastructure projects use, but even those often require early cost estimates from people who are necessarily limited in their ability to develop cost estimates. In fact, the industry term for the cost estimate that comes from an engineer is the Opinion of Probable Construction Cost or OPCC. Take a look at that mouthful. Two qualifiers: opinion of probable construction cost. And still, agencies and municipalities and DOTs will write down that number on a folded piece of paper, slide it surreptitiously to their governing board, and whisper, “This is how much we need.” And the next day, the journalists who were at the meeting will publish that number in the news. And now, every future prediction of the project’s cost will be compared to that OPCC, no matter how early in the process it was developed. All this to say: estimating the cost of a construction project is hard work (especially early on in the project’s life cycle), it takes highly skilled and knowledgeable people to do well, and even then, it is a process absolutely chock full of uncertainties and risks that are really hard to distill down to a single dollar value. But construction cost estimates aren’t just imprecise. If that were true, you would expect us to overestimate as frequently as we come under. And we know that’s not the case. Why is it always an underestimate?

One hint is in the fact that you often just hear a single number for a project’s cost. What’s included in that 15 billion dollars for the Big Dig or the cost estimate you see for a major project in the news? The truth is that it’s different for every job, to the point where it’s almost a meaningless number without further context. Large infrastructure projects are essentially huge collaborations between public and private organizations that span years, and sometimes decades, between planning, design, permitting, and construction. Land acquisition, surveying, environmental permitting, legal services, engineering and design, and the administration to oversee that whole process all cost money (sometimes a lot of money), and that’s before construction even starts. So if you think that bid from a contractor is the project’s cost, you’re missing out on a lot. And if those pre-construction costs get included in one estimate (for example, the final tally of a project’s cost) when they weren’t included in an earlier estimate (like the engineer’s OPCC), of course it’s going to look like the project came in over budget. You’re not comparing apples to apples.

Another reason for underestimation is inflation. The main method we use to estimate how much something will cost is to look back at similar examples. We consult the Ghost of Construction Past to try and predict the future. It’s not unusual to look at the costs of projects 5 or 10 years old to try and guess the cost of a different project 5 or 10 years into the future. The problem with that is dollars or euros or yen or pounds sterling don’t buy the same amount of stuff in the future that they did in the past. The cost of anything is a moving target, and it’s usually moving up. That’s okay, you might think, just adjust the costs. There are even inflation calculators online, but they normally use the consumer price index. That’s a figure that tracks the cost of a basket of goods and services that a typical individual might buy. Prices vary widely across locations and types of goods, so the idea is that, if you monitor the dollar price of groceries, electricity, clothing, gasoline, et cetera, it can give you a broad measure of how the value of money changes over time for a normal consumer. But there’s not much concrete and earthwork in that basket of goods, which means the consumer price index is generally not a good measure of how construction costs change over time. 

There are a few price indices that track baskets full of labor hours, structural steel, lumber, and cement and even separate those baskets by major city. You have to pay to get access to the data, and they can help a wayward engineer adjust past construction costs to the present day. But they can’t help them predict how those prices will change in the future. And that’s important because large infrastructure projects take a long time to design, permit, and fund. So if there are 2 or 5 or 10 years between when an estimate was prepared and when it’s being used or even discussed, there’s a good chance that it’s an underestimate simply because the value of money itself slid out from underneath it. Cost estimates have an expiration date, a concept that gets overlooked, sometimes even by owners, and often by the media who report these numbers.
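
As a sketch of how that adjustment works in practice, the arithmetic is just a ratio of index values, sometimes followed by an escalation out to a project’s expected construction midpoint. The index values and escalation rate below are made-up placeholders, not figures from any published index:

# Bring a historical cost to present-day dollars with a construction cost index,
# then escalate it forward to the expected midpoint of construction.
def adjust_and_escalate(historical_cost, index_then, index_now, annual_escalation, years_to_midpoint):
    present_cost = historical_cost * (index_now / index_then)
    return present_cost * (1 + annual_escalation) ** years_to_midpoint

# A $10M line item bid years ago, with hypothetical index values, 4 percent annual
# escalation, and a construction midpoint three years out:
print(adjust_and_escalate(10_000_000, index_then=10_035, index_now=13_175,
                          annual_escalation=0.04, years_to_midpoint=3))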

That slow time scale for construction projects creates another way that costs go up. Designing a big project is just like navigating a big ship. By the time you notice things drifting in the wrong direction, the easy chance to correct course has already passed. So, we don’t do it all in one fell swoop. You have to have a bunch of milestones where you stop and check the progress because going back to the drawing board is time-consuming and expensive. The issue with this process is that, the further a project matures, the more people get involved. Once you’ve established feasibility, the bosses and the bosses’ bosses start to weigh in with their advice. Once you have a preliminary design, it gets sent out to regulators and permitting agencies. Once you have some nice renderings, you hold public meetings and get citizens involved. And with all those cooks in the kitchen participating in the design process, does the project get simpler and more straightforward? Almost never.

There is no perfect project that makes everyone happy. So, you end up making compromises and adding features to allay the concerns of all the new stakeholders. This may seem like a bunch of added red tape, but it really is a good thing in a lot of ways. There was a time when major infrastructure projects didn’t consider all the stakeholders or the environmental impacts, and, sure, the projects probably got done more quickly, efficiently, and at a lower cost (on the surface). But the reality is that those costs just got externalized to populations of people who had little say in the process and to the environment. I’m not saying we’re perfect now, but we’re definitely more thoughtful about the impacts projects have, and we pay the cost for those impacts more directly than we used to. But, often, those costs weren’t anticipated during the planning phase. They show up later in design when more people get involved, and that drives the total project cost upward.

And the thing about project maturity is that, even when you get to the end of design, the project still only exists as a set of drawings on pieces of paper. There are still so many unanswered questions, the biggest one being, “How do we build this?” Large projects are complex, putting them at the mercy of all kinds of problems that can crop up during construction: material shortages, shipping delays, workforce issues, bad weather, and more. Then there are the unexpected site conditions. An engineer can only reasonably foresee so much while coming up with a design on paper or in computer software. A good example is the soil or rock conditions at the site. During design, we drill boreholes, take samples, and do tests on those samples. That lets you characterize the soil or rock in one tiny spot. Of course, you can drill lots of holes, but those holes and those tests are expensive, so it’s a guessing game trying to balance the cost of site investigations with the consequences of mischaracterizing the underlying materials.

If the engineer guesses wrong, it can mean that excavation is more time-consuming because the contractor expected soil and got rock, or that backfill material has to be brought in from somewhere else because the stuff on site isn’t any good. In the worst cases, projects have to be redesigned when the conditions at the site turn out to be different from what was assumed in the design phase. And that’s just the dirt. While it might be great for science or history, imagine the cost of your project if you find historical artifacts or endangered species that you didn’t know were there. It’s a simple reality that there is a lot of uncertainty moving from design into construction, and there just aren’t that many unexpected conditions that make a construction project simpler and cheaper. Of course, opportunities for cost savings do crop up from time to time, but usually those savings get pocketed by the contractor, not passed along to the owner. That’s intentional: the contractor takes on a lot of risk, both good and bad. But you can’t saddle a contractor with all the risk of something unexpected showing up, and nearly all large contracts have change orders during construction that drive up the cost of the project.

Of course, you can’t ignore the more nefarious ways that costs go up. Any industry that has a lot of money moving around has to contend with fraud, and you don’t have to look too hard through the news to find examples of greed. And there are also plenty of examples where politicians or officials misrepresented the expected cost of a project to avoid public scrutiny. But, in most cases, the reasons for going over budget are much less villainous and far more human: we are just too darned optimistic and short-sighted. But that’s not a good excuse, and I think there’s a lot of room for improvement here. So what do we do? How can we get the actual project cost closer to the budget?

Of course, we can bring construction costs down, but that’s a whole discussion in and of itself. Maybe we’ll table that topic for a future video. I can hear people screaming at the monitor to just add contingency to the budget. Anyone who’s ever guesstimated the cost of anything knows to tack on an extra 15% for caution. Of course, contingency is a tool in the toolbox, but even that has to be justified. We know that the final cost of a project can be more than twice the preliminary estimates, but if you tell a client you added 100% to your estimate for safety, most likely, you’re going to get fired. No one wants to believe there’s that much uncertainty, and also it might not be true. You can’t set aside a billion dollars for a project that costs a hundred thousand, give or take a few K. Sure, you’ll come in under budget, but you just tied up a huge pile of public resources for no good reason.

It turns out a lot of the research suggests spending more money during the planning and design phases. Of course the paper-pushing engineer is saying to spend more money on engineering. But really, construction is where the majority of project costs are, so the theory is that if you can reduce the risks and uncertainty going into construction by spending a little more time in the preconstruction phases, you’ll often earn more than that cost back in the long run. Take three to five percent of those dollars you would have spent on construction, and spend them on risk assessment and contingency planning, and see if it doesn’t pay off. Honestly, even most contractors would prefer this. I know their insurance carriers would.

But, all that considered, I think the biggest place for improvement in budgeting for large construction projects is simply how we communicate those budgets. A single dollar number is easy to understand and easy to compare to some future single dollar number, but really it’s meaningless without more context about when it was developed and what it includes. Because, what is a budget anyway? It’s a way to manage expectations. And if you’re early on in the planning or design phase of a big project, you should expect the unexpected. There’s uncertainty in big projects, and it should be okay to admit that to the public. It should be okay to say, we think it’s going to cost X, but there are still a lot of unknowns. And we think the project will still be worth doing, even if the cost climbs up to Y. And if it goes beyond that, we’re not just going to keep pressing on. We’re going to regroup and find a way to make the benefits worth the costs. There is a ton of room to improve how we develop cost estimates for projects, but there’s tons of room to improve how we communicate about them too.

March 21, 2023 /Wesley Crump

Why Rivers Move

March 07, 2023 by Wesley Crump

[Note that this article is a transcript of the video embedded above.]

This is a map of the Mississippi River drafted by legendary geologist Harold Fisk. It’s part of a fairly unassuming geological report that he wrote in 1944 for the Army Corps of Engineers, but the maps he produced are anything but run of the mill. They’re strikingly beautiful representations of not just the 1944 path of the Mississippi, but of all the historical paths it’s cut through the landscape over thousands of years. Although astonishing to see on a map, that meandering path represents a major challenge, not just for the people who live and work near the river, but the people around the world who depend on the goods and services that it supports. And that’s a lot of people. What the Native Americans called the “Father of Waters” is one of the most important freight corridors in the entire United States, and a huge proportion of the grain we export to other countries is transported on barges along the Mississippi. A change in the river’s course could bottleneck freight traffic, cripple the economy, and potentially even result in a global food crisis. In the 80 or so years since Harold Fisk’s report was written, we’ve spent billions of dollars on infrastructure just to coerce the mighty Mississippi to stay within its current channel. And that’s only a single case study in a battle that’s happening nonstop around the world, between human activity on Earth and the dynamic nature of the rivers that form its landscape.

Even though the natural shifting and meandering of rivers and streams can seriously threaten our infrastructure, our economy, and even the environment, it’s not something that many people pay attention to or even know about at all! Because the timescale is slow and gradual, you don’t see it in the headlines until it becomes a serious problem. And the factors that affect how rivers move don’t really follow our intuitions. So, we’ve teamed up with Emriver, a company that makes physical river models called stream tables, to create a two-part series on the science and engineering behind why river channels shift and meander, and what tools engineers use to manage the process. We’re on location at their facility in Carbondale, Illinois, and I’m so excited to show you these models. I’m Grady, and this is Practical Engineering. On today’s episode, we’re talking about fluvial geomorphology, or the science behind the shape of rivers.

If someone asked you to engineer a channel for water to flow between two locations, what path would you choose? Probably a straight line between them, right? It’s the simplest and most cost effective choice. So why doesn’t mother nature choose it? This river table is full of media that represents earthen materials like silt, sand, and gravel. Each particle size has a different color to make it easier to differentiate. (And the online video compression algorithms love this stuff.) Water flows in at the top of the table and out at the bottom, so we can witness the actual physical processes that happen in real rivers. In the real world, this river system would be tens or hundreds of miles long, and what happens in this model over the course of a few hours might take hundreds or thousands of years as well. Let’s create that straight path in the earth connecting the inlet and outlet of the stream table, set the water flowing through it, and just see what happens. [Beat for time lapse]. Did the channel behave like you expected, or did you find the formation of the meandering path a little bit unintuitive? Hopefully by the end of this video, it will make perfect sense.

We learn about the process of erosion even when we’re really young. Wind and water carve at the earth, transporting the material from one location to another. In most places, erosion happens so slowly that you could never watch it in action, like growing grass or drying paint. But take a look at a river and you immediately see erosion underway. All you have to do is dip below the surface of the water and look. We usually think of rivers as highways for water, but they also transport another material in enormous quantities: sediment. All that silt, sand, gravel, and rock that erodes from the earth cascades and concentrates in rivers and streams, where it’s carried through valleys and eventually out to the lakes and oceans. Because of their power to move rock and soil, the shape of earth’s landscape, the geomorphology, is hugely influenced by river systems.

Maybe because the processes themselves happen so slowly, it took a long time for science to develop around how and why rivers change their paths through the landscape. But, in the 1950s, a civil engineer and hydrologist by the name of Emory Lane quit his job at the US Bureau of Reclamation to serve as a professor at Colorado State University. Through his time at the Bureau, he worked in hydraulic laboratories studying the interactions between water, soil, and rock. By the time he accepted his appointment, he was well on the way to developing a unified theory of sediment transport. In 1955, he published his landmark equation that is still used today by engineers, geologists, and other professionals in the river sciences. And just like a lot of the most famous equations in history, it doesn’t look too complicated. It says that, in a stable stream, the flow of water multiplied by the slope of that stream is proportional to the flow of sediment in the stream multiplied by the size of that sediment. It seems simple - just four parameters - but, you know, it’s also a funny-looking equation with zero context, so maybe you’re not feeling like an expert just yet. But, with the help of the stream table, I can show you the beauty of this relationship and how simply it predicts the way rivers will behave.
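
Written out the way it usually appears in river engineering references (the notation below is the conventional shorthand, not something from the video), the relationship looks like this:

\[
Q_w \, S \;\propto\; Q_s \, D_{50}
\]

where Q_w is the water discharge, S is the channel slope, Q_s is the sediment discharge, and D_50 is the median sediment grain size.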

Let’s just look at some examples. Say that a large area is hit by wildfire that burns all the trees and vegetation. Where before you had a lush and verdant landscape with plants, bushes, and trees to stabilize the soil, now it’s mostly just bare earth. When it rains, the water that runs off the burned area erodes the unprotected landscape, washing more sediment into the river than it would have before the fire. We can demonstrate this by simply adding media to the upstream part of the stream table. Can you predict how the river will respond? Let’s look back at Lane’s Equation. We’ve increased the flow of sediment in the river, but we haven’t changed any of the other variables. We didn’t change the size of the sediment, we didn’t change the flow in the river, and we didn’t change its slope. That means the two sides are imbalanced. Lane’s Equation no longer holds true, and the river is out of equilibrium. In other words, this is no longer a stable channel. In fact, we can convert Lane’s equation into a diagram to make this much simpler to understand.

On one side of this balance is the sediment load and the other side is the volume of flow in the stream. Add more flow and you can transport more sediment. Reduce the flow of water, and you reduce the flow of sediment accordingly. Pretty straightforward, right? But we still have to include the other two parameters, sediment size and stream slope. Now you can see how things get a little more complicated to keep in balance. Any disturbance to any of these four parameters causes the scale to get out of balance, affecting the stream’s equilibrium. When that happens, you have short term consequences, and long term ones too. For the wildfire example where we increased the sediment load in the stream, the top of the balance swings left toward deposition. There’s not enough water to keep the sediment in suspension, so it’s going to deposit within the bed of the river like we’re seeing here in the model. The flow in this example just can’t hold all the sediment we’re washing into it, so it accumulates in the bed and banks of the channel over time.

Here’s another example of a natural disruption to a river system that’s easier to demonstrate in the flume. Beavers build a small dam across the channel, creating a pond that slows down the flow. As the velocity of the stream reduces, heavier sediment settles out. That means that the water below the beaver dam only carries the fine particles of silt and clay downstream. You can see the lighter white particles being carried away while the darker, heavier ones get caught behind the dam. Let’s take a look at Lane’s Balance to predict what will happen to the stream. When we reduce the size of the sediment load in the river, it shifts the left side of the balance inward, and again we lose our equilibrium. But this time, instead of deposition, we can expect the stream to erode downstream, and downstream of a dam, human- or beaver-made, is a common place to find erosion occurring.

Let’s look at one more example of a natural disturbance to a river, changes in the flow. After all, rivers rarely carry a constant volume of water. Their flows change with the seasons and the weather with tremendous variability. That includes floods where heavy precipitation within a watershed converges toward valleys to swell the rivers and streams. We can simulate a flood in our model channel just by turning up the flow, and hopefully at least this parameter matches your intuitions. You can easily see the sediment being carried downstream by the increased flow of water. The banks of the river erode and the material is carried away by the flood. Looking at our diagram, it’s easy to see why. If we increase the flow of water, the scale is out of balance, leading to erosion of the channel.

These disturbances to a channel’s equilibrium seem relatively benign, and even beautiful, in the stream table, but they can represent a serious threat to property, infrastructure, and even the environment. Erosion can cause rivers to shift, washing away roads, underground utilities, and even destabilizing structures. I worked on a project once with a river running alongside a cemetery. Imagine the haunting headlines that a little erosion could create. On the other hand, deposition in a river channel can also create serious issues. Sediment can choke a navigation channel, reducing its capacity for freight traffic, and fill up reservoirs, reducing their storage volume. It can damage the habitat of native fish and other wildlife. And, deposition can reduce the ability of a river channel to carry water, increasing the impacts and inundation during a flood.

Of course, floods and many other disturbances to channels are usually short-term events, so the scale naturally balances itself once the river returns to normal conditions. But consider something longer term, like the beaver pond we discussed or a change in climate that means a river is receiving greater flows year over year. At first the balance swings toward erosion or deposition, but a central part of Lane’s theory is that natural forces will gradually adjust the factors to bring the river back into equilibrium. That’s mostly a result of the fourth parameter that we haven’t touched on yet: slope. Erosion and deposition have a natural feedback mechanism with the slope of a river. But how can a river change its slope? After all, the starting and ending points are relatively fixed. Slope is defined as the change in elevation of a line divided by its length (the rise over the run, if you remember from algebra class). A river really can’t change the rise (or fall) between its source and mouth, but it can change the run, its length.

Consider the original example I gave you at the beginning of the video. Its Lane balance was all out of whack. Too much water and too much slope created a situation where it eroded out significantly at first. But over the course of a few hours, a new pattern started to emerge. The river started to meander, to lengthen itself by curving back and forth, creating a sinuous path from start to finish. That lengthening led to a reduction in the river’s slope, naturally bringing the channel back closer to its equilibrium condition.

But look closely and you’ll still see sediment moving. It erodes from the outside of bends where flow is most swift, called cut banks, and it deposits on the inside of bends where the flow is slower, called point bars. This creates natural meandering of rivers and geographic features like oxbow lakes where a river cuts itself off at a bend, leaving a curved depression behind. You also see natural aggradation where a river discharges into an ocean or lake where sediment falls out of suspension, called a delta. These phenomena happen for most rivers and streams, even those that are quote-unquote “balanced” according to Lane’s theory. In reality, there’s no such thing as a static state for a river. All the variables are changing over time. Floods, droughts, fires, debris jams, animal activity, and many other natural processes ping the balance this way and that, and we haven’t mentioned the human activities that affect rivers at all. That’s the topic of the next video in this series (by the way) so make sure you subscribe so you don’t miss it. In addition to the constant shifting of flow and sediment load, the natural processes that pull a river toward equilibrium are not very precise or predictable as we can easily see in the stream table. In reality, Lane’s scale is always in motion, bouncing between erosion and deposition states at every point along a river or stream. We call this a dynamic equilibrium because even when all the factors of sediment transport are in balance, rivers still shift and meander. In that way, Lane’s equation is more a way to characterize the magnitude of change than a binary measure of whether a stream channel is in motion or not.

And of course, it’s a simplification. I’ve been calling it an equation, but there’s no equal sign to be found. It’s really just a qualitative relationship that can’t tell you exactly how fast a river will meander or to what extent. There are also factors that it doesn’t consider like vegetation or pulsed flow. For example, imagine a scenario where the climate shifts toward more extreme periods of droughts and floods. Lane’s relationship looks at averages. So, if one river has a relatively constant flow while an identical river has pulses of high and low flows, as long as their average flow is the same, Lane’s relationship would assume they would behave identically. Well, we decided to try it out. See if you can see the difference. [Beat for time lapse].
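As a rough illustration of why averaging can mislead (my own toy example, not anything measured from the stream table), here’s a sketch comparing a steady flow to a pulsed flow with the same mean. Sediment transport capacity generally grows faster than linearly with discharge, so the squared relationship below is just an assumed stand-in for that nonlinearity.

```python
# Two flow records with identical averages but very different behavior.
steady = [10.0] * 10                       # constant flow, mean = 10
pulsed = [2.0, 18.0] * 5                   # alternating low/high, mean = 10

print(sum(steady) / len(steady), sum(pulsed) / len(pulsed))   # 10.0 10.0

# Assumed nonlinear transport law: capacity grows with the square of flow.
transport = lambda q: q ** 2

print(sum(transport(q) for q in steady))   # 1000
print(sum(transport(q) for q in pulsed))   # 1640 -> the pulses do more work
```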

Even if Lane would predict similar behavior between the two models, it’s easy to see that the pulsed flow model experiences much more erosion and faster movements of the channel. Clearly, we still have progress to make in our understanding of how rivers and streams behave over time under the wide variety of conditions that rivers face. From the tiniest urban drainage ditches to the mighty Mississippi, rivers and streams have enormous consequences for humans. And, like pretty much everything in life, rivers are complicated. Even when all those conditions are perfectly balanced, they never stop moving and changing.

March 07, 2023 /Wesley Crump

The Only State Capital Where You Can’t Drink the Water

February 21, 2023 by Wesley Crump

[Note that this article is a transcript of the video embedded above.]

As a blast of bitter Arctic air poured into North America around Christmastime in December 2022, weather conditions impacted nearly every aspect of life, from travel to electricity to just trying to get out the front door. But the frigid temperatures kicked one American city while it was already down. For many people, the idea of not being able to drink the water in their own house is unimaginable. But for the residents of Jackson, Mississippi, it was just another day…or twelve. Last August, a flood took out the aging water system, leaving nearly everyone in the City without water for more than a week. Only a few months later, that Arctic weather spell broke so many pipes in the city that residents again lost access to water, some for nearly two weeks, continuing one of the worst water crises in American history. It’s a stark reminder of the massive undertaking involved in providing clean water, day after day, to an entire city of people at once and the enormous stakes of getting it wrong. What does it really take to run a public water supply, what happened in Jackson, and what does its future hold? I’m Grady, and this is Practical Engineering. In today’s episode, we’re talking about the Jackson water crisis.

Jackson is not only the capital of Mississippi but also its largest city. Its water utility has around 70,000 connections to homes and businesses and serves about 170,000 people through two surface water treatment plants. The OB Curtis plant is the larger of the two, with a rated capacity of 50 million gallons per day or 190,000 cubic meters per day. An intake structure collects raw water (the term for untreated surface water) from a nearby reservoir. From there, two large-diameter pipelines carry the water to the headworks of the plant. At the headworks, pumps send the raw water through various treatment processes to clean it up and, ideally, make it safe for drinking. Closer to downtown, the JH Fewell Plant has a rated capacity of about half the OB Curtis plant. It draws raw water directly from the Pearl River downstream of the reservoir. The City also has a few groundwater wells to supplement the surface water system.
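For anyone checking the unit conversion on those capacities, here’s the arithmetic, with the same rounding used in the figures above:

```python
# Quick unit check on the quoted plant capacities.
GAL_TO_M3 = 3.785 / 1000.0                # cubic meters per US gallon

ob_curtis_mgd = 50.0                      # rated capacity, million gallons per day
ob_curtis_m3 = ob_curtis_mgd * 1e6 * GAL_TO_M3
print(round(ob_curtis_m3))                # ~189,000 m3/day, i.e. roughly 190,000

jh_fewell_mgd = ob_curtis_mgd / 2         # "about half" of OB Curtis
print(round(jh_fewell_mgd * 1e6 * GAL_TO_M3))   # ~95,000 m3/day
```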

In normal conditions, clean and drinkable water flows from both plants into a network of pipes and elevated tanks that deliver that water to each building in the City of Jackson. This is a public water supply, something that might sound kind of obvious, but that term has a specific meaning. Because not just anyone can hook a bunch of pipes up to customers and sell them water. Water is both an immediate necessity for life on this blue earth and a powerful agent of disease transmission, so we have rules that regulate those who would collect it and deliver it to others. Specifically, we have the Safe Drinking Water Act, and each state also has its own rules that govern contaminants, monitoring, and public notifications. The goal of drinking water regulation is to make sure that no matter where you are in the United States, you can open the tap and use that water to cook, bathe, or drink, and not have to worry about getting sick. This might sound like a relatively ordinary endeavor, but designing, building, operating, and maintaining a public water system - even for a relatively small city like Jackson - is a monumental enterprise that requires a lot of money, a lot of people, a lot of oversight, and a lot of infrastructure.

Unfortunately, Jackson has gone without many of those necessities for decades, creating issues in the City that eventually led the federal government, specifically the Environmental Protection Agency or EPA, to conduct an inspection of the system in 2020. What they saw shocked them. The City’s water system was in such a state of disrepair and mismanagement that the EPA immediately issued an emergency order. Regulation of a public water system usually falls to the state, in this case the Mississippi State Department of Health, but the federal government can step in if there is an imminent and substantial threat to public health, and in this case, the EPA decided there was. The emergency order required the City to create a plan to fix all the broken equipment and bring the system back into working order, but it was too little, too late. From that time in early 2020 until nearly the present day, Jackson’s water system faced a seemingly unending cascade of challenges, bringing to light just how bad things had gotten.

In February of 2021, the same winter storm that nearly took out the Texas power grid hit the City of Jackson too. The unseasonably cold weather affected water mains below the city streets, causing them to break and leak. So many water mains broke that the pumps at the water treatment plants couldn’t keep up. The result was that pressure in the system dropped, in some places so low that customers had no water pressure at all. In other words, they had no water. Like it had done so many times before, the City issued a system-wide boil water notice. You may have heard this term before but not quite understood the implications. Water systems are pressurized well beyond what’s needed to move the water through the pipes. That’s done for a reason. High pressure keeps unwanted contaminants out. If the system loses pressure, pollutants can be drawn into the pipes through cracks, breaks, or joints, contaminating the water inside. So if a main breaks or a pump stops working or a treatment plant has to shut down, the operator sends out a boil water notice to affected customers, letting them know that their water might be contaminated, and that it should be boiled to kill any potential pathogens before using it for drinking or cooking. This notice in February 2021 lasted for an entire month. Imagine not being able to trust the water from your tap for that long. But even though that particular notice was eventually lifted, residents of Jackson have lived under a practically constant recommendation to boil the water that comes out of the tap, and that’s if they even have any water to boil in the first place.

Only a few months later, in April of 2021, an electrical fire in the OB Curtis plant took out all five of the high-service pumps, the ones that deliver fresh water into the distribution system. Again the pipes lost pressure, and again a boil water notice was issued, this one lasting for four days. It would be another year before the electrical panel for the pumps would be replaced, crippling the treatment plant’s ability to pressurize the water system. That November, chemical feed issues forced operators to shut down the OB Curtis plant, once again causing the system pressure to drop. That boil water notice lasted another four days. In April 2022, water hammer broke a pipe in the OB Curtis plant, again requiring a shutdown. In June, filters at the plant failed, requiring yet another shutdown and yet another system-wide boil water notice (this one for two weeks while the City worked to fix the problem).

In July, the EPA issued a report summarizing the litany of problems faced by the Jackson water distribution system, and the list would be impressive if it didn’t represent such an injustice to the people the system is meant to serve. Water mains were constantly breaking. The City had an annual rate of 55 breaks per 100 miles of pipe when the industry benchmark is 15. There was no monitoring of pressure, meaning the City had no way to identify or address problem areas in the system. There was no map of the system pipes or valves, making it difficult or impossible to implement repairs. Water towers weren’t getting enough flow, causing the water inside to stagnate. Monitoring equipment in the treatment plants wasn’t working, and if it was working, it wasn’t calibrated. And if it was calibrated, there wasn’t enough staff to keep an eye on it.

For an extended period, the utility had no manager, and it almost never had enough operators to staff the plants. A treatment plant operator has to be licensed and know a lot about chemistry and hydraulics and the various equipment used to clean water. It’s usually a great career because it doesn’t require a college degree, the work is rewarding, and the hours are consistent, but that wasn’t the case in Jackson. The City couldn’t pay enough to keep the three shifts at each treatment plant staffed 7 days per week, so the operators that were there were working lots of overtime, and occasionally not being paid for it.

Over the course of 4 years, the City had issued over 750 boil water notices because of the numerous losses of pressure. Water meters throughout the City were broken or misconfigured, meaning people weren’t being billed correctly or billed at all. In fact, the City estimated its non-revenue water, that’s the water that isn’t being paid for because of leaks or bad metering, to be 50 percent! Half of all the water treated and delivered into the distribution system was just being lost; it wasn’t generating revenue that could be used to maintain infrastructure and pay the staff. On top of that, many of the large institutions that should be the utility’s biggest customers, including local schools and hospitals, had opted to drill their own wells rather than rely on the failing city system, cutting off even more revenue. It’s not hard to imagine why the system was having trouble keeping up.

Throughout the entire year after the federal inspection, the City had been in constant negotiations with the state and the EPA trying to plot a path forward to bringing their ailing water system back into compliance. Biweekly meetings that included representatives from nearly every side of the issue were held to keep track of progress, but the progress was slow. In August of 2022, the OB Curtis plant switched the chemicals used for corrosion control, resulting in a boil water notice that lasted nearly a month. As the city worked to get the treatment process under control, the mayor said in a press conference, “Even when we come out of this boil water notice, I want to be clear that we are still in a state of emergency.” Then came the flood.

In late August, a deluge of heavy rainfall swept across Mississippi, dropping enormous volumes of precipitation across the state. The Ross Barnett Reservoir was already full of water, meaning all the inflows had to be released through the spillway. That swelled the Pearl River downstream, flooding streets and homes throughout the city. You might think a flood would be a good thing for a water system; after all, a flood is just a lot of water. But the problem with flooding is sediment. Heavy runoff carries soil, making the water muddy and much more difficult to treat. Several raw water pumps at OB Curtis quickly failed as they tried to deliver the sediment-laden water to the plant. And the fraction of water that did make it all the way to the plant was still a muddy mess. Any operator will tell you that slow changes to treatment processes and chemical feeds are best. When there’s a sudden shift in water quality that requires rapid adjustments to the processes, problems are bound to occur, and they did. Muddy water clogged filters and upset the various other processes to the point where the plant was utterly unable to treat it, resulting in a complete collapse of the system. The downstream surface water treatment plant suffered a similar fate. Nearly everyone in Jackson lost the ability to use water for basic safety and hygiene. Schools, restaurants, and businesses were closed. From washing hands to fighting fires to just having water to drink, the City was incapacitated.

The flooding threw the water crisis into the national spotlight. The Mayor, the governor, and eventually President Biden all issued disaster declarations, freeing up emergency resources. Federal officials and emergency workers flooded into Jackson to deliver bottled water to the residents and help restore the water supply. A team of engineers and drinking water experts worked to tackle miscellaneous projects at both treatment plants and most importantly, staff the facilities. By September 6, water pressure had been restored to customers, and a week later, the boil water notice was lifted. But it wasn’t the end of the emergency. The water system was barely functioning, and the relief team couldn’t stay in Jackson indefinitely. An emergency contract was issued for an outside company to take over operations of the water system for a year, but it was clear to everyone involved that an enormous capital improvement program and a huge influx of cash was the only way Jackson could pull itself out of the crisis. The City continued negotiating with the state, the EPA, and the Department of Justice, and in November of 2022, they appointed a third-party manager to take over control of the water system.

Ted Henifin, a licensed engineer and former public works director from Virginia, had already been involved in the emergency work, and was now tasked with 13 priority projects to bring Jackson’s system back, if not into perfect working order, at least into compliance with the drinking water laws. He was given broad power and freedom from normal contracting rules to hire, purchase, and contract as needed to get the work done, and he said in November that he planned to wrap up his priority projects within a year. And then the cold came.

The Christmas polar vortex event created a repeat of February 2021, cracking water mains with freezing weather, and creating so many leaks that the water treatment plants just couldn’t keep up. Jackson’s reliance on surface water creates a challenge because, unlike groundwater, the surface water supply is affected by ambient temperature. Freezing weather means chilly water in the reservoir, and when you’re sending very cold water through the underground mains, it causes them to shrink, and in some cases, to break. The cold weather also affected the chemistry of the raw water entering the plant, causing issues with the treatment processes and forcing the OB Curtis Plant to shut down. Twenty-two schools had little to no water pressure and had to move to virtual learning as they returned from the winter break. For many, it was the last straw, and at least one restaurant decided to close its doors for good after having no water pressure for more than 40 days over the past two years, and many more days than that under boil water notices. The Christmas outage left customers without water for two weeks in the latest, but probably not the last, event in the saga of underinvestment, misfortune, and utter failure to deliver a basic necessity to the residents of Jackson.

I talk about the engineering behind catastrophes like this, but in so many cases, it’s impossible to ignore the larger issues driving the story. Like all infrastructure, fresh water systems require investment. In an ideal world, those resources come from the water rates, the money that people pay for the water they receive, so the system supports itself. But, when those rates aren’t enough, something has to be done quickly and decisively, because chronic underinvestment creates a vicious cycle. The infrastructure fails, the billing system doesn’t work, customers leave, staff positions can’t be filled, and things just spiral downhill. So infrastructure funding is often supplemented by debt, by grants, by state and federal investment programs, all resources that require more than good management; they often require politicians. The people in charge of the water system in Jackson have been trying to sound the alarms for years. The Mayor even said the city was in an emergency the week before a flood hit the City that completely collapsed the water system. But it took that flood to convince politicians to free up resources. It’s also impossible to ignore the history of Jackson as a part of this story, including a legacy of racism that isolated and separated minorities, gutted the community tax base, and ultimately led to the failing infrastructure we see now. If you want to learn more about that history, there are far more qualified voices than mine to tell it, so I’ll leave some links to the best sources I found below.


In January, the mayor of Jackson announced that they had secured $800 million in federal funding to tackle the city’s issues with water and sewer infrastructure. Those funds will take years to allocate and spend, but it’s another step forward for a city whose water system has fallen so far behind. Clean water is a human right, and the fact that the citizens of a major city in the US don’t have access to it is more than a shame. I’m sharing the story because I think it’s important for all of us to see the consequences of mismanagement, disregard, and discrimination and hopefully learn from those mistakes so that we can be better managers, leaders, or just advocates for the infrastructure that we rely on every day.

February 21, 2023 /Wesley Crump

Why Some Roadways Are Made of Styrofoam

February 07, 2023 by Wesley Crump

[Note that this article is a transcript of the video embedded above.]

If you’ve ever driven or ridden in an automobile, there’s a near 100% chance you’ve hit a bump in the road as you transition onto or off of a bridge. In fact, some studies estimate that it happens on a quarter of all bridges in the US! It’s dangerous to drivers and expensive to fix, but the reason it happens isn’t too complicated to understand. It’s a tale (almost) as old as time: You need a bridge to pass over another road or highway. But, you need a way to get vehicles from ground level up to the bridge. So, you design an embankment, a compacted pile of soil that can be paved into a ramp up to the bridge. But, here’s the problem. Even though the bridge and embankment sit right next to each other, they are entirely different structures with entirely different structural behavior. A bridge is often relatively lightweight and supported on a rigid foundation like piles driven or drilled deep into the ground. An embankment is - if the geotechnical engineers will forgive me for saying it - essentially just a heavy pile of dirt. And when you put heavy stuff on the ground, particularly in places that have naturally soft soils like swamps and coastal plains, the ground settles as a result. If the bridge doesn’t settle as much or at the same rate, you end up with a bump. Over the years, engineers have come up with a lot of creative ways to mitigate the settlement of heavy stuff on soft soils, but one of those solutions seems so simple, that it’s almost unbelievable: just make embankments less heavy. Let’s talk about some of the bizarre materials we can use to reduce weight, and a few of the reasons it’s not quite as simple as it sounds. I’m Grady and this is Practical Engineering. In today’s episode, we’re talking about lightweight fills.

The Latin phrase for dry land, “terra firma,” literally translates to firm earth. It’s ingrained in us that the ground is a solid entity below our feet, but geotechnical engineers know better. The things we build often exceed the earth’s capacity to withstand their weight, at least without some help. Ground modification is the technical term for all the ways we assist the natural soil’s ability to bear imposed loads, and I’ve covered quite a few of them in previous videos, including vertical drains that help water leave the soil; surcharge loading to speed up settlement so it happens during construction instead of afterwards; soil nails used to stabilize slopes; and one of the first videos I ever made: the use of reinforcing elements to create mechanically stabilized earth walls.

One of the simplest definitions of design engineering is just making sure that the loads don’t exceed the strength of the material in question. If they do, we call it a failure. A failure can be a catastrophic loss of function, like a collapse. But a failure can also be a loss of serviceability, like a road that becomes too rough or a bridge approach that develops a major bump. Ground modification techniques mostly focus on increasing the strength of the underlying soil, but one technique instead involves decreasing the loads, allowing engineers to accept the natural resistance of a soft foundation.

Let me put you in a hypothetical situation to give you a sense of how this works: Imagine you’re a transportation engineer working on a new highway bridge that will replace an at-grade intersection that uses a traffic signal, allowing vehicles on the highway to bypass the intersection. This is already a busy intersection, hence the need for the bypass, and now you’re going to mess it all up with a bunch of construction. You design the embankments that lead up to the bridge to be built from engineered fill - a strong soil material that’s about as inexpensive as construction gets. You hand the design off to your geotechnical engineer, and they come back with this graph: a plot of settlement over time. Let’s just say you want to limit the settlement of the embankment to 2 inches or 5 centimeters after construction is complete. That’s a pretty small bump. This graph says that, to do that, you’ll have to let your new embankment sit and settle for about 3 years before you pave the road and open the bridge. If you put this up on a PowerPoint slide at a public meeting in front of all the people who use this intersection on a daily basis, what do you think they’ll say?
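Before we get to their answer, here’s roughly where a settlement-versus-time curve like that comes from. This is a minimal sketch using Terzaghi’s one-dimensional consolidation theory; every input is an assumed, plausible value chosen only so the curve resembles the one described, not data from any real project.

```python
# Sketch of consolidation settlement over time (Terzaghi 1-D theory).
import math

def degree_of_consolidation(Tv):
    """Average degree of consolidation U for time factor Tv, using the
    standard two-branch approximation to Terzaghi's series solution."""
    if Tv <= 0:
        return 0.0
    U = math.sqrt(4.0 * Tv / math.pi)                     # early-time branch
    if U > 0.6:
        U = 1.0 - (8.0 / math.pi ** 2) * math.exp(-math.pi ** 2 * Tv / 4.0)
    return min(U, 1.0)

s_ultimate_cm = 30.0    # assumed total consolidation settlement under the fill
cv_m2_per_yr = 0.85     # assumed coefficient of consolidation of the soft layer
Hdr_m = 2.0             # assumed drainage path length

for years in [0.5, 1, 2, 3, 5]:
    Tv = cv_m2_per_yr * years / Hdr_m ** 2
    settled = s_ultimate_cm * degree_of_consolidation(Tv)
    remaining = s_ultimate_cm - settled
    print(f"{years:>4} yr: settled {settled:5.1f} cm, {remaining:4.1f} cm still to come")
# With these assumed numbers, it takes about 3 years before the remaining
# settlement drops to roughly 5 cm (2 inches).
```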

Most likely they’re going to ask you to find a way to speed up the process (politely or otherwise). From what I can tell from my inbox, a construction site where no one’s doing any work is a commuter’s biggest pet peeve. So, you start looking for alternative designs and you remember a key fact about roadway embankments: the weight of the traffic on the road is only a small part of the total load experienced by the natural ground. Most of the weight is the embankment itself. Soil is heavy. They teach us that in college. So what if you could replace it with something else? In fact, there is a litany of granular materials that might be used in a roadway embankment instead of soil to reduce the loading on the foundation, and all of them have unique engineering properties (in other words, advantages and disadvantages).
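Here’s a back-of-the-envelope comparison, using my own assumed numbers rather than anything from a real design, of the pressure the embankment itself puts on the natural ground versus a typical traffic surcharge:

```python
# How much of the load on the foundation is actually traffic?
embankment_height_m = 8.0          # assumed approach embankment height
unit_weight_fill = 20.0            # kN/m3, typical compacted engineered fill
pavement_kpa = 5.0                 # rough allowance for the pavement section
traffic_surcharge_kpa = 12.0       # common design value for highway traffic

embankment_kpa = embankment_height_m * unit_weight_fill
total = embankment_kpa + pavement_kpa + traffic_surcharge_kpa

print(f"embankment self-weight: {embankment_kpa:.0f} kPa")
print(f"traffic surcharge:      {traffic_surcharge_kpa:.0f} kPa")
print(f"traffic share of total: {traffic_surcharge_kpa / total:.0%}")   # ~7%
```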

Wood fibers have been used for many years as a lightweight fill with a surprisingly robust service life of around 50 years before the organic material decays. Similarly, roadway embankments have been seen as a popular way to reuse waste materials. In particular, the State of New York has used shredded tires as a lightweight fill with success, so far avoiding the spontaneous combustions that have happened in other states. There are also some very interesting materials that are manufactured specifically to be used as lightweight fills.

Expanded shale and clay aggregates are formed by heating raw materials in a rotary kiln to temperatures above 1,000 degrees Celsius. The gases in the clay or shale expand, forming thousands of tiny bubbles. The aggregate comes out of the kiln in this round shape, and it has a lot of uses outside heavy civil construction like insulation, filtration, and growing media for plants. But round particles like this don’t work well as backfill because they don’t interlock. So, most manufacturers send the aggregate through a final crushing and screening process before the material is shipped out. Another manufactured lightweight fill is foamed glass aggregate. This is created in a similar way to the expanded shale where heating the raw material plus a foaming agent creates tiny bubbles. When the foamed glass exits the kiln, it is quickly cooled, causing it to naturally break up into aggregate-sized pieces. You can see in my graduated cylinders here that I have one pound or about half a kilogram of soil, sand, and gravel. It takes about twice as much expanded shale aggregate to make up that weight since its bulk density is about half that of traditional embankment building materials. And the foamed glass aggregate is even lighter.
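To put the graduated-cylinder demo in rough numbers, here’s a quick volume comparison using typical published bulk density ranges; these are assumed values, not measurements of the materials shown:

```python
# Volume occupied by one pound of each material, from assumed bulk densities.
densities_kg_m3 = {
    "soil / sand / gravel": 1700,
    "expanded shale aggregate": 850,   # roughly half of ordinary fill
    "foamed glass aggregate": 250,     # lighter still
}

mass_kg = 0.454   # one pound
for name, rho in densities_kg_m3.items():
    volume_liters = mass_kg / rho * 1000
    print(f"{name:>26}: {volume_liters:5.2f} L per pound")
```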

All these different lightweight fills can be used to reduce the loading on soft soils below roadways and protect underground utilities from damage, but they also have a major advantage when used with retaining walls: reduced lateral pressure. I’ve covered retaining walls in a previous video, so check that out after this if you want to learn more, but here’s an overview. Granular materials like soil aren’t stable on steep slopes, so we often build walls meant to hold them back, usually to take fuller advantage of a site by creating more usable spaces. Retaining walls are everywhere if you know where to look, but they also represent one of the most underappreciated challenges in civil engineering. Even though soil doesn’t flow quite as easily as water does, it is around twice as dense. That means building a wall to hold back soil is essentially like building a dam. The force of that soil against the wall, called lateral earth pressure, can be enormous, and it’s proportional both to the height of the wall and the density of the material it holds back. Here’s an example:

When Port Canaveral in Florida decided to expand terminal 3 to accommodate larger cruise ships, they knew they would need not only a new passenger terminal building but also a truly colossal retaining wall to form the wharf. The engineers were tasked with designing a wall that would be around 50 feet (or 15 meters) tall to allow the enormous cruise ships to dock directly alongside the wharf. The port already had stockpiles of soil leftover from previous projects, so the new retaining wall would get its backfill for free. But, holding back 50 feet of heavy fill material is not a simple task. The engineers proposed a combi-wall system that is made from steel sheet piles supported between large pipe piles for added stiffness, in addition to a complex tie-back structure to provide additional support at the top of the wall. When the design team considered using lightweight fill behind the retaining wall, they calculated that they could significantly reduce the size of the piles of the combi-wall, use a more-commonly available grade of steel instead of the specialty material, and simplify the tie-back system.
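As a simplified illustration of how much a lighter backfill can shave off the wall loads in a case like this, here’s a sketch using the Rankine active pressure formula with assumed unit weights and an assumed earth pressure coefficient. Real combi-wall design accounts for far more than this: tie-backs, water pressures, surcharges, and ship berthing loads.

```python
# Lateral earth force on a tall wall: conventional fill vs. lightweight fill.
H = 15.0        # wall height, meters (about 50 feet)
Ka = 0.33       # assumed active earth pressure coefficient

def active_force_kn_per_m(unit_weight_kn_m3):
    """Resultant lateral force per meter of wall: 0.5 * Ka * gamma * H^2."""
    return 0.5 * Ka * unit_weight_kn_m3 * H ** 2

heavy_fill = active_force_kn_per_m(19.0)   # conventional soil backfill
light_fill = active_force_kn_per_m(7.0)    # assumed lightweight aggregate

print(f"conventional fill: {heavy_fill:.0f} kN per meter of wall")
print(f"lightweight fill:  {light_fill:.0f} kN per meter of wall")
print(f"reduction: {1 - light_fill / heavy_fill:.0%}")   # roughly 60% less
```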

Even though the lightweight fill was significantly more expensive than the free backfill available at the site, it still saved the project about $3 million compared to the original design. The fill at Port Canaveral, and all the lightweight fills we’ve discussed so far, are granular materials that essentially behave like normal soil, sand, or gravel fills (just with a lower density). They still have to be handled, placed, and compacted to create an embankment or retaining wall backfill just like any typical earthwork project. But, there are a couple of lightweight fills that are installed much differently. Concrete can also be made lightweight using some of the aggregates mentioned earlier in place of normal stone and sand, or by injecting foam into the mix, often called cellular concrete. On projects where it’s difficult or time consuming to place and compact granular fill, you can just pump this stuff right out of a hose and place it right where it needs to be, speeding up construction and eliminating the need for lots of heavy equipment. There are a few companies that make cellular concrete, and they can tailor the mix to be as strong or lightweight as needed for the project. You can even get concrete with less density than water, meaning it floats!

This test cylinder was graciously provided by Cell-Crete so I could give you a close up look at how the product behaves. Of course we should try and break it. Let’s put it under the hydraulic press and see how much force it takes. The pressure gauges on my press showed a force of just under a ton to break this sample. That is equivalent to a pressure of around 200 psi or 1.4 megapascals, much stronger than most structural backfills. You’re not going to be making skyscraper frames or bridge girders from cellular concrete, but it’s more than strong enough to hold up to traffic loads without imposing tons of weight into a retaining wall or the soft soils below an embankment.
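For the curious, here’s the arithmetic behind converting that press reading into a stress. The cylinder diameter wasn’t stated, so the 3.5-inch value below is purely an assumption chosen to show the calculation:

```python
# Converting a breaking force into a compressive stress for a test cylinder.
import math

force_lb = 1950.0                  # "just under a ton"
diameter_in = 3.5                  # assumed test-cylinder diameter
area_in2 = math.pi / 4 * diameter_in ** 2

stress_psi = force_lb / area_in2
print(f"{stress_psi:.0f} psi")             # on the order of 200 psi
print(f"{stress_psi * 0.006895:.2f} MPa")  # roughly 1.4 MPa
```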

The last lightweight fill used in heavy civil construction is also the most surprising: expanded polystyrene foam, also known as EPS and colloquially as styrofoam. When used in construction, it’s often called geofoam, but it’s the same stuff that makes up your disposable coffee cups, mannequin heads, and packaging material. EPS seems insubstantial because of its weight, but it’s actually a pretty strong material in compression. About 7 years ago I used my car to demonstrate the compressive strength of mechanically stabilized earth. Well, I still have that jack and I still drive that car, so let’s try the experiment with EPS foam. This is probably around 500 to 600 pounds, and there is some deflection, but the block isn’t struggling to hold the weight. In an actual embankment, the pavement spreads out traffic loads so they aren’t concentrated like they are in my demonstration, to the point where you would never know that you’re driving on styrofoam.

EPS foam has some cool benefits, including how easy it is to place. The blocks can be lifted by a single worker, placed in most weather conditions, don’t require compaction or heavy equipment, and can be shaped as needed using hot wires. But it has some downsides too. This material won’t work well for embankments that see standing water or high groundwater, because of the buoyancy. The embankment could literally float away. They’re also so lightweight that you have to consider a new force that most highway engineers don’t think about when designing embankments: the wind. Also, because EPS foam is such a good insulator, it creates a thermal disconnect between the pavement and the underlying ground, making the road more susceptible to icing. Finally, EPS foam has a weakness to a substance that is pretty regularly spilled onto roadways: it dissolves in fuel. If a crash, spill, or leak were to happen on an embankment that uses EPS foam without a properly designed barrier, the whole thing could just melt away.
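To get a sense of scale on that buoyancy problem, here’s a quick sketch with an assumed block size and assumed unit weights, comparing the weight of a geofoam block to the uplift it would see if fully submerged:

```python
# Why standing water is a problem for geofoam embankments.
block_volume_m3 = 2.0 * 1.0 * 0.75      # assumed block dimensions, meters
eps_unit_weight = 0.2                    # kN/m3, light EPS geofoam
water_unit_weight = 9.81                 # kN/m3

weight = eps_unit_weight * block_volume_m3
uplift_if_submerged = water_unit_weight * block_volume_m3

print(f"block weight: {weight:.2f} kN")
print(f"uplift force: {uplift_if_submerged:.1f} kN")
print(f"uplift is about {uplift_if_submerged / weight:.0f}x the block's weight")
```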


Even with all those considerations, EPS foam is a popular choice for lightweight fills. We even have a nice government report on best practices called Guideline and Recommended Standard for Geofoam Applications in Highway Embankments (if you’re looking for some lightweight bedtime reading). It was used extensively in Seattle on the replacement of the Alaskan Way Viaduct to avoid overstressing the landfill materials that underlie major parts of the city. Thousands of drivers in Seattle and millions of people around the world drive over lightweight embankments, probably without any knowledge of what’s below the pavement. But the next time you pass over a bridge and don’t feel a bump transitioning between the deck and roadway embankments, it might just be lightweight aggregate, cellular concrete, or geofoam below your tires working to make our infrastructure as cost-effective and long-lasting as possible.

February 07, 2023 /Wesley Crump