Practical Engineering

The Most Implausible Tunneling Method

May 20, 2025 by Wesley Crump

[Note that this article is a transcript of the video embedded above.]

The original plan to get I-95 over the Baltimore Harbor was a double-deck bridge from Fort McHenry to Lazaretto Point. The problem with the plan was this: the bridge would have to be extremely high so that large ships could pass underneath, dwarfing and overshadowing one of the US’s most important historical landmarks. Fort McHenry famously repelled a massive barrage and attack from the British Navy in the War of 1812, and inspired what would later become the national anthem. An ugly bridge would detract from its character, and a beautiful one would compete with it. So they took the high road by building a low road and decided to go underneath the harbor instead. Rather than bore a tunnel through the soil and rock below like the Channel Tunnel, the entire thing was prefabricated in sections and installed from the water surface above - a construction technique called immersed tube tunneling.

This seems kind of simple at first, but the more you think about it, the more you realize how complicated it actually is to fabricate tunnel sections the length of a city block, move them into place, and attach them together so watertight and safe that, eventually, you can drive or take a train from one side to the other. Immersed tube construction makes tunneling less like drilling a hole and more like docking a spacecraft. Materials and practices vary across the world, but I want to try and show you, at least in a general sense, how this works. I’m Grady, and this is Practical Engineering.

One of the big problems with bridges over navigable waterways is that they have to be so tall. Building high up isn’t necessarily the challenge; it’s getting up and back down. There are limits to how steep a road can be for comfort, safety, and efficiency, and railroads usually have even stricter constraints on grade. That means the approaches to high bridges have to be really long, increasing costs and, in dense cities, taking up more valuable space. This is one of the ways that building a tunnel can be a better option: they greatly reduce the amount of land at the surface needed for approaches. But traditional tunnels built using boring have to be installed somewhat deep into the ground, maintaining significant earth between the roof of the tunnel and the water for stability and safety. Since they’re installed from above, immersed tube tunnels don’t have the same problem. It’s basically a way to get the shortest tunnel possible for a given location, which often means the cheapest tunnel too. That’s a big deal, because tunnels are just about the most expensive way to get from point A to point B. Anything you can do to reduce their size goes a long way.
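To give a feel for how much those grade limits matter, here’s a quick back-of-the-envelope sketch. All of the numbers (clearance height, tunnel depth, grade limits) are assumptions for illustration, not figures from any real crossing:

```python
# Rough comparison of approach lengths for a high bridge vs. a shallow
# immersed tube tunnel. All numbers are illustrative assumptions only.

def approach_length(elevation_change_m: float, max_grade_pct: float) -> float:
    """Horizontal distance needed to climb or descend at a given grade."""
    return elevation_change_m / (max_grade_pct / 100.0)

# A bridge deck ~60 m above the water vs. a tunnel roof ~20 m below it.
bridge_rise = 60.0   # m, assumed clearance for large ships
tunnel_drop = 20.0   # m, assumed depth of a shallow immersed tube

for grade in (5.0, 3.0):  # typical-ish road vs. rail grade limits (%)
    b = approach_length(bridge_rise, grade)
    t = approach_length(tunnel_drop, grade)
    print(f"{grade}% grade: bridge approach ~{b:.0f} m, tunnel approach ~{t:.0f} m")
```

At a 3 percent rail-style grade, the assumed bridge needs roughly two kilometers of approach on each side, while the shallower tunnel needs about a third of that - which is the whole point of keeping the crossing as shallow as possible.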

And there are other advantages too. Tunnel boring machines make one shape: a circle. It’s not the best shape for a tunnel, in a lot of ways. Often there’s underutilized space at the top and bottom - excavation you had to perform because of the shape of the machine, and that space mostly just goes to waste. Immersed tubes can be just about any shape you need, making them ideal for wider tunnels like combined road and rail routes where a circular cross-section isn’t a good fit.

One of the other benefits of immersed tubes is that most of the construction happens on dry land. I probably don’t have to say this, but building stuff while underground or underwater is complex and difficult work. It requires specialty equipment, added safety measures, and a lot of extra expense. Immersed tube sections are built in dry docks or at a shipyard where it's much easier to deliver materials and accomplish the bulk of the actual construction work.

Once tunnel sections are fabricated, they have to be moved into place, and I think this is pretty clever. These sections can be enormous - upwards of 650 feet or 200 meters long. But they’re still mostly air. So if you put a bulkhead on either side to trap that air inside, they float. You can just flood the dry dock, hook up some tugboats, and tow them out like a massive barge. Interestingly, the transportation method means that the tunnel segments have to be designed to work as a watercraft first. The weight, buoyancy, and balance of each section are engineered to keep them stable in the water and avoid tipping or rolling before they have to be stable as a structure.

Once on site, a tunnel segment is handed over to the apparatus that will set it into place. In most cases, this is a catamaran-style behemoth called a lay barge. Two working platforms are connected by girders, creating a huge floating gantry crane. Internal tanks are filled with water to act as ballast, allowing the segment to sink. But when it gets to the bottom, it doesn’t just sit on the sea or channel floor below. And this is another benefit of immersed tube construction.

Especially in navigable waterways, you need to protect a tunnel from damage from strong currents, curious sea life, and ship anchors. So most immersed tube tunnels sit in a shallow trench, excavated using a clamshell or suction dredger. Most waterways have a thick layer of soft sediment at the surface - not exactly ideal as a foundation. This is another reason most boring machines have to be in deeper material. Drilling through soft sediment is prone to problems. Imagine using a power drill to make a nice, clean hole through pudding. But, at least in part due to being full of buoyant air, immersed tubes aren’t that heavy; in fact, in most cases, they’re lighter than the soil that was there in the first place, so the soft sediment really isn’t a problem. You don’t need a complicated foundation. In many cases, it’s just a layer of rock or gravel placed at the bottom of the trench, usually using a fall pipe (like a big garden hose for gravel) to control the location. This layer is then carefully leveled using a steel screed that is dragged over the top like an underwater bulldozer. Even in deep water, the process can achieve a remarkably accurate surface level for the tunnel segments to rest on.

The lowering process is the most delicate and important part of construction. The margins are tight because any type of misalignment may make it impossible for the segment to seal against its neighbor. Normally, you’d really want to take your time with this kind of thing, but here, the work usually has to happen in a narrow window to avoid weather, tides, and disruption to ship traffic. The tunnel section is fitted with rubber seals around its face, creating a gasket. Sometimes, the segment will also have a surveying tower that pokes above the water surface, allowing for measurements and fine adjustments to be made as it’s set into place. In some cases, the lowering equipment can also nudge the segment against its neighbor. In other cases, hydraulic jacks are used to pull the segments together. Divers or remotely operated submersibles can hook up the jacks. Or couplers, just like those used on freight trains, can do it without any manual underwater intervention. The jacks extend to couple the free segment to the one already installed, then retract to pull them together, compressing the gasket and sealing the area between the two bulkheads.

This joint is the most important part of an immersed tunnel design. It has to be installed blindly and accommodate small movements from temperature changes, settlement, and changes in pressure as water levels go up and down. The gasket provides the initial seal, but there’s more to it. Once in place, valves are opened in the bulkheads to drain the water between them. That actually creates a massive pressure difference between one side of the segment and the other. Hydrostatic force from the water pushes against the end of the tunnel, putting it in even firmer contact with its neighbor and creating a stronger seal. Once in its final place, the segment can be backfilled.
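The hydrostatic force from draining the joint is surprisingly large. Here’s a rough sketch of its magnitude, using the same kind of assumed segment dimensions as before - illustrative numbers, not data from a real tunnel:

```python
# Rough magnitude of the hydrostatic force that presses a segment
# against its neighbor once the joint between bulkheads is drained.
# All inputs are illustrative assumptions.

RHO = 1025.0   # kg/m^3, seawater
G   = 9.81     # m/s^2

depth_to_centroid = 20.0   # m, assumed average water depth at the tunnel end
end_area = 40.0 * 10.0     # m^2, assumed end face (40 m wide x 10 m tall)

pressure = RHO * G * depth_to_centroid   # Pa, acting on the free end
force = pressure * end_area              # N, pushing the joint closed

print(f"Pressure at the end face: ~{pressure / 1000:.0f} kPa")
print(f"Net force pressing the joint: ~{force / 1e6:.0f} MN "
      f"(~{force / 9.81 / 1000:.0f} tonnes-force)")
```

With these assumed numbers, the ocean squeezes the joint shut with a force on the order of thousands of tonnes - far more than any jack could apply - which is why draining the space between the bulkheads is such an elegant way to compress the gasket.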

The tunnel segment connection is not like a pipe flange, where the joints are securely bolted together, completely restraining any movement. The joints on immersed tunnels have some freedom to move. Of course, there is a restraint for axial compression since the segments butt up against each other. In addition, keys or dowels are usually installed along the joint so that shear forces can transfer between segments, keeping the ends from shifting during settlement or small sideways movements. However, the joints aren’t designed to transfer bending forces, called moments. And there’s rarely much mechanical restraint to axial tension that might pull one joint away from the other. So you can see why the backfill is so important. It locks each segment into place. In fact, the first layer of backfill is called locking fill for that exact reason. I don’t think they make underwater roller compactors, and you wouldn’t want strong vibrations disturbing the placement of the tunnel segments anyway. So this material is made from angular rock that self-compacts and is placed using fall pipes in careful layers to secure each segment without shifting or disturbing it.

After that, general backfill - maybe even the original material if it wasn’t contaminated - can be used in the rest of the trench, and then a layer is placed over the top of everything to protect the backfill and tunnel against currents caused by ships and tides. Sometimes this top layer includes bands of large rock meant to release a ship’s anchor from the bottom, keeping it from digging in and damaging the tunnel.

Once a tunnel segment is secured in place, the bulkhead in the previous segment can be removed from the inside, allowing access inside the joint. The usual requirement is that access is only allowed when there are two or more bulkheads between workers and the water outside. A second seal, called an omega seal (because of its shape), then gets installed around the perimeter of the joint. And the process keeps going, adding segments to the tunnel until it’s a continuous, open path from one end to the other. When it reaches that point, all the other normal tunnel stuff can be installed, like roadways, railways, lights, ventilation, drainage, and pumps. By the time it’s ready to travel through, there’s really no obvious sign from inside that immersed tube tunnels are any different than those built using other methods.

This is a simplification, of course. Every one of these steps is immensely complicated, unique to each jobsite, and can take weeks, months, or even years to complete. And as impressive as the process is, it’s not without its downsides. The biggest one is damage to the sea or river floor during construction. Where boring causes little disturbance at the surface, immersed tube construction requires a lot of dredging. That can disrupt and damage important habitat for wildlife. It also kicks up a lot of sediment into suspension, clouding the water and potentially releasing buried contaminants that were laid down back when environmental laws were less strict. Some of these impacts can be mitigated: Sealed clamshell buckets reduce turbidity and mobilization of contaminated sediment. And construction activities can be scheduled to avoid sensitive periods like migration of important species. But some level of disturbance is inevitable and has to be weighed against the benefits of the project.

Despite the challenges, around 150 of these tunnels have been built around the globe. Some of the most famous include the Øresund Link between Denmark and Sweden, the Busan-Geoje tunnel in South Korea, the Marmaray tunnel crossing the Bosphorus in Turkey, and, of course, the Fort McHenry tunnel in Baltimore I mentioned earlier, and the BART Transbay Tube between Oakland and San Francisco. And some of the most impressive projects are under construction now, including the Fehmarn Belt between Denmark and Germany, which will be the world’s longest immersed tunnel. My friend Fred produced a really nice documentary about that project on The B1M channel if you want to learn more about it, and the project team graciously shared a lot of very cool clips used in this video too.

There’s something about immersed tube tunnels that I can’t quite get over. At a glance, it’s dead simple - basically like assembling Lego blocks. But the reality is that the process is so complicated and intricate, more akin to building a moon base. Giant concrete and steel segments floated like ships, carefully sunk into enormous trenches, precisely maneuvered for a perfect fit while completely submerged in sometimes high-traffic areas of the sea, with tides, currents, wildlife, and any number of unexpected marine issues that could pop up. And then you just drive through it like it’s any old section of highway. I love that stuff.

When Abandoned Mines Collapse

May 06, 2025 by Wesley Crump

[Note that this article is a transcript of the video embedded above.]

In December of 2024, a huge sinkhole opened up on I-80 near Wharton, New Jersey, creating massive traffic delays as crews worked to figure out what happened and get it fixed. Since then, it happened again in February 2025 and then again in March. Each time, the highway had to be shut down, creating a nightmare for commuters who had to find alternate routes. And it’s a nightmare for the DOT, too, trying to make sure this highway is safe to drive on despite it literally collapsing into the earth. From what we know so far, this is not a natural phenomenon, but one that’s human-made. It looks like all these issues were set in motion more than a century ago when the area had numerous underground iron mines. This is a really complex issue that causes problems around the world, and I built a little model mine in my garage to show you why it’s such a big deal. I’m Grady and this is Practical Engineering.

We’ve been extracting material and minerals from the earth since way before anyone was writing things down. It’s probably safe to say that things started at the surface. You notice something shiny or differently colored on the side of a hill or cliff and you take it out. Over time, we built up knowledge about what materials were valuable, where they existed, and how to efficiently extract them from the earth. But, of course, there’s only so much earth at the surface. Eventually, you have to start digging. Maybe you follow a vein of gold, silver, copper, coal or sulfur down below the surface. And things start to get more complicated because now you’re in a hole. And holes are kind of dangerous. They’re dark, they fill with water, they can collapse, and they collect dangerous gases. So, in many cases, even today, it makes sense to remove the overburden - the soil and rock above the mineral or material you’re after. Mining on the surface has a lot of advantages when it comes to cost and safety.

But there are situations where surface mining isn’t practical. Removing overburden is expensive, and it gets more expensive the deeper you go. It also has environmental impacts like habitat destruction and pollution of air and water. So, as technology, safety, and our understanding of soil and rock mechanics grew, so did our ability to go straight to the source and extract minerals underground.

One of the major materials that drove the move to underground mining was coal. It’s usually found in horizontal formations called seams that formed when vast volumes of Paleozoic plants were buried and then crushed and heated over geologic time. At the start of the Industrial Revolution, coal quickly became a primary source of energy for steam engines, steel refining, and electricity generation. Those coal seams vary in thickness, and they vary in depth below the surface too, so many early coal mines were underground.

In the early days of underground mining, there was not a lot of foresight. Some might argue that’s still true, but it was a lot more so a couple hundred years ago. Coal mining companies weren’t creating detailed maps of their mines, and even if they did, there was no central archive to send them to. And they just weren’t that concerned about the long-term stability of the mines once the resources had been extracted. All that mattered was getting coal out of the ground. Mining companies came and went, dissolved or were acquired, and over time, a lot of information about where mines existed and their condition was just lost. And even though many mines were in rural areas, far away from major population centers, some weren’t, and some of those rural areas became major population centers without any knowledge about what had happened underneath them decades ago.

An issue that compounds the problem of mine subsidence is that in a lot of places, property ownership is split into two pieces: surface rights and mineral rights. And those rights can be owned by different people. So if you’re a homeowner, you may own the surface rights to your land, while a company owns the right to drill or mine under your property. That doesn’t give them the right to damage your property, but it does make things more complicated since you don’t always have a say in what’s happening beneath the surface.

There are myriad ways to build and operate underground mines, but especially for soft rock mining, like coal, the predominant method for decades was called “room and pillar”. This is exactly what it sounds like. You excavate the ore, bringing material to the surface. But you leave columns to support the roof. The size, shape, and spacing of columns are dictated by the strength of the material. This is really important because a mine like this has major fixed costs: exploration, planning, access, ventilation, and haulage. It’s important to extract as much as possible, and every column you leave supporting the roof is valuable material you can’t recover. So, there’s often not a lot of margin in these pillars. They’re as small as the company thought they could get away with before they were finished mining.

I built a little room and pillar mine in my garage. I’ll be the first to admit that this little model is not a rigorous reproduction of an actual geologic formation. My coal seam is just made of cardboard, and the bright colors are just for fun. But, I’m hoping this can help illustrate the challenges associated with this type of mine. I’ve got a little rainfall simulator set up, because water plays a big role in these processes. This first rainfall isn’t necessarily representative of real life, since it’s really just compacting the loose sand. But it does give a nice image of how subsidence works in general. You can see the surface of the ground sinking as the sand compacts into place.

But you can also see that as the water reaches the mine, things start to deform. In a real mine, this is true, too. Stresses in the surrounding soil and rock redistribute over time from long-term movements, relaxation of stresses that were already built up in the materials before extraction, and from water.

I ran this model for an entire day, turning the rainfall on and off to simulate a somewhat natural progression of time in the subsurface. By the end of the day, the mine hadn’t collapsed, but it was looking a great deal less stable than when it started. And that’s one big thing you can learn from this model - in a lot of cases, these issues aren’t linearly progressive. They can happen in fits and starts, like this small leak in the roof of the mine. You get a little bit of erosion of soil, but eventually, enough sand builds up that it kind of heals itself, and, for a while, you can’t see any evidence of any of it at the surface. The geology essentially absorbs the sinkhole by redistributing materials and stresses so there’s no obvious sign at the surface that anything wayward is happening below.

In the US, there were very few regulations on mining until the late 19th century, and even those focused primarily on the safety of the workers. There just wasn’t that much concern about long-term stability. So as soon as material was extracted, mines were abandoned. The already iffy columns were just left alone, and no one wasted resources on additional supports or shoring. They just walked away.

One thing that happens when mines are abandoned is that they flood. Without the need to work inside, the companies stop pumping out the water. I can simulate this on my model by just plugging up the drain. In a real soft rock mine, there can be minerals like gypsum and limestone that are soluble in water. Repeated cycles of drying and wetting can slowly dissolve them away. Water can also soften certain materials and soils, reducing their mechanical strength to withstand heavy loads, just like my cardboard model. And then, of course, water simply causes erosion. It can literally carry soil particles with it, again, causing voids and redistribution of stresses in the subsurface. This is footage from an old video I did demonstrating how sinkholes can form.

The ways that mine subsidence propagates to the surface can vary a lot, based on the geology and depth of the mine. For collapses near the surface, you often see well-defined sinkholes where the soil directly above the mine simply falls into the void. And this is usually a sudden phenomenon. I flooded and drained my little mine a few times to demonstrate this. Accidentally flooded my little town a few times in the process, but that’s okay. You can see in my model, after flooding the mine and draining it down, there was a partial failure in the roof and a pile of sand toward the back caved in. And on the surface, you see just a small sinkhole. In 2024, a huge hole opened right in the center of a sports complex in Alton, Illinois. It was quickly determined that part of an active underground aggregate mine below the park had collapsed, leading to the sinkhole. It’s pretty characteristic of these issues. You don’t know where they’re going to happen, and you don’t know how the surface soils are going to react to what’s happening underneath.

Subsidence can also look like a generalized and broader sinking and settling over a large area. You can see in my model that most of the surface still looks pretty flat, despite the fact that it started here and is now down here as the mine supports have softened and deformed. This can also be the case when mines are deeper in the ground. Even if the collapse is sudden, the subsidence is less dramatic because the geology can shift and move to redistribute the stresses. And the subsidence happens more slowly as the overburden settles into a new configuration. In all cases, the subsidence can extend laterally from the mine, so impacted areas aren’t always directly above. The deeper the mine, the wider the subsidence can be.
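Engineers often describe that lateral spread with an “angle of draw” - the angle between a vertical line at the mine edge and the outer limit of surface movement. Here’s a quick sketch of the idea; the angle and depths are assumed typical values for illustration, not site-specific data:

```python
# Simple "angle of draw" estimate of how far subsidence can extend
# laterally beyond the edge of the mined-out area. The angle varies
# with geology; 35 degrees is an assumed illustrative value.
import math

def draw_distance(depth_m: float, angle_of_draw_deg: float = 35.0) -> float:
    """Lateral distance beyond the mine edge that subsidence can reach."""
    return depth_m * math.tan(math.radians(angle_of_draw_deg))

for depth in (50.0, 150.0, 300.0):
    print(f"Mine at {depth:.0f} m depth: subsidence may extend "
          f"~{draw_distance(depth):.0f} m past the workings")
```

The geometry makes the point in the text concrete: double the depth of the mine and, all else equal, the zone of possible surface movement reaches twice as far beyond the footprint of the workings.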

I ran my little mine demo for quite a few cycles of wet and dry just to see how bad things would get. And I admit I used a little percussion at the end to speed things along. Let’s say this is a simulation of an earthquake on an abandoned mine. [Beat] You can see that by the end of it, this thing has basically collapsed.

And take a look at the surface now. You have some defined sinkholes for sure. And you also have just generalized subsidence - sloped and wavy areas that were once level. And you can imagine the problems this can cause. Structures can easily be damaged by differential settlement. Pipes break. Foundations shift and crack. Even water can drain differently than before, causing ponding and even changing the course of rivers and streams for large areas. And even if there are no structures, subsidence can ruin high-value farm land, mess up roads, disrupt habitat, and more.

In many cases, the company that caused all the damage is long gone. Essentially they set a ticking time bomb deep below the ground with no one knowing if or when it would go off. There’s no one to hold accountable for it, and there’s very little recourse for property owners. Typical property insurance specifically excludes damage from mine subsidence. So, in some places where this is a real threat, government-subsidized insurance programs have been put in place. Eight states in the US, those where coal mining was most extensive, have insurance pools set up. In a few of those states, it is a requirement in order to own property. The federal government in the US also collects a fee from coal mines that goes into a fund that helps cover reclamation costs of mines abandoned before 1977 when the law went into effect.

That federal mining act also required modern mines to use methods to prevent subsidence, or control its effects, because this isn’t just a problem with historic abandoned mines. Some modern underground soft rock mining doesn’t use the room and pillar method but instead a process called longwall mining. Like everything in mining, there are multiple ways to do it. But here’s the basic method: Hydraulic jacks support the roof of the mine in a long line. A machine called a shearer travels along the face of the seam with cutting drums. The cut coal falls onto a conveyor and is transported to the surface. The roof supports move forward into the newly created cavity, intentionally allowing the roof behind them to collapse. It’s an incredibly efficient form of mining, and you get to take the whole seam, rather than leaving pillars behind to support the roof. But, obviously, in this method, subsidence at the surface is practically inevitable.

Minimizing the harm that subsidence creates starts just by predicting its extent and magnitude. And, just looking at my model, I think you can guess that this isn’t a very easy problem to solve. Engineers use a mix of empirical information, like data from similar past mining operations, geotechnical data, simplified relationships, and in some cases detailed numerical modeling that accounts for geologic and water movement over time. But you don’t just have to predict it. You also have to measure it to see if your predictions were right. So mining companies use instruments like inclinometers and extensometers above underground mines to track how they affect the surface. I have a whole video about that kind of instrumentation if you want to learn more after this.
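One family of those simplified relationships is the empirical profile function: a smooth curve, often a hyperbolic tangent, fit to describe how settlement varies across a subsidence trough. Here’s a minimal sketch of that shape; the maximum settlement, length scale, and constant are all assumed for illustration, not calibrated to any real site:

```python
# Sketch of an empirical hyperbolic-tangent subsidence profile, one
# common simplified shape for a trough edge. Constants are assumed
# purely for illustration.
import math

def subsidence(x_m: float, s_max_m: float = 1.0,
               scale_m: float = 100.0, c: float = 2.0) -> float:
    """Settlement at horizontal distance x from the profile's inflection
    point (positive x toward the undisturbed ground)."""
    return 0.5 * s_max_m * (1.0 - math.tanh(c * x_m / scale_m))

for x in (-200, -100, 0, 100, 200):
    print(f"x = {x:+4d} m: settlement ~{subsidence(x):.2f} m")
```

The curve transitions smoothly from full settlement over the workings to essentially none in undisturbed ground, which matches the “generalized sinking” behavior in the model: no sharp cliff, just a long, gentle ramp that can still tilt and crack whatever sits on it.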

The last part of that is reclamation - to repair or mitigate the damage that’s been done. And this can vary so much depending on where the mine is, what’s above it, and how much subsidence occurs. It can be as simple as filling and grading land that has subsided all the way to extensive structural retrofits to buildings above a mine before extraction even starts. Sinkholes are often repaired by backfilling with layers of different-sized materials, from large at the bottom to small at the top. That creates a filter to keep soil from continuing to erode downward into the void. Larger voids can be filled with grout or even polyurethane foam to stabilize the ground above, reducing the chance for a future collapse.
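That layered backfill works on the same logic as any granular filter: each layer has to be fine enough to hold back the layer below it, but coarse enough to let water drain through. The classic Terzaghi filter criteria capture both rules; the grain sizes in this sketch are made-up examples:

```python
# Quick check of the classic Terzaghi filter criteria: a filter layer
# should retain the finer base soil (D15 of the filter < ~4x D85 of the
# base) while still draining freely (D15 of the filter > ~4x D15 of the
# base). Grain sizes below are illustrative, not from a real repair.

def filter_ok(d15_filter_mm: float, d85_base_mm: float,
              d15_base_mm: float) -> bool:
    retains = d15_filter_mm < 4.0 * d85_base_mm   # won't let base soil wash through
    drains  = d15_filter_mm > 4.0 * d15_base_mm   # won't trap water like a plug
    return retains and drains

# A fine sand base (d15 = 0.1 mm, d85 = 0.5 mm) under a coarse sand/fine
# gravel layer with d15 = 1.5 mm:
print(filter_ok(d15_filter_mm=1.5, d85_base_mm=0.5, d15_base_mm=0.1))  # True

# The same base under much coarser gravel (d15 = 3 mm) fails retention:
print(filter_ok(d15_filter_mm=3.0, d85_base_mm=0.5, d15_base_mm=0.1))  # False
```

Stack several layers that each satisfy these checks against the one below, and soil can no longer migrate downward into the void - which is exactly what the graded sinkhole backfill is doing.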

I know coal - and mining in general - can be a sensitive topic. Most of us don’t have a lot of exposure to everything that goes into obtaining the raw resources that make modern life possible. And the things we do see and hear are usually bad things like negative environmental impacts or subsidence. But I really think the story of subsidence isn’t just one of “mining is bad” but really “mining used to be bad, and now it’s a lot better, but there are still challenges to overcome.” I guess that’s the story of so many things in engineering - addressing the difficulties we used to just ignore. And this video isn’t meant to fearmonger. This is a real issue that causes real damage today, but it’s also an issue that a lot of people put a great deal of thought, effort, and ultimately resources into, so that we can strike a balance between protecting property and the environment and obtaining the resources we all depend on.

When Kitty Litter Caused a Nuclear Catastrophe

April 15, 2025 by Wesley Crump

[Note that this article is a transcript of the video embedded above.]

Late in the night of Valentine’s Day 2014, air monitors at an underground nuclear waste repository outside Carlsbad, New Mexico, detected the release of radioactive elements, including americium and plutonium, into the environment. Ventilation fans automatically switched on to exhaust contaminated air up through a shaft, through filters, and out to the environment above ground. When filters were checked the following morning, technicians found that they contained transuranic materials, highly radioactive particles that are not naturally found on Earth. In other words, a container of nuclear waste in the repository had been breached. The site was shut down and employees sent home, but it would be more than a year before the bizarre cause of the incident was made public. I’m Grady, and this is Practical Engineering.

The dangers of the development of nuclear weapons aren’t limited to mushroom clouds and doomsday scenarios. The process of creating the exotic, transuranic materials necessary to build thermonuclear weapons creates a lot of waste, which itself is uniquely hazardous. Clothes, tools, and materials used in the process may stay dangerously radioactive for thousands of years. So, a huge part of working with nuclear materials is planning how to manage waste. I try not to make predictions about the future, but I think it’s safe to say that the world will probably be a bit different in 10,000 years. More likely, it will be unimaginably different. So, ethical disposal of nuclear waste means not only protecting ourselves but also protecting whoever is here long after we are ancient memories or even forgotten altogether. It’s an engineering challenge pretty much unlike any other, and it demands some creative solutions.

The Waste Isolation Pilot Plant, or WIPP, was built in the 1980s in the desert outside Carlsbad, New Mexico, a site selected for a very specific reason: salt. One of the most critical jobs for long-term permanent storage is to keep radioactive waste from entering groundwater and dispersing into the environment. So, WIPP was built inside an enormous and geologically stable formation of salt, roughly 2000 feet or 600 meters below the surface. The presence of ancient salt is an indication that groundwater doesn’t reach this area since the water would dissolve it. And the salt has another beneficial behavior: it’s mobile.

Over time, the walls and ceilings of mined-out salt tend to act in a plastic manner, slowly creeping inwards to fill the void. This is ideal in the long term because it will ultimately entomb the waste at WIPP in a permanent manner. It does make things more complicated in the meantime, though, since they have to constantly work to keep the underground open during operation. This process, called “ground control,” involves techniques like drilling and installing roof bolts in epoxy to hold up the ceilings. I have an older video on that process if you want to learn more after this. The challenge in this case is that, eventually, we want the roof bolts to fail, allowing a gentle collapse of salt to fill the void, because that collapsing salt does an important job.

The salt, and just being deep underground in general, acts to shield the environment from radiation. In fact, a deep salt mine is such a well-shielded area that there’s an experimental laboratory located in WIPP on the opposite side of the underground from the waste panels, where various universities do cutting-edge physics experiments precisely because of the low radiation levels. The thousands of feet of material above the lab shield it from cosmic and solar radiation, and the salt has much lower levels of inherent radioactivity than other kinds of rock. Imagine that: a low-radiation lab inside a nuclear waste dump.

Four shafts extend from the surface into the underground repository for moving people, waste, and air into and out of the facility. Room-and-pillar mining is used to excavate horizontal drifts or panels where waste is stored. Investigators were eventually able to re-enter the repository and search for the cause of the breach. They found the source in Panel 7, Room 7, the area of active disposal at the time. Pressure and heat had burst a drum, starting a fire, damaging nearby containers, and ultimately releasing radioactive materials into the air.

On activation of the radiation alarm, the underground ventilation system automatically switched to filtration mode, sending air through massive HEPA filters. Interestingly, although they’re a pretty common consumer good now, High Efficiency Particulate Air, or HEPA, filters actually got their start during the Manhattan Project specifically to filter radionuclides from the air.

The ventilation system at WIPP performed well, although there was some leakage past the filters, allowing a small percentage of radioactive material to bypass them and release directly into the atmosphere at the surface. 21 workers tested positive for low-level exposure to radioactive contamination but, thankfully, were unharmed. Both WIPP and independent testing organizations confirmed that the detected levels were very low, that the particles did not spread far, and that they were extremely unlikely to result in radiation-related health effects to workers or the public. The safety features at the facility worked, but it would take investigators much longer to understand what went wrong in the first place, and that involved tracing that waste barrel back to its source.

It all started at the Los Alamos National Laboratory, one of the labs created as part of the 1940s Manhattan Project that first developed atomic bombs in the desert of New Mexico. The 1970s brought a renewed interest in cleaning up various Department of Energy sites. Los Alamos was tasked with recovering plutonium from residue materials left over from previous wartime and research efforts. That process involved using nitric acid to separate plutonium from uranium. Once plutonium is extracted, you’re left with nitrate solutions that get neutralized or evaporated, creating a solid waste stream that contains residual radioactive isotopes.

In 1985, a volume of this waste was placed in a lead-lined 55-gallon drum along with an absorbent to soak up any moisture and put into temporary storage at Los Alamos, where it sat for years. But in the summer of 2011, the Las Conchas wildfire threatened the Los Alamos facility, coming within just a few miles of the storage area. This actual fire lit a metaphorical fire under various officials, and wheels were set into motion to get the transuranic waste safely into a long-term storage facility. In other words, ship it down the road to WIPP.

Transporting transuranic wastes on the road from one facility to another is quite an ordeal, even when they’re only going through the New Mexican desert. There are rules preventing the transportation of ignitable, corrosive, or reactive waste, and special casks are required to minimize the risk of radiological release in the unlikely event of a crash. WIPP also had rules, called the Waste Acceptance Criteria, about how waste must be packaged to be placed for long-term disposal, which included limits on free liquids. Los Alamos concluded that the barrel didn’t meet the requirements and needed to be repackaged before shipping to WIPP. But there were concerns about which absorbent to use.

Los Alamos used various absorbent materials within waste barrels over the years to minimize the amount of moisture and free liquid inside. Any time you’re mixing nuclear waste with another material, you have to be sure there won’t be any unexpected reactions. The procedure for repackaging nitrate salts required that a superabsorbent polymer be used, similar to the beads I’ve used in some of my demos, but concerns about reactivity led to meetings and investigations about whether it was the right material for the job. Ultimately, Los Alamos and their contractors concluded that the materials were incompatible and decided to make a switch. In May 2012, Los Alamos published a white paper titled “Amount of Zeolite Required to Meet the Constraints Established by the EMRTC Report RF 10-13: Application of LANL Evaporator Nitrate Salts.” In other words, “How much kitty litter should be added to radioactive waste?” The answer was about 1.2 to 1, inorganic zeolite clay to nitrate salt waste, by volume.
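As a back-of-the-envelope illustration of that ratio, the guidance is just a volumetric proportion. The drum and waste volumes below are made up for illustration, not figures from the white paper:

```python
# Illustrative sketch of the white paper's guidance: roughly 1.2 parts
# inorganic zeolite per 1 part nitrate salt waste, by volume.
# The waste volume below is a made-up example, not a real drum inventory.

def zeolite_volume_needed(waste_volume_liters, ratio=1.2):
    """Volume of zeolite absorbent required for a given waste volume."""
    return ratio * waste_volume_liters

# A 55-gallon drum holds about 208 liters; suppose half of that were waste:
print(zeolite_volume_needed(104))  # about 125 liters of zeolite
```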

That guidance was then translated into the actual procedures that technicians would use to repackage the waste in gloveboxes at Los Alamos. But something got lost in translation. As far as investigators could determine, here’s what happened: In a meeting in May 2012, the manager responsible for glovebox operations took personal notes about this switch in materials. Those notes were sent in an email and eventually incorporated into the written procedures:

“Ensure an organic absorbent is added to the waste material at a minimum of 1.5 absorbent to 1 part waste ratio.”

Did you hear that? The white paper’s requirement to use an inorganic absorbent became “...an organic absorbent” in the procedures. We’ll never know where the confusion came from, but it could have been as simple as mishearing the word in the meeting. Nonetheless, that’s what the procedure became. Contractors at Los Alamos procured a large quantity of Swheat Scoop, an organic, wheat-based cat litter, and started using it to repackage the nitrate salt wastes. Our barrel, first packaged in 1985, was repackaged in December 2013 with the new kitty litter. It was tested and certified in January 2014, shipped to WIPP later that month, and placed underground. And then it blew up. The unthinkable had happened; the wrong kind of kitty litter had caused a nuclear disaster.

While the nitrates are relatively unreactive with inorganic, mineral-based zeolite kitty litter that should have been used, the organic, carbon-based wheat material could undergo oxidation reactions with nitrate wastes. I think it’s also interesting to note here that the issue is a reaction that was totally unrelated to the presence of transuranic waste. It was a chemical reaction - not a nuclear reaction - that caused the problem. Ultimately, the direct cause of the incident was determined to be “an exothermic reaction of incompatible materials in LANL waste drum 68660 that led to thermal runaway, which resulted in over-pressurization of the drum, breach of the drum, and release of a portion of the drum’s contents (combustible gases, waste, and wheat-based absorbent) into the WIPP underground.” Of course, the root cause is deeper than that and has to do with systemic issues at Los Alamos and how they handled the repackaging of the material.

The investigation report identified 12 contributing causes that, while they did not individually cause the accident, increased its likelihood or severity. These are written in a way that is pretty difficult for a non-DOE expert to parse; take a stab at digesting contributing cause number 5: “Failure of Los Alamos Field Office (NA-LA) and the National Transuranic (TRU) Program/Carlsbad Field Office (CBFO) to ensure that the CCP [that is, the Central Characterization Program] and LANS [that is, the contractor, Los Alamos National Security] complied with Resource Conservation and Recovery Act (RCRA) requirements in the WIPP Hazardous Waste Facility Permit (HWFP) and the LANL HWFP, as well as the WIPP Waste Acceptance Criteria (WAC).”

Still, as bad as it all seems, it really could have been a lot worse. In a sense, WIPP performed precisely how you’d want it to in such an event, and it’s a really good thing the barrel was in the underground when it burst. Had the same happened at Los Alamos or on the way to WIPP, things could have been much worse. Thankfully, none of the other barrels packaged in the same way experienced a thermal runaway, and they were later collected and sealed in larger containers.

Regardless, the consequences of the “cat-astrophe” were severe and very expensive. The cleanup involved shutting down the WIPP facility for several years and entirely replacing the ventilation system. WIPP itself didn’t formally reopen until January of 2017, nearly three full years after the incident, with the cleanup costing about half a billion dollars.

Today, WIPP remains controversial, not least because of shifting timelines and public communication. Early estimates once projected closure by 2024. Now, that date is sometime between 2050 and 2085. And events like this only add fuel to the fire. Setting aside broader debates on nuclear weapons themselves, the wastes these weapons generate are dangerous now, and they will remain dangerous for generations. WIPP has even explored ideas on how to mark the site post-closure, making sure that future generations clearly understand the enduring danger. Radioactive hazards persist long after languages and societies may have changed beyond recognition, making it essential but challenging to communicate clearly about risks.

Sometimes, it’s easy to forget - amidst all the technical complexity and bureaucratic red tape that surrounds anything nuclear - that it’s just people doing the work. It’s almost unbelievable that we entrust ourselves - squishy, sometimes hapless bags of water, meat, and bones - to navigate protocols of such profound complexity needed to safely take advantage of radioactive materials. I don’t tell this story because I think we should be paralyzed by the idea of using nuclear materials - there are enormous benefits to be had in many areas of science, engineering, and medicine. But there are enormous costs as well, many of which we might not be aware of if we don’t make it a habit to read obscure government investigation reports. This event is a reminder that the extent of our vigilance has to match the permanence of the hazards we create.

April 15, 2025 /Wesley Crump

Why Are Beach Holes So Deadly?

April 01, 2025 by Wesley Crump

[Note that this article is a transcript of the video embedded above.]

Even though it’s a favorite vacation destination, the beach is surprisingly dangerous. Consider the lifeguard: There aren’t that many recreational activities in our lives that have explicit staff whose only job is to keep an eye on us, make sure we stay safe, and rescue us if we get into trouble. There are just a lot of hazards on the beach. Heavy waves, rip currents, heat stress, sunburn, jellyfish stings, sharks, and even algae can threaten the safety of beachgoers. But there’s a whole other hazard, this one usually self-inflicted, that rarely makes the list of warnings, even though it takes, on average, 2-3 lives per year just in the United States. If you know me, you know I would never discourage the act of playing with soil and sand. It’s basically what I was put on this earth to do. But I do have one exception. Because just about every year, the news reports that someone was buried when a hole they dug collapsed on top of them. There’s no central database of sandhole collapse incidents, but from the numbers we do have, about twice as many people die this way as from shark attacks in the US.

It might seem like common sense not to dig a big, unsupported hole at the beach and then go inside it, but sand has some really interesting geotechnical properties that can provide a false sense of security. So, let’s use some engineering and garage demonstrations to explain why. I’m Grady and this is Practical Engineering.

In some ways, geotechnical engineering might as well be called slope engineering, because it’s a huge part of what they do. So many aspects of our built environment rely on the stability of sloped earth. Many dams are built from soil or rock fill using embankments. Roads, highways, and bridges rely on embankments to ascend or descend smoothly. Excavations for foundations, tunnels, and other structures have to be stable for the people working inside. Mines carefully monitor slopes to make sure their workers are safe. Even protecting against natural hazards like landslides requires a strong understanding of geotechnical engineering. Because of all that, the science of slope stability is really deeply understood. There’s a well-developed professional consensus around the science of soil, how it behaves, and how to design around its limitations as a construction material. And I think a peek into that world will really help us understand this hazard of digging holes on the beach.

Like many parts of engineering, analyzing the stability of a slope has two basic parts: the strengths and the loads. The job of a geotechnical engineer is to compare the two. The load, in this case, is kind of obvious: it’s just the weight of the soil itself. We can complicate that a bit by adding loads at the top of a slope, called surcharges, and no doubt surcharge loads have contributed to at least a few of these dangerous collapses from people standing at the edge of a hole. But for now, let’s keep it simple with just the soil’s own weight.

On a flat surface, soils are generally stable. But when you introduce a slope, the weight of the soil above can create a shear failure. These failures often happen along a circular arc, because an arc minimizes the resisting forces in the soil while maximizing the driving forces. We can manually solve for the shear forces at any point in a soil mass, but that would be a fairly tedious engineering exercise, so most slope stability analyses use software. One of the simplest methods is just to let the software draw hundreds of circular arcs that represent failure planes, compute the stresses along each plane based on the weight of the soil, and then figure out if the strength of the soil is enough to withstand the stress. But what does it really mean for a soil to have strength?
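The trial-surface search described above can be sketched in a few lines of code. Real programs use circular arcs and a method of slices (Bishop’s, for example); this toy version substitutes planar slip surfaces through a simple slope, with illustrative soil properties, just to show the idea of sweeping many trial failure planes and keeping the lowest factor of safety:

```python
import math

# Toy slope-stability search. Each trial failure plane gets a factor of
# safety: shear strength available along the plane divided by the shear
# stress driving the slide. All soil numbers here are illustrative.

def factor_of_safety(theta_deg, slope_deg=60, height=2.0,
                     unit_weight=18.0, cohesion=2.0, phi_deg=33):
    """FS for a planar wedge sliding on a trial plane at theta_deg."""
    theta = math.radians(theta_deg)
    beta = math.radians(slope_deg)
    if not (0 < theta < beta):
        return float("inf")  # no wedge forms outside this range
    # Geometry of the sliding wedge, per meter of slope length
    plane_len = height / math.sin(theta)
    area = 0.5 * height**2 * (1 / math.tan(theta) - 1 / math.tan(beta))
    weight = unit_weight * area                   # kN per meter of slope
    driving = weight * math.sin(theta)            # shear force on the plane
    normal = weight * math.cos(theta)             # normal force on the plane
    resisting = cohesion * plane_len + normal * math.tan(math.radians(phi_deg))
    return resisting / driving

# Sweep dozens of trial planes and report the most critical one
trials = [(factor_of_safety(t), t) for t in range(1, 60)]
fs_min, theta_crit = min(trials)
print(f"critical plane at {theta_crit} deg, FS = {fs_min:.2f}")
```

A factor of safety below 1.0 on any trial surface means the driving stress exceeds the available strength, and the slope is predicted to fail there.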

If you can imagine a sample of soil floating in space, and you apply a shear stress, those particles are going to slide apart from each other in the direction of the stress. The amount of force required to do it is usually expressed as an angle, and I can show you why. You may have done this simple experiment in high school physics where you drag a block along a flat surface and measure the force required to overcome the friction. If you add weight, you increase the force between the surfaces, called the normal force, which creates additional friction. The same is true with soils. The harder you press the particles of soil together, the better they are at resisting a shear force. In a simplified force diagram, we can draw a normal force and the resulting friction, or shear strength, that results. And the angle that hypotenuse makes with the normal force is what we call the friction angle. Under certain conditions, it’s equal to the angle of repose, the steepest angle that a soil will naturally stand.
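That relationship between normal force and shear strength is usually written as the Mohr-Coulomb criterion: strength equals cohesion plus normal stress times the tangent of the friction angle. A minimal sketch with illustrative values (a 33-degree friction angle is typical of sand, but these numbers aren’t from the article):

```python
import math

# Mohr-Coulomb shear strength: tau = c + sigma_n * tan(phi).
# For dry sand, cohesion c is essentially zero, so strength is
# proportional to how hard the grains are pressed together.

def shear_strength(normal_stress_kpa, friction_angle_deg, cohesion_kpa=0.0):
    """Shear strength in kPa at a given normal (confining) stress."""
    return cohesion_kpa + normal_stress_kpa * math.tan(math.radians(friction_angle_deg))

print(shear_strength(50, 33))   # ~32.5 kPa
print(shear_strength(100, 33))  # ~64.9 kPa - double the confinement, double the strength
```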

If I let sand pour out of this funnel onto the table, you can see, even as the pile gets higher, the angle of the slope of the sides never really changes. And this illustrates the complexity of slope stability really nicely. Gravity is what holds the particles together, creating friction, but it’s also what pulls them apart. And the angle of repose is kind of a line between gravity’s stabilizing and destabilizing effects on the soil. But things get more complicated when you add water to the mix.

Soil particles, like all things that take up space, have buoyancy. Just like lifting a weight under water is easier, soil particles seem to weigh less when they’re saturated, so they have less friction between them. I can demonstrate this pretty easily by just moving my angle of repose setup to a water tank. It’s a subtle difference, but the angle of repose has gone down underwater. It’s just because the particles’ effective weight goes down, so the shear strength of the soil mass goes down too. And this doesn’t just happen under lakes and oceans. Soil holds water - I’ve covered a lot of topics on groundwater if you want to learn more. There’s this concept of the “water table,” below which the soils are saturated, and they behave in the same way as my little demonstration. The water between the particles, called “pore water,” exerts pressure, pushing them away from one another and reducing the friction between them. Shear strength usually goes down for saturated soils. But if you’ve played with sand, you might be thinking: “This doesn’t really track with my intuition.” When you build a sand castle, you know, the dry sand falls apart, and the wet sand holds together.
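The submerged-weight effect is usually expressed through Terzaghi’s principle of effective stress: pore water pressure u carries part of the load, so the stress actually pressing grains together is sigma' = sigma - u. A small sketch with typical textbook unit weights (the numbers are assumptions, not measurements from the demo, and the same unit weight is used wet or dry for simplicity):

```python
# Effective stress sketch: sigma' = sigma - u (Terzaghi's principle).
# Unit weights are typical textbook values, assumed for illustration.

GAMMA_SAND = 19.0   # kN/m^3, saturated sand (assumed)
GAMMA_WATER = 9.81  # kN/m^3

def effective_stress(depth_m, water_table_depth_m=0.0):
    """Vertical effective stress (kPa) at a depth, given the water table depth."""
    total = GAMMA_SAND * depth_m                                     # total stress
    pore = max(0.0, GAMMA_WATER * (depth_m - water_table_depth_m))   # pore pressure
    return total - pore

print(effective_stress(1.0, 0.0))  # water table at the surface: ~9.2 kPa
print(effective_stress(1.0, 2.0))  # water table well below: 19.0 kPa
```

With the water table at the surface, the grains are pressed together by less than half the stress, which is why the underwater angle of repose in the demo comes out shallower.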

So let’s dive a little deeper. Friction actually isn’t the only factor that contributes to shear strength in a soil. For example, I can try to shear this clay, and there’s some resistance there, even though there is no confining force pushing the particles together. In finer-grained soils like clay, the particles themselves have molecular-level attractions that make them, basically, sticky. The geotechnical engineers call this cohesion. And it’s where sand gets a little sneaky.

Water pressure in the pores between particles can push them away from each other, but it can also do the opposite. In this demo, I have some dry sand in a container with a riser pipe to show the water table connected to the side. And I’ve dyed my water black to make it easier to see. When I pour the water into the riser, what do you think is going to happen? Will the water table in the soil be higher, lower, or exactly the same as the level in the riser? Let’s try it out. Pretty much right away, you can see what happens. The sand essentially sucks the water out of the riser, lifting it higher than the level outside the sand. If I let this settle out for a while, you can see that there’s a pretty big difference in levels, and this is largely due to capillary action. Just like a paper towel, water wicks up into the sand against the force of gravity.

This capillary action actually creates negative pressure within the soil (compared to the ambient air pressure). In other words, it pulls the particles against each other, increasing the strength of the soil. It basically gives the sand cohesion, additional shear strength that doesn’t require any confining pressure. And again, if you’ve played with sand, you know there’s a sweet spot when it comes to water. Too dry, and it won’t hold together. Too wet, same thing. But if there’s just enough water, you get this strengthening effect. However, unlike clay that has real cohesion, that suction pressure can be temporary. And it’s not the only factor that makes sand tricky.

The shear strength of sand also depends on how well-packed those particles are. Beach sand is usually well-consolidated because of the constant crashing waves. Let’s zoom in on that a bit. If the particles are packed together, they essentially lock together. You can see that to shear them apart doesn’t just look like a sliding motion, but also a slight expansion in volume. Engineers call this dilatancy, and you don’t need a microscope to see it. In fact, you’ve probably noticed this walking around on the beach, especially when the water table is close to the surface. Even a small amount of movement causes the sand to expand, and it’s easy to see like this because it expands above the surface of the water. The practical result of this dilatant property is that sand gets stronger as it moves, but only up to a point. Once the sand expands enough that the particles are no longer interlocked together, there’s a lot less friction between them. If you plot movement, called strain, against shear strength, you get a peak and then a sudden loss of strength.

Hopefully you’re starting to see how all this material science adds up to a real problem. The shear strength of a soil, basically its ability to avoid collapse, is not an inherent property: it depends on a lot of factors, it can change pretty quickly, and the behavior is not really intuitive. Most of us don’t have a ton of experience with excavations. That’s part of the reason it’s so fun to go on the beach and dig a hole in the first place. We just don’t get to excavate that much in our everyday lives. So, at least for a lot of us, it’s just a natural instinct to do some recreational digging. You excavate a small hole. It’s fun. It’s interesting. The wet sand is holding up around the edges, so you dig deeper. Some people give up after the novelty wears off. Some get their friends or their kids involved to keep going. Eventually, the hole gets big enough that you have to get inside it to keep digging. With the suction pressure from the water and the shear strengthening through dilatancy, the walls have been holding the entire time, so there’s no reason to assume that they won’t just keep holding. But inside the surrounding sand, things are changing.

Sand is permeable to water, meaning water moves through it pretty freely. It doesn’t take a big change to upset the delicate balance of wetness that gives sand its stability. The tide could be going out, lowering the water table and drying out the soil at the surface. Alternatively, a wave or the tide could add water to the surface sand, reducing the suction pressure. At the same time, tiny movements within the slopes are strengthening the sand as it tries to dilate in volume. But each little movement pushes toward that peak strength, after which it suddenly goes away. We call this a brittle failure because there’s little deformation to warn you that there’s going to be a collapse. It happens suddenly, and if you happen to be inside a deep hole when it does, you might be just fine, like our little friend here, but if a bigger section of the wall collapses, your chance of surviving is slim. Soil is heavy. Sand has about two-and-a-half times the density of water. It just doesn’t take that much of it to trap a person.
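The arithmetic behind that warning is sobering. Using the article’s figure of about two-and-a-half times the density of water (the collapse volume below is purely an illustration):

```python
# Why even a small collapse can trap a person: sand is roughly 2.5 times
# as dense as water, per the article. The slide volume is illustrative.

SAND_DENSITY = 2500  # kg/m^3

def sand_mass_kg(volume_m3):
    """Mass of a given volume of collapsed sand."""
    return SAND_DENSITY * volume_m3

# Half a cubic meter of sand sliding into a hole:
print(sand_mass_kg(0.5))  # 1250 kg - comparable to a small car
```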

This is not just something that happens to people on vacations, by the way. Collapsing trenches and excavations are one of the most common causes of fatal construction incidents. In fact, if you live in a country with workplace health and safety laws, it’s pretty much guaranteed that within those laws are rules about working in trenches and excavations. In the US, OSHA has a detailed set of guidelines on how to stay safe when working at the bottom of a hole, including how steep slopes can be depending on the types of soil, and the devices used to shore up an excavation to keep it from collapsing while people are inside. And for certain circumstances where the risks get high enough or the excavation doesn’t fit neatly into these simplified categories, they require a professional engineer be involved.

So does all this mean that anyone who’s not an engineer just shouldn’t dig holes at the beach? If you know me, you know I would never agree with that. I don’t want to come off too earnest here, but we learn through interaction. Soil and rock mechanics are incredibly important to every part of the built environment, and I think everyone should have a chance to play with sand, to get muddy and dirty, to engage and connect and commune with the stuff on which everything gets built. So, by all means, dig holes at the beach. Just don’t dig them so deep. The typical recommendation I see is to avoid going in a hole deeper than your knees. That’s pretty conservative. If you have kids with you, it’s really not much at all. If you want to follow OSHA guidelines, you can go a little bigger: up to 20 feet (or 6 meters) in depth, as long as you slope the sides of your hole at one-and-a-half horizontal to one vertical, or about 34 degrees above horizontal. You know, ultimately you have to decide what’s safe for you and your family. My point is that this doesn’t have to be a hazard if you use a little engineering prudence. And I hope understanding some of the sneaky behaviors of beach sand can help you delight in the primitive joy of digging a big hole without putting your life at risk in the process.
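As a quick check on that last number (plain trigonometry, not an OSHA tool):

```python
import math

# A 1.5:1 (horizontal:vertical) slope makes an angle of atan(1/1.5)
# with the horizontal - the "about 34 degrees" cited above.

def slope_angle_deg(horizontal, vertical):
    """Angle above horizontal for a horizontal:vertical slope ratio."""
    return math.degrees(math.atan(vertical / horizontal))

print(slope_angle_deg(1.5, 1))  # ~33.7 degrees
```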

April 01, 2025 /Wesley Crump

This Bridge’s Bizarre Design Nearly Caused It To Collapse

March 18, 2025 by Wesley Crump

[Note that this article is a transcript of the video embedded above.]

This is the Washington Bridge that carries I-195 over the Seekonk River in Providence, Rhode Island… or at least, it was the Washington Bridge. You can see that the westbound span is just about completely gone. In July of 2023, that part of the bridge, although marked as being in poor condition, received a passing inspection. Six months later, the bridge was abruptly closed to traffic because it was in imminent danger of collapse. Now, the whole thing has nearly been torn down as part of an emergency replacement project. Rhode Islanders who need to travel between Providence and East Providence have suffered through more than a year of traffic delays from the loss of this important link, and business owners have seen major downturns. If you live in the area, you’re probably tired of seeing it in the news. But it hasn’t had a lot of coverage outside the state. And I think it’s a really fascinating case study in the complexities of designing, building, and taking care of bridges, including some lessons that apply to designing just about anything. I’m Grady, and this is Practical Engineering.

The original bridge over the Seekonk River was finished in 1930. Part of that old bridge now serves as a pedestrian crossing and bike link. It’s a nice bridge: concrete and stone multiple arch spans give it a graceful look over the river. In 1959, when I-195 expanded to include this road, it quickly filled with traffic. The old bridge just wasn’t big enough, at least according to the standards of the time. So, a new bridge to carry the westbound lanes was planned, with the federal government picking up most of the bill.

Since the feds were paying, they wanted a simple, inexpensive steel girder bridge. But Rhode Island refused. The state didn’t want a plain, stark, utilitarian structure right next to their historic and elegant multi-arch bridge. It took years to come to an agreement, but eventually, they met in the middle, with the federal Bureau of Public Roads agreeing to include false concrete arch facades between each of the exterior piers, matching the style of the eastbound bridge. But by that time, the field of bridge engineering had shifted.

The Interstate Highway System in the US started in 1956 with the idea of an interconnected freeway system with no at-grade intersections. Every road and rail crossing required grade separation, and that meant we started building a lot of bridges. We’re up to around 55,000 today, and that’s just on the interstates. With steel in short supply, a new kind of bridge girder made from prestressed concrete was coming into vogue. In simple reinforced concrete structures, the rebar is just cast inside. It takes some deflection of the concrete before the steel can take on any of the internal stress within the member. For beams, the amount of deflection needed to develop the strength of the steel often leads to cracks, which eventually lead to corrosion as water reaches the steel. But if you can load up the steel before the beam is put into service, in other words, “prestress” it, you can stiffen the beam, making it less likely to crack under load. I have a whole video going into more detail about prestressed concrete if you want to learn more after this. If you’ve already seen it, then you know there are two main ways to do it.

In some structures, the reinforcing steel is tensioned before the concrete is cast. This “pre-tensioning” is usually done in facilities with specialized equipment that can apply and hold those extreme forces while the concrete cures. Alternatively, you can do it on-site by running steel tendons through hollow tubes in the concrete. Once it’s cured, jacks are used to stress the tendons, a process called post-tensioning.

The engineers for the westbound lanes of the Washington Bridge took advantage of this relatively new construction method, using both post-tensioned and pre-tensioned beams. While most of the grade separation bridges on interstate highways were rigidly standardized, this was a bridge unlike practically any other in the United States. It had 18 spans of varying structural types. Except for the navigation span for boats that used steel girders, the rest of the bridge passing over the water used cantilever beams.

Rather than having the end of the beam sit on the pier like most beam bridges do, called simply supported, the primary beams in the Washington Bridge were supported at their center, cantilevering out in both directions. The pre-tensioned drop-in concrete girders were suspended between the cantilever arms. Those cantilever beams were post-tensioned structural members. Five steel cables were run in hollow ducts from one end to the other, then tensioned to roughly 200,000 pounds (nearly a meganewton each), and locked off at anchorages on both ends. Then the ducts were filled with grout to bond the strands to the rest of the concrete member and protect them against corrosion.
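As a unit sanity check on those figures (a plain conversion, not anything from the design documents):

```python
# Converting the quoted post-tensioning force: 200,000 pounds-force per
# cable, five cables per beam.

LBF_TO_N = 4.448222  # newtons per pound-force

force_per_cable_n = 200_000 * LBF_TO_N
print(force_per_cable_n / 1e6)      # ~0.89 MN per cable - "nearly a meganewton"
print(5 * force_per_cable_n / 1e6)  # ~4.4 MN of prestress per beam
```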

Most of the cantilever beams in the Washington Bridge were balanced, meaning they had roughly the same load on either side. But at the west abutment and navigation span, that wasn’t true. You can see that these beams support a drop-in girder on one end, but the steel girders over the navigation span are simply-supported on their piers. Since the cantilever beams weren’t balanced, designers needed an alternative way to keep them from rotating atop the pier. So steel rods called tie-downs were installed on each of the unbalanced cantilevers.
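A toy statics example shows why the tie-downs mattered: with different reactions on each arm, the moments about the pier don’t cancel, and the tie-down has to supply the balancing force. All numbers here are invented for illustration; they are not from the bridge’s design:

```python
# Moment balance for an unbalanced cantilever, with made-up numbers.
# The heavy arm carries a drop-in girder reaction; the light arm carries
# the smaller reaction from the simply-supported navigation span.

def tiedown_force(load_heavy, arm_heavy, load_light, arm_light, tiedown_arm):
    """Force (kN) a tie-down must resist to balance moments about the pier."""
    net_moment = load_heavy * arm_heavy - load_light * arm_light  # kN*m
    return net_moment / tiedown_arm

print(tiedown_force(load_heavy=1000, arm_heavy=10,
                    load_light=400, arm_light=10, tiedown_arm=8))  # 750.0 kN
```

If that restraint is lost, as happened when the rods fractured, the unbalanced moment has to be carried somewhere else in the structure.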

In December 2023, the now 57-year-old westbound bridge was in the middle of a 64-million-dollar construction project to repair damaged concrete, widen the deck for another lane of traffic, and add a new off-ramp, with the goal of extending the bridge’s life by 25 years. One of the engineers involved in that project was on site and noticed something unusual under the navigation span. Some of the tie-down rods on the unbalanced cantilevers were completely broken.

The finding was serious, so three days later, a more detailed inspection of the structure was carried out, revealing that half of the unbalanced cantilevers at piers 6 and 7 - the piers on either side of the navigation span - were not performing as designed. The Rhode Island Department of Transportation closed the bridge to traffic that day so the state could investigate the issue and come up with a solution.

The closure snarled traffic on a crossing that was already regularly congested. Westbound traffic was eventually rerouted onto the eastbound bridge, with the lanes narrowed to fit more vehicles. The state put up an interactive dashboard where you can look at travel times by route and time of day and view live webcams to help travelers and commuters decide how and when to get across the Seekonk River. Still, the closure has had an enormous impact on the Providence area, affecting travel times and economic activity for more than a year now.

The state was fully expecting to implement some kind of emergency repair project, essentially a retrofit that would replace the broken tie-downs on the unbalanced cantilevers. The project was designed, and the contractor started installing work platforms below the bridge in January 2024. As they got access to the underside of the bridge, things started looking worse. Deteriorating concrete on the beams threatened to complicate the installation of the new tie-downs, so the state decided to do a more detailed investigation. They tested concrete in the beams, used ground penetrating radar and ultrasound to inspect the tendons inside, and even drilled into the beams to observe the actual condition of the post-tensioned cables. What they uncovered was a laundry list of serious issues.

In addition to the failed tie-down rods, there were major problems with the beams themselves. The concrete was soft and damaged, in part because of freeze-thaw action. Like most concrete from the 1960s, the beams had no air entrainment. Air entrainment, required in most modern concrete mixes and especially in northern climates, introduces tiny air bubbles that act like cushions to reduce damage when water freezes. Without them, concrete exposed to water and freezing conditions will spall, crack, and deteriorate over time.

The post-tensioning system was also in bad condition. The anchorages at the end of the beams were corroded, and voids and soft grout were found within the cable ducts. When the inspectors drilled into the beams to reach one of the cables, they saw that the poor grout job had allowed water inside the duct, corroding the cable itself.

Most of the damage was related to the condition and location of the joints in the bridge deck, which allowed water and salty snow melt to leak down onto the structure below. If you saw my video on the Fern Hollow Bridge collapse in Pittsburgh, it was a similar situation. When the engineers analyzed the strength of the bridge, considering its actual condition, the results weren’t good.

With no traffic, the beams met the minimum requirements in the bridge code. When traffic loads were applied, it was a totally different story. The code does not allow any tension to occur in a post-tensioned member, but you can see in the graph that the top of the beam is in tension across a large portion of its length. Worse than that, the engineers found that the beams were in a condition where failure would happen before you could see significant cracking in the concrete. In other words, if the beam was in structural distress, it likely wouldn’t be caught during an inspection. There could be no warning before a potential failure. In short, this was not a bridge worth widening. It wasn’t even safe to drive on.
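To make that no-tension rule concrete, here's a quick back-of-the-envelope sketch in Python. The section properties, prestress force, and moments below are made up for illustration - they're not the Washington Bridge's actual numbers - but the form of the check is the standard combined-stress calculation for a prestressed section:

```python
# Hypothetical no-tension check for a post-tensioned concrete girder.
# Sign convention: compression negative, tension positive (Pa).
# All numbers are illustrative, not the actual bridge's properties.

def extreme_fiber_stress(P, e, M, A, S):
    """Stress at an extreme fiber of a prestressed section.

    P: prestress force (N), e: tendon eccentricity toward that fiber (m),
    M: applied moment producing tension at that fiber (N*m),
    A: cross-section area (m^2), S: section modulus for that fiber (m^3).
    """
    return -P / A - P * e / S + M / S

# Illustrative section: 1.0 m^2 area, 0.5 m^3 section modulus,
# 10 MN of prestress anchored 0.4 m off the centroid.
A, S = 1.0, 0.5
P, e = 10e6, 0.4

dead_load_only = extreme_fiber_stress(P, e, M=6e6, A=A, S=S)
with_traffic   = extreme_fiber_stress(P, e, M=12e6, A=A, S=S)

print(dead_load_only / 1e6)  # negative: fiber still in compression
print(with_traffic / 1e6)    # positive: tension, fails the no-tension check
```

The takeaway is the same shape as the engineers' finding: once the applied moment overwhelms what the prestress can counteract, the extreme fiber goes into tension, which is exactly the condition the code prohibits.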

A big question here is: Why didn’t any of this get caught in inspections? And that mostly has to do with access. Only some of these tie-downs were visible to inspectors. The rest were embedded in concrete diaphragms that ran laterally between the beams. But it’s not clear if any special attention was paid to them, given their structural importance in the bridge. Looking through all the past inspection reports, there’s very little mention of the tie-down rods at all, and only a few pictures of them. The state actually used this photo from the July 2023 inspection, 5 months prior to when it was observed to be broken, to show that this tie-down wasn’t broken then, suggesting that maybe a large truck had caused the damage in a single event. But you can clearly see that, if it were fractured at that time, that break would be obscured by the pier in the photo. Same thing with this one; the fracture is at the very top of the rod, so it’s impossible to see if it was there in July. There’s no easy way to know how long this had been an issue. At least for these outside tie-rods, you have bare steel, exposed and mostly uncoated, directly beneath a leaky joint in the road deck. This is easy to say in hindsight, but if I’m an inspector and I understand the configuration of this bridge, I’m making sure to put eyes on every one of these visible tie-downs, or at least state clearly and explicitly that the access wasn’t enough to fully document their condition.

And it’s even worse for the post-tensioned anchorages in the beams. Those drop-in girders sat essentially flush with the ends of the beams, making it impossible to inspect their condition, let alone perform maintenance or repairs. Seismic retrofits installed in 1996 made access and visibility even tougher. And this is a perfect case study in the risks that hidden elements can pose. If you’ve ever done a renovation project on an older house, you know exactly how this goes. You start to change a light fixture, and next thing you know there’s a backhoe in your front yard. The bridge widening project uncovered the situation with the tie rods. The repairs to the tie rods revealed issues with the post-tension system in the beams. Investigation into that problem revealed further structural issues, and pretty quickly, you have a much bigger problem on your hands than you set out to fix in the first place. You’re trying to keep the public informed about what’s going on and predict how long the bridge is going to be closed at the same time that the situation is unraveling before your eyes.

The engineers looked at a bunch of options to repair all these issues, but the complexity of implementing any of them made repair infeasible. Just to get to the beams, you’d have to demo the entire road deck and remove the drop-in girders. Since things had shifted, there was no way to know how the load had redistributed, so even removing the deck would come with risks. Then, with the state of the concrete in the beams, it wasn’t a sure bet that they could even support any external strengthening. And even if you did get it repaired, you would still have all the same issues with access and visibility. The report put it in plain words: the options for repair were “limited, complex, and [did] not completely mitigate the identified risks with the structure.” So, eventually, the state decided to demolish the entire thing and start over.

And that’s where it stands (or doesn’t stand) right now. Demolition is well underway, but that’s not the end of the mess. The state put out a request for proposals to design and build the replacement project in April 2024 with an aggressive schedule to finish construction by August 2026. Not a single contractor bid on the job, likely due to the difficult schedule and the inherent risks. The state planned to leave the substructure of the bridge (the piers and piles) intact, giving the replacement contractor the option to reuse it as a part of their design. It seems that no one could get comfortable with that idea, and I don’t blame them, considering how each milestone in this saga has only revealed new bad news about the condition of the bridge. In October, the state decided to just demo the substructure, too, adding it to the existing contract. They started a new solicitation process, this time with two stages, to try and find a contractor willing to take on this project. The two finalists were announced in December, and they expect to award a contract this summer of 2025. But, in the midst of just trying to figure out what to do with the bridge, the fight over who’s responsible for all this chaos started.

In August of 2024, the state filed a lawsuit against 13 companies, including firms that did the bridge inspections, alleging that they should have identified these structural issues earlier. At one point the attorney general stopped the demolition work to preserve evidence for the lawsuit, extending the timeline for a month. Then in January, the US Department of Justice disclosed that they’re investigating the state of Rhode Island under the False Claims Act, which comes into play when federal funds are misused or fraudulently obtained. The dual legal battles—one against the engineering firms and another potentially implicating the state—turned what was already a logistical and financial nightmare into a high-stakes showdown, with millions of dollars and public trust hanging in the balance. Then in February, this video came out showing the demolition contractor dropping huge pieces of the cantilever beams onto the barges below, sparking a workplace safety investigation from OSHA.

A fellow YouTube engineer, Casey Jones, has been covering a lot of the more detailed aspects of the situation if you want to keep up with the story, and I also have to shout out the local journalists who have done some fantastic work to keep the public apprised of the situation where maybe the State has faltered. This saga is far from over, and we’re probably going to learn a lot more in the coming months and years. Maybe the inspectors really did neglect their duties to identify major problems. Maybe the state has some issues with its inspection and review program. Probably there’s a little bit of both. But also, this bridge had some bizarre design decisions that made a lot of these problems inevitable.

Putting critical structural elements, like tie-downs and post-tension anchorages, where they can’t be inspected or repaired is essentially like planting a time bomb. We’re fortunate it was caught before it blew up. And a lot of those design decisions were driven by a roughly five-million-dollar (adjusted for inflation) battle between Rhode Island and the federal government over the visual appearance of the bridge in 1965. Now, it will cost roughly 20 times that just to tear the bridge down, and who knows how much to rebuild.

This situation is a mess! It’s an embarrassment for the state, a nightmare for the engineers and contractors who have worked on the bridge in the past, and a major problem for all the residents of Rhode Island who depend on this bridge. Every time I talk about failures, I get so much feedback about how bad US infrastructure is. And I don’t want to sugarcoat this situation, but I do want to put it in context. This is one of roughly 617,000 bridges in the US, and in some ways, it’s a success story: A serious problem was identified before it became a disaster, and the final outcome should be what was needed all along - replacing a bridge that had reached the end of its design life.

There’s nothing bizarre about an old bridge being old. It happens all the time, and although sometimes the roadwork is frustrating, we generally understand that structures don’t last forever and eventually need to be replaced. But just like engineers design structures to be ductile, to fail with grace and warning, we want and need projects like this to happen in an orderly fashion. We should be able to recognize when replacement is necessary, plan ahead for the project, do a good job informing the public, and execute the job on a timeline that doesn’t require panic, chaos, and emergency contracts. The Washington Bridge is a perfect case study in why that’s so important.

March 18, 2025 /Wesley Crump

All Dams Are Temporary

March 04, 2025 by Wesley Crump

[Note that this article is a transcript of the video embedded above.]

Lewis and Clark Lake, on the border between Nebraska and South Dakota, might not be a lake for much longer. Together with the dam that holds it back, the reservoir provides hydropower, flood control, and supports a robust recreational economy through fishing, boating, camping, birdwatching, hunting, swimming, and biking. All of that faces an existential threat from a seemingly innocuous menace: dirt. Around 5 million tons of it flows down this stretch of the Missouri River every year until it reaches the lake, where it falls out of suspension. Since the 1950s, when the dam was built, the sand and silt have built up a massive delta where the river comes in. The reservoir has already lost about 30 percent of its storage capacity, and one study estimated that, by 2045, it will be half full of sediment.

On the surface, this seems like a silly problem, almost elementary. It’s just dirt! But I want to show you why it’s a slow-moving catastrophe with implications that span the globe. And I want you to think of a few solutions to it off the top of your head, because I think you’ll be surprised to learn why none of the ones we’ve come up with so far are easy. I’m Grady, and this is Practical Engineering.

I want to clarify that the impacts dams have on sediment movement happen on both sides. Downstream, the impacts are mostly environmental. We think of rivers as carriers of water; it’s right there in the definition. But if you’ve ever seen a river that looks like chocolate milk after a storm, you already know that they are also major movers of sediment. And the natural flow of sediment has important functions in a river system. It transports nutrients throughout the watershed. It creates habitat in riverbeds for fish, amphibians, mammals, reptiles, birds, and a whole host of invertebrates. It fertilizes floodplains, stabilizes river banks, and creates deltas and beaches on the coastline that buffer against waves and storms. Robbing the supply of sediment from a river can completely alter the ecosystem downstream from a dam. But if a river is more than just a water carrier, a reservoir is more than just a water collector. And, of course, I built a model to show how this works.

This is my acrylic flume. If you’re familiar with the channel, you’ve probably seen it in action before. I have it tilted up so we get two types of flow. On the right, we have a stream of fast-moving water to simulate a river, and on the left, I’ve built up a little dam. These stoplogs raise the level of the water, slowing it down to a gentle crawl. And there’s some mica powder in the water, so you can really see the difference in velocity. Now let’s add some sediment. I bought these bags of colored sand, and I’m just going to dump them in the sump where my pump is recirculating this water through the flume. And watch what happens in the time lapse.

The swift flow of the river carries the sand downstream, but as soon as it transitions into the slow flow of the reservoir, it starts to fall out of suspension. It’s a messy process at first. The sand kind of goes all over the place. But slowly, you can see it start to form a delta right where the river meets the reservoir. Of course, the river speeds up as it climbs over the delta, so the next batch of sediment doesn’t fall out until it’s on the downstream end. And each batch of sand that I dump into the pump just adds to it. The mass of sediment just slowly fills the reservoir, marching toward the dam.

This looks super cool. In fact, I thought it was such a nice representation that I worked with an illustrator to help me make a print of it. We’re only going to print a limited run of these, so there's a link to the store down below if you want to pick one up. But, even though it looks cool, I want to be clear that it’s not a good thing. Some dams are built intentionally to hold sediment back, but in the vast majority of cases, this is an unwanted side effect of impounding water within a river valley. For most reservoirs, the whole point is to store water - for controlling floods, generating electricity, drinking, irrigation, cooling power plants, etc. So, as sediment displaces more and more of the reservoir volume, the value that reservoir provides goes down. And that’s not the only problem it causes. Making reservoirs shallower limits their use for recreation by reducing the navigable areas and fostering more unwanted algal blooms. Silt and sand can clog up gates and outlets to the structure and damage equipment like turbines. Sediment can even add forces to a dam that might not have been anticipated during design. Dirt is heavier than water. Let me prove that to you real quick. It’s a hard enough job to build massive structures that can hold back water, and sediment only adds to the difficulty.
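If you want to put rough numbers on that, here's a quick sketch comparing the horizontal pressure on a dam face at depth from water alone versus water plus submerged sediment. The unit weights and the lateral pressure coefficient are typical textbook values I've assumed for illustration, not figures for any particular dam:

```python
# Rough comparison of horizontal pressure on a dam face at depth,
# from water alone vs. water plus submerged sediment.
# Unit weights and the lateral pressure coefficient are assumed
# typical values, for illustration only.

GAMMA_W   = 9.81e3   # unit weight of water, N/m^3
GAMMA_SAT = 18.0e3   # saturated unit weight of silty sediment, N/m^3 (assumed)
K0        = 0.5      # at-rest lateral earth pressure coefficient (assumed)

def water_pressure(depth):
    """Hydrostatic pressure (Pa) at a given depth (m)."""
    return GAMMA_W * depth

def silted_pressure(depth):
    """Horizontal pressure (Pa) once sediment fills to that depth:
    full water pressure plus the lateral share of the soil's buoyant weight."""
    return GAMMA_W * depth + K0 * (GAMMA_SAT - GAMMA_W) * depth

d = 20.0  # depth in meters
print(water_pressure(d) / 1e3)   # ~196 kPa from water alone
print(silted_pressure(d) / 1e3)  # ~278 kPa, over 40% higher
```

Even with conservative assumptions, a silted-up face pushes on the dam substantially harder than water alone, which is why designers have to account for it.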

But I think the biggest challenge of this issue is that it’s inevitable, right? There are no natural rivers or streams that don’t carry some sediments along with them. The magnitude does vary by location. The world’s a big place, and for better or worse, we’ve built a lot of dams across rivers. There are a lot of factors that affect how quickly this truly becomes an issue at a reservoir, mostly things that influence water-driven erosion on the land upstream. Soil type is a big one; sandy soils erode faster than silts and clays (that’s why I used sand in the model). Land use is another big one. Vegetated areas like forests and grasslands hold onto their soil better than agricultural land or areas affected by wildfires. But in nearly all cases, without intervention, every reservoir will eventually fill up.

Of course, that’s not good, but I don’t think there’s a lot of appreciation outside of a small community of industry professionals and activists for just how bad it is. Dams are among the most capital-intensive projects that we humans build. We literally pour billions of dollars into them, sometimes just for individual projects. This is kind of its own can of worms, but I’m just speaking generally: society often accepts pretty significant downsides on top of the monetary costs - like environmental impacts and the risk of failure to downstream people and property - in return for the enormous benefits dams can provide. And sedimentation is one of those problems that happens over a lifetime, so it’s easy at the beginning of a project to push it off to the next generation to fix. Well, the heyday of dam construction was roughly the 1930s through the 70s. So here we are starting to reckon with it, while being more dependent than ever on those dams. And there aren’t a lot of easy answers.

To some extent, we consider sediment during design. Modern dams are built to withstand the forces, and the reservoir usually has what’s called a “dead pool,” basically a volume that is set aside for sediment from the beginning. Low-level gates sit above the dead pool so they don’t get clogged. But that’s not so much a solution as a temporary accommodation since THIS kind of deadpool doesn’t live forever.

I think for most, the simplest idea is this: if there’s dirt in the lake, just take it out. Dredging soil is really not that complicated. We’ve been doing it for basically all of human history. And in some cases, it really is the only feasible solution. You can put an excavator on a barge, or a crane with a clamshell bucket, and just dig. Suction dredgers do it like an enormous vacuum cleaner, pumping the slurry to a barge or onto shore. But that word feasible is the key. The whole secret of building a dam across a valley is that you only have to move and place a comparatively small amount of material to get a lot of storage. Depending on the topography and design, every unit of volume of earth or concrete that makes up the dam itself might result in hundreds to tens of thousands of times that volume of storage in the reservoir. But for dredging, it’s one-to-one. For every cubic meter of storage you want back, you have to remove a cubic meter of soil from the reservoir. At that point, it’s just hard for the benefits to outweigh the costs. There’s a reason we don’t usually dig enormous holes to store large volumes of water. I mean, there are a lot of reasons, but the biggest one is just cost. Those 5 million tons of sediment that flow into Lewis and Clark Reservoir would fill around 200,000 end-dump semi-trailers. That’s every year, and it’s assuming you dry it out first, which, by the way, is another challenge of dredging: the spoils aren’t like regular soil.
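That truckload figure is easy to sanity-check. The roughly 25-ton payload per trailer is a typical legal-limit figure I've assumed; it isn't a number from the project:

```python
# Sanity-checking the truckload figure: 5 million tons of sediment per
# year, hauled in end-dump semi-trailers. The ~25-ton payload is an
# assumed typical legal-limit figure, not a project number.

annual_sediment_tons = 5_000_000
payload_per_trailer_tons = 25  # assumed typical end-dump payload

trailers_per_year = annual_sediment_tons / payload_per_trailer_tons
print(f"{trailers_per_year:,.0f} trailer loads per year")  # 200,000
print(f"{trailers_per_year / 365:.0f} loads per day")      # ~548
```

That's a truck leaving the reservoir roughly every three minutes, around the clock, forever - and only if the spoils are already dried out.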

For one, they’re wet. That water adds volume to the spoils, meaning you have more material to haul away or dispose of. It also makes the spoils difficult to handle and move around. There are a lot of ways to dry them out or “dewater” them, as the pros say. One of the most common is to pump spoils into geotubes, large fabric bags that hold the soil inside while letting the water slowly flow out. But it’s still extra work. And for two, sometimes sediments can be contaminated with materials that have washed off the land upstream. In that case, they require special handling and disposal. Many countries have pretty strict environmental rules about dredging and disposal of spoils, so you can see how it really isn’t a simple solution to sedimentation, and in most cases, it just isn’t worth the cost.

Another option for getting rid of sediment is just letting it flow through the dam. This is ideal because, as I mentioned before, sediment serves a lot of important functions in a river system. If you can let it continue on its journey downstream, in many ways, you’ve solved two problems in one, and there are a lot of ways to do this. Some dams have a low-level outlet that consistently releases the turbid water that reaches the dam. But if you remember back to the model, not all of it does. In fact, in most cases, the majority of sediment deposits farthest from the dam, and most of it doesn’t reach the dam until the reservoir is pretty much full. Of course, my model doesn’t tell the whole story; it’s basically a 2D example with only one type of soil. As with all sediment transport phenomena, things are always changing. In fact, I decided to leave the model running with a time-lapse just to see what would happen. You can really get a sense of how dynamic this process can be. Again, it’s a very cool demonstration. But in most cases, much of the sediment that deposits in a reservoir is pretty much going to stay where it falls or take years and years before it reaches the dam.

So, another option is to flush the reservoir. Just open the gates wide to get the water moving fast enough to loosen and scour the sediment, resuspending it so it can move downstream. I tried this in the model, and it worked pretty well. But again, this is just a 2D representation. In a real reservoir that has width, flushing usually just creates a narrow channel, leaving most of the sediment in place. And, inevitably, this requires drawing down the reservoir, essentially wasting all the water. And more importantly than that, it sends a massive plume of sediment-laden water downstream. I’ve harped on the fact that we want sediment downstream of dams and that’s where it naturally belongs, but you can overdo it. Sediment can be considered a pollutant, and in fact, it’s regulated in the US as one. That’s why you see silt fences around construction sites. So the challenge of releasing sediment from a dam is to match the rate and quantity to what it would be if the dam wasn’t there. And that’s a very tough thing to do because of how variable those rates can be, because sediment doesn’t flow the same in a reservoir as it would in a river, because of the constraints it puts on operations (like the need to draw reservoirs down), and because of the complicated regulatory environment surrounding the release of sediments into natural waterways.

The third major option for dealing with the problem is just reducing the amount of sediment that makes it to a reservoir in the first place. There are some innovations in capturing sediment upstream, like bedload interceptors that sit in streams and remove sediment over time. You can fight fire with fire by building check dams to trap sediment, but then you’ve just solved reservoir sedimentation by creating reservoir sedimentation. As I mentioned, those sediment loads depend a lot not only on the soil types in the watershed, but also on the land use or cover. Soil conservation is a huge field, and has played a big role in how we manage land in the US since the Dust Bowl of the 1930s. We have a whole government agency dedicated to the problem and a litany of strategies that reduce erosion, and many other countries have similar resources. A lot of those strategies involve maintaining good vegetation, preventing wildfires, good agricultural practices, and reforestation. But you have to consider the scale. Watersheds for major reservoirs can be huge. Lewis and Clark Reservoir’s catchment is about 16,000 square miles (41,000 square kilometers). That’s larger than all of Maryland! Management of an area that size is a complicated endeavor, especially considering that you have to do it over a long duration. So in many cases, there’s only so much you can do to keep sediment at bay.

And really, that’s just an overview. I used Lewis and Clark Reservoir as an example, but like I said, this problem extends to essentially every on-channel reservoir across the globe. And the scope of the problem has created a huge variety of solutions I could spend hours talking about. And I think that’s encouraging. Even though most of the solutions aren’t easy, it doesn’t mean we can’t have infrastructure that’s sustainable over the long term, and the engineering lessons learned from past shortsightedness have given us a lot of new tools to make the best use of our existing infrastructure in the future.

An Engineer’s Love Letter to Cable-Stayed Bridges

February 14, 2025 by Wesley Crump

[Note that this article is a transcript of the video embedded above.]

I’m Grady, and this is Practical Engineering. You know, every once in a while, all the science, technology, economic factors, and stylistic tastes converge into a singular, beautiful idea of absolute perfection. Am I being hyperbolic? I don’t think so. Destin’s got laminar flow. Grey thinks hexagons are the bestagons. Matt loves the number 3, for whatever reason. Vi prefers 6. Alec loves the refrigeration cycle. I am not going to mince words here; they’re just wrong. I’m not trying to say that cable-stayed bridges are the best kind of bridge. I’m saying they’re the best, period. So, on this day dedicated to the people and things we love, let me tell you why I adore cable-stayed bridges.

Spanning a gap is a hard thing to do, in general - to provide support with nothing underneath. Even kids recognize there’s some inherent mystery and intrigue to the idea. Almost all bridges rely, to some extent, on girders - beams running along their length - to gather structural forces from the deck and move them to the supports. This action results in bending, known as moments to engineers, and those moments create internal stress. Too much stress and the material fails. You can increase the size of the beam to reduce the stress, but that creates more weight that creates a higher moment that results in more stress, and you’re back to where you started. For any material you choose as a girder, there is a practical limit on span, because the self-weight of the beam grows faster than its ability to withstand the internal stress that weight causes.
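You can actually put rough numbers on that limit. For a simply supported rectangular beam carrying nothing but its own weight, the bending stress works out to sigma = 3 * rho * g * L^2 / (4 * d): the width cancels entirely, because a wider beam gains weight in exact proportion to its added strength. Here's a quick sketch; the material values are typical figures I've assumed for illustration:

```python
import math

# How self-weight caps the span of a simple beam: for a rectangular
# section of depth d under its own weight alone, the peak bending
# stress is sigma = 3 * rho * g * L^2 / (4 * d), independent of width.
# Material numbers below are typical, assumed values.

def max_self_weight_span(sigma_allow, unit_weight, depth):
    """Longest simply supported span (m) before self-weight alone
    reaches the allowable bending stress.
    sigma_allow: allowable stress (Pa), unit_weight: rho*g (N/m^3),
    depth: beam depth (m)."""
    return math.sqrt(4 * sigma_allow * depth / (3 * unit_weight))

# Steel: ~77 kN/m^3, allow ~165 MPa. Plain concrete: ~24 kN/m^3,
# maybe ~3 MPa in flexure. Both assumed, order-of-magnitude figures.
print(max_self_weight_span(165e6, 77e3, depth=1.0))  # ~53 m for steel
print(max_self_weight_span(3e6, 24e3, depth=1.0))    # ~13 m for plain concrete
```

Even before any traffic load, a one-meter-deep beam runs out of span in tens of meters, which is why long crossings need trusses, arches, or cables rather than bigger girders.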

The easiest way to deal with a moment that might stress a beam too much is to simply support it from below; build another column or pier there. And in old-fashioned viaducts, this is precisely what you’ll see. But there are a lot of places we want to cross where it’s just not that simple. Putting piers in areas where the water is deep or the soil is crummy can be cost-prohibitive. And sometimes, we just don’t want more supports to ruin the view. Fortunately, “push” has an opposite. Cables can be used to pull a bridge upward toward tall towers, supporting the deck from above.

There was a time when a suspension bridge was practically the only way to cross a long span. Huge main cables drape across the towers, and suspenders attach them to the deck below. You get that continuous support, reducing the demand on the girders and allowing for a much lighter, more efficient structure. But you get some other stuff too. All those forces transfer to the cables and to the tops of the towers. But the cables don’t just pull on the towers vertically. There’s some horizontal pulling too, and I’m sure you know what happens when you put a horizontal force at the top of something very tall. So the cables have to continue to the other side, balancing the lateral component. And that’s just kicking the force-can down the road; ultimately the forces have to go SOMEWHERE. In most suspension bridges, it’s the anchorage - a usually enormous concrete behemoth that attaches the main cables to the ground. The anchorages on the Golden Gate Bridge weigh 60,000 tons each.

Compare that to a cable-stayed span. Get rid of the main cables and just run the suspenders - now called stays - diagonally straight to the tower. You have balanced horizontal forces on the tower without the need for a massive anchorage that can be expensive or, in places with poor soils, completely infeasible. Instead, those horizontal forces transfer into the bridge deck and girders, but because they’re balanced, there’s no net horizontal force on the deck either. Of course, with traffic and wind loads, you can get slight imbalances in forces, but those can be taken care of with the stiffness of the tower and the anchor piers at the end of each backspan, which are much simpler than massive anchorages.
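The statics of a single pair of matched stays shows how this balancing act works. Here's a minimal sketch with illustrative numbers (the deck load and stay angles are my assumptions, not from any real bridge):

```python
import math

# Statics of one pair of matched stays on either side of a tower:
# each stay carries a share W of deck weight. The vertical component
# holds the deck up; the horizontal component compresses the deck
# toward the tower. Numbers are illustrative only.

def stay_forces(W, angle_deg):
    """Return (stay tension, horizontal deck compression) in N for a
    deck load W (N) hung from a stay at angle_deg above horizontal."""
    theta = math.radians(angle_deg)
    T = W / math.sin(theta)   # tension along the stay
    H = W / math.tan(theta)   # horizontal component pushed into the deck
    return T, H

W = 2.0e6  # 2 MN of deck load per stay, assumed for illustration
for angle in (25, 45, 65):
    T, H = stay_forces(W, angle)
    print(f"{angle} deg: tension {T/1e6:.2f} MN, deck compression {H/1e6:.2f} MN")

# With an identical stay on the opposite side of the tower, the two
# horizontal pulls on the tower cancel - no massive ground anchorage needed.
```

Notice that shallower stays carry higher tension and push harder on the deck, which is part of why stay angles and tower heights get chosen the way they do.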

I should note that some suspension bridges do this too. So-called self-anchored suspension bridges also put the deck in compression in lieu of anchorages. In that case, the entire bridge deck has to withstand the full compression force from the main cables attached at its ends. In a cable-stayed bridge, the maximum compressive force in the deck is localized near the towers and diminishes as you get further from them, allowing you to be more efficient with materials.

This tension management also means cable-stayed bridges work well in multi-span arrangements. Consider the Western side of the Bay Bridge, an admittedly impressive multi-span bridge connecting traffic from San Francisco to Oakland. This is two suspension spans connected to one another, but look what’s in between them. This manmade mountain of a concrete anchorage is an unavoidable cost of this kind of construction.

Compare that to the sleek multi-span wonder of the French Millau (MEE-oh) Viaduct with eight spans, six of which are longer than a thousand feet or three hundred meters. While there certainly is a significant volume of concrete in the viaduct, it’s all in the deck and seven elegant pylons. No hulking anchorages to be seen; just gently curving spans above the French countryside. It also happens to be the tallest bridge in the world, with its tallest pylon surpassing the Eiffel Tower! If that doesn’t make your heart flutter, nothing will.

And speaking of flutter, suspension bridges have another downside. You’ve probably seen this video before. Gravity loads aren’t the only forces for long-span bridges to withstand. The lightness of a suspension bridge is actually a disadvantage when it comes to the wind. Because of the droopy, parabolic shape of the main cables, suspension bridges are susceptible to relatively small forces causing outsized deflections of the structure. This is true laterally. But it’s also true for vertical forces. Since the main cables reach very shallow angles, even horizontal in the center of the span, huge tensions are required just to withstand moderate vertical loads, and those tensions come with large deflections as the cables straighten. Put another way, it’s a lot easier to straighten a sagging cable than to stretch one that’s taut. For a cable-stayed bridge, they’re already straight. There’s very little sag in the stays, so any deflections require the actual steel to stretch along its length. That makes cable-stayed bridges generally much stiffer than suspension bridges, giving them aerodynamic stability and allowing the decks to be lighter.
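There's even a standard way to quantify that sag effect: Ernst's equivalent-modulus formula, an approximation that treats a sagging cable as a straight one with a softened modulus. Here's a sketch with illustrative cable numbers (my assumptions, not from any particular bridge):

```python
# Ernst's equivalent-modulus formula, a standard approximation for the
# effective axial stiffness of a sagging stay cable:
#   E_eq = E / (1 + (w * Lh)^2 * A * E / (12 * T^3))
# where w is cable weight per meter, Lh its horizontal projection,
# A its cross-sectional area, and T its tension. Numbers are illustrative.

STEEL_E = 195e9               # modulus of cable steel, Pa (typical)
STEEL_WEIGHT = 7850 * 9.81    # weight density of steel, N/m^3

def ernst_modulus(T, area, horiz_len, E=STEEL_E):
    """Effective modulus (Pa) of a sagging cable at tension T (N)."""
    w = STEEL_WEIGHT * area   # self-weight per meter of cable, N/m
    sag_term = (w * horiz_len) ** 2 * area * E / (12 * T ** 3)
    return E / (1 + sag_term)

A, Lh = 0.01, 200.0  # a 100 cm^2 cable over a 200 m horizontal run (assumed)

slack = ernst_modulus(T=1e6, area=A, horiz_len=Lh)  # lightly tensioned
taut  = ernst_modulus(T=5e6, area=A, horiz_len=Lh)  # heavily tensioned

print(slack / 1e9)  # ~40 GPa: sag eats most of the axial stiffness
print(taut / 1e9)   # ~189 GPa: nearly the full steel modulus
```

The cubed tension in the denominator is the whole story: keep the stays taut and they behave almost like solid steel bars, while a droopy cable responds to load mostly by straightening rather than stretching.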

The thing about a bridge is that you can design pretty much anything on paper, or in CAD, but at some point, it has to be built. You have to get the structure into place above the area it spans, and that can be a tricky thing. Consider an arch bridge. That arch can’t do its arch thing until it’s a continuous structural member. Before that, forces have to be diverted through some other temporary structure or falsework, usually something underneath. For one, that requires engineers to design, essentially, several different versions of the same bridge, where (in some cases) the construction loads actually govern the size and shape of members rather than the final configuration. For two, if building extra vertical supports was easy, then we would just design the bridge that way in the first place.

Check out this timelapse of the construction of the I-11 bridge over the Colorado River downstream of the Hoover Dam. If you look carefully, you can see that before the arch is complete, it is supported by cable stays! And this is where you see the huge advantage that cable-stayed bridges have: constructability. The flow of forces during construction is the same as when the bridge is complete. But it’s not just that; the construction itself also is much simpler.

Look at a conventionally anchored suspension bridge. You have to build the towers and anchorages first. Only when they’re complete can you hang the main cables. That’s a process in itself. Main cables are too heavy and unwieldy to be prefabricated and hoisted across the span, so they are generally built in place, wire by wire, in a process called spinning. Then you have to attach the suspenders, and only then can you start building the road deck. It’s an intricate process where each major step can’t start until the one before it is totally finished. Self-anchored suspension bridges are even more complicated, because you have to have the entire deck built before the cable can be anchored, but you have to have the cable to suspend the deck. It’s a chicken and egg problem that you have to solve with temporary supports.

None of this is true with cable-stayed bridges. You can have your chicken and egg, and eat it too! You start with the pylons, and then as you build out the bridge deck, you add cable stays along the way, slowly cantilevering out from the towers. Since they’re usually symmetrical, the forces balance out the whole time. The loading is the same during construction and after, and there’s no need for falsework or temporary supports, dramatically lowering the cost to build them. On some bridges, work on the deck can even begin before the tower is finished, speeding up the construction timeline and reducing costs even more. This constructability also creates a positive feedback loop with contractors and manufacturers. As the popularity of cable-stayed bridges has exploded since the second half of the twentieth century, more and more contractors have recent and relevant experience, and more and more manufacturers can produce the necessary materials, reducing the costs even further and making them more and more likely to be chosen for new projects.

But once you put up a bridge, you also have to keep it up. Maintenance is another place cable-stayed bridges shine. Besides the stays themselves, most of their parts are easily accessible for inspection. Most structures don’t rely heavily on coatings to protect the steel, so you don’t have to contract with specialized, high-access professionals for maintenance. And just using more concrete instead of steel means fewer problems with corrosion. With more rigidity, you get less fatigue on materials. And they’re redundant. Suspension bridges rely on their two massive main cables for all their structural support. You can’t take one cable out of service for repair or replacement without very complicated structural retrofits. With cable-stayed bridges, it’s no problem. The stays are designed to be highly redundant, so if one breaks or needs to be replaced, the remaining cables can still effectively support the bridge's load. And each cable can be tensioned individually, so the structure can be “tuned” to match the design requirements just like a piano, and adjusted later if needed.

You might be looking at all these examples and thinking, this is kind of obvious. But there are a lot of reasons why cable-stayed bridges only started becoming popular in the last few decades. Part of that is in the field of engineering itself. Where the deck, tower, and main cables of a suspension bridge behave fairly independently, a cable-stayed structure is much more interdependent. Each stay is tensioned independently, meaning you have lots of different forces on the deck and towers that depend on each other, and they have to be calculated for each loading condition. Solving for all the forces in the bridge is a complicated task to do by hand, so it took the advent of modern structural analysis software before engineers could gain enough confidence in designs to push the envelope.

And that brings me to a deeper point about structural elements resisting forces. Cable-stayed bridges just make such efficient use of materials, many of which have existed for centuries, but have been refined and improved over time. A lot of engineering sometimes feels like designing around the weaknesses of various materials, but cable-stayed bridges take full advantage of materials’ strengths. We put the towers and deck in compression and make them out of high-strength concrete, a material that loves compressive stress. We put the stays in tension and make them out of high-strength steel. They love tension. We’ve slowly gained confidence in the innovations that make these bridges possible, like parallel wire strands, concrete-to-cable anchoring systems, segmental construction, and prestressed concrete. And all these gradual improvements in various aspects of construction and material science added up to create the pinnacle of engineering technology.

You want to know the other reason why cable-stayed bridges are becoming more popular? It’s taste. Bridges are highly visible structures. They are tremendous investments of public resources, and the public has a say in how they look. I hate to even say the word out loud, but oftentimes, there are architects involved in their design. The swooping shapes of suspension structures were in vogue during the heyday of long-span bridge design, but no more!

One of the huge benefits of cable-stayed bridges is that they’re flexible. Not structurally flexible, of course, but architecturally. Most bridges do follow a few rules of thumb - the tower height is usually about a fifth of the main span length, and the side spans about two fifths of the main span. However, the number of variations on the theme is practically endless. Let me show you some examples.
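Those rules of thumb boil down to simple arithmetic. Here’s a quick sketch (my own illustration, not a design procedure) of the starting proportions for a given main span:

```python
def cable_stayed_proportions(main_span_m: float) -> dict:
    """Rule-of-thumb starting dimensions for a cable-stayed bridge.

    Tower height above deck is roughly one fifth of the main span,
    and each side span roughly two fifths. Real designs vary widely.
    """
    side_span = 2.0 * main_span_m / 5.0
    return {
        "tower_height_m": main_span_m / 5.0,
        "side_span_m": side_span,
        "total_length_m": main_span_m + 2 * side_span,
    }

# A 500 m main span suggests ~100 m towers and ~200 m side spans.
print(cable_stayed_proportions(500))
```

These are only starting points: site constraints, foundation conditions, and aesthetics routinely push real bridges well away from the textbook ratios.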

For short spans, you’ll typically see single cable planes. Each of the masts of the Millau viaduct has a single cable plane, connecting the cables along a central line of the bridge deck. Go a little bigger and you’ll see double cable planes. This is the Russky Bridge in Russia, the current world record holder with a main span of 1,100 meters or 3,600 feet. The two cable planes give the structure extra stiffness. Double planes can be parallel like you see in the Øresund bridge in Denmark. Or, cable planes can be inclined towards one another, like in the Charilaos Trikoupis bridge in Greece. They can use the radial or “fan” style, where the stays originate from the pylons near a single point at the top, like the Pasco Kennewick bridge. Or they can use the harp style, where the stays are more or less parallel. Lots of structures use a style somewhere between the two.

If the pylons get tall enough, they might get connected by a cross member, giving H pylons. Continuing in the alphabetical trend, another option is A-frames with inclined cable planes. If an A-frame gets too tall, though, you end up requiring two foundations per pylon, which can quickly get pricey or just too challenging to construct. In that case, tuck the legs back in towards each other, and you’ve got stunning diamond frames.

You might see asymmetrical designs like Malaysia’s famous Seri Wawasan bridge or Spain’s Puente del Alamillo. You’ve got São Paulo’s Octávio Frias de Oliveira Bridge with its iconic X-shaped pylon holding two curved roadways, each with double cable planes inclined and crossing each other. Even my home state of Texas boasts some impressive cable-stayed bridges. Corpus Christi’s Harbor Bridge will be finished soon, now that they got the construction issues worked out. Houston has the double diamond-framed Fred Hartman bridge. And Dallas has the iconic Margaret Hunt Hill Bridge with its high arched single pylon gracefully twisting its single cable plane through the third dimension.

You can see how these simple structural principles work together to allow architects to really get creative while still allowing the engineers and contractors to bring it into reality. I mean, just look at this. There’s nothing extraneous. Nothing extravagant. This is the highest form of utility meets beauty. Have you ever seen something like this?

I hope you can see why we’re in the heyday of cable-stayed bridge construction. This is my opinion, and maybe I’m a little bit biased, but I don’t think there’s a better example in history where all the various factors of a technical problem converged into a singular solution in this way. Many consider the Strömsund Bridge in Sweden, completed in 1956, to be the first modern cable-stayed bridge. But it’s only been over the past three or four decades that things really took off. Now, there are more than 15 cable-stayed bridges with main spans greater than 800 meters or 2,600 feet, not including the Gordie Howe Bridge, which will soon be the longest cable-stayed bridge in North America.

Even the famously hard-hearted US Federal Highway Administration declared their affection for the design, stating, “Today, cable-stayed bridges have firmly established their unrivaled position as the most efficient and cost-effective structural form in the 150-m to 460-m span range.” And that range is only growing.

We humans built a lot of long bridges in the 20th century, and a lot of them are reaching the end of their design lives. I can tell you what kind of bridge most of them are going to be replaced with. And I can tell you that any time a new bridge that needs a span less than 1000 meters or 3,300 feet goes into the alternatives analysis phase, it’s going to get harder and harder not to choose a cable-stayed structure. They’re structurally efficient, cost-effective, easy to build, easy to take care of, and easy to love. The very longest spans in the world are still suspension bridges, but I would argue: we don’t really need to connect such long distances anyway. Doctors don’t tell you this, but engineers don’t actually have heartstrings; they have pre-fabricated parallel wire heart strands, and nothing tugs on them quite like a cable-stayed bridge. Happy Valentine's Day!

February 14, 2025 /Wesley Crump

What’s Inside a Manhole?

February 04, 2025 by Wesley Crump

[Note that this article is a transcript of the video embedded above.]

For as straightforward as they are, there’s a lot of mystery to sewers. They’re mostly out of sight, out of mind, and ideally out of smell too. But there’s one familiar place you can get a hint of what’s happening below your feet, and that’s the manhole. Sanitary engineers know that there’s actually a lot of complexity in this humble hallmark of our least-glorified type of infrastructure. So, I set up a see-through sewer system so you can see what’s happening inside. I’m Grady, and this is Practical Engineering.

There are a lot of kinds of manholes. If it’s a utility of any kind and you put it underground, there’s a good chance you’ll need some access to it at some point in time. But I figure if you picture a manhole in your head, it probably leads to a sewer system: either a sanitary sewer that connects to your drains and toilets, a storm sewer that connects to storm drains, or a combined system that carries it all. Unlike what you see in movies, most sewer systems aren’t huge tunnels full of totally tubular turtles and their giant rodent mentor. They’re mostly just simple pipes sized according to the amount of flow expected during peak periods.

Sewer networks have a tributary structure. Gravity carries waste along downward-sloping pipes that converge and concentrate into larger and larger lines, like a big underground river system…but grosser. Terminology varies from place to place, but in general, it goes like this. Pipes that service individual buildings are usually called laterals, and those servicing particular streets are branches. Larger pipes that collect wastewater from multiple branches are called mains or trunk sewers. And the most significant lines furthest downstream in the system are usually called interceptors. And connecting each one is a manhole.
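That tributary structure is easy to picture as a tree. Here’s a toy sketch - the network layout and flow values are purely illustrative, not from any real system - where the flow each pipe must carry is just the sum of everything upstream of it:

```python
# Toy model of a sewer's tributary structure: flows from laterals
# accumulate through branches and mains into an interceptor.
# All names and flow values are made up for illustration.
network = {
    "interceptor": ["main_A", "main_B"],
    "main_A": ["branch_1", "branch_2"],
    "main_B": ["branch_3"],
    "branch_1": ["lateral_1", "lateral_2"],
    "branch_2": ["lateral_3"],
    "branch_3": ["lateral_4", "lateral_5"],
}
lateral_flow = {f"lateral_{i}": 0.002 for i in range(1, 6)}  # m^3/s each

def total_flow(pipe):
    """Flow a pipe carries = its own inflow plus everything upstream."""
    return lateral_flow.get(pipe, 0.0) + sum(
        total_flow(up) for up in network.get(pipe, [])
    )

print(total_flow("interceptor"))  # all five laterals combined
```

Real sewer sizing layers on peaking factors, infiltration allowances, and hydraulic checks, but the accumulation logic downstream is the same.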

This is my model sewer system. I’m just pumping water into an upper manhole and letting it flow through the system by gravity. I chose to do this with nice blue water for anyone watching while having lunch. In real life, the color in a sewer isn’t quite this nice. Unlike regular plumbing, where you use “fittings” to connect lengths of pipe together, sewer lines are connected with manholes. Any change in size, direction, alignment, or elevation is a place where debris can get caught or turbulence can affect the flow. So instead of elbows or tees in the pipe, we just put a manhole. In fact, unlike many underground utilities, you can usually trace the paths of a sewer network pretty easily, because it’s all straight lines between manholes. They provide a controlled environment where the flow can change direction, and more importantly, a place where technicians can get inside to inspect the lines, remove clogs, or perform maintenance (hence the name).

Unlike fresh water distribution systems that can usually go a long time without any intervention, sewers are a little… more hands-on (just make sure you wash your hands afterwards). There’s just no end to the type of things that can find their way into the pipes. Fibrous objects are particularly prone to causing clogs, which is why so many sewer utilities have campaigns encouraging people not to flush wipes, even if they say “flushable” on the package. Fats, oil, and grease (or FOG, in the industry) are also a major problem because they can congeal and harden into blockages sometimes not-so-lovingly known as “fatbergs”. Of course, a lot of people aren’t aware of what’s safe to flush or wash down the drain, and even for people who know, it’s easy to let something slide when it’s not your problem anymore. And in most cases, the rules aren’t very strictly enforced outside of large commercial and industrial users of the system. But if you use a sewage system, in a way, obstructions really ARE your problem, because the portion of your taxes or fees that pays for the sewer system goes toward sending people - not always men (despite the name) - into manholes to keep things flowing. And the more often things clog up, the higher the rates that everyone pays to cover the cost of maintenance.

There’s quite a lot of sophistication in keeping sewers in service these days. It’s not unusual for a city or sewer district to regularly send cameras through the lines for inspection. Technology has made it a lot easier to be proactive. In fact, there’s a whole field of engineering called infrastructure asset management that just focuses on keeping track of physical assets like sewer lines, monitoring their condition, and planning ahead for repairs, maintenance, and replacement over the long term. A lot of the unclogging and cleaning these days is done by hydro jetting: basically a pressure washer scaled up. Rotating nozzles blast away debris and propel the hose down the line. In fact, one of the benefits of manholes is that, if a sewer line does need maintenance, it can be easily taken out of service. You can just run a bypass pump from one manhole to another and keep the system running. But maintenance isn’t the only thing a manhole helps with.

You can see a few more things in this demo. For one, manholes provide ventilation. Along with the solids and liquids you expect, gases can end up flowing through sewer pipes too. You can see the bubbles moving through the system. Air bubbles can restrict the flow of fluid in a pipe, and air pressure can cause wacky problems like backflow. Along with regular air, toxic, corrosive, or even explosive gases can also build up in a sewer if there’s no source of fresh air, so ventilation from manholes is an important aspect of the system. Sometimes you’ll even notice condensed water vapor flowing up from a manhole cover. In a few cities, like New York, that might be related to an actual steam distribution system running underground. But it can also happen in sewers when the wastewater from sources like showers and dishwashers is warmer than the outside air, especially in the winter.

I added a third manhole to my model so you can see how a junction might look. It just provides a nice way to bring two streams together into one pipe, which is an important job in a sewer system, since a “sewershed” all has to flow to one place. The manhole acts kind of like a buffer, smoothing out flows through the system. At normal flows, that’s not a super important job. It’s basically just a connection between two pipes. But the peak flows for most sewers, even if they’re not storm sewers, happen during storms. Drains may be improperly connected to sanitary sewers, plus surface water often finds a way in through manhole covers and other means. In fact, a lot of places require sealed and bolted covers if the top of the manhole is below the floodplain. That’s why you sometimes see these air vents sticking up out of the ground. Many older cities use combined systems where stormwater runs in the same pipes. So rainwater in sewers can be a major challenge. And you can see when you get a big surge of water, the manhole can store some of it, smoothing out the flow downstream.
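That surge-smoothing behavior is just a mass balance: outflow is capped at the downstream pipe’s capacity, and whatever doesn’t fit temporarily raises the water level in the manhole and drains out later. A minimal sketch, with all numbers purely illustrative:

```python
def route_surge(inflow, pipe_capacity, storage_area_m2=1.0, dt=1.0):
    """Toy mass-balance routing through a manhole.

    Outflow each step is limited by the downstream pipe's capacity;
    excess inflow raises the stored level, which drains in later steps.
    """
    level = 0.0  # water level in the manhole, meters
    outflows = []
    for q_in in inflow:
        # Maximum we could release: this step's inflow plus all storage.
        q_out = min(pipe_capacity, q_in + level * storage_area_m2 / dt)
        level += (q_in - q_out) * dt / storage_area_m2
        outflows.append(q_out)
    return outflows

surge = [0.1, 0.1, 0.9, 0.9, 0.1, 0.1, 0.1]  # inflow, m^3/s
# The 0.9 m^3/s peak gets clipped to the 0.5 m^3/s pipe capacity,
# and the stored excess drains over the following steps.
print(route_surge(surge, pipe_capacity=0.5))
```

Real hydraulic models do this with actual geometry and head losses, of course, but the storage-and-release idea is the same one the demo shows.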

These storm flows are actually a pretty big problem in the built environment. You may have heard about the trouble with holding swimming events in the Seine River in Paris during the 2024 Summer Olympics. Same problem. Wastewater treatment plants can only handle so much flow, so many places have to divert wastewater during storms, often just discharging raw, if somewhat diluted, sewage directly into rivers or streams. In fact, some of the most impressive feats of engineering in progress right now are ways to store excess wastewater during storms so it can be processed through a treatment plant at a more manageable rate. But overflows can also happen way upstream of a treatment plant if the pipes are too small. Sometimes that storage available through manholes isn’t enough. I can plug up the pipe in my demo to simulate this. If the sewer lines themselves can’t handle the flow, you can get wastewater flowing backwards in pipes, and if things get bad enough, you can get releases out of the tops of manholes. And of course, this doesn’t have to be the result of a storm. Even a blockage or clog in the line can cause wastewater to back up like this. Obviously, having raw sewage spilling to the surface is not optimal, and many cities in the US pay millions of dollars in fines and settlements to the EPA for the contamination caused by backups.

Another thing this model shows is that not all pipes have to come in at the bottom. They call this a drop manhole when one of the inlets is a lot higher than the outlet. The slope of a sewer line is pretty important. I’ve covered that topic in another video. There’s a minimum slope to get good flow, but you don’t want too much slope either. Wastewater often carries rocks and grit, so if it gets going too quickly, it can wear away or otherwise damage the pipes. So if you’re running a line along a steep slope, sometimes it’s a better design to let some of that fall happen in a manhole, rather than along the pipe. It’s not normally done the way my model shows, with the pipe just jutting in. You usually don’t want a lot of splashing and turbulence in a manhole, again to avoid damage, but also to avoid smells. So most drop manholes use pipes or other structures to gently transition inlet flow down to the bottom.
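For the curious, that slope check is usually done with Manning’s equation. Here’s a sketch for a circular pipe flowing full in SI units; the roughness coefficient and velocity targets are typical textbook values, not figures from this article:

```python
import math

def full_pipe_velocity(diameter_m, slope, n=0.013):
    """Manning's equation (SI) for a circular pipe flowing full:
    V = (1/n) * R^(2/3) * S^(1/2), with hydraulic radius R = D/4.
    n = 0.013 is a typical roughness for concrete sewer pipe."""
    r = diameter_m / 4.0
    return (1.0 / n) * r ** (2.0 / 3.0) * math.sqrt(slope)

# Common rule-of-thumb targets (illustrative): keep velocity above
# ~0.6 m/s so solids don't settle, but below ~3 m/s to limit abrasion.
v = full_pipe_velocity(0.3, slope=0.005)  # 300 mm pipe at 0.5% slope
print(f"{v:.2f} m/s")
```

Plugging in a steeper slope raises the velocity with the square root of the grade, which is exactly why long runs down a hillside sometimes get broken up with drop manholes instead.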

I hope it’s clear how useful manholes are by now. Doing it this way - by making the plumbing junctions into access points - just provides a lot of flexibility, while also kind of standardizing the system so anyone involved, whether it's a contractor building one or a crew doing maintenance, kind of knows what to expect. In fact, if you live in a big city, there’s a good chance that the sewer authority has standardized drawings and details for manholes so they don’t have to be reinvented for each new project. In many cases, they’re just precast concrete cylinders placed into the bottom of an excavation. Those cylinders sit on temporary risers, and then concrete is used to place the bottom, often with rounded channels to smooth the transition into and out of the manhole. I did a video series on the construction of a sewage lift station and showed how a few of these are built if you want to check that out after this and learn more.

Constructing manholes reminds me of that famous interview riddle about why manhole covers are round. There are a lot of good answers: a round object can’t fall down into the hole, it can be replaced in any orientation, it’s easy to roll so workers don’t have to lift the entire weight to move it out of the way. A professor of mine had an answer that I don’t think I’ve heard before. Manhole covers are round because manholes are round. It’s almost like asking why Pringles lids are round. And manholes are round for a lot of good reasons: it’s the best shape for resisting horizontal soil loads. It’s easier to manufacture a round shape than a rectangular one. For those reasons, manholes are usually made of pipes, and pipes are round because it’s the most efficient hydraulic section. It’s one of those questions, like the airplane on a treadmill, that can spawn unending online debate. But I like pipes, so that’s my favorite answer.

February 04, 2025 /Wesley Crump

Why are the Dutch So Famous for Waterworks?

January 21, 2025 by Wesley Crump

[Note that this article is a transcript of the video embedded above.]

This is the Veluwemeer Aqueduct in Harderwijk, Netherlands. It solves a pretty simple problem. If you put a bridge for vehicles over a navigable waterway, you often have to make it either very high up with long approaches so the boats can pass underneath or make it moveable, which is both complicated and interrupts the flow of traffic, wet and dry. But if you put the cars below the water, both streams of traffic can flow uninterrupted with a fairly modest bridge. Elevated aqueducts aren’t that unusual, but this one is just so striking to see, I think, because it looks just like a regular highway bridge, except…the opposite.

When I was a little kid, I read this book, The Hole in the Dike, about a Dutch boy who plugged a leak with his finger to save his town from a flood. And ever since then, as this little kid grew up into a civil engineer with a career working on dams and hydraulic structures, I’ve been kind of constantly exposed to this idea that the Netherlands is this magical country full of fascinating feats of civil engineering, like Willy Wonka’s chocolate factory but for infrastructure. I’m not necessarily proud to say this, but I think it’s true for a lot of people (especially here in the US) that my primary cultural touchpoint with the Netherlands is just that they’re really good at dealing with water. You know, you don’t have to browse the internet for very long to find viral (and sometimes dubious) posts about Dutch infrastructure projects. Sometimes, it feels like half of my comment section on YouTube is just people telling me that the Dutch do it better.

I’m naturally skeptical of things that seem larger-than-life, especially when it comes to engineering. And without context, I think it’s hard to separate myth from facts (this TikTok video being a myth, by the way). Here’s the actual scale of a cruise ship compared to the aqueduct. So let’s take a look at a few of these projects and find out if the Dutch really have the rest of the world outclassed when it comes to waterworks. And I’ll do my best to pronounce the Dutch words right too. Ik ben Grady, en dit is Practical Engineering.

The first hint that the Dutch really do lead the world in water infrastructure is in the name of the country itself: The Netherlands translates literally to the lowlands, and that’s a pretty good description. A large portion of the country sits on the delta of three major rivers - the Rhine, the Meuse/Maas, and the Scheldt - that drain a big part of central Europe into the North Sea. Those rivers branch and meander through the delta, forming a maze of waterways, islands, inlets, and estuaries along the coast. About a quarter of the country sits below sea level, which creates a big challenge because it’s right next to the sea!

As early as the Iron Age, settlers were involved in managing water. Large areas of marshland were drained with canals and ditches to convert them into land that could be used for agriculture. These plots of land, which, through human intervention, were hydrologically separated from the landscape, became known as polders. And the tradition of their engineering would continue for centuries to the present day. Unfortunately, that marshland, being full of organic material, decomposed over time. That, combined with the drainage of groundwater, caused the polders to sink and subside, increasing their vulnerability to floods.

And that is kind of the heart of it. The Netherlands is a really valuable and strategic area for a lot of reasons: it’s flat; it has great access to the sea and major rivers providing for fishing and trade; it has prime conditions for farming and pastures, making it the second largest exporter of agricultural products in the world. The problem is that all those factors come with the downside of making the country extremely susceptible to floods, both from the North Sea and the major rivers that flow into it. So for basically all of its history, people were building dikes, embankments of compacted soil meant to keep water out of low-lying areas. Over the centuries, huge portions of the sinuous Dutch coastline became lined with dikes, and the individual polders were often ringed with dikes as well to keep the interior areas dry.

Of course, you still get rain inside a polder, plus irrigation runoff and sometimes groundwater, so they have to be continuously pumped out. And before the widespread use of electric motors and combustion engines, the Dutch used the source of power they’re famous for: the wind. Windmills - or more accurately windpumps, since they weren’t milling anything in this context - could be used to turn paddle wheels or Archimedes screws to move water up and over dikes, keeping canals and ditches within the polders from overflowing. Over time, having poldered dry-ish land for centuries, the Dutch realized they could use exactly the same technique to reclaim land from lakes. Typically land reclamation is done by using fill - soil and rock brought in from elsewhere to raise the area above the water. But it’s not the only way to do it, and it’s not that useful if you want to use that area for agriculture since the good soil is under the fill. Another option is to enclose an area below the water level, and then just get rid of the water. In this way, you can create arable land just for the cost of a dike and a pump. If you love cheese, you might be interested to learn that one of the first polders in the Netherlands reclaimed from a lake was Beemster. The soil of the ancient marsh provides a unique flavor of the famous Beemster cheese.

One glaring issue with reclaiming land by drawing down the water instead of building up is that the low-lying polders are still vulnerable to floods. In 1916, a huge storm in the North Sea coincided with high flows in several rivers, flooding the Zuiderzee, a large, shallow bay between North Holland and Friesland. The flood broke through several of the dikes, leading to catastrophic damage and casualties. Although the idea had been in discussion for years, the event provided the impetus for what would become one of the grandest hydraulic engineering projects in the world.

One of the major issues with the Zuiderzee flooding from a surge in the level of the North Sea is the sheer length of the coastline that has to be protected. Building adequately large and strong dikes to protect it all would be prohibitively expensive and just plain unrealistic. So Dutch engineers devised a deceptively simple solution: just shorten the coastline. If the effective coast of the Zuiderzee could be substantially shorter, resources could go a lot further toward protecting the area against floods. So that’s just what they did.

Between the late 1920s and early 1930s, a 20-mile (or 32-kilometer) dam and causeway called the Afsluitdijk was built across the Zuiderzee, cutting it off from the North Sea. Construction spread outward from four points, the coast on either side, and two small artificial islands built specifically for the project. The original dam was built from stones, sand, glacial till, stabilizing “mattresses” of brushwood, and thousands upon thousands of hand-laid cobblestones.

Cutting off the Zuiderzee from the ocean turned it into a large and ultimately freshwater lake called the IJsselmeer, named for the river that empties into it. But that inflow is an engineering challenge. Without a way for it to reach the sea, the lake would just overflow. So, these sluices are like gigantic outflow valves that allow the excess freshwater constantly building up in the IJsselmeer to be discharged into the sea, as it would have been back when it was still the Zuiderzee. The sluices, which are titanic hydraulic engineering structures themselves, typically use gravity to drain water during low tide. When that passive discharge isn’t enough, new high-volume pumps can be used to make sure the level of the IJsselmeer stays within the ideal range.

Over the last few years, the Afsluitdijk has undergone a major facelift. With sea levels rising and the frequency of extreme weather events rising with them, the Dutch have completed a major overhaul, raising the crest of the dam by about 2 meters and adding thousands of huge concrete blocks to break waves and strengthen the structure. The larger blocks that are always in contact with the sea are truly gigantic - over 70,000 of them weighing six and a half metric tons EACH!

The project also included upgrades to the lock complexes and sluices. And the highway that runs along the top is also getting upgrades (including, in true Dutch fashion, the bike lanes too). And human passage isn’t the only consideration for the project either. The Fish Migration River will allow fish to swim between the North Sea and the Ijsselmeer and river ecosystems upstream. The stark contrast between freshwater and saltwater is hazardous to fish, so the migration river spreads out the salinity gradient into something more manageable. It’s like a fish ladder, but on top of having an elevation gradient, it also is a ramp of saltiness.

With the shallow Zuiderzee protected from the North Sea, the Netherlands saw an opportunity to increase its food supply by creating new land. Over the middle decades of the 20th century, the Dutch built four gigantic polders in areas that were once the seafloor. These polders were built using the same principles as before, just with scaled-up 20th-century technology. There are even examples of our old friends, Archimedes screws being used, albeit with modern electric motors. Wieringermeer and Noordoostpolder were built first, but the Dutch faced a problem. With such large areas of land dried up, the groundwater in adjacent areas flowed out and into the polders, causing subsidence and loss of freshwater needed for agriculture. The following polders, a pair of adjacent tracts called Eastern and Southern Flevoland, avoided this by retaining a small series of connected lakes. These bordering lakes keep the polders hydrologically isolated from the mainland, and this is also where you’ll find the Veluwemeer aqueduct. The later three polders became Flevoland, a totally new province of dry land reclaimed from the sea. A succession of carefully selected crops were grown to rehabilitate the salty soil, making it fertile enough to farm. All you need to do to see how well it worked is look at these aerial photos of all the farmland in Flevoland!

There were plans for a fifth polder called the Markerwaard, and a huge dike was actually constructed for it. Hangups ranging from the German occupation of the Netherlands in the Second World War to later environmental concerns stopped the polder from being completed. The dike did create another freshwater reservoir, the Markermeer, and only recently, an artificial archipelago called the Marker Wadden was built as a nature conservation project and host to migratory birds, fish, and ecotourists alike.

Even as the Zuiderzee Works protected parts of the Netherlands, many parts of the country were still facing threats from flooding. In the winter of 1953, an enormous storm in the North Sea raised a major storm surge, crashing into the delta, causing floodwaters to overwhelm much of the already existing and extensive flood control structures of the Netherlands. A staggering 9% of all of the farmland in the whole country was flooded, 187,000 farm animals drowned, nearly 50,000 buildings were damaged or destroyed, and over 1,800 people perished. It was one of the worst disasters in the history of the country.

Just as with the Zuiderzee, the extraordinary length of the coastline in this area meant that adequately strengthening all the dikes in response to the storm wasn’t feasible. So an incredibly intricate plan called the Deltawerken, or Delta Works, was put into motion to effectively shorten the coastline with a series of 14 major engineering projects, including dams, dikes, locks, sluices, and more. Unlike with the Zuiderzee Works, fully enclosing the area and cutting off the sea wasn’t an option. Firstly, the Rhine and the Meuse/Maas have gigantic flows; the Rhine is one of the largest rivers in Europe, and it can’t just be walled off. There were also concerns about environmental impacts and about ensuring the easy movement of the huge amount of shipping that uses this waterway. So many of these structures have to be functionally non-existent until they’re needed. The resulting projects, along with the Zuiderzee Works, have shortened the Dutch coast by more than half since the 19th century. These feats are so impressive that they’re on the American Society of Civil Engineers’ list of wonders of the modern world. And it’s easy to see why when you take a look.

This is the Oosterscheldekering, the largest of all the Delta Works. It was initially designed to be a closed dam, similar in some ways to the Afsluitdijk. If constructed as initially conceived, it would have created another large freshwater lake. But by the time it was under construction in the 1970s, environmental impacts were much better understood than they were in the 20s and 30s. So the dam was redesigned to include huge sluice gates that allow massive tidal flows during normal conditions while retaining the ability to fully close off the inland portion of the delta from the sea during storms.

The Oosterscheldekering comprises two artificial islands and three storm surge barrier dams connecting them. The larger of the islands also contains a lock, allowing for ships to pass through. The floodgates are staggering in scale; there are 62 steel doors, each 138 feet (or 42 meters) wide and weighing up to 480 metric tons! Even the piers between them were a monumental effort. They were built offsite, maneuvered into place with custom-built ships, then filled with sand and rock to sink them into place. Special ships also had to compact the seabed with vibration before placing the pillars.

Another notable structure in the Delta Works is the Stormvloedkering Hollandse IJssel, a storm surge barrier protecting Europe’s largest seaport. The project has it all: a lock to allow for the passage of ships, a bridge for road traffic with a fixed truss and a moveable bascule portion crossing the lock, and two gigantic, moveable storm surge barriers crossing the main sluice. Each of these barriers is strengthened by a truss arch which makes them look like sideways bridges when viewed from above.

And then, there’s the Maeslantkering. This is probably the most impressive storm surge barrier on the planet. Those TikToks showing out-of-scale cruise ships crossing the Veluwemeer should have just shown actual gigantic ships cruising through the huge ship canal safeguarded by the Maeslantkering. It’s hard to communicate the scale of the two gates; they’re considered among the largest moving structures on earth. And moving them is a process. The gates normally sit in dry docks. When it’s time to close them, the dry docks are flooded, and the hollow gates float into place. Then they’re pivoted around gigantic ball-and-socket joints at the ends of the truss arms. Each door is 690 feet (or 210 meters) wide, and once in place, the gates are flooded with water so they sink to the bottom, completely blocking even the fiercest storm surge. In the event that the doors remain closed long enough for the flow of the Rhine to build up dangerously high on the inland side, they can be partially floated, allowing the excess river water to run out to sea.

Since its completion in 1997, aside from annual testing, the Maeslantkering has only been closed twice: once in 2007 and again in 2023. And to me, that tells the story of Dutch waterworks more than anything else. It’s all a huge exercise in cost-benefit analysis. Imagine two alternate realities: one where the Delta Works weren’t built and one where they were, and then just compare the costs. In one case, the costs are human lives, property damage, agricultural losses from saltwater, and all the disaster relief efforts associated with, so far at least, just two big storms. In the other case, the costs are those of designing, building, and maintaining an infrastructure program that rivals anything else on the globe. The question is simple: which one costs more? In many other places in the world, the answer would probably be the Delta Works. The capital cost alone was around $13 billion, and that doesn’t include the operation and maintenance, or the environmental impacts, of such massive projects. But in the Netherlands, where a quarter of the country sits below sea level, it’s a fraction of the cost of inaction.

In the United States, most flood control projects are designed to protect against up to the 1-in-100 probability storm. In other words, in a given year, there’s a 99% chance that a storm of that magnitude doesn’t happen. In the Netherlands, those levels of protection are much higher. River structures are designed for anywhere from the 1-in-250 all the way to the 1-in-1,250 event, and flood protection from the North Sea goes up to the 1-in-10,000-year event. It only makes sense because practically the entire country is a floodplain; massive investment in protection from flooding is the only way to exist. And those projects come with other costs too. The Zuiderzee Works cost the entire area’s fishing industry their livelihoods, and some consider converting such a large estuary into a freshwater lake one of the country’s greatest ecological disasters.
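Those return periods are easier to compare when you convert them into the odds of seeing at least one design-exceeding storm over a planning horizon. Here’s a minimal sketch, assuming each year’s flood risk is independent (the standard simplification behind return-period math):

```python
def chance_of_exceedance(return_period_years: float, horizon_years: int) -> float:
    """Probability of at least one storm that meets or exceeds the design
    event over a planning horizon, assuming independent years."""
    annual_prob = 1.0 / return_period_years
    return 1.0 - (1.0 - annual_prob) ** horizon_years

# A US-style "1-in-100" design over a 30-year mortgage:
print(f"{chance_of_exceedance(100, 30):.0%}")     # prints 26%
# A Dutch 1-in-10,000 sea defense over the same 30 years:
print(f"{chance_of_exceedance(10_000, 30):.2%}")  # prints 0.30%
```

The asymmetry is the point: over a typical homeowner’s horizon, a 1-in-100 standard means roughly a one-in-four chance of seeing a bigger storm, while the Dutch coastal standard keeps that chance well under one percent.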

So there are no easy answers, and the Netherlands’ battle against the sea will never really be over. Major waterworks are just the reality of the country, and the Dutch keep evolving their methods. One example is the Room for the River program, which is restoring the natural floodplain along rivers in the delta. Another is the sand engine, an innovative beach nourishment project that relies on natural shoreline processes to distribute sand along the coast. The Dutch government expects the North Sea to rise 1 to 2 meters (or 3 to 7 feet) by the end of this century, meaning they’ll have to spend upwards of 150 billion dollars just to maintain the current level of protection.

That sounds like a staggering cost, and it is, but consider this: that investment in protection for a major part of the country over three-quarters of a century is approximately equal to the economic impact of Hurricane Katrina, a single storm event in the US. Of course, the damage during Katrina was amplified by engineering errors, and we’re far from comparing apples-to-apples, but I think it’s helpful to look at the scale of things. Decisions of this magnitude are difficult to make, and even harder to execute, because we can’t visit those alternate realities to see how they play out. But what we can do is look at the past to see how decisions have played out historically, and there’s no place on Earth with a longer history of major public water projects than the Netherlands. In fact, the US Army Corps of Engineers and the Dutch government agency in charge of water, the Rijkswaterstaat, have had a memorandum of agreement since 2004 to share technical information and resources about water control projects. And in the aftermath of Hurricane Katrina, the Army Corps consulted with the Rijkswaterstaat to help decide how to rebuild New Orleans’s flood defense system.

In 2021, those systems were put to the test when the region was pummeled by Hurricane Ida. It was an extremely powerful storm, and the torrential rains and violent winds did enormous damage. But the storm surge was repelled by the levees, barriers, and floodgates built with the assistance of Dutch waterworks engineers. Many signs point to storms getting stronger and surges getting higher, which means that practically the whole world is in an uphill battle with floods. So we all benefit from that relatively small country with its low-lying delta lands, buttressed against the sea, and the expertise and knowledge gained by Dutch engineers through the centuries.

January 21, 2025 /Wesley Crump

The Hidden Engineering Behind Texas's Top Tourist Attraction

January 07, 2025 by Wesley Crump


I am on location in downtown San Antonio, Texas, where crews have just finished setting up this massive 650-ton crane. The counterweights are on. The outriggers are down. And the jib, an extension for the crane's telescoping boom, is being rigged up. This is the famous San Antonio River Walk, a city park below street level that winds around the downtown district. It’s one of the biggest tourist attractions in the state, connecting shops, restaurants, theaters, and Spanish missions (the most famous of them being the Alamo). Every year, millions of people come to see the sights, learn some history, and maybe even take a tour boat on the water. It’s easy to enjoy the scenery without considering how it all works. But, how many rivers do you know that stay at an ideal, constant level, just below the banks year-round? One of the critical structures that make it all possible is due for some new gates, and it’s going to be a pretty interesting challenge to replace them without draining the whole river in the process. I’ve partnered up with the City of San Antonio and the San Antonio River Authority to document the entire process so you can see behind the scenes of one of my favorite places. I’m Grady, and this is Practical Engineering.

After a catastrophic flood in 1921 took more than 50 lives in San Antonio, the city took drastic measures to try and protect the downtown area from future storms. Back when my first book came out, I took a little tour of some of those measures, one historical - Olmos Dam - and one more modern - the flood diversion tunnel that runs below the city. But another of those projects eventually turned into one of San Antonio’s crown jewels. A major bend in the river, right in the heart of downtown, was cut off, creating a more direct path for floodwaters to drain out. But rather than fill in the old meander, the city decided to keep it, recognizing its value as a park. Gates were installed at both connections, allowing the bend to be isolated from the rest of the river. Later, a dam was built downstream on the San Antonio River with two floodgates. During normal flows, these floodgates control the level upstream on the river, maintaining a constant elevation for the Great Bend and the cutoff. If a flood comes, the gates at the bend can be shut to maintain a constant level there, while the dam’s floodgates are opened to let the floodwaters pass downstream.

Essentially, this pair of floodgates is a pivotal part of the San Antonio River Walk. The gates hold back flow during sunny weather to keep water levels up, and they lower to release water during storms to keep downtown from being flooded. They were installed way back in 1983 and were already planned for replacement. Then this happened. One of the floodgates’ gearboxes had a nut whose threads had worn down and eventually stripped out. That caused one side of the gate to drop, damaging several components and rendering the floodgate inoperable. The City of San Antonio immediately installed stop logs upstream of the gate to block the flow and prevent the water level in the River Walk from dropping. But the gate is still unable to lower in the event of a flood, halving the capacity of this important dam. So the city sprang into action to design replacements for these old gates. It’s been a long road finding a modern solution that fits within this existing structure. But it’s finally time to remove the old gates and bring this dam into the 21st century.

There’s a lot of work to do before the broken gate can come out. The first job is just to get the water out. This dam has a place for stop logs both upstream and downstream of each gate. Historically, they’d be wood, hence the name, but modern stop logs are heavy steel beams that stack together to create a relatively watertight bulkhead on either side. Those stop logs have been installed since the gate went out of service, and while they hold back a whole lot, they aren’t completely watertight. Inevitably, some water gets through and fills up the space between them, making it challenging to work there. The contractor has brought in a large diesel pump and perched it on the bank next to the broken gate. They get it running, and it’s not long at all before the area between the upstream and downstream stop logs is dry enough to work in.

The first thing to go is the drive shaft between the two gate operator gearboxes. When these gates are functioning, this shaft delivers power to the opposite side of the gate and keeps both sides raising or lowering at the same rate. But now it’s just in the way and needs to come out. It is disconnected, and the crane lowers it to the ground. The next piece is the support beam between the two operators. Same as before: it is detached by the crew, rigged to the crane, and lifted away from the dam. It’s flown across the site to the staging area and set down. All this equipment will eventually be hauled away and recycled for scrap.

It might be obvious, but even though it’s broken, this gate is still attached to the rest of the dam: at the bottom with hinges, and at the top with the two stems that would raise and lower the leaf when it was working. Before the crew can detach the gate, it will need some additional support. The crane lowers its hook, and the crew wraps two massive chain slings around the gate. Then the crane cables up to support the gate while it gets detached.

It’s not easy doing big projects like this in the downtown core of a major city. The River Authority has had to lease the parking lot next door for a place to put the crane and other equipment. There are strict rules about when they can work to make sure the project doesn’t cause too much disturbance to all the neighbors. And, this is part of the River Walk, which means it's a heavily trafficked pedestrian route. The contractor has to set up barricades during work hours and then take them down at the end of each day. They also have safety spotters who make sure there are no wayward pedestrians or workers within the swing of the crane during heavy lifts.

If you’ve worked on a device or turned a wrench, you’ve probably been faced with a stuck bolt before. But what do you do if the bolt is as big around as your arm? Pretty much the same thing you’d do at a smaller scale. Apply some penetrating oil… Beat it with a hammer… Use a cheater bar on the wrench… Bring out a hydraulic press… And then you just decide to cut the whole thing off. This gate’s being scrapped anyway so there’s no use treating it with kid gloves. The crew gets out the oxyacetylene torch to cut the ears off the top. First one. And then the other.

Next come the hinge pins that connect the gate at the bottom. A few come out pretty easily. A few take a little extra effort. With a chain hoist pulling, the hydraulic toe jack pushing, and a little percussive persuasion, the crew eventually gets them all out.

Just cutting and hammering and pushing and pulling all the connections this gate has to the dam is an entire day’s work. These are big, heavy items in awkward positions, so each time they move, disconnect, or lift something out of the work area, they have to do it thoughtfully and carefully to ensure it's done safely. By the end of the day, the gate is finally free, but the crew decides to set it down and wait until tomorrow for the critical operation of lifting it out.

The next morning, it’s time for the big lift. The chain slings are re-secured around the gate, and the crane reaches over the trees and river to slowly remove it from the dam. It’s a big moment, so the whole crew gathers around to watch. Safety spotters coordinate with the crane operator to pull the gate free from the dam, then hoist it up and over. Safety personnel make sure no one wanders into the area, but just in case, a horn sounds whenever the load is over the sidewalk. Eventually, the gate makes it to the staging area in the parking lot - on dry ground for the first time in 40 years. It did its job admirably - it was a great gate - but it’s easy to see from its condition that it was definitely time for retirement.

With the gate out, a boom lift is lowered into the area to help remove some of the remaining pieces. Most of the day is spent cutting and removing pieces of the gate and attachment hardware. At this point, the area will mostly sit idle while the new gate is being fabricated. But there’s more work to do in the meantime.

Another part of this project is the nearby pump room. The flows in the San Antonio River often drop to a mere trickle, and this is something the city designed for when these gates were installed back in the 80s. With these gates keeping the water up at a constant level, the River Walk works kind of like a bathtub; it takes a big volume of water to fill up the channel that snakes around downtown. But, if water leaves the River Walk faster than it can be replenished, that level will drop, kind of like trying to fill a bathtub without stopping up the drain. So this dam was designed with a pump to lift water from downstream into the channel above if needed.
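The bathtub analogy is easy to put numbers on. Here’s a toy water balance, where every figure is an assumption chosen for illustration, not an actual River Walk measurement:

```python
# Toy water balance for a channel held at a constant level. All numbers
# below are made-up illustrative values, not real River Walk figures.
SURFACE_AREA_M2 = 60_000   # assumed water surface area of the channel
inflow_m3_s = 0.5          # assumed river inflow replenishing the channel
outflow_m3_s = 1.5         # assumed leakage plus releases past the dam

# When outflow exceeds inflow, the deficit drains the "bathtub" and the
# level drops at a rate set by the channel's surface area.
net_m3_s = inflow_m3_s - outflow_m3_s
drop_m_per_hr = -net_m3_s * 3600 / SURFACE_AREA_M2
print(f"Level falls about {drop_m_per_hr:.2f} m per hour")  # prints 0.06
```

Even a modest imbalance adds up over a day, which is why the dam and the recirculation system have to keep inflow and outflow matched so precisely.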

This is a screw pump, one of the oldest and simplest hydraulic machines, sometimes called an Archimedes screw. A motor turns a steel cylinder with a screw inside. As the screw rotates, water is lifted upwards until it spills out at the top. In this case, it falls into a flume that flows out to the river above the dam. It’s ingenious in its simplicity, and apparently it worked great when it was first installed. But not long afterward, San Antonio built its landmark flood control tunnel that allows floodwaters to bypass downtown. It’s an incredible project in its own right, and it included the means to recirculate water in the San Antonio River from downstream to upstream. That keeps the river flowing during dry times, maintaining the level in the River Walk downtown and rendering the old screw pump obsolete. So it never got turned on again and has been sitting here unused for many years.

This new project is going to repurpose the area to create a bypass for the two gates. It will add a bit more capacity, but more importantly, it will help create some circulation in the stagnant area downstream of the dam. Still water allows sediment to build up, collects debris, and grows algae and mosquitoes. With the screw pump not running, this area just doesn’t quite see enough water movement, so the bypass will allow it to be easily flushed out when needed. But first, the screw pump has to come out. This is the same story as the gate: oxyacetylene torches and hammers. Piece by piece, the pump is cut away and hauled off as scrap.

With the pump out, the room gets some modifications. Some concrete is taken out… And new concrete is installed to create a chute for the water. And then it gets its own new gate to control the flow. Luckily this small pump room has an overhead crane, because getting this gate into place was a tight fit.

Back outside, crews start working on the retrofits to the dam to get ready for the new gate. Unlike the electric motors used for the old gates, the new ones will use hydraulics. These piers that flank the gates have to be modified to fit the new system. The tops of the piers get some careful demolition to accommodate the hydraulic cylinders. And the hinges from the old gate still need to be removed. This area will also have some concrete modifications so the new gate fits perfectly in the old slot.

Nearly a year after the old gate was cut out, the new gate finally arrives on site. It sounds like a long time, but this project was specifically scheduled around the fabrication of these gates. They aren’t just parts you can pick up at the local hardware store. A lot of design, construction, testing, and finishing touches went into each one. And they’re so big, they have to be delivered in two parts. Today’s job is to connect them into a single gate. The halves get a layer of sealant to prevent leaks, and then a whole bunch of bolts to attach them together.

And finally, this gate is ready to install. You know I love crane day. And it’s even better when there’s a small crane to assemble the big crane. This 650-ton capacity monster is configured with a luffing jib to reach out over the trees and water. But the first step is to get the gate off the stands. It has to be lifted horizontally from these saw horses, but it will be installed vertically. So the gate is rigged for the first lift, moved to the ground, and then rerigged for the main event.

I’m a sucker for heavy lifts, so this was a pretty fun thing to see in person. It’s incredible how much work and setup went into a milestone that took less than an hour to complete. It’s the civil engineering equivalent of a rocket launch. The crane swings the gate up and over the trees and down to the dam. As it gets closer, the movements are slower and more deliberate. Each time the crane moves, the crew waits for the massive gate to stabilize before calling for the next step. They carefully move it into position, and when everything is lined up just right, it sits down on the base plates, ready to be connected.

While it’s held by the crane, the crews begin installing the bolts that attach the gate to the concrete. This is allowed by safety regulations, but only under a set of rigid guidelines, so safety is at the top of everyone’s mind. A detailed lift plan, a pre-work safety briefing, and several spotters make sure that there are no wrong moves. These bolts are torqued to the specifications one by one, on both the upstream and downstream side of the gate. And once it’s firmly attached, the crane lowers it to the ground.

The next day, the beam across the top of the piers and the hydraulic cylinders are flown into place. These cylinders will lift and lower the gate, working against the immense water pressure pushing on the upstream face. They attach to these beefy hinge points on the side of each gate. The cylinders are connected to a new hydraulic power unit installed in the pump room. This unit has the valves, pressure regulators, pump, and oil reservoir to make these gates operate more efficiently and reliably than the old electric motors did. Everything is operated from the City’s tower that overlooks the dam. From here, operators can control all the city’s flood infrastructure, including the dams and gates on the river and the flood bypass tunnels that run below ground. And I have to say, it’s a pretty nice view from the top. In fact, some of the timelapse clips I’ve shown are from a camera mounted on top of this structure. That camera is run by the US Geological Survey, and I’ll put a link below where you can go check out the dam in real time.

Once everything is hooked up, it’s time to test this gate out. Unfortunately, you can’t schedule a flood. Since there are just ordinary flows at the moment, the crews have to be careful not to drain the entire River Walk while they do it. The gate gets lowered just a bit to make sure nothing is binding and that the hydraulic system is working. Of course, it’s a big day to see it all working for the first time, so everyone involved in the project is on-site to see it happen. And the test went flawlessly. But it’s not the end of the project.

These stop logs were installed in early 2021, and it’s finally time to pull them out nearly four years later. You can see they grew some nice foliage during their service. This process requires a professional diver to rig each one for the crane. It’s just one of the many steps made much more complicated because this structure still has to serve its purpose during the entirety of the project, and more importantly, the River Walk can’t be drained. The stop logs get lifted out of the slots. Then they’re moved directly next door to get ready for the next gate.

I didn’t document as much of the second gate’s installation, because it was pretty much identical to the first one, although it went a lot faster since the new gate had already been fabricated.

The area was pumped out, the old gate removed, and the new one lifted into place. And pretty soon this old dam had two new gates, plus a bypass, ready to serve the city for the next several decades. If you visited the River Walk during construction, you wouldn’t have even known it was happening, and that was the entire goal of the project: revitalize a critical part of the city’s flood control infrastructure without causing any negative impacts on one of its crown jewels. And being on site to see it happen in real time was a lot of fun.

I have to give a huge thanks to the City of San Antonio, the San Antonio River Authority, the engineer, Freese and Nichols, the general contractor, Guido, and all their subcontractors for inviting me to be a part of this project and document it for you. It was a pretty incredible experience, and I hope it gives you some new appreciation for all the thought, care, and engineering that goes into making our cities run.

January 07, 2025 /Wesley Crump

The Hidden Engineering of Wildlife Crossings

December 17, 2024 by Wesley Crump


This is the Wallis Annenberg Wildlife Crossing under construction over the 101 just outside Los Angeles, California. When it’s finished in a few years, it will be the largest wildlife crossing of its kind on the planet. The bridge is 210 feet (64 meters) long and 174 feet (53 meters) wide, roughly the same breadth as the ten-lane superhighway it crosses. Needless to say, a crossing like this isn’t cheap. The project is estimated to cost about $92 million; it’s a major infrastructure project on par with similar investments in highway work. And it’s not the only example. The Federal Highway Administration recently set aside $350 million in federal funds for projects like this. The reasons we’re willing to invest so much into wildlife crossings aren’t as obvious as you might think, and there are some really interesting technical challenges when you’re designing infrastructure for animals. I’m Grady, and this is Practical Engineering.

Roads fundamentally change the environments they cross through. And while on its face, it might seem that it’s always a disaster for wildlife, there are actually some winners amongst the losers. For vultures, crows, coyotes, raccoons, insects, and other decomposers, roads provide a buffet for nature’s scavengers. And they sometimes make for pretty good housing too, at least if you’re a swallow or a bat. In fact, cliff swallows are now so famous for nesting on the underside of highway overpasses that they’re often referred to as bridge swallows. The sides of highways have clear zones kept free from trees and similar obstacles for vehicle safety, but the lack of shade allows tender greens to thrive, creating a salad bar for species from monarch butterfly caterpillars to white-tailed deer.

Of course, especially in the case of deer, this can attract animals into spending time eating dinner in danger. And the truth is that roads mostly range from a mild inconvenience to totally catastrophic for wildlife. In the battle between the two, wildlife usually loses, and in more ways than just getting squished. The ecological impacts of roads extend beyond the guardrails. Habitat loss and fragmentation, noise pollution, runoff, and of course, injecting humans into otherwise wild places are all elements of the environmental challenges caused by roads. It’s actually a pretty complicated subject, and there are even road ecologists whose entire careers are dedicated to the problem. And it’s not just wildlife that’s affected.

According to the Federal Highway Administration, there are over 1,000,000 wildlife-vehicle collisions annually on US roadways. They result in tens of thousands of injuries, about 200 human fatalities, and over 8 billion dollars of damages per year. Even if you haven’t personally been involved in a collision like this, there’s a good chance that you know somebody who has. And along with the astronomical numbers reported by the FHWA, it’s likely that a huge portion of wildlife collisions go unreported. There are lots of cases that just don’t get counted, like if an animal is too small to notice, or if it survives the impact and escapes, or is collected by somebody practicing the dubious art of roadkill cuisine (yes, that’s a real thing, and there are multiple cookbooks out there for it).

There’s a wide range of consequences from animal collisions, from minor vehicle damage to human fatalities. When you average them out, researchers estimate that in 2021, the average cost of hitting a deer was $9,100. Of course, the bigger the animal, the bigger the economic loss. For a moose, that number is over $40,000 per collision. Regardless of how you might feel about environmental issues and wildlife, the economic impacts alone can justify the sometimes enormous costs required to let them safely cross our roadways.
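Those per-collision figures line up with the headline damage estimate. Here’s a back-of-the-envelope check, where the share of collisions involving deer is an assumption I’ve made purely for illustration:

```python
# Sanity check on the figures in the text: ~1,000,000 reported
# wildlife-vehicle collisions per year, and an average cost of about
# $9,100 per deer collision (2021 estimate).
collisions_per_year = 1_000_000
cost_per_deer_collision = 9_100   # dollars
deer_share = 0.9                  # assumption: most counted collisions are deer

annual_cost = collisions_per_year * deer_share * cost_per_deer_collision
print(f"~${annual_cost / 1e9:.1f} billion per year")  # prints ~$8.2 billion
```

That lands right around the "over 8 billion dollars" figure, which is reassuring: the big number isn’t driven by rare moose strikes, just by the sheer volume of ordinary deer collisions.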

Luckily for the animal and human populations alike, there’s been increasing interest in reducing the negative impacts roads have on wildlife over the past few decades. I’m no stranger to infrastructure built for animals. It is fairly unusual for fish to get hit by cars, but they have their own manmade barriers to overcome, and I released a series of videos on fish passage facilities for dams you can check out after this if you want to learn more. Like aquatic species, there is a lot of engineering involved in getting terrestrial animals across a barrier. But fortunately, a lot of that research and guidance has been summarized in a detailed manual. I may not be a road ecologist, but I am an engineer, and I love a good Federal Highway Administration handbook!

One of the most important decisions about building a wildlife crossing is where to put one. You might imagine that the busiest roads are where most of the collisions occur. And it’s true up to a point. As the number of cars on a road increases, the percentage of wildlife crossing attempts that end in a safe critter on the other side drops, and the fraction that are killed grows. But, if we keep increasing the daily traffic numbers, something unexpected happens: the number of “killed” animals declines! Eagle-eyed viewers may realize that so far, this graph is incomplete; these percentages don’t add up to 100%. That’s because there’s a third category: “repelled” animals. As highway traffic increases, you reach a point where the vehicles form a kind of moving fence, and all but the most brazen bucks will turn away.

Road ecologists sometimes struggle to drum up support for wildlife crossings at high-traffic freeways (like the Annenberg crossing in LA) because of this effect. For some people, if they don’t see actual road kills on the shoulder, they struggle to accept the greater impact on wildlife populations. Habitat fragmentation caused by roads can be difficult for any species, but it’s especially hard-hitting for migratory species who HAVE to cross in order to survive and reproduce. For example, following the opening of I-84 in Idaho, biologists recorded the starvation of hundreds of mule deer mired in the snow, unable to cross to food sources.

And it’s not quite as simple as the graph makes it seem. A study by Sandra Jacobsen breaks down animals into four categories of crossing style. Some animals, like frogs, are non-responders who cross roads as if they aren’t there at all. Their wild instincts compel these animals to cross without regard for their own safety, and they’re often too small for most motorists to notice.

Next, you have the pausers, like turtles. These creatures, when spooked on the road or elsewhere, instinctively hunker down and stay put. While the shell of a box turtle might be impenetrable to a curious coyote, it is, sadly, no match for a box truck. Then you’ve got avoiders. This group often includes the most intelligent members of the local fauna. Grizzly bears, cougars, and other carnivores often fall into this category. For them, even low-traffic rural backroads can cause significant issues with habitat fragmentation, leading to poor genetic diversity. The small gene pool of a number of southern California cougars is one of the major drivers of the construction of the Annenberg bridge. Deer fall into the last category, speeders. As the name implies, these are fast, alert animals who, given the chance, will burst across a road to get to the other side.

But even these categories have their exceptions. The poster-cat of the US-101 project, a cougar called P-22, famously crossed the 10-lane highway and took up residence in the shadow of the Hollywood sign. There’s just no one-size-fits-all approach for getting animals across roads. Engineers and ecologists use a wide variety of mapping data, including aerial photography, land cover, topography, and habitat, plus ecological field data and even roadkill statistics, to choose the most appropriate locations for new wildlife crossings. And in many cases, what works for one species may be completely ineffective for another. So most designs are made for a so-called “focal species,” with the hope that the crossing works well for others too.

But before you have a crossing, you have to get the animals to it. In most cases, that means fences, and even that is complicated. Do the focal species have a habit of digging under fences like badgers or bears? Well, then you’ll want to bury a few feet of fence to maintain its integrity. And where do they start and stop? Ideally, fences will terminate in areas that are intentionally hard to cross so animals don’t end up in a concentrated path across roadways. Sometimes boulders will be placed at the end of a wildlife fence to make it less likely that animals will choose to wander on the wrong side. But, inevitably, it happens. You don’t want to trap animals on the highway side of a fence, so many fences feature ramps or “jumpouts” that act almost like one-way valves for animals. There are even hinged doors for moderate-sized animals that allow wayward creatures to escape through fences.

Once you’ve got a site selected, the next big choice is over or under. It turns out that going under a road is often the easiest option. In fact, in many cases, existing bridges and viaducts can naturally create opportunities for wildlife to get across our roadways. Sometimes it’s as simple as building fencing to funnel animals into existing underpasses.

Another option for small animals is to use culverts as crossings. The engineering and materials for culverts are pretty well established since they’re used so much for getting drainage across roadways, so it’s not a big leap to do it with animals too. But it can be tricky getting them to use it. Since amphibians are also pretty lousy at walking long distances, it’s common to have many small tunnels installed near one another with special fencing to maximize survival. In some cases, they’re combined with buried collection buckets. During peak migration periods, the buckets are checked, and collected amphibians are manually transported across the road!

Larger animals won’t fit in a culvert (or a bucket), but there are some special considerations to getting them to travel beneath a highway bridge. Many animals are hesitant about dark areas during the daytime, so it's important to get as much natural light in as possible. Lighting also affects the vegetation that grows under a bridge. More light means more natural-feeling areas, which means more animals will be willing to cross under. And of course, keeping people out is important too. Disturbance from the public can really affect animals' willingness to incorporate a new, unusual route into their routine. Many crossings are designed with cover objects like logs, rocks, and brush that can help encourage a wider variety of wildlife to take advantage of the intended path.

But, for some species, underpasses just don’t work at all. You can’t FORCE a moose to do anything really, especially something like walking through a tunnel it doesn’t trust. In certain instances, the only effective way to allow safe passage across a road is over the top. For some particular focal species, an overpass might not need to be that grand. Canopy bridges just connect trees on either side of a road so primates and other tree-living creatures can get across. In Longview, Washington, there’s even a series of tiny bridges for squirrels, like the famous “Nutty Narrows” bridge.

Of course, the most impressive, usually the most effective, and often the most expensive wildlife crossings are designed as overpass bridges. Examples include the famous ecoducts of the Netherlands, overpasses of the Canadian Rockies in Banff National Park, and American structures like the Wallis Annenberg Wildlife Crossing. I actually have one of these nearby. Opened in 2020, the Robert LB Tobin Land Bridge crosses the six-lane Wurzbach Parkway in San Antonio, Texas. These are full-on bridges designed specifically for the use of animals.

Structures like these have all the same design issues as regular bridges for humans, plus their own engineering challenges as well. They have to hold up their own weight with a significant margin of safety, be designed to weather the elements for decades, and be inspected just like other bridges. They ALSO have to be engineered to be covered in thick layers of soil and vegetation (sometimes including trees), and be sized appropriately to accommodate focal species that might travel in huge herds or be wary of tight spaces. They have to be built to provide appropriate lines of sight for nervous crossers and often have walls that shield wildlife from the noise and light of the traffic below. One fun upside is that, at least in mountainous areas, the approaches can be a lot steeper than you might use for a vehicular bridge. An elk is pretty well suited for off-roading after all.

As for the design of the bridges themselves, they’re built a lot like highway bridges, usually beam bridges or arches, just with dirt instead of concrete for the deck. While the distance across a highway is long for a wandering moose, it’s not generally enough to require a structure of more heroic engineering like cable-stayed or suspension bridges. Unlike vehicular bridges, the approaches often flare out when viewed from above, making it easier for animals to locate the bridge and for better sight lines across it. This, plus the fact that they are usually covered in native vegetation, means that wildlife overpasses are among the most striking bridges you can see. It also means that from the perspective of the wildlife crossing them, these bridges can blend into the scenery. Ideally, a herd of pronghorn wouldn’t even realize they’re on a bridge at all.

It’s hard to think of any human-made structures that have transformed the landscape more than modern roadways. They have an enormous impact on so many aspects of our lives, and it's easy to forget the impact they have on everything else that we share the landscape with. Sometimes, when it comes to mitigating the negative impacts of roads on wildlife, the best thing is to just be more careful about where or IF we build a road at all. But for many of the roads we already have and the ones we might build in the future, it just makes sense - for safety, for the economic benefits, and for being good stewards of the earth - to make sure that our engineering lets animals get around as easily as we do.

December 17, 2024 /Wesley Crump

What’s the Deal with Base Plates?

December 03, 2024 by Wesley Crump

[Note that this article is a transcript of the video embedded above.]

A lot of engineering focuses on structural members. How wide is this beam? How tall is this column? But some of the most important engineering decisions are in how to connect those members together. Take a column, for example. You can’t just set it directly on a foundation, at least not if you want it to stay up. It needs a way to physically attach to the foundation. This may seem self-evident, maybe even completely obvious to most. But in that humble connection that’s so ubiquitous you rarely even notice it, there is so much complexity. Baseplates are the structural shoreline of the built environment: where superstructure meets substructure. And even understanding just a little bit of the engineering behind them can tell you a lot of interesting things about the structures you see in your everyday life. I’m Grady, and this is Practical Engineering.

Let me start us out with a little demonstration. If you’re a regular viewer, you know how much you can learn from our old friends: some concrete and a benchtop hydraulic press. I cast two cylinders of concrete about a week ago, and now it’s time to break them for science. These were cast from the exact same batch of concrete at the exact same time. For this first one, I’m pushing with a fairly narrow tool. I slowly ramp up the force until eventually… it breaks. I had a load cell below the cylinder, so we can see the force required to break this concrete. This scale isn’t calibrated, so let’s say it broke at 1400 arbitrary Practical Engineering units of force. Practicanewtons? KiloGradys? What would you call them? Now let’s do the same thing with a wider tool. At that same loading, this concrete cylinder is holding steady. In fact, it didn’t break until 3100 units. Here’s a trick question. Was the second cylinder stronger than the first one? Hopefully it’s obvious that the answer is no.

Most materials don’t care about force. I mean, in the strictest sense, most materials don’t care about anything. But what I mean is that the performance of a material against a loading condition usually depends not on the total force, but how that force is distributed over an area. It’s pressure; force divided by area. Increase the area, lower the pressure. And pressure is what breaks stuff. So that’s what a lot of baseplates do. They transfer the vertical forces of a column to the foundation over a larger area, reducing the pressure to a level that the concrete can withstand.

And that’s the first engineering decision when designing a baseplate. How big does it need to be? If you know the force in the column and the allowable pressure on the foundation, you can just divide them to get the minimum area of the plate. That’s the easy part. The harder part is that steel isn’t infinitely stiff. If I put this column on a sheet of paper, I think it’s clear that there’s no real load distribution happening here. The outside edges of the paper aren’t applying any of the column’s force into the table; I can just lift them. But this can be true for steel too. I filled up an acrylic box with layers of sand to make this clearer. If I use a thin base plate, the forces from my column don’t distribute evenly into the foundation. You can see that the baseplate flexes and the sand directly below the column displaces a lot more. I can try this with a thicker, more rigid baseplate, and the results are a lot different. Much more even distribution of pressure. So the second engineering decision when designing a baseplate is the stiffness of the plate, usually determined by the thickness of the steel, based on the loads you expect and how far the plate extends beyond the edges of the column. And in heavy-duty applications like steel bridge supports, vertical stiffeners can be included to make the connection even more rigid.
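
That first sizing step, dividing the column force by the allowable bearing pressure, really is a one-line calculation. Here’s a quick sketch; the load and allowable pressure are made-up illustrative numbers, not values from any design code:

```python
import math

def min_baseplate_area(column_load_kips, allowable_bearing_ksi):
    """Smallest plate area that keeps bearing pressure under the allowable."""
    return column_load_kips / allowable_bearing_ksi

# Hypothetical numbers, for illustration only
load = 200.0        # axial column load, kips
allowable = 1.7     # allowable bearing pressure on the concrete, ksi
area = min_baseplate_area(load, allowable)  # about 118 square inches
side = math.sqrt(area)                      # roughly an 11-inch square plate
```

The plate thickness is the separate stiffness question, and it doesn’t fall out of a simple division like this.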

So far, though, the baseplate isn’t really much of a connection. That’s the thing about compressive loads: gravity holds them together automatically. There are no bolts in the Great Pyramid of Giza. The blocks just sit on top of each other. And that could be true for some columns too. The main load they see is axial, along their length, pressing the plate to the ground. But we know there are other loading conditions too. A perfect example is a sign. Billboards and highway signs are essentially gigantic wind sails. They don’t actually weigh all that much, so the compressive force on their base isn’t a lot, but the horizontal forces from the wind can be significantly higher than that. Those horizontal forces can increase the compression force on one side of the base plate, so you have to account for that in the design. But they also can result in shear and tension forces between the baseplate and foundation, so you’ve got to have something in place to resist those forces too. That’s where anchors come in.

There are a lot of ways to attach stuff to concrete. There are anchors that epoxy into holes, screw into place, or use wedges to expand into the hole. And of course, if you’re extra careful and precise, you can even embed anchor rods or bolts into the concrete while it’s still wet. There’s a huge variety of styles and materials that offer different advantages depending on your needs. Here’s just one manufacturer’s selection guide for the anchors and epoxies they provide. But like third year engineering students, all of those anchors can fail if they’re overloaded. And they can fail in a lot of different ways under tension or shear forces. The anchor rod itself can fracture or deform. It can lose its bond with the concrete and pull out. It can break out the surrounding concrete. Or if it’s too close to the edge, it can blow out the side. Calculating the strength of the anchor bolt and concrete connection against each of these failure modes is a lot more complicated than just dividing a force by a pressure to determine the baseplate area. So most engineers use software that can do the calculations automatically.

But there’s another challenge with baseplates I haven’t mentioned yet, and it has to do with tolerances. Concrete foundations can be pretty precise. As long as you set the forms accurately and make them strong enough to avoid deflection while the concrete is being placed, you can feel confident in the dimensions of the structure that comes out of them. But there’s usually one surface that isn’t formed: the top. Instead, we use screeds and trowels and floats to put a nice finish on the top surface of a concrete slab or pier. But it’s rarely perfect enough to put a column directly on top. That’s not to say it can’t be done. I’ve seen concrete finishing crews do amazing work. But it’s usually not worth the effort to get a concrete surface perfectly level at the exact elevation needed for every column, especially when you have the time pressure of concrete setting up. And those tolerances matter. Just one degree off of level will put a 16-foot or 5-meter column out of plumb by more than 3 inches or 80 millimeters. Unless you’re in certain parts of Tuscany, that’s not gonna work. It’s more than enough to misalign some bolt holes. And that only magnifies for taller columns like sign poles. So, we usually need some adjustability between the plate and the concrete.
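
That one-degree figure is just trigonometry: the offset at the top of a tilted column is its height times the tangent of the tilt angle. A quick check:

```python
import math

def out_of_plumb(height, tilt_degrees):
    """Horizontal offset at the top of a column tilted from vertical."""
    return height * math.tan(math.radians(tilt_degrees))

offset_inches = out_of_plumb(16 * 12, 1.0)  # 16-ft column, height in inches -> ~3.4 in
offset_mm = out_of_plumb(5000, 1.0)         # 5-m column, height in mm -> ~87 mm
```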

Sometimes that means shimming the baseplate to get it perfectly level. And the other primary option is to use leveling nuts underneath the plate. I welded up a custom-branded column and baseplate that was laser-cut by my friends at SendCutSend to show you how this works. These parts turned out so nice. By adjusting these nuts up or down, I can get the column to point in the exact direction required. And I can get it to the exact right elevation too. But maybe you see the problem here. All the work we did to make sure the baseplate distributes the vertical load evenly across its area is lost. Now the vertical loads are just being transferred through some shims or through the bolts directly into the anchors. So, in a lot of cases, we add grout between the plate and the concrete to bridge the gap. Grout is basically concrete without the large aggregate, mixed with a low viscosity so it flows more easily into gaps. And it often includes additives to prevent it from shrinking as it cures, making sure it doesn’t pull away from the surfaces above and below. When it hardens, the grout can transfer and distribute the loads into the foundation. So if you pay attention to baseplates you see out in the built environment, you’ll notice it’s pretty common that they sit on a little pedestal of grout and not directly on the concrete below. But even this comes with a few problems.

First is load transfer. Even with the grout, some of the vertical loads are still going into the anchor bolts that might not have been designed for compression. So now we’ve added a few more new potential failure modes to the laundry list: punching through the bottom of a slab, and buckling of the rod itself. Sometimes contractors will use plastic leveling nuts that can hold the column during construction, but will yield after the column’s loaded so the grout supports all the weight. Second is fatigue. Especially for outdoor structures that see wind and vibrations, the grout under the baseplate might not hold up to repeated cycles of loading. Third is moisture. Grout can trap water, leading to problems with corrosion, especially for hollow columns like sign poles where condensation needs a way out. And the grout can hide that corrosion, making it difficult to inspect the structure. And fourth, adding grout below a baseplate is just an extra step. It’s kind of fiddly work to do it right, and it costs time and resources that might otherwise be spent somewhere else. In fact, there are a lot of cases where it’s an extra step worth skipping.

You can design anchor bolts strong enough to withstand all the forces a column will apply, including the compressive forces downward. And you can design a base plate stiff enough that those forces don’t have to be distributed evenly across the entire area. And if you do, you have a standoff base plate. It just floats above the concrete with only the anchors in between. It looks like a counterintuitive design. We think of a baseplate as kind of a shoe, so it should be sitting on the ground. And a lot of them are designed that way. But for other structures, a baseplate is really just a way to connect a foundation to a column through an anchor. So if you pay attention, you’ll see these standoff baseplates everywhere. A lot of state highway departments have moved away from using grout to make signs and light poles easier to inspect. And they often install wire mesh to keep animals out from hollow masts.

Clearly there’s a lot more to baseplates than meets the eye, and that means there’s also a few myths going around grout there. A common misconception is that standoff baseplates are meant to break away in the event of a collision. And I totally understand why. If an errant vehicle hits a signpost, a relatively minor deviation from the road can turn into a deadly crash. Smaller signs installed near roadways often do use breakaway hardware or features. You’ll often see holes drilled in wooden posts, bolts with narrow necks meant to snap easily, or slip bases like this one to make sure a sign gives way. But for larger structures like overhead signs and light poles, that’s generally not the case. Having one of these break away and fall across a highway could create an even bigger danger than having it stay upright. So, even though they might look similar, standoff baseplates are distinct from sign mounts designed to break loose in a collision. Instead, larger structures installed in the clear zones of highways are protected from crashes using a guardrail, barrier, or cushion.

Baseplates are like bass parts in music: it’s easy to overlook them at first, but once you notice them, you can’t stop paying attention to the important role they play. And just like bass lines, they might seem simple at first, but the deeper you dig, the more you realize how complex they really are.

December 03, 2024 /Wesley Crump

Which Power Plant Does My Electricity Come From?

November 19, 2024 by Wesley Crump

[Note that this article is a transcript of the video embedded above.]

In June of 2000, the power shut off across much of the San Francisco Bay area. There simply wasn’t enough electricity to meet demands, so more than a million customers were disconnected in California's largest load shed event since World War II. It was just one of the many rolling blackouts that hit the state in the early 2000s. Known as the Western Energy Crisis, the shortages resulted in blackouts, soaring electricity prices, and ultimately around 40 billion dollars in economic losses. But this time, the major cause of the issues had nothing to do with engineering. There were some outages and a lack of capacity from hydroelectric plants due to drought, but the primary cause of the disaster was economic. Power brokers (mainly Enron) were manipulating the newly de-regulated market for bulk electricity, forcing prices to skyrocket. Utilities were having to buy electricity at crazy prices, but there was a cap on how much they could charge their customers for the power. One utility, PG&E, lost so much money, it had to file for bankruptcy. And Southern California Edison almost met the same fate.

Most of us pay an electric bill every month. It’s usually full of cryptic line items that have no meaning to us. The grid is not only mechanically and electrically complicated; it’s financially complicated, too. We don’t really participate in all that complexity - we just pay our bill at the end of every month. But it does affect us in big ways, so I think it’s important at least to understand the basics, especially because, if you’re like me, you’ll find it’s really interesting stuff. I’m an engineer, not an economist or a finance expert. But, at least in the US, if you really want to understand how the power grid works, you can’t just focus on the volts and watts. You have to look at the dollars too. I’m Grady, and this is Practical Engineering.

Electricity is not like any normal commodity we buy and sell. You can’t really go to the store and pick up a case of kilowatt-hours. It can’t really be stored or stockpiled on an industrial scale, which means it has to be created at essentially the exact instant it's needed. And the demand is fairly inelastic. We want our lights, stoves, air conditioners, and devices to turn on no matter the time of day. That requires the supply side to handle incredible volatility, ramping up or down to meet demands in real-time. And the whole business is incredibly capital-intensive: you need very expensive infrastructure for pretty much every step of the process. The only reason it can work is that we all share that infrastructure, spreading out the costs. Call me a nerd, but I think all of this creates some fascinating challenges, both on the technical side for engineers and the organizational side for the policymakers, regulators, and all the companies that participate in the electric power industry.

It wasn’t that long ago that the electric utilities did it all. As the pros say, they were “vertically integrated.” Each utility owned and controlled the three major pieces of the grid within their service areas: generation (or power plants), transmission lines (which carry electricity at high voltage across long distances), and a distribution system (which delivers electricity to most customers at lower voltages). That meant they had a monopoly. Customers couldn’t choose where their power came from or how they got it. And that meant that electric utilities had to be carefully regulated to make sure that, without any competition, they were still offering customers a reasonable price for power.

Over time, utilities realized the value of interconnecting so they could help each other in times of need. Electricity is a true commodity, even if it has some unusual properties. For the job it does, it mostly doesn’t really matter who made it - a kilowatt is a kilowatt, no matter where it came from. If one utility’s power plant went down or bad weather hit, they could work out a deal to share power with a neighbor and keep demand satisfied. As the practice grew more common, power pools developed where multiple utilities would interconnect and agree to share power. Every system is different; subject to different risks, different weather conditions, and outages at different times. It just made economic sense to spread out that variability and risk. Eventually, huge parts of North America were interconnected by transmission lines, creating the “grids” we know today. The major interconnections in the US and Canada are the Western, Eastern, Quebec, and Texas.

Historically, the wholesale price one utility would pay another for power was regulated just like the rates utilities charged their retail customers. It was usually based on the actual cost of generating that power, so the big utilities couldn’t price gouge smaller companies. But a lot of that changed in the 1990s when the federal government opened the door for deregulation. The idea is simple on the surface: if power can move fairly freely on the grid, there’s no need for major utilities to be the only ones producing it, and there’s no need to regulate the prices for which it’s bought and sold. Let’s let market forces drive the decisions. It will increase competition and efficiency, driving down prices, and the investment risks will fall to the investors, not the customers.

Quite a few states took the opportunity to deregulate the production of power, and quite a few didn’t. In fact, right now, it’s roughly half and half, but there’s a lot of variety between states when it comes to who produces power and how it’s bought and sold between utilities and other companies, and even big differences within individual states. In truth, the process of deregulation has been anything but simple, and actually created a whole new set of interesting challenges. Companies trying to game the system, like what happened in California, is only part of it. In fact, power professionals often say that certain states aren’t deregulated; they’re just differently regulated. But how does all this really work in practice? Let’s set up an analogy.

Say I live on one side of a big lake. The water isn’t mine, but there is a water company on the other side. If I want to buy some water, they could load it on a truck and haul it to me. Or they could just put it in the lake, and I could take out the same amount. It’s probably not the same water, but it doesn’t really matter. In this analogy, water is water. Let’s scale it up. Now, hundreds of people need water, and hundreds of companies are selling it. Each person can hire any company they want to provide their water. The distance between buyer and seller doesn’t really matter. Every seller puts the amount of water they’ve sold in the lake, and each buyer takes as much out as they’ve bought. As long as everyone keeps track of how much they buy and sell, the lake stays full, and basic laws of physics will sort out how the actual water flows. In one way, you know exactly where you get your water: the company you paid to provide it. But in another way, you have no idea. All the water from all the companies is comingled in the lake. This is what happens on the grid. In a way, the power coming to your house comes from the power plant or plants that your utility paid to create it. But the electrons themselves probably didn’t. Just like the water in the lake, electricity flows according to the laws of physics from high potential to low, sloshing and flowing according to what everyone on the system is doing.

This is a lot like how a deregulated grid works. Utilities that supply electricity to their customers don’t generate the power themselves. They enter contracts to get it from wholesale power providers, separate companies who only generate electricity. But you can see a challenge here. If I’m on my big lake wanting some water, I may not want to coordinate with every water company to see who’s got the best price, especially if my need for water varies day by day or even second by second, and honestly, I’m not even sure exactly how thirsty I’m going to be. And if I’m a water company on the lake, it’s a lot of overhead work to deal with all these customers and their different needs. It makes a lot more sense if there’s a marketplace. So it is on the grid.

Like I mentioned, this varies quite a bit depending on where you are, so I’ll try to be as general as possible. Outside of those direct contracts between one buyer and one seller, most wholesale electricity in deregulated areas is bought and sold on the day-ahead market. Wholesale purchasers like utilities submit bids with estimates of how much electricity they’ll need for each hour of the next day. And generators submit their offers to sell a specific amount of electricity for a given price that’s based on their production costs, availability of fuel, and operational constraints. The facilitator of each wholesale market takes all the bids for every hour and matches the supply and demand to get the right amount of energy on the grid at the right times for the lowest cost. Here’s a basic example of a single hour of the auction:

Let’s say four generators submit bids to provide electricity during this hour: A nuclear plant bids 1200 MW for a price of $20/MWh. A natural gas peaker plant bids 400 MW for $100/MWh. A coal plant bids 500 MW for $30/MWh. And a wind farm bids 400 MW for $0. Wind and solar can submit very low bids because they have no fuel costs. There’s pretty much no way for them to lose money if they’re connected to the grid, especially because many get outside incentives for every megawatt-hour they generate. They even submit negative bids in some cases, meaning they’re willing to pay money to stay connected to the grid. In any case, electricity generators in our hypothetical hour have offered 2,500 megawatts of power to the market.

Let’s say purchasers submitted 2,000 megawatts of demand for this hour. We arrange the generation bids in order of least cost to satisfy demand. This concept is known as economic dispatch. We buy power at the lowest cost possible. We’re going to dispatch the wind farm and nuclear plant, dispatch the coal plant at 80% of the capacity they bid, and we don’t need the peaker plant at all. The clearing price is the cost of the last unit of supply to be dispatched. In this case, it’s $30 per megawatt-hour. Every producer gets paid that price for the power they put on the grid for that hour, even if they bid lower, and every buyer pays that price for wholesale electricity. This is why wind and solar are incentivized to bid 0 dollars. They essentially guarantee that they’ll make the cut.
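
The merit-order clearing walked through above is easy to sketch in code. This is a toy single-hour version under big simplifying assumptions: no startup constraints, ramp rates, or transmission limits, just sorting offers by price until demand is met:

```python
def clear_market(bids, demand_mw):
    """Merit-order dispatch: take the cheapest offers first until demand
    is met. Returns the dispatch per generator and the clearing price
    (the offer price of the last unit dispatched)."""
    dispatch, remaining, price = {}, demand_mw, 0.0
    for name, mw, offer in sorted(bids, key=lambda b: b[2]):
        if remaining <= 0:
            break
        take = min(mw, remaining)
        dispatch[name] = take
        remaining -= take
        price = offer
    return dispatch, price

# The four offers from the example hour: (name, MW offered, $/MWh)
bids = [("nuclear", 1200, 20.0), ("gas_peaker", 400, 100.0),
        ("coal", 500, 30.0), ("wind", 400, 0.0)]
dispatch, clearing = clear_market(bids, 2000)
# wind 400 MW, nuclear 1200 MW, coal 400 MW (80% of its bid),
# peaker not dispatched; clearing price $30/MWh
```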

It seems like a simple process in our hypothetical hour, but in reality there’s a lot more to it. For one, many types of power plants can’t just be toggled on and off with the flip of a switch. They need significant lead time to start up and shut down. They have minimum and maximum output levels. And their costs can vary a lot depending on how long they run. So the market has to take those factors into consideration. Also, we can’t perfectly predict the future, even for the next day. There are always going to be differences in the day-ahead forecasts. Demand varies, equipment has problems, and other unforeseen events like sabotage or solar storms happen all the time.

So another market runs in real-time, sometimes with auctions every 5 minutes, to make up those differences and keep the supply in check with demand. For example, if a wind farm overproduces what they bid into the day-ahead market, they can sell the extra on the real-time market. And if they underproduce, they may need to buy power in the real-time market to make up for the shortfall. And if things really get tight with not enough reserves, the real-time markets usually include a way to boost prices upward, even beyond what the clearing price would be, to make sure they’re more closely reflecting the actual value of electricity. That includes the cost to society if people lose power, or put another way, the cost they would be willing to pay to avoid a disruption in electrical service. This concept is called the value of lost load, and it’s something that the generators usually aren’t taking into account in their bids.
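
The basic settlement idea, that deviations from your day-ahead schedule get bought or sold at the real-time price, can be sketched like this. The quantities and the $25/MWh price are invented for illustration; actual settlement rules vary by market:

```python
def settle_imbalance(scheduled_mwh, actual_mwh, realtime_price):
    """Revenue adjustment for deviating from the day-ahead schedule:
    surplus is sold at the real-time price (positive revenue),
    shortfall is bought back (negative revenue)."""
    return (actual_mwh - scheduled_mwh) * realtime_price

# A wind farm scheduled for 400 MWh that produced 430 sells the extra
extra = settle_imbalance(400, 430, 25.0)      # +750 dollars
# If it only produced 370, it buys the 30 MWh shortfall back
shortfall = settle_imbalance(400, 370, 25.0)  # -750 dollars
```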

But that’s not all the markets. Many areas have a capacity market intended to make sure there are enough generators available to meet demands over the long term. These auctions happen only once a year or so, and generators bid to create new capacity within three years. All the generators that win in the auction are rewarded for adding capacity to the grid, no matter how much of that capacity actually gets used in the future. This doesn’t happen everywhere though. Texas doesn’t use a capacity market and instead relies on prices in the day-ahead and real-time markets to encourage generating companies to make long-term investments in capacity.

Many areas also have markets for so-called ancillary services, basically services needed to keep the grid stable and reliable. There are auctions for regulation, which accounts for very short-term fluctuations in supply and demand to keep the frequency stable. There are also auctions for reserves that can keep plants ready to get on the grid quickly if another resource trips offline. Other services to keep the grid stable are often contracted directly instead of using auctions. Reliability-must-run contracts pay for power plants that are on the verge of retirement to stay in service until the capacity is replaced. Inertia services pay to keep a certain amount of rotating mass connected to the system. I have a video on that topic if you want to learn more. Black start contracts pay for some generators to have the ability to go from a total shutdown to operational without assistance from the grid. I also have a video on that topic. And reactive power contracts help maintain the stability of the voltage on the grid. And, I have a video on that one too.

A potentially surprising thing about many of these markets is that it isn’t just generation resources that can bid into them. The overall goal is just to get the supply to meet demand, and there are two ways to do that. You can increase the supply or decrease the demand. I said earlier that electricity demand is fairly inelastic, but there are a lot of situations where customers can reduce demand, especially if they’re compensated for doing it. Large industrial power users like refineries can shift schedules around or even turn on their own generators if resources on the grid are getting scarce. This is how you get wacky news stories about cryptocurrency miners making more money participating in electricity markets than in Bitcoin. There are even companies that will gather up a bunch of smaller power users who have some flexibility in their demands, package them up, and sell that demand reduction as a service in the wholesale electricity market. And some utilities coordinate similar demand response programs with their customers, offering credits on your bill if you have a smart thermostat. Deregulation of wholesale electricity markets just opens up this world of possibilities in how we manage the grid.

But there is one big way my lake analogy from earlier breaks down. Because that lake symbolizes the transmission and distribution lines that carry power between the buyers and sellers. And in reality, they’re not really like a lake, but more like a series of interconnected canals. And they didn’t just appear. Someone has to build them and maintain them, often at great cost, so those costs need to be covered by the rates we pay for electricity, on top of the cost of generation. In this case, there’s really no way to deregulate those costs. It doesn’t make sense to build parallel, competing networks of transmission and distribution lines. It would cost too much, and we’d just have too many wires across the landscape. So regulators oversee the rates that transmission and distribution companies charge utilities to use their wires to move power between users and generators. And of course, there’s a whole host of complex financial systems in place to make this happen. Wholesale purchasers not only have to buy the power they need and the power that will be lost along the way, but also reserve capacity on the transmission system for that power to travel, and pay the transmission and distribution system operators for the privilege.

Confusingly, the flow of power isn’t really controlled on a line-by-line basis or sometimes even on a system-by-system basis. Power flows where it flows once it’s released on the grid, and there’s no simple way to keep track of who made it or who bought it at individual points on the network. Transmission reservations and tariffs are the law of the land, but the actual electrical power follows the laws of physics. So unlike at your house where you pay one-to-one for the actual power that flows through your meter, payments to transmission operators aren’t always a perfect reflection of how each buyer’s power moves through their system. Still, it’s the best mechanism we have to ensure electricity moves reliably across the grid and that the owners of the transmission assets are fairly compensated.

The other thing is that those canals don’t have infinite capacity. They can only move so much water, just like the transmission system can only move so much power. So in managing the wholesale electricity market, you don’t only have to consider what’s the next cheapest source of power, but also whether you can actually get that power to where it needs to go. Grid operators have to account for congestion, like rush hour for electrons. They usually do this by allowing prices to vary from place to place, an idea called Locational Marginal Pricing. You can see on this map of Texas how significantly prices can vary across the state, reflecting a difference in where the demand is versus where the generators are and the congestion on the transmission system that results. And hopefully at this point you’re seeing how complicated all this really is. Grid operators have to take into consideration all these details - power flows, weather, limitations of every kind of generator, second-by-second changes in the system - in order to match supply with demand at the lowest cost possible. And it gets even more complicated when you add distributed generation sources, like home solar installations, that put energy on the grid from the other side of the meter.
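
A toy two-node example captures the core of locational marginal pricing. All the numbers here are invented: a cheap generator on one side of a constrained line, an expensive one on the other, and the price at each location set by the cost of serving one more megawatt there:

```python
# Two-node congestion sketch: cheap generation at node A, expensive
# local generation at node B, connected by a line with limited
# capacity. Hypothetical numbers throughout.
def lmp_at_b(cheap_price, local_price, demand_b_mw, line_limit_mw):
    """Marginal price at node B: if the line still has headroom, the
    next MW comes from the cheap remote plant; once the line is
    congested, it must come from the expensive local plant."""
    imported = min(demand_b_mw, line_limit_mw)
    if imported < line_limit_mw:
        return cheap_price  # uncongested: remote power sets the price
    return local_price      # congested: local power sets the price

# Uncongested: all 400 MW at B can be imported at $20/MWh.
print(lmp_at_b(20, 80, 400, 500))
# Congested: the line caps out at 300 MW, so the marginal MW at B
# comes from the $80/MWh local plant, and the price at B jumps.
print(lmp_at_b(20, 80, 400, 300))
```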

And this is only on the wholesale side of the grid. Even though most of those dollars moving around ultimately came out of our pockets, we, the end-users of the electricity, really don’t participate in this segment of the grid. For many of us, the company we pay for electricity (the retail provider) didn’t generate that electricity, and in many cases, doesn’t own the infrastructure that it traveled along to reach our house or place of work. And for around a quarter of the US, the retail market is deregulated to the point where you can choose which company you buy your power from. So what do they actually do?

In essence, retail providers just buy power on the wholesale market and sell it to you. They’re middlemen, the car dealerships of electricity. They navigate all that complexity we just discussed so you don’t have to. Retail providers all sell essentially the same product, but they can differentiate themselves by offering different kinds of rates that suit their customers better. One provider in Texas, Griddy Energy, famously offered their customers the real-time wholesale price, exposing them to the incredible volatility of the market. Unsurprisingly, Griddy filed for bankruptcy after the winter storm in Texas when their customers couldn’t pay the exorbitant bills. The other thing retail providers can do is connect your dollars to specific sources of generation like renewables. Instead of buying power in the auction, where you have no control over the sources, they contract with wind, solar, and other generators to purchase power directly on your behalf.

So next time you get your power bill, take a look at those line items. Maybe there’s a base rate set by your provider that covers all the various costs of operating the grid from generation to transmission to distribution. Or maybe they’re broken out according to all the various costs that it actually takes to run the bulk power system. Do you pay a separate rate for the distribution service? Does your bill have an adjustment for the variability in the wholesale market? Is there a charge for the Public Utility Commission or whatever agency oversees this whole financial web of complexity? Every bill looks a little different, but I hope this video clears up some misconceptions and encourages you to think about what the price you pay for electricity actually accomplishes on the grid.

November 19, 2024 /Wesley Crump

Why Are Cooling Towers Shaped Like That?

November 05, 2024 by Wesley Crump

[Note that this article is a transcript of the video embedded above.]

This is not smoke. And this isn’t a smoke stack (at least not the kind we normally think of). It serves a totally different purpose at a power plant than smoke stacks, whose job is moving combustion products high into the air, allowing them to disperse away from populated spaces. Maybe you already knew that, or at least suspected it. After all, you saw the title of the video. Plus, this kind of tower is commonly associated with nuclear plants that don’t combust anything at all to create the heat that drives their generators. But that heat is the key. The largest class of power plants, called thermal power stations, uses steam turbines (or tur-bines, depending on how you say it). But once that steam makes it through the turbine, it needs to be condensed back into liquid water. It’s kind of frustrating. You spend all those resources heating the water up, and then you spend even more resources to cool it back down. And a power plant isn’t much good if all the electricity it generates gets used up just trying to cool that steam back down. So, engineers have come up with some pretty creative ways to cool huge amounts of water, like millions of gallons or tens of thousands of cubic meters per hour, and do it relatively efficiently. Not all cooling towers look like this, but there are some really clever reasons for that iconic shape we all recognize, and I’m going to build one in the garage to show you how they work. I’m Grady, and this is Practical Engineering.

Power plants could just vent steam into the atmosphere, but generally, they don’t do that. For one, it wouldn’t be good for the environment. The heat, moisture, and noise would affect wildlife and the weather. For two, it would waste a lot of water. The feedwater for a boiler is often carefully treated to avoid corrosion and mineral buildup in the machinery. It’s expensive water, so it doesn’t make sense to set it free. And for three, it would waste a lot of energy. It’s generally less expensive to cool the steam down just enough to condense it back into liquid water so it can be reused as feedwater. But even that is an enormous challenge.

I talked a little bit about how power plants actually consume a lot of energy in a previous video. It’s a net positive, of course. But any energy you spend on all the industrial processes required to produce electricity at scale is energy not being sent out to the grid, and that includes cooling. So engineers want to do it efficiently. One simple option is to use a cooler stream of water that already exists, like a river, lake, or sea. And in fact, there are a lot of power plants near me that do exactly that. This plant draws water from the lake on the south side, sends it through condensers, and then releases it into a channel where it flows to the north side of the lake. The slow circulation gives that water time to cool down before it reaches the plant again. But it’s not feasible to have a lake or river for cooling water at every thermal power plant, and there are environmental impacts with the heat and intakes that require careful consideration. So instead, lots of power plants use cooling towers.

You might be familiar with the various machines humans have devised for cooling stuff down. You might even be enjoying the comfort of such a device right this very minute. But, like I mentioned, the simplest way to cool something is to let natural physical processes do the work, to just wait for entropy to do its thing. After all, the temperature is usually less than boiling outside, so the heat from steam will naturally transfer to the ambient air if you let it. So that’s what many cooling towers do… kind of.

I designed a cooling tower in my garage so I can show you exactly how this works. This is made from laser-cut strips of acrylic with a carefully selected shape. And when I carefully tape these carefully sized strips together, I get a nice (somewhat transparent) cooling tower. This is a model of a natural draft tower. It’s not the most common type out there, but it is one of the simplest, and also the most iconic and recognizable, so it’s perfect for this demonstration. You may have noticed the holes I drilled in the bottom of each acrylic strip. This tower needs a way for air to get inside at the bottom. If you look closely at the real thing, you’ll see something similar: the walls aren’t continuous all the way down but are actually open around the bottom.

Steam from the turbines doesn’t go to the cooling tower directly. Instead, there’s a separate stream of water, aptly called the cooling water. The steam is condensed into liquid water in a condenser that is cooled by cooling water, which then flows between the condenser and the tower. I’m simulating that here with a bucket of hot water and a beer brewing pump. That hot water gets pumped to sprayers inside the tower. If you know a little bit about thermodynamics, you know that we can only get this water as cool as the ambient air temperature. Heat naturally flows from hot to cold, so you can’t get any more heat transfer once the water reaches the outside temperature. But if you know a little more about thermodynamics, you know there’s a trick that can improve the performance of a system like this. And this layer of material below the sprayers is the key.

This is called fill. I’m just using a dehumidifier pad, but in an actual cooling tower, the fill is usually a layer of plastic, carefully designed to maximize the surface area of the water in the system. It does this by forcing the water to either splash into tiny drops or form thin sheets as it falls downward. The goal is to expose as much surface area of the hot water as possible to the air flowing through the tower. Water drips down. Air flows up. The pros call this counter-flow. And it’s the trick to this whole process. (Actually you can use cross-flow as well, but let’s jump over that rabbit hole for now.)

You might think that the outside air has just one temperature, but to cooling professionals, it has two. One, which we call the dry bulb temperature, is the one you normally encounter. That’s what shows up in the weather report. It’s what’s on the thermometer. But air also has a wet bulb temperature. If you soak the end of a thermometer and pass it through the air, that water will evaporate. The drier the air, the more easily water evaporates. This is why it feels so much hotter when it is also humid outside. It takes energy, called latent heat, to convert water from a liquid to a gas, and that energy is absorbed from the liquid water, cooling it down. So, as long as the ambient air isn’t already saturated (100% relative humidity), you can actually cool water below the dry bulb temperature using evaporation. And the lower the humidity of the air, the more evaporation can take place, so the bigger the difference in wet and dry bulb temperatures.
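
If you want to estimate a wet bulb temperature yourself, there’s a well-known empirical fit published by Roland Stull in 2011 that works from just the dry bulb temperature and relative humidity. This is an approximation for illustration, not necessarily how cooling professionals do it:

```python
import math

def wet_bulb_stull(t_c, rh_pct):
    """Approximate wet bulb temperature in Celsius from dry bulb
    temperature (Celsius) and relative humidity (percent), using
    Stull's (2011) empirical fit. Valid roughly for RH from 5% to
    99% and typical ambient temperatures."""
    T, RH = t_c, rh_pct
    return (T * math.atan(0.151977 * math.sqrt(RH + 8.313659))
            + math.atan(T + RH) - math.atan(RH - 1.676331)
            + 0.00391838 * RH ** 1.5 * math.atan(0.023101 * RH)
            - 4.686035)

# Dry air leaves lots of evaporative cooling headroom; nearly
# saturated air leaves almost none.
print(round(wet_bulb_stull(30, 20), 1))  # far below the 30 C dry bulb
print(round(wet_bulb_stull(30, 99), 1))  # nearly equal to the dry bulb
```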

This isn’t anything revolutionary. Nature figured out evaporative cooling millions of years ago. It’s why we sweat when it’s hot. But using it at this scale is really impressive. And it’s not the only innovation in a natural draft cooling tower. For that, I need to show you a cool graph. This is a psychrometric chart. It looks pretty intimidating. You could spend an entire college course learning about this stuff, and there are probably a few HVAC professionals groaning at the screen right now. But I just want to use it to explain a few important things about the physical and thermal properties of air. First, as the temperature of air goes up, its capacity to hold water goes up too. Kind of like hot tea can dissolve more sugar than cold tea, hot air can hold more water than cold air. Next, as air temperature goes up, its density goes down. Confusingly, the psychrometric chart actually shows specific volume, which is the inverse of air density. So as you move up in temperature, these lines slope downward. Hot air rises. Most of us know that. But maybe less intuitively, it’s also true for humidity. If you hold the temperature constant, and just increase the amount of water in the air, its density goes down. Water molecules actually weigh less than the nitrogen or oxygen molecules in air. So, humid air is more buoyant than dry air. And this is the second key to a cooling tower: convection.
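
You can see that humid-air buoyancy directly from the ideal gas law, treating air as a mixture of dry air and water vapor. The Tetens saturation-pressure formula below is a standard textbook approximation, used here just for illustration:

```python
def moist_air_density(t_c, rh, pressure_pa=101325.0):
    """Density of moist air in kg/m^3, from the ideal-gas mixture of
    dry air and water vapor. Saturation vapor pressure comes from the
    Tetens formula, a common approximation."""
    T = t_c + 273.15
    p_sat = 610.78 * 10 ** (7.5 * t_c / (t_c + 237.3))  # Pa
    p_vap = rh * p_sat            # partial pressure of water vapor
    p_dry = pressure_pa - p_vap   # partial pressure of dry air
    R_dry, R_vap = 287.05, 461.5  # specific gas constants, J/(kg*K)
    return p_dry / (R_dry * T) + p_vap / (R_vap * T)

# At the same temperature, humid air is less dense than dry air,
# because water molecules weigh less than nitrogen or oxygen.
print(round(moist_air_density(30, 0.0), 4))  # completely dry
print(round(moist_air_density(30, 1.0), 4))  # saturated: lighter
```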

The hot water transfers its heat to the air. The warm air becomes buoyant, flowing upward in the tower and drawing fresh air in through the intakes. But some of that hot water is also evaporated, removing more heat from the water, and making the air even more buoyant. The process both cools the water down and creates a natural draft up through the tower, drawing in even more fresh, drier air as it does. Ignoring the pumps and minor control features, there are no moving parts. So for just the cost of spraying the water, you create this enormous natural convection in the tower, moving huge volumes of air into the bottom, up past the fill, and out at the top to reject large amounts of heat to the atmosphere. This is the “smoke” you sometimes see rising from a cooling tower. It’s not actual smoke; it’s just water vapor condensing into tiny droplets as the now-saturated air mixes with the cooler outside air at the top of the stack. It’s basically a cloud machine. That’s why the plume is usually more visible during the winter months.

I was really surprised at how well my little model tower worked. I was pumping water at about 120 F (50 C) and the water coming out was dropping by around 30 degrees F (17 degrees C). That’s like a perfect cup of coffee down to a lukewarm shower. The air coming out at the top was shockingly warm, and there was a lot of it. I was really surprised at how much airflow this thing could create just by spraying some hot water inside. I guess I just figured these processes wouldn’t scale well because of turbulence, but I was wrong. It was both pretty good at cooling the water down and looking cool on camera. And part of the reason this looks so cool, the shape of the tower itself, is also crucial to its function.

Natural draft cooling towers often feature this curved, swooping shape. The mathematicians call it a hyperboloid. You can actually make one yourself pretty easily. Put some sticks evenly spaced and connected in a circle around the top and bottom. Then twist. Actually the fact that it can be made from straight lines makes these easier to construct. But that’s not the only reason they’re built this way. After all, a cylinder has straight lines too. There are some aerodynamic benefits to using a hyperboloid as a chimney. The wide base provides more area for air to flow in at the bottom. The constricted center accelerates the flow upward. And the wider top helps promote mixing of the hot humid air with the cooler air outside. But, really, these are secondary benefits. The main reason for the shape is structural.
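
Here’s a small numerical check of that twist-the-sticks construction, with arbitrary dimensions: every point along each straight stick lands on the same hyperboloid, meaning x² + y² − c·z² comes out the same everywhere.

```python
import math

# Connect each point on a bottom circle to a point on a top circle
# rotated by a fixed twist angle, with a straight "stick." The claim:
# all points on all sticks satisfy x^2 + y^2 - c*z^2 = constant,
# which is the equation of a hyperboloid of one sheet.
R, h, twist = 1.0, 2.0, math.radians(60)  # illustrative dimensions

def stick_point(theta, t):
    """Point a fraction t of the way along the stick at angle theta."""
    ax, ay, az = R * math.cos(theta), R * math.sin(theta), -h
    bx, by = R * math.cos(theta + twist), R * math.sin(theta + twist)
    return (ax + t * (bx - ax), ay + t * (by - ay), az + t * (h - az))

c = R**2 * (1 - math.cos(twist)) / (2 * h**2)
values = [x * x + y * y - c * z * z
          for theta in (0.0, 1.0, 2.5)
          for t in (0.0, 0.3, 0.7, 1.0)
          for (x, y, z) in [stick_point(theta, t)]]
print(max(values) - min(values))  # essentially zero: one hyperboloid
```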

These towers are big. To get enough natural convection, you need a tall stack. The taller the tower, the more warm, humid air is contained inside, generating more buoyancy and more airflow. The largest natural draft towers are more than 650 feet or 200 meters tall, and more than 400 feet or 120 meters in diameter. And you want the walls to be as thin as possible. Less material means less cost and more area for airflow. But a really tall cylinder made of thin walls is not very strong. It’s basically a big empty coke can. But the double curvature of a hyperboloid stiffens the shell against vertical loads like the structure’s own weight and horizontal loads like wind. You can also try this yourself. A thin piece of paper has almost no stiffness. As soon as you put a curve in it, it’s much harder to bend perpendicular to the curve. And two curves are better than one. It’s the Pringle factor. My model shows this pretty well too. I started out with thin, floppy strips of acrylic. But even just taped together, this tower is really strong. Using a hyperboloid can cut the structural stresses in half compared to a cylindrical tower, making structures like this much more economical to build. So that’s why natural draft towers use that shape. But that definitely doesn’t mean this is the only kind of cooling tower.

In fact, hyperboloid natural draft towers are actually pretty rare in the same way that thermal power plants are pretty rare compared to large office buildings, hospitals, and schools that also often use cooling towers as part of the HVAC system. Those towers often use mechanical draft systems, basically using fans to create airflow instead of tall stacks. I talk a little bit more about this in my book. We still call them cooling towers, even though they usually aren’t too towery. And, in fact, lots of power plants, refineries, and chemical plants use mechanical draft cooling towers as well. They’re less dependent on ambient conditions to create the necessary airflow, they’re smaller, usually less expensive to build, and offer some flexibility if heat loads fluctuate.

And not all cooling towers use evaporative methods. Dry cooling towers just use heat exchangers inside, with the cooling water flowing in a closed loop. In dry systems, you’re limited by the higher dry bulb temperature instead of wet bulb, but you don’t lose any water to evaporation, and you don’t have to deal with the buildup of minerals that happens in wet systems as cooling water evaporates away.

The reason you see big natural draft towers at power plants has everything to do with scale. The long-term savings of not having to run big fans and maintain all the associated equipment outweigh the higher initial costs. Particularly at nuclear plants built with design lives of 50 years or more, you can amortize the cost over a longer duration. Also these facilities are usually already built in more remote locations where land is cheaper and height restrictions are less stringent, making it feasible to build such massive structures just for cooling. And they’re particularly common at nuclear plants for two reasons. Number one is reliability. Cooling is an essential part of safety at a nuclear plant. The fewer parts of a cooling system, like fans, that can go wrong in an emergency, the better. Number two is variability, or the lack of it. Nuclear facilities are usually baseload plants. Most of them run nearly nonstop at a constant output. So they can get away with a system that’s designed for a single heat load rather than mechanical cooling required to ramp up and down. But, even if the heat load doesn’t change at large baseload plants, the weather does, and not every climate is ideal for natural draft towers.

If you live in a dry place, you might be familiar with evaporative appliances that can cool and humidify the air. We called them swamp boxes when I was growing up. It makes sense that these work better in dry climates; there’s less moisture in the ambient air, so you get more evaporation, and thus more cooling potential. So, you might assume that natural draft towers work best in areas with low relative humidity, but that’s not necessarily the case. And this took me a little bit to wrap my head around. Let’s look back at that psychrometric chart. Say we’re in an area with a wet bulb temperature of 20 celsius, 70 fahrenheit. The water from our condenser comes in at 40 C, 100 F, so the air leaving the tower will be saturated at that temperature. And we’re trying to cool that water down to 30 C, 85 F.

If the ambient relative humidity is say, 20 percent, our air is starting here and going here. But it doesn’t go in a straight line. Since the air is coming in from the bottom, it’s not coming into contact with the warm water, but the coldest water first. So it actually heads toward the outlet temperature and gradually veers toward the water inlet temperature as it rises through the fill. If you look at the lines for specific volume you might see the problem. In the first part of the curve, the state of the air is moving parallel to the lines. In other words, it’s not gaining any buoyancy. It’s not going to rise up the stack. It might work right at startup, but as the water cools down, the airflow in the tower will slow down and stall, and you won’t be able to cool the water enough.

But watch what happens if you increase the relative humidity of the ambient air to 50 percent. The line still curves initially toward the outlet temperature before heading to the inlet temperature as it moves through the fill, but it decreases in density consistently along its entire path through the fill. So, cooling engineers say that, for a given wet bulb temperature, you get a better draft as relative humidity goes up. It seems counterintuitive, but another way to look at it makes more sense. Natural draft cooling towers just don’t work that well in hot climates. Even if the air is dry enough to evaporate a lot of water and create a lot of cooling, you just can’t get it to rise up a tower on its own. So if you pay attention, you’ll notice different types of cooling depending on where you are. There are two nuclear plants in Texas and both use reservoirs for cooling. That gives you a sense of the cost involved in cooling feedwater at a power plant. In both cases, it was cheaper to build and maintain a dam and huge lake than a cooling tower that would work well in our climate.

I know that’s a little in the weeds, but I think it’s so fascinating how much engineering goes into things like this, and I’ve just barely scratched the surface here. The economics of building large facilities like thermal power stations requires that we know for sure that each design is going to work before any construction starts, and that has driven a huge variety of types and styles of cooling towers. Engineers mix and match designs and styles according to what will work most efficiently for each application, so there’s practically no end to the designs you can spot if you keep an eye out. And actually, some newer cooling towers do put flue gas into the air stream, making the tower do double duty. So I kind of lied at the beginning of the video. Depending on the tower you’re looking at, there really might be some smoke in that plume coming from the top. But mostly, it’s just water. And in a world full of straight lines and right angles, I love that every once in a while, it just makes good engineering sense to use curvy shapes to accomplish a really important job.

November 05, 2024 /Wesley Crump

The Wild Story of the Taum Sauk Dam Failure

October 15, 2024 by Wesley Crump

[Note that this article is a transcript of the video embedded above.]

Early in the morning of December 14, 2005, pumps were nearly finished filling the upper reservoir at the Taum Sauk power station, marking the end of the daily cycle. Water rose to the top of the rockfill embankment, reaching the concrete parapet wall that ran along the top of the dam. But the water didn’t stop. One of the two pumps shut off, but the other kept running, and soon, the water was lapping over the wall. Within minutes, those splashes turned into a steady stream cascading over the parapet, pouring against the embankment on the other side. The rockfill eroded slowly at first, but the hole grew deeper and wider. The pump finally shut off, but it was too late—the footing of the parapet wall had already been undermined. The wall tipped over, and a massive surge of water was unleashed down the mountainside headed directly toward a state park.

This award-winning pumped storage facility, considered a model of modern engineering, immediately became the center of intense scrutiny. And what the investigations found would change a lot about the field of dam safety. I’m Grady, and this is Practical Engineering.

When it was built in the 1960s, the Taum Sauk pumped storage plant was unlike really any other power plant in the world, at least in terms of size. South of St. Louis in the Ozark Mountains, it was designed to meet a very specific need. I’ve talked about pumped storage on the channel before, and Taum Sauk was one of the largest facilities of its time. Built by Union Electric, which eventually merged with Ameren, the whole plant is basically a battery. It’s actually a net consumer of electricity, which is normally not a good thing for a power plant. But managing the power grid isn’t only about how much electricity you can produce, but also when you can produce it. Large coal plants in the Missouri area could make lots of power, but they couldn’t ramp that production up and down to accommodate fluctuating demands throughout the day. So, Union Electric proposed a clever solution, one that’s pretty common today, but was innovative for its time. Two reservoirs were constructed: one low on the east fork of the Black River and another near the top of Proffitt Mountain, Missouri’s sixth-highest peak. Between them, a hydroelectric plant with two reversible turbines.

When electrical demands are low, rather than reducing the output of thermal power plants, that energy can go toward pumping water from the lower to the upper reservoir, usually overnight. Then when demand spikes during the day, all that stored potential energy created from cheap electricity can be harvested and put back on the grid by reversing the system to generate hydropower. Of course, you don’t get all the power out that you put in. Some of that water evaporates or leaks out, and there are losses of energy in the pumping and generation. But, with an overall efficiency of around 70%, it was more than enough to justify the enormous cost of building and operating two reservoirs and a power plant that doesn’t produce any of its own electricity.

The most striking part of the whole facility is the upper reservoir. It’s just such an unusual sight: a circular dam, sometimes called a ring dike or ring levee, perched on top of a mountain. This is not usually an efficient way to build a dam. We typically construct them across valleys so that the natural topography can form the sides and back of the reservoir. With a so-called “off-channel reservoir” you have to build the dam all the way around, increasing the costs and the engineering complexity. But there are no valleys at the tops of mountains, and that height is an essential part of a pumped storage facility. The power available from falling water is really simple to calculate: multiply gravitational acceleration, the density of the fluid, the volumetric flow rate, and the difference in height, called head. We can really only change two of these. So for a specific power output needed for a specific duration, you can trade height for flow. The greater the difference in height between the two reservoirs in a pumped storage facility, the less water you need to move, which reduces the size of all the infrastructure, and thus saves costs. The mountains in southeast Missouri provided a perfect location for the project, creating about 750 feet or 230 meters of height between the upper and lower reservoirs.
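
That power formula is simple enough to check by hand. The flow rate and efficiency below are illustrative guesses, not Taum Sauk’s actual specs, but the head is the roughly 230 meters mentioned above:

```python
def hydro_power_mw(flow_m3_s, head_m, efficiency=0.9,
                   rho=1000.0, g=9.81):
    """Hydropower: P = rho * g * Q * H, times a turbine/generator
    efficiency, converted to megawatts."""
    return rho * g * flow_m3_s * head_m * efficiency / 1e6

# With about 230 m of head, a flow of 200 m^3/s at 90% efficiency
# yields roughly 400 MW. Double the head and you'd only need half
# the flow for the same power, which is the trade the text describes.
print(round(hydro_power_mw(200, 230), 1))
print(round(hydro_power_mw(100, 460), 1))  # same power, half the flow
```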

Actually the whole facility is named after the highest mountain in Missouri, Taum Sauk, which was the original site for the upper reservoir until there was too much pushback about building a project there, so they moved it to a slightly lower peak nearby. And they encountered some challenging conditions during construction, forcing the engineers to realign the dam to avoid an area of weak geology, giving it that unique kidney bean shape. The original dam was built as a rockfill embankment - basically just dumping a long pile of rocks around the perimeter of the reservoir. Rockfill usually works well as an embankment if you have a good source of material nearby. It’s really strong, doesn’t require a lot of compaction, and it doesn’t settle much over time like soil fills do. One thing rockfill doesn’t do well is hold back water. Too many spaces between the rocks. So concrete panels were installed all along the inside of the reservoir to make the embankment water-tight. A tunnel connected a morning glory inlet through the mountain to the generating plant. The inlet was set into a basin 20 feet or 6 meters below the bottom of the reservoir to suppress the potential for a vortex to form as it was drained each day. The whole project was designed to be operated remotely with no on-site technicians required, another innovation for the time, but one of the many decisions that would prove disastrous.

For most of its life, the Taum Sauk station operated on average around 100 days per year, usually during the hot summer months when electricity demands were more variable between night and day. Deregulation of electric power markets in the 1990s opened up the possibility of selling power to other utilities. Those 100 days per year went up to 300, meaning the upper reservoir cycled up and down, often twice per day, nearly every day of the year. And that was starting to cause some problems. The upper reservoir had dealt with leaks essentially since it started operating in the 1960s. Several projects were implemented throughout its life to deal with the issue, but the increased cycles of filling and draining were only making things worse. At one point, small ponds were built beside the reservoir to capture some of the leakage and pump it back inside. In the fall of 2004, Ameren decided to bring out the big guns and spent more than two million dollars to install a geomembrane liner to cover the entire reservoir. That essentially fixed the problem, but it caused a few new ones too.

About a year later, in September 2005, the Institute of Electrical and Electronics Engineers, or “I-triple-E,” declared the plant an “Engineering Milestone” for its innovations in the world of electrical infrastructure. On the day before the ceremony, some of the participants took a tour of the upper reservoir and witnessed water pouring over the parapet wall on one side of the dam. The operators quickly switched from pumping mode to generation to get the water back down. They chalked the overtopping up to high winds from the remnants of a tropical storm, but just to be safe, they hired a dive inspection team to check on the level sensors. And what the divers found was concerning.

When that geomembrane liner was installed in the reservoir, there was a valid concern that any penetrations might cause leaks in the future. But the reservoir needed level sensors for the control system to be operated remotely. So, instead of mounting the sensor conduits directly to the concrete along their length, which would have meant penetrating the liner at every attachment point, the engineers tried something different. Two cables were run between anchors at the top and bottom of the embankment slope. The conduits for the sensors were attached to those cables, minimizing the number of penetrations needed. Unfortunately, the mounting system was underdesigned. The conduits were buoyant and subject to strong currents as the reservoir filled and emptied each day. Sometime after the spring of 2004, they became dislodged and deflected, so the sensors inside were providing readings that were lower than the actual water level.

Based on those findings, the operators decided to reprogram the control system to subtract two feet from the upper set point on the pumps. The original design called for two feet or 600 millimeters of freeboard between the top of the wall and the maximum water surface. They figured that doubling that distance would be enough to avoid issues until permanent repairs could be made during the annual maintenance period when the reservoir was drained. Unfortunately, they would never get the chance.

Less than three months after that first time someone observed the reservoir overflowing, on December 14, it happened again, this time in the early dawn when no one was around to notice. Once the parapet wall collapsed, the water quickly eroded down through the dam, emptying roughly 6 billion liters or 200 million cubic feet of water down the steep mountainside straight toward Johnson’s Shut-Ins State Park, stripping away trees and rocks as it surged. By pure luck, the failure happened in the winter when the park was practically empty, but the park superintendent, his wife, and three kids (one of whom was only seven months old) were swept away when the water demolished their house. Incredibly, the entire family survived the event, but not without suffering from injuries and hypothermia. The wave of water flowed into the lower reservoir, where it would have gone anyway later that day, so there were no major downstream impacts.

The event was investigated by the Federal Energy Regulatory Commission, and the conclusions were surprising. Like most events of this kind, a series of small oversights, which on their own wouldn’t have resulted in a disaster, combined to cause hundreds of millions of dollars in damage, plus the effects on the family I mentioned. First was the embankment. The rockfill it was supposed to be built from wasn’t quite as rocky as the engineers who designed it intended. There was a lot more soil mixed into the fill, resulting in more settlement of the embankment over time. Unsound areas of soil in the embankment’s foundation were also not properly cleaned out, making the settlement even worse. From construction to failure, some parts of the parapet wall were a full two feet or 600 millimeters lower than where they started. That settlement wasn’t taken into consideration when the level sensors were replaced after the lining project in 2004. And with the sensors unattached and free to move around, there was no way for the logic controllers to know the actual elevation of the water in the reservoir.

Failsafe probes were installed on the parapet wall to provide a backup that would automatically shut off the pumps if the level got too high, but they were installed in a location that was actually higher than the settled sections of the wall. If the water hit those sensors, it was already overtopping parts of the dam. And they were incorrectly programmed in a way that required both sensors to be activated before the pumps shut off. That first site visit when water was running over the wall didn’t trigger those failsafe sensors, but no one thought to check them. And rather than ground-truth the important elevations like the top of wall and all sensor levels, the operators just decided to add a couple feet of margin and postpone a permanent fix. It would have been so easy to have someone on-site during those last few minutes of filling the reservoir each day to verify the levels against the electronic measurements, or even a closed-circuit camera, especially after the enormous red flag of seeing it happen a few months earlier, but no one knew it was overtopping. And the owner hadn’t notified the regulator the first time it happened, so there was no oversight for how Ameren responded.
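The programming error is worth dwelling on, because it comes down to a single logical operator. Here's a toy sketch (function and variable names are mine, not from the actual control system) showing why requiring both probes defeats the purpose of redundancy:

```python
# Two redundant high-water probes. The fail-safe intent of redundancy is
# that EITHER probe alone should be able to stop the pumps. Requiring
# BOTH means one fouled or mis-set probe silently disables the backup.

def shutoff_as_programmed(probe_a: bool, probe_b: bool) -> bool:
    """As misconfigured at Taum Sauk: both probes must read high water."""
    return probe_a and probe_b

def shutoff_fail_safe(probe_a: bool, probe_b: bool) -> bool:
    """Fail-safe logic: either probe alone trips the shutoff."""
    return probe_a or probe_b

# Water at the wall, but one probe never trips:
print(shutoff_as_programmed(True, False))  # False -> pumps keep running
print(shutoff_fail_safe(True, False))      # True  -> pumps stop
```

With an AND, the redundant probe makes the system *less* likely to trip, not more; a single bad sensor masks the real alarm.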

Probably the most significant error of all happened well before the facility was ever built. The design had no spillway. As an off-channel reservoir, there were only two ways water could get in: rain falling on top or water being pumped in. With enough freeboard for a rainstorm and the redundancies built into the control system, the designers never envisioned a need for a way to let water safely run over the top. Unfortunately, when you rely on complicated systems for safety, the likelihood for things to go wrong goes way up. These types of events are sometimes called “normal accidents,” a term coined by Charles Perrow. The idea is that, when systems are complicated, and especially when the safety measures themselves add to a project’s complexity, failures are much more likely, even expected. In other words, failure becomes normal. Compared to an industrial control system, a spillway is dead simple. Once the water gets to the crest, it just goes out. They’re not failproof - I’ve talked about several spillway failures in previous videos - but there are a lot fewer ways that things can go wrong.

FERC fined the owner 15 million dollars for the failure, the largest penalty they’ve ever issued. Five million of that went into a fund to improve the area around the project, although some recent reporting has alleged that those funds have been mismanaged. The State of Missouri also sued, and agreed to a 177 million dollar settlement, much of which went toward restorations at the state park, which held a reopening ceremony in 2010.

At the same time as Johnson’s Shut-Ins State Park was undergoing renovation, crews were working on rebuilding the upper reservoir at Taum Sauk. To avoid a relicensing process, the dam was built on the same alignment and to the same size as the original project. Rather than trying to repair the flawed rockfill embankment, Ameren and their consultants went with an innovative design. The new dam was built using roller-compacted concrete, a dry concrete mix that’s handled using earth-moving equipment and compacted into place using rollers. The new design would address both the settlement and leakage issues the original structure struggled with while still taking advantage of the material from the original embankment. That rock fill was crushed and processed into aggregate for the concrete, reducing the amount of material hauled into the remote site. Maybe most importantly, they included a spillway this time. The structure is the largest roller-compacted concrete dam in the United States. The plant reopened in 2010, was rededicated as an IEEE milestone, and the project won the US Society on Dams’ award of Excellence in the Constructed Project.

The failure at Taum Sauk was a wake-up call for the professional community. The regulator, FERC, implemented some big changes to its oversight of dam safety in the wake of the collapse. They put together a task force that issued a technical guidance document specifically addressing the challenges of pumped storage facilities that was circulated to the owners. They also updated rules for owners, requiring them to have an internal dam safety program and a Chief Dam Safety Engineer responsible for overseeing it, a role Ameren didn’t have at the time. The event spurred states as far away as Hawaii to bolster their dam safety programs. And most importantly, the failure demonstrated the need for overflow spillways, even for off-channel reservoirs with redundant control systems meant to avoid overfilling.

If you’re paying attention to issues related to the electrical grid, you know the importance of storage. Particularly as intermittent sources of power become a large part of our portfolio, ways to balance out those mismatches in supply and demand are only becoming more important. Pumped storage has traditionally been the only large-scale way to do this economically, but obviously, it comes with a tradeoff. Dams are among the riskiest structures that humans build. They don’t fail very often, but when they do, those failures usually come with serious consequences to people, property, and the environment. And because they don’t fail that often, those lessons come slowly and tragically. But, with battery storage becoming cheaper and much more widespread, it will be interesting to see how the economics of pumped storage change. By 2030, some are predicting the US will have more than 400 gigawatt hours of battery storage on the grid; that’s more than 100 Taum Sauks. We’re right at the beginning of some major changes in how energy is stored. Those batteries have a lot of technical differences in how they interact with the grid, and they come with their own environmental challenges and safety considerations, but the risk profile is a lot different than building a major reservoir at the top of a mountain. As energy infrastructure keeps evolving, those differences in risks are probably going to shape the future of how we store power, and at what cost.

October 15, 2024 /Wesley Crump

Is the World Really Running Out of Sand?

October 01, 2024 by Wesley Crump

[Note that this article is a transcript of the video embedded above.]

If you have to know the answer right away, it’s no; or at least, my goal with this video is to convince you that the world is not running out of sand. But if it were that simple, I wouldn’t be here (right?) and you probably wouldn’t be either. In fact, I was really surprised by some of the things I didn’t know as I dug deeper into the topic, and how some of the most widely spread sand “facts” are dead wrong.

The wide world of sand is complicated, and not in a boring, pedantic kind of way. This simple material touches nearly every part of our lives, and the science and engineering behind it is rich and deep, and to me at least, hard not to become obsessed with. There’s a good chance you’ve seen articles or videos over the past few years with essentially the same story about sand. The “Sand Wars” documentary kind of kicked off the modern discussion, and then Vince Beiser wrote an excellent book on the topic, “The World in a Grain.” Of course, a lot of the book’s best reviews focus on the fact that sand is kind of a topic that’s “taken for granted” or “neglected.” But, at least in civil engineering, it is one of the most glected materials out there. And I’d like to give you a peek behind the curtain and show you how we think about this seemingly unlimited resource and why it’s worth knowing a little more about it. But first, I need to head out to the garage and put some sand in a rock tumbler, because I want to do a material property shootout, and this is going to take a little while. I have something really cool in the other barrel, and I’ll show it to you later on in the video. I’m Grady, and this is Practical Engineering.

What the heck is sand anyway? It’s kind of a “know it when you see it”-type material. If we use the US Department of Agriculture’s soil textural triangle, sand is any granular material that is at least 85% sand… So, what the heck is sand? For a better answer, we have the Unified Soil Classification System. Geotechnical engineers sometimes say that “dirt” is a four-letter word. Maybe because it undermines the importance of soil (which is also a four-letter word, by the way), but I like to think it’s because we have better names for all the dirts around the world, and here they are. In fact, there are four specific kinds of sand, but they all fit this one criterion, and it’s all about the size of the particles. At least half of those particles have to make it through a Number 4 sieve (about 5 millimeters), but no more than half can go through a Number 200 sieve (less than a tenth of a millimeter, or about 75 microns). That is a pretty wide range of materials, but I think when you picture sand in your mind, you probably imagine what the USCS would call “clean sand” where less than 12% pass the 200 sieve. So, just to make it simple, let’s say that sand is a material where the particles fit through this, but they don’t fit through this. But still, that encompasses a huge range of different dirts. And I hope, at this point, you’re asking, “Who Cares?” because I would love to answer that question.
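Those sieve criteria are simple enough to encode. Here's a rough sketch of the simplified classification described above (the percent-passing numbers would come from a lab gradation test; the function name and thresholds follow the simplified rules in this article, not the full USCS procedure):

```python
# Simplified USCS-style call based on two sieve results:
#   No. 4 sieve  ~ 4.75 mm opening
#   No. 200 sieve ~ 75 microns opening

def classify(pct_passing_no4: float, pct_passing_no200: float) -> str:
    """Rough soil classification from percent passing two sieves."""
    if pct_passing_no200 >= 50:
        return "fine-grained (silt or clay)"   # mostly fines
    if pct_passing_no4 < 50:
        return "gravel"                        # mostly bigger than No. 4
    if pct_passing_no200 < 12:
        return "clean sand"                    # what most people picture
    return "sand with fines"

print(classify(95, 3))   # clean sand
print(classify(95, 30))  # sand with fines
print(classify(30, 5))   # gravel
```

The real USCS goes further (well-graded vs. poorly graded, silty vs. clayey fines), but the size cutoffs are the gatekeeper for what counts as sand at all.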

In his book, Beiser calls sand “the most important solid substance on earth…the literal foundation of modern civilization…” We use it to make glass, semiconductors, fiber optics, filters, and abrasives, use it to texture surfaces, to play in, for beauty, and more. But, probably more than anything else, sand is an essential ingredient in concrete. And, you know, I’m a civil engineer; this is a channel about the built environment; so I wanna talk about concrete. And, in fact, if this video sparks your curiosity about one of my favorite materials, I have a whole playlist of topics I’ve covered in the past so you can learn more after this. You can’t really overstate how important concrete is and how much of it we use. There’s a bigger conversation to be had about its environmental impacts, but when compared to alternative building materials, it just has so much going for it. It is an extremely low-cost, durable substance that can be made into just about any shape you can imagine. Concrete has enabled us to build structures that last for generations from some very simple ingredients that are (mostly) available across the world: water, cement, gravel, and sand.

Most of those ingredients are mined and used directly as raw materials. And they’re usually mined close by. Transportation makes up a big part of the cost associated with sands used for construction, so the distance between where they’re found and where they need to go is highly correlated with how economical they can be. And that often leads to environmental impacts, some worse than others, depending on local regulations. It turns out that the best sand for concrete often comes from rivers, and mining in rivers can be particularly destructive because the impacts can spread upstream and downstream through changes in the nature of the channel. (I have a series of videos on that topic, too, by the way). Sand isn’t spread evenly throughout the world, and it’s a non-renewable resource. Geologic processes produce it a lot slower than we can use it. So, it makes some intuitive sense to say that we could eventually run out. But here’s a fact that is often overlooked in the discussion: we can make sand.

And it’s not that complicated, either. I talked about the definition of sand a little earlier, but here’s another one: it’s just very small rocks. And we have engineered machines that can transform big rocks into small ones. In fact, I have such a machine in my garage. It’s called a hammer. Some might argue that this isn’t the best use of my time, but I spent about an hour to artisanally manufacture a batch of sand just to hammer this point home. First, you crush the rocks. Then you put them through the sieves to remove the stuff that’s too big or too small. It takes a little extra processing, but this is not grain surgery. And this has a lot of benefits compared to sourcing natural sand. Hard rock quarries and crushing operations are already out there producing coarse aggregates like gravel, so sometimes the smaller stuff is a waste product anyway. It opens up possibilities when natural deposits aren’t available and can move mining operations upland, away from rivers, where the environmental impacts are less severe. And, it can make the concrete stronger. Let me show you what I mean.

I took some sand out of my kids’ sandbox and put it in a rock tumbler for a week to try and simulate the erosion it might see over years in windswept dunes of a desert. Obviously, this erosion reduced the overall size of the particles. So, I classified both materials with the sieves to make them a closer match for a fair comparison. And both batches are within the spec for concrete sand in the US. Looking in the microscope, you can clearly see the differences. The tumbled sand is rounded with roughly spherical, smooth grains. The manufactured sand is jagged with sharp, angular corners. And watch what happens when I pile them up. I filled up two pieces of pipe with the same amount of sand, and then pulled the pipe away. It’s not a huge difference, but you can see the rounded sand spreads out a little further because the particles have less friction. It makes intuitive sense that concrete made from this sand would be weaker than with this sand. Let’s see if that’s true.

I mixed up a simple batch of concrete from the crushed sand and the tumbled sand, keeping the weights of all the ingredients equal. Here’s my recipe if you want to try this experiment yourself. Then I molded some concrete cylinders and let them cure for a week. Most concrete mixes are meant to reach their design strength after 28 days, but concrete strength gain is fairly predictable, so the relative difference between the samples should be consistent over time. Most importantly, my little benchtop hydraulic press would not be able to break these samples if I waited that long. And this load cell is not calibrated, so I’m doing this test in arbitrary Practical Engineering units of force. The tumbled sand cylinder broke at around 2500 units. And the manufactured sand concrete broke at 7500 units. You can easily see the difference in the results. “Goodnight!” It was 3 times as strong. Of course, this is my garage, not a testing lab, and I only did one sample because my arm got tired of hammering rocks. Luckily, people much smarter than me have tested this out, and the results are pretty conclusive that, if you keep everything the same, the angularity of fine aggregate increases the strength of concrete. And that’s the story you probably know if you’ve read anything on this topic. It’s the common explanation for why we don’t use dune sand, the most visible of earth’s sand resources, in concrete. It’s intuitive. Rounded grains don’t lock together. Beiser makes the claim not once but three times in his book. But it turns out it’s not that simple, because strength isn’t the only property of concrete that we care about.

Before concrete has to be strong, it has to be placed. Ask anyone who’s done this kind of work, and they’ll tell you it’s hard. Well, it’s liquid at first, but it’s hard to work with. Concrete is about two-and-a-half times the density of water. It’s heavy stuff, and to get it into the forms often can require a lot of different tools: wheelbarrows, buggies, chutes, pumps, hoses, and more. The better the concrete flows, the easier it is to do a good job of placing it. And that matters. If a mix is too stiff, it can clog up hoses, trap air bubbles, and ultimately lead to poor quality in the installed product. This property of concrete is usually called workability. It’s often measured using a slump test. Fill up a cone of concrete, pull the cone away, and see how far the concrete slumps. But the problem with workability is that, in one big way, it works against strength. And it all has to do with water.

What happens when the concrete truck shows up to your job site and the mix is too stiff? Depends on if the engineer is there or not, but in a lot of cases, you just tell the driver to add a few gallons of water to the mix. More water; better flow; easier to place. It’s pretty straightforward, but there’s a reason you don’t want the engineer to know: water decreases concrete strength. I’ve done a whole video about this with some garage demos, so again, check that out if you want to learn more. The gist is that the ratio of water to cement is one of the most important factors determining concrete's strength when it cures. Cement isn’t like some types of glue that harden as the water or solvent evaporates. It goes through a chemical reaction, incorporating the water into the final product. That’s why we say concrete “cures” instead of “dries”. But, cement can only react with around 35 percent of its weight in water, so any more than that is just taking up volume in the mix that could be used by the stronger ingredients. More water; less strong. And here’s where the shape of the sand grains comes into play.
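The relationship between water content and strength can be sketched numerically. This follows the general form of Abrams' law (strength falls off exponentially as the water-cement ratio rises); the constants here are purely illustrative, not from any design code, and the mix weights are made up:

```python
# Water-cement ratio drives cured strength. Cement can only react with
# roughly 35% of its weight in water; extra water just dilutes the mix.

def wc_ratio(water_kg: float, cement_kg: float) -> float:
    return water_kg / cement_kg

def relative_strength(wc: float) -> float:
    """Abrams'-law-style trend: strength drops as w/c rises.
    Constants A and B are illustrative only."""
    A, B = 100.0, 7.0
    return A / (B ** (1.5 * wc))

stiff = wc_ratio(160, 400)  # w/c = 0.40: harder to place, stronger
soupy = wc_ratio(240, 400)  # w/c = 0.60: flows nicely, cures weaker
print(f"w/c {stiff:.2f}: relative strength {relative_strength(stiff):.0f}")
print(f"w/c {soupy:.2f}: relative strength {relative_strength(soupy):.0f}")
```

The exact numbers don't matter; the shape of the curve does. Every extra gallon in the drum trades strength for workability, which is the trade-off the rest of this section explores.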

I did a little garage slump test to gauge the workability of those two mixes I made for the earlier demonstration. Here’s the rounded sand mix… and here’s the manufactured sand. “Haha, no slump at all.” Honestly I expected this to be a subtle difference, but it was like night and day. They weren’t even close. So I wondered, what would happen if, instead of holding ingredient ratios constant, I used the workability as the controlled variable? Let’s find out. First I used the manufactured sand with enough water to get it to a workable level. 100 milliliters got it to here, which is a bit better than the first one. Then I did a second mix with the tumbled sand, slowly adding water and running the slump test until it was pretty close. It took only 70 ml of water to make them match, 30% less than the first batch. After a week, I tested the samples. The tumbled sand sample broke at 4,800 units. And the manufactured sand broke at only 4,300 units. The tumbled sand with the rounder grains was stronger this time (by about ten percent), and it’s all due to the lower water content in the mix.

So yes, if you use the same amount of water, more angular sand like you might find from a river or manufactured sand is better, but that’s not what happens in real construction. I say this with many, many caveats, but very generally, you only add as much water as you need for workability. Rounded sand gives you better workability, so you can add less water, and thus get stronger concrete. This idea that we can’t use wind-blown sands in concrete because of their shape is a myth. In fact, the American Concrete Institute has a bulletin that says it better than I can:

“The influence of fine aggregate shape and texture on the strength of hardened concrete is almost entirely related to the resulting water-to-cement ratio of the concrete…”

I tried to track down the original source of this idea that we can’t use rounded grains in concrete, but got nowhere. Beiser cites an article from the UN, which itself cites a 2006 paper about using two types of desert sand from China in concrete. But that paper doesn’t mention the roundness of the particles at all. They didn’t include any measure of the shape of the grains in their study, and they didn’t make any suggestions about how that particular property of the desert sand may have affected the results of their tests. In fact, the conclusion of that paper includes saying that desert sand is a feasible alternative to other types of fine aggregates used in concrete. And the whole reason it was a subject of scientific study at all has to do with size, not shape. This is a widely used specification for the distribution of particle sizes of fine aggregate for concrete. Any sand in this area meets the spec. And here’s the soil used in that paper. Even if the conclusion was that it doesn’t work, I think this would have a lot more to do with the results than the shape of the particles.

And that really gets to the heart of this whole discussion. Fine aggregates are found throughout the world. We can even make our own. And concrete is like baking; different ingredients can change the end results. But just like regional bread recipes evolved based on the availability of local ingredients, the construction industry has developed a lot of ways to use different local materials to achieve good structural properties. The real challenge, like many things in engineering, is cost.

It can be more expensive to manufacture sand compared to mining raw materials that can be put directly in a mix, especially when you factor in the other ingredients, like chemical admixtures, that might be required to make it more workable without adding too much water. It’s expensive to transport better quality sand from far away, rather than finding it close to a job site or batch plant. It’s expensive to mine sand in adherence to environmental regulations that are becoming stricter worldwide. It’s catchy to say there’s a scarcity of fine aggregates on earth, but I think it’s misleading. “Sand is getting a lot more expensive than it used to be” just doesn’t make as nice of a headline. And the tricky part is that, in many ways, those costs have always been there; we’ve just externalized them onto the environment and our future.

All the ingredients in concrete are mined or harvested just like other natural resources. It’s just that concrete is made on a scale that blows most other materials out of the water. It’s a huge business, and there’s lots of money flowing, which means a lot of potential environmental harm and social conflict as a result. That’s especially true in places that don’t have robust oversight and enforcement of how sand is extracted. And I think it’s important to point out that the low cost of sand, because of its simplicity as a material and its abundance, is a big part of why we use so much concrete in the first place, even in situations where it’s not necessarily the best material choice in other respects. Everything in engineering is a tradeoff, and if the economics around sand change, the engineering and construction industries can change with them. Look at other examples of this.

Diamond used to be exclusively a mined material, but now we can make it in a lab. Synthetic diamond gemstones used in jewelry are now less expensive than mined diamonds, but, admittedly, there’s a lot more to that economy than the costs to make them. What I think is more interesting is that 99 percent of diamond used in the world for industrial purposes is synthetic. It used to be a rare mineral, but now you can pick up a diamond drill bit or saw blade from the hardware store for a fairly small premium.

Timber is another example. Natural forests used to be the only source, but now plantations - trees planted specifically for harvest - make up more than a third of the wood we use globally. And engineered lumber like plywood, OSB, and structural composites can make more efficient use of raw materials. I’m pointing out these examples, not to say they’re good or bad - there are pros and cons in both cases - but just to illustrate how our demand for materials in the construction industry changes with the supply, and how technology can have a huge impact on that. And there’s another parallel between timber and sand: they both can be renewable.

I had another barrel in the rock tumbler I wasn’t going to use, so I broke up some chunks of concrete and threw them in. I ran these through the grits, just like you would with any other rock in a tumbler. Concrete is pretty soft compared to most natural rocks, so it didn’t polish up that nicely, but the result is still pretty cool. You can really see the constituent materials of the concrete after it spent so long rolling around in there: the small and large aggregates and the cement paste locking them together. But the point of this demo is that concrete is pretty much just rock; that’s mostly what it’s made of in the first place. And just like the rocks I crushed to create manufactured sand, concrete can be recycled into aggregates that get reused in the construction industry, either in new concrete or other materials, reducing demand for virgin sources.

There’s a lot changing in the construction industry, and a lot of growth in the need for materials like sand and gravel. But I don’t think it’s fair to say the world is running out of those materials. We’re just more aware of all the costs involved in procuring them, and hopefully taking more account of how they affect our future and the environment.

October 01, 2024 /Wesley Crump

When Infrastructure Gets Hacked

September 17, 2024 by Wesley Crump

[Note that this article is a transcript of the video embedded above.]

This is a water tower, or as the pros would say, an elevated storage tank. Pretty common here in the US, especially in flatter areas where there’s no nearby hillside to build a ground-level tank. I have a whole video about how these work. In the most basic sense, a water tower is a buffer between the ever-changing demands for fresh water in a distribution system and the high-service pumps at the treatment plant that like to run at a constant rate. The level in the tank is a key measure of performance. If it’s high, pressure in the system is good, and the pumps can shut off… unless someone has messed with the computer system that controls that relationship.

In early 2024, that’s exactly what happened in Muleshoe, a small town in the Texas panhandle. A citizen noticed water spilling out of the elevated tank and reported it. When the city went to investigate the problem, they didn’t find a stuck valve, malfunctioning sensor, or broken pump contactor. The water tank was overflowing because of a deliberate attack by a hacking group linked to the Russian military.

Some water was wasted, but ultimately, no one was hurt, and nothing was damaged in the attack. Muleshoe was probably just a victim of opportunity. Having grown up in the Texas panhandle, I think I’m safe saying that most towns there aren’t necessarily considered high value targets for international criminal campaigns. But that’s the thing with cyber security these days. It’s not just for the organizations with big secrets and lots of money. Even in tiny west Texas towns, critical pieces of infrastructure are run by computers, and a lot of them are connected to a network, making them vulnerable to bad actors. I’m not a security expert; I’m just a civil engineer. But, I’ve worked on a lot of projects where digital systems interact with infrastructure, and I’ve collected some really interesting stories about how that can go wrong that I thought would be fun to share. So let’s peek behind the control panel and talk about them. I’m Grady, and this is Practical Engineering.

Once upon a time, everything from the power grid to drinking water distribution systems, industrial manufacturing, oil and gas, dam operations, and more was run without the aid of computers. Calculations were done manually, and engineers carried slide rules. Decisions were made by skilled operators, valves were opened and closed by hand, wear and tear was measured by human eyes, and so on. It’s easy to see the opportunities for digitization. If you’re not relying on a person for everything, you can be more efficient, reduce the chance of error, and improve safety by not requiring workers to be so hands-on. And there are quite a few ways to computerize the control of industrial processes like operating a pipeline or a water system.

One of the most widely used is called SCADA or Supervisory Control and Data Acquisition. This is a fairly standardized architecture used in a wide variety of industries like manufacturing, oil and gas refining, and most of the utility systems we rely on like electricity, water, sewer, traffic lights, and more. Let’s look at an example of a basic municipal water system to see how it works.

Getting fresh water distributed to a city is a big undertaking that requires a lot of equipment, including valves, tanks, pipes, pumps, chemical systems, and more. Some of these will include sensors to take some kind of measurement, such as the water level inside a tank or the flow rate within a main line. Others will include actuators, devices that can do something like turn on a pump or open a valve. All the devices connect to one or more remote terminal units or RTUs. All the RTUs are then networked to a central supervisory computer that sends control commands and collects the data. This computer normally includes the Human Machine Interface or HMI. This is where an operator interacts with the system, and they’re usually set up as simplified diagrams of whatever’s being controlled.
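To make that architecture a little more concrete, here’s a toy sketch in Python of the data flow: field sensors report to an RTU, and the supervisory computer polls every RTU into the central tag table that the HMI displays. All of the class names, tag names, and values here are invented for illustration; a real SCADA system runs on industrial hardware and protocols, not a few lines of Python.

```python
# Toy model of SCADA data flow: sensors -> RTU -> supervisory computer.
# Names and values are hypothetical, purely for illustration.

class RTU:
    """A remote terminal unit holding the latest readings from its devices."""
    def __init__(self, name):
        self.name = name
        self.tags = {}            # tag name -> latest sensor reading

    def read_sensor(self, tag, value):
        self.tags[tag] = value    # a field sensor reports a measurement

    def poll(self):
        return dict(self.tags)    # the supervisory computer requests all tags

def supervisory_scan(rtus):
    """Collect every RTU's tags into the central table the HMI displays."""
    master = {}
    for rtu in rtus:
        for tag, value in rtu.poll().items():
            master[f"{rtu.name}.{tag}"] = value
    return master
```

In a real system, each poll happens over a network link on a fixed cycle, which is exactly why a compromised connection to that network is such a powerful foothold.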

Systems like this can be programmed to maintain certain conditions and automatically adjust equipment to keep everything running smoothly and as expected. Automated systems never get bored of doing the same thing over and over again, they don’t need to sleep, and they don’t mind being exposed to hazardous chemicals. For example, let’s look at the high service pumps and water tower. These are often configured in a lead-lag system with multiple pumps for redundancy and reliability. When the level in the tank drops below a set point, the lead pump turns on. With smaller demands, this will fill the tank to the upper set point, at which point the lead pump turns off. But under higher demands, the lead pump might not be enough. If the level continues dropping while the lead pump is running, a lag pump with a lower set point will kick on. With both pumps running, the tank will fill, eventually reaching the upper set point and kicking both pumps off. If you want to see an example of this in action, check out my Practical Construction series where I embedded on a construction site of a sewage pump station and documented the process from start to finish.
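As a rough sketch of that lead-lag logic, here’s how the set-point rules might look in code. The level values are made up, and a real pump controller would also handle alternating which pump leads, alarms, and equipment failures; this only captures the on/off rules described above.

```python
# Hypothetical lead-lag pump control, as described in the text.
# Set points (tank level in feet) are illustrative, not from any real system.
LEAD_ON = 20.0   # lead pump starts when the level drops below this
LAG_ON = 18.0    # lag pump starts at this lower set point
ALL_OFF = 28.0   # both pumps stop once the tank refills to here

def update_pumps(level, lead_running, lag_running):
    """Return (lead_running, lag_running) given the current tank level."""
    if level >= ALL_OFF:
        return False, False       # tank full: everything off
    if level < LEAD_ON:
        lead_running = True       # demand outpacing supply: start lead pump
    if level < LAG_ON:
        lag_running = True        # level still dropping: bring in the lag pump
    return lead_running, lag_running
```

Notice the gap between the on and off set points: once a pump starts, it keeps running until the tank actually fills, so the pumps don’t rapidly cycle on and off around a single threshold.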

That’s a basic example, but you get a sense of how useful a SCADA system can be. You don’t have to manually control the pumps or be on site to check the tank level. And you can change those set points. Maybe during seasons when demand is low, you don’t want the tank full all the time, because the water spends too much time in there where its quality can degrade. You don’t have to hire an electrician to reconfigure a control panel or put a technician in a dangerous spot to adjust floats in the tank. Any trained operator can just change the values in the HMI. They’re designed for simplicity, and in fact, I’ve always thought they often look a lot like old video games. There’s something really nostalgic about the basic graphics HMIs are often designed with. It’s easy to forget that they’re connected to real systems, often large and complex systems, where the stakes are high if something goes wrong, which is exactly what happened in Muleshoe.

According to security researchers, Muleshoe’s SCADA system was breached by a group called the Cyber Army of Russia Reborn through a portal set up so the city could have remote access. On January 18th, they posted this video supposedly showing them manipulating the HMIs of two small Texas water systems. Judging by the haphazard clicking around, it seems the hackers knew a lot more about gaining access than they did about how water systems work. Most of the video seems to be someone clumsily navigating screens and changing values at random. Nevertheless, they managed to change a set point on one of Muleshoe’s booster pumps, leading it to stay on even after the water tower was full, and eventually causing it to overflow.

Ultimately the attack was pretty harmless, but it could have been worse. A similar event happened in Oldsmar, Florida in 2021 when a hacker reportedly changed the sodium hydroxide feed in the water treatment plant from 100 parts per million to 11,000. The event brought huge national attention to the issue of information security for critical infrastructure. Two years later, the FBI concluded it probably was an employee mistake and not an actual intrusion, but it was still a strong reminder of the type of havoc that could result from a SCADA system with poorly secured remote access.

Even further back than that, a SCADA system controlling sewer works in Maroochy Shire, Australia was hacked by a disgruntled ex-contractor, releasing thousands of gallons of untreated sewage into parks and waterways in 2001. And really, there’s no telling how many similar attacks have happened across the world. A lot of them don’t make the news, and even though they’re often investigated by authorities, the details aren’t released for fear of sharing potential vulnerabilities that aren’t patched up in other systems. It’s a constant arms race happening mostly behind the scenes. Hackers are constantly probing systems for vulnerabilities, especially ones that are previously unknown (called zero-days, because that’s how long they’ve been known about when exploited). But access to industrial control systems isn’t the only digital threat to infrastructure.

In May of 2021, the Colonial Pipeline Company, owner of the largest refined petroleum pipeline in the US, was attacked by another Russian group called DarkSide. They didn’t gain access to any pumping or control systems. Instead, they installed ransomware on the billing computers, locking the company out, and stole sensitive information, threatening to release it if the company didn’t pay. Not knowing the extent of the threat, the company shut the pipeline down. Over the next six days, a gasoline panic struck the eastern US, with gas hoarding emptying out more than 12,000 filling stations. A state of emergency was declared, and rules governing tanker trucks were relaxed to allow more fuel to travel by road.

With FBI oversight, Colonial paid the ransom, 75 bitcoins, or about 4.4 million dollars at the time, within hours of the attack. But the tool provided by the group to unlock the system was so slow, that they ended up using mostly their own backups to get the billing system back online. Some of that ransom was eventually recovered, but it took six days to get the pipeline started up again, and there’s still a ten million dollar reward out for information leading to key leaders of the group. So how did they do it?

Really it wasn’t very sophisticated. An employee was reusing a password that had been leaked in a database from a prior breach. They just logged into Colonial’s VPN with purchased credentials. That’s all it took to take down one of the US’s most important pipelines for six days. Again, with that kind of access, it could have been a lot worse. And one thing you’re probably thinking is, “why would you have the ability for remote access to critical systems like this at all?” Is it really worth exposing yourself to the entire world of nefarious actors, just to save a commute to the HMI? And, actually, a lot of critical systems don’t have an outside connection. They’re air-gapped. But even that’s not a foolproof system.

One of the first, and maybe the best-known, examples of infrastructure hacking, especially one designed to cause permanent physical damage, is Stuxnet. Although the details are pretty murky, Stuxnet seems to have been developed by the US and Israel as a military-grade cyber weapon. It was a worm, first reported in 2010, that specifically targeted SCADA software on Windows computers. Stuxnet famously exploited four zero-day vulnerabilities to spread and interact with SCADA systems. If a computer didn’t have the target software, it would do nothing except replicate. But when it found a computer with SCADA software and some very specific motor drives connected, it would send a command to rapidly speed up and slow down the motors while faking sensor data so that the SCADA system wouldn’t shut down or throw an alarm that something was awry. Those specific motor drives were pretty much only used in gas centrifuges for enriching uranium so it could be used in nuclear weapons. It’s pretty clear what the worm was designed to target, and it did work. Stuxnet reportedly destroyed around a fifth of Iran’s nuclear centrifuges, and probably shortened the lifespans of many more. And it was introduced to the facilities’ networks not through a remote connection (the system was air-gapped) but from an infected USB drive.

And that really is the key to all this. Your cybersecurity is only as strong as one employee’s willingness to plug in a USB drive, reuse a personal password at work, click a deceptive link in an email, or hold the door open for someone following behind them. And most of us are guilty of doing these things, at least every once in a while. But these days, no matter who you are or what you do, you probably use some kind of digital device in your life. And so whether you’re the operator of a tiny water system in rural Texas or manage the largest gasoline pipeline in the US, you kind of have to be a cybersecurity expert too. The stakes are high. Digital systems interact with every aspect of our daily lives and basic needs: water, electricity, sanitation, public health, transportation, and more can all be seriously disrupted by someone or some group, anywhere in the world, if we let our guard down. With great computer power comes great computer responsibility. And even though many of these industrial control systems are only used or understood by a small group of people, security through obscurity just isn’t realistic anymore.

September 17, 2024 /Wesley Crump

The Hidden Engineering of Landfills

September 03, 2024 by Wesley Crump

[Note that this article is a transcript of the video embedded above.]

This is the Puente Hills Landfill outside of Los Angeles, California. The first truckload of trash was dumped here in 1957, and the trucks just kept coming. For more than five decades, if you threw something away in LA County, there’s a good chance it’s buried somewhere inside this mountain of waste. At its peak, Puente Hills was accepting around four million tons of trash every year, making it one of the largest landfills in the country. It closed in 2013, creating a time capsule of everyday life and consumption patterns over a span of 56 years. But Puente Hills is also a time capsule of landfill engineering itself. In 1976, right in the middle of its lifespan, sweeping federal regulations changed how we deal with solid waste forever.

You probably don’t think too much about where your trash goes, and that’s kind of the whole point of the solid waste industry: to make sure you have the ability to throw something away without it having a serious negative consequence on the environment or public health. There’s a larger conversation to be had about the amount of waste we generate and how much of it can be recycled or reused, but there is always going to be stuff that just doesn’t hold enough value to be kept. Trash is an inescapable element of the human condition. And, I think you’re going to be surprised how complicated that really is. When Puente Hills opened in the 50s, a landfill was pretty much just a hole in the ground where trash was dumped. By the time it closed, landfills were highly engineered holes where trash gets dumped. And I have a scale model of a landfill in the garage to show you how it all works. I’m Grady, and this is Practical Engineering.

There are lots of kinds of waste in this crazy world, but one of the biggest sources is just you and me throwing stuff in the trash. The technical term is municipal solid waste, since its collection is usually coordinated at the city level. There are a lot of ways to manage it once collected, but the most common by far is disposal in a landfill. And, one of the biggest parts of landfill engineering is just deciding where to put one in the first place. The main goal of a landfill is to maximize the volume of waste that can be stored there while minimizing the cost and the environmental impacts too, which turns choosing a suitable site into a giant geometry problem.

Digging a hole sounds like an obvious choice, but consider this: digging a hole is expensive, and not digging a hole is free. There are costs of excavating tons and tons of soil just to get it out of the way so it can be replaced with trash, and costs of hauling away all that soil (since your goal is to maximize the volume on the site). Plus, you have to avoid the water table, any unsuitable geology, and the challenges of building and working deep below the surface of the earth. That’s why landfills mostly build up into what sanitation professionals call the “air space.” Looking upward, it may seem like the sky is the limit, but anyone who’s built a tower of anything, let alone trash, knows better. The waste pile gets less stable as its height increases, requiring shallower slopes. And the pressures at the bottom go up too, which can lead to settlement and damage of facilities. Plus, there are visual impacts. The bigger the garbage heap, the bigger the eyesore, and people are only willing to look at a landfill so tall.

They can’t be too close to airports, because they attract birds that can interfere with planes. And they can’t be too close to homes, parks, playgrounds, and other places people congregate, for obvious reasons. Of course, there are floodplains and wildlife habitat to avoid as well. And you don’t just need a place to put the trash. You also need a scale house to weigh the trucks coming in and out, a shop and storage for the equipment, and sometimes a place for ordinary citizens to drop stuff off. Finally, you need a spot that can handle the huge increase in truck traffic coming and going, practically nonstop. Pretty much, if you can get a college degree in it, it’s going to come into play when siting a landfill: geology, geography, politics, archaeology, public relations, biology, every kind of engineering, and a lot more.

But once you have your landfill, you can’t just start dumping trash. Let me show you why with a demonstration with some help from my shop assistants. I have my hole dug, and we’ll start adding some trash. So far, no major problems. But eventually, it’s going to rain. And you can immediately see the issue. Granted, this is more of a flood than a drizzle, but it gets the point across. All that water is going to filter through the garbage to the bottom of the hole, and, eventually, into the underlying soil. It might go without saying, but I’m going to say it anyway: We really don’t want garbage juice percolating into our soils. Mainly because it can contaminate sources of groundwater, but also because it can migrate well beyond the limits of the landfill, causing all sorts of environmental troubles. So, modern landfills use a bottom liner to keep waste separate from the underlying soils. Often this consists of a thick sheet of plastic, carefully tested and welded together into an impermeable membrane. Even the area between the plastic welds is tested using air pressure to make sure there are no leaks. Another option is thick clay soil compacted to create a watertight layer. In many cases, the two options are combined, so you end up with this intricate structure of different impermeable layers stacked together.

Maybe you still see a problem with this solution on its own. Now when it rains, the landfill just fills up with water. This causes issues with stability and settlement. It causes garbage to decompose more quickly, leading to odor and temperature problems. Plus, you just can’t work on top. There’s no way for trucks to unload trash on top of a garbage swamp. So we need a way to get the garbage juice out, without letting it flow into the soil below. By the way, garbage juice isn’t a technical term. It’s actually called leachate, so I’ll use that from here on out. And all modern landfills have sophisticated leachate collection systems to keep the waste as dry as possible and avoid the issues I mentioned. Usually, this consists of a system of perforated pipes covered in a layer of sand, draining to sumps, and eventually leading out of the waste.

I built a little leachate collection system in my model landfill using a small tube so you can see this in action. Now my clay bathtub has a drain. When the rain comes, the water that makes its way into the waste is able to flow out of the landfill, keeping it from becoming a swampy mess. This is a little simplified compared to a real landfill. I’ve made a video all about French Drains, which is much closer to what a real leachate collection system consists of, if you want to learn more after this. Obviously, in my example, the leachate system has to penetrate the bottom liner, which can be a potential source of leaks. So these penetrations are sealed really carefully in the real world, or the collection system just uses pumps and risers that run up the slope of the landfill to the top, so no liner penetration is necessary.

Of course, now you have a stream of leachate you have to deal with. Actually, leachate management is one of the biggest costs of running a facility like this. Some landfills send it off to a treatment plant that can clean it up. Some have ways to treat it on-site with settling ponds, evaporation, biological treatment, and even plants that can consume and convert landfill leachate into waste that’s easier to dispose of (maybe even back into the landfill itself).

Finally, the bottom of our landfill has all the necessary pieces, but the work doesn’t stop there. Remember that volume is everything in a landfill. For all the effort that goes into finding a location and building the infrastructure, it’s essential that we get as much trash in here as possible. You probably know this, but municipal garbage just isn’t that dense. Maybe you’ve had to smash a few more bags in the can because you missed the collection one week. If so, you know there’s usually a lot of room for densification. The trucks that collect garbage usually have a way to compact it to make more room in the box before needing to be emptied. But once the trash is at the landfill, there’s still an opportunity for compaction. Landfills often use massive roller compactors with enormous teeth and giant blades to grade out and compress waste and get as much as possible into the site. It saves money, and it’s good stewardship of the space. But density isn’t the only challenge with day-to-day operations.

Despite what you may have heard, landfills are kind of gross. I mean, their whole point is to accept the stuff we don’t want to put anywhere else. But putting it all in one place creates a lot of problems: pests, odors, windblown waste, fires, birds, and more. So to mitigate some of that, most places require that the garbage be covered up at the end of every day. This “daily cover” can take a lot of forms. The basic approach is just to put a layer of soil over the top of the working face at the end of the day.

When I do this in my model, you get a sense of the problem. All that clean daily cover is taking up precious space in the landfill. One option is to trim it back off each morning before trucks start arriving, but that’s a Sisyphean task of just moving tons and tons of soil around each day. Other alternatives for daily cover are tarps, or just holding back certain types of waste that are more inert like foundry sand, foam, paper, and shredded tires. They’re going in anyway, so you might as well use them on top to cover the more disagreeable stuff overnight. Those alternatives can also help avoid leachate getting perched within the waste, encouraging it to continue downward to the collection system.

Ideally, a landfill will last for decades, slowly filling up by packing as much waste as possible. Throughout the course of operating a landfill, there’s constant testing of groundwater, surface water, leachate, air quality and more to make sure they’re not exceeding limits. Landfills are usually built in smaller cells so you don’t have to manage this huge area of waste all at once. A cell fills up, you put soil over the top (called interim cover), and start a new one within the landfill. But eventually, you reach the top of the airspace, and the landfill reaches the end of its useful life. And closing a landfill is not an easy job. Of course, you have to cover all that waste up, creating a mountainous sealed tomb of garbage. That final cover has to keep water out, to reduce the volume of leachate you’re having to collect and treat over time. But it also has to keep the garbage in, and not just the garbage itself, but anything else that comes with it like smells and leachate and pests. And it has to do it basically forever. So, just like the bottom liner, the final cover over a landfill is usually a system of multiple layers, including compacted soil, membranes, and fabrics. And then you have to get the grass to grow, to protect the soil from erosion and damage over time. I don’t have time to wait for grass to grow in my demo, so I’m cheating a little bit.

But the fun isn’t quite over yet. The waste may be sealed up, but that doesn’t mean it’s inert. In fact, there’s a lot of chemistry and biology happening inside a landfill, and a lot of those reactions generate gases like methane and hydrogen sulfide that can create pressure, heat, smells, greenhouse effects in the atmosphere, and the potential for explosions. So, one of the steps in landfill closure is to install wells that can collect the gases from the waste. Usually, these consist of vertical pipes connected to a blower that constantly draws air to a collection point. There’s a lot that goes into these systems too. You can’t pull too hard, or you might draw oxygen into the landfill, changing the reactions and microbiological processes, and creating a potential for a fire within the waste. Plus the gas includes a lot of humidity, so managing condensation creates another liquid stream that has to be collected and treated. Once it’s collected, the landfill gas can be flared, combusting it into less environmentally harmful constituents. Another option is to put it to beneficial use to create heat or even electricity. The Puente Hills landfill I showed earlier has a gas-to-energy facility that’s been running since 1987, and even though the landfill is now closed, it currently provides enough electricity to power around 70,000 homes.

Once a landfill is closed, there’s not a lot you can do with it after that. It’s a big, sealed up, mountain of trash, after all. Owners are generally required to look after a closed landfill for at least 30 years afterwards, inspecting for leaks, monitoring the air and water, and repairing any damage. Those costs have to be built into the rates they charge, since there’s not a lot of benefit (or revenue) after closure. But, with all that open space and carefully-maintained landscaping, one option that many landfill operators are trying out is parks. And I love this idea. They say, “We’re willing to put our money where our mouth is and invite the public to spend time here, to enjoy this place that used to be, you know, one of the least enjoyable places you can imagine.” Puente Hills in California has big plans, including trails on the slopes, biking, slides, gardens and more. It looks like it will be a really nice place to visit when it’s done. And it also puts the whole concept of landfills in perspective.

Of course, we have a lot of room for improvement in how we think about and manage solid waste in this world. Landfills seem like an environmental blight, but really, properly designed ones play a huge role in making sure waste products don’t end up in our soil or air or water. It’s not possible to landfill waste everywhere. Many places are too densely populated or just don’t have enough space. But where they are, the environmental impacts are relatively small. Just consider the resources that go into them. I pay about 20 dollars a month, probably a little on the low end of the national average, and that buys me 64 gallons (about a quarter of a cubic meter) of space in a municipal landfill per week. Of course, I don’t fill the can every week, and that trash gets compacted. But still, do that for a decade, and your 20 bucks a month has paid for the volume of a modest apartment. It’s covered the cost of building the lining and collection systems, the environmental monitoring, the daily operations, the closure, the gas collection, and the maintenance for at least three decades afterwards and for your trash to stay there effectively forever. It’s (almost) free real estate, not that you’d want to live there. But my point is: landfills are a surprisingly low-impact way to manage solid waste in a lot of cases. I hope the future is a utopia where all the stuff we make maintains its beneficial value forever, but for now, I am thankful for the sanitary engineers and the other professions involved in safely and economically dealing with our trash so we don’t have to.
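If you want to check that back-of-the-envelope math, here’s the arithmetic. The 64-gallon can and the decade of weekly pickups come from the text; the “modest apartment” dimensions are my own assumption.

```python
# Quick check of the landfill-volume math above. The monthly rate and can
# size are from the text; the apartment dimensions are assumed.
GALLON_M3 = 0.003785         # cubic meters per US gallon
can_m3 = 64 * GALLON_M3      # one week's purchased landfill space
decade_m3 = can_m3 * 52 * 10 # ten years of weekly pickups

# A "modest apartment": say 50 square meters of floor with 2.5 m ceilings.
apartment_m3 = 50 * 2.5

print(round(can_m3, 2))   # ~0.24 cubic meters per week, as stated
print(round(decade_m3))   # ~126 cubic meters over a decade
print(apartment_m3)       # 125 cubic meters -- about the same volume
```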

September 03, 2024 /Wesley Crump

Why Are Texas Interchanges So Tall?

August 20, 2024 by Wesley Crump

[Note that this article is a transcript of the video embedded above.]

This is the Dallas High Five, one of the tallest highway interchanges in the world. It gets its name from the fact that there are five different levels of roadways crossing each other in this one spot. In some ways, it’s kind of atrocious, right? It’s this enormous area of land dedicated to a complex spaghetti of concrete and steel; like the worst symbol of our car-obsessed culture. But in another way, it really is an impressive feat of engineering. 37 bridges and more than 700 columns are crammed into this one spot to keep the roughly half a million vehicles flowing in every direction each day.

They say everything’s bigger in Texas, but that’s not always true when it comes to engineering projects in the US. The tallest concrete dam is split between Arizona and Nevada. The longest bridge span is in New York. The longest road tunnel is in Alaska, and the longest water tunnel, not only in the US but the whole world, is the Delaware Aqueduct in New York. The largest hydroelectric plant is the Grand Coulee Dam in Washington, while the largest nuclear plant is in Georgia.

But one thing that Texas really does do bigger is highway interchanges. If you’ve driven from one major Texan highway onto or over another, you may have been astonished to find yourself and your vehicle well over a hundred feet or 30 meters above the ground. There’s no clearinghouse of data for flyover ramp heights, as far as I can find. Plus there’s the complexity of what a true height really means since many interchanges use excavation below grade for the lower level. Still, even the most conservative estimate puts the High Five taller than the Statue of Liberty from her feet to the top of her head. And if you do a little digging, you’ll find that many, if not most, of the tallest highway interchanges in the world are right here in the Lone Star State. Let’s talk about why. I’m Grady, and this is Practical Engineering.

The idea of a freeway really started in the 1920s with what’s now the Autostrada A8 in Italy: an automobile-only road with controlled access. Freeways are separated from local roads with limited ways to get on and off. And if you’ve driven a vehicle in the past century, the idea of a controlled-access freeway is pretty much taken for granted. Smooth curves and limited chances to enter or exit mean more speed and more capacity. But eventually, those big roads intersect other roads (sometimes other big roads) and that creates an obvious challenge.

Unlike most roads that cross at the same level on the ground, or as engineers say, “at grade,” freeways use grade separation at intersections. Roads go over or under one another. No traffic signals, stopping, or interruptions. Again, this is nothing groundbreaking. But what if you want to turn from one road onto the other? Just like that, we’ve gone from an intersection to an interchange. And this is where things get a lot more complicated. But we have to build up to it.

The diamond interchange is probably the simplest way to get grade separation because it kind of half doesn’t. Through traffic on the freeway flows right by, in most cases without any need to slow down. But that’s not true at the crossroad. Ramps enter and leave the highway at gentle angles and meet the crossroad nearly at right angles. Viewed from above, the ramps form a rough diamond shape, giving the interchange its name. The intersections of the ramps and the crossroad are just that: intersections. They are usually controlled by stop signs or traffic signals. Diamond interchanges can often get away with having just one bridge, a relatively small one carrying the crossroad over the highway. So, this can be the cheapest and easiest type of interchange to build. But those intersections create limitations on how much traffic it can handle, so it’s really only used when the crossroad is a minor one.

This kind of interchange is sometimes called a service interchange, in contrast to a system interchange, where two controlled access highways cross. As traffic increases, the only way to increase capacity is to eliminate at-grade intersections. So, the largest interchanges implement grade separation for every lane. The classic system interchange is the cloverleaf. Four ramps form a diamond, usually for the right-hand turns. These are directional ramps, that is, they curve toward the ultimate direction a traveler is trying to go. You exit right and end up driving to the right. The OTHER four ramps give the cloverleaf its name. The loop ramps, usually used for left-hand turns, curve around while ascending or descending so they can cross over themselves. So, you can get traffic flowing in any direction with no at-grade intersections and just one bridge.

The loop ramps make the whole thing look like a four-leafed clover, but finding yourself on this type of interchange doesn’t usually feel very lucky. For one, the loops are often pretty tight, requiring motorists to slow way down. And for two, there’s the weave. Consider traffic entering the highway from one of the loops. In the same place vehicles are trying to get back up to speed and merge left onto the freeway, drivers trying to exit the highway are slowing down and moving right. This inevitably creates traffic as people struggle to merge and cross paths with one another. Along with suboptimal traffic conditions, cloverleaf interchanges eat up a lot of land. When cloverleaf interchanges were at their height of popularity in the mid-20th century, land was plentiful, and there were fewer cars, but as the volume of traffic increased AND the cost of land went up, engineers had to come up with new solutions to build better grade-separated highway crossings. And so they did.

Now, there’s such a huge variety of freeway interchange designs that it would be impossible to cover them all: the turbine, the windmill, the braided interchange, the ITL, mixes of various designs, and more. Each of these balances the constraints of a project like this in a different way: land requirements, cost, capacity, safety, et cetera. And the design that generally provides the most capacity on the smallest footprint, often at the highest cost, is the stack.

Like the cloverleaf, a stack has the four directional ramps, usually for the right-hand turns. But the exits for the left-hand turns move off the main highway to avoid the weaving problem, and the ramps fly over the middle of the interchange, where they meet up with the opposite directional ramps. These ramps are often called flyovers, and it’s easy to see why. The gentle curves and elevation changes of the stack mean that drivers can safely maintain speed whether they’re going straight through the interchange or changing direction. The curved ramps often bank to the inside of the curve, called superelevation, making it even easier to maintain speed through the turn. This conventional configuration is called a four-level stack. There’s one level for the freeway, another for the crossing freeway to pass over, and two levels for the flyovers. It’s bridges on bridges, each one providing enough clearance underneath for large trucks, so these upper ramps end up pretty high off the ground. Four-level stacks are actually fairly common in the US these days. They are impressive structures in their own right, but this is where Texas takes it to another level, literally. And it mostly has to do with feeder or frontage roads.
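The difference between a tight cloverleaf loop and a sweeping, superelevated flyover can be put in numbers with the standard point-mass curve relation, v² = g·R·(e + f), where e is the superelevation rate and f the side-friction factor. This is just a rough sketch: the e and f values are typical design numbers and the radii are illustrative, not taken from any specific interchange.

```python
import math

# Point-mass curve formula: v^2 = g * R * (e + f).
# e (superelevation) and f (side friction) are typical design values;
# the radii below are illustrative, not measured from a real interchange.

G = 9.81  # m/s^2

def comfortable_speed_kmh(radius_m: float, e: float = 0.06, f: float = 0.14) -> float:
    """Approximate maximum comfortable speed (km/h) on a curve of the given radius."""
    v_ms = math.sqrt(G * radius_m * (e + f))
    return v_ms * 3.6

# Tight cloverleaf loop vs. a sweeping superelevated flyover:
print(f"50 m loop:     {comfortable_speed_kmh(50):.0f} km/h")   # ~36 km/h
print(f"300 m flyover: {comfortable_speed_kmh(300):.0f} km/h")  # ~87 km/h
```

The loop forces drivers to roughly city-street speed, while the larger-radius flyover lets them stay near highway speed, which is exactly the trade the stack makes: more structure, gentler geometry.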

Lots of highways use frontage roads running parallel to the main lanes to connect areas alongside that would otherwise be cut off from the roadway network. They allow businesses to develop right up to and facing the freeway, with easy access for traffic coming on and off it. Texas took the idea and ran with it. Apparently, they started as a way to reduce the cost of acquiring land for road projects: if you could promise the landowner access to a new highway along a frontage road, you were making their property more valuable, so they’d be willing to sell a portion for the highway at a much lower cost. Now, Texas has over 6,400 miles (or 10,300 kilometers) of frontage roads. That’s almost the circumference of the moon, and as far as I can tell, way more than any other state in the US. I won’t go into the pros and cons of this approach here. Some research has shown pretty conclusively that the money saved on acquisition costs doesn’t make up for their many disadvantages, and Texas has since changed its policy to only include frontage roads on new freeways where necessary and justified. Although, from what I can tell seeing new construction these days, there don’t seem to be many projects where they’ve been left out. And one major effect of putting frontage roads alongside every highway happens at interchanges, because these are more roads that need grade separation from all the others. So, at stack interchanges around the state, there aren’t just four levels but five.

In fact, this kind of interchange is often referred to as the Texas stack because it's so popular here. In a typical configuration, one freeway goes below grade at the bottom level. The frontage roads sit at grade. The crossing freeway is elevated. Then there are the two layers of flyovers. With a minimum vertical clearance of 16 feet or about 5 meters, plus the thickness of each bridge, vehicles on the highest flyovers are often more than a hundred feet or 30 meters above the ground. It’s a nice way to get a good look at the city, even if you only get to enjoy the view for a moment.
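That hundred-foot figure checks out with back-of-the-envelope arithmetic. In this sketch, the 16-foot minimum clearance comes from the text above, but the 10-foot structural depth per bridge deck is my own assumed round number, not a design standard.

```python
# Rough height of the top flyover in a stack interchange:
# each level above the bottom adds one clearance interval plus one deck.

CLEARANCE_FT = 16.0  # minimum vertical clearance under each bridge (from the text)
DECK_FT = 10.0       # assumed structural depth of each bridge deck

def top_deck_height_ft(levels: int) -> float:
    """Approximate roadway height of the top level, with the bottom level at grade."""
    spans = levels - 1  # clearance-plus-deck intervals stacked above the bottom level
    return spans * (CLEARANCE_FT + DECK_FT)

print(f"Four-level stack: ~{top_deck_height_ft(4):.0f} ft")  # ~78 ft
print(f"Five-level stack: ~{top_deck_height_ft(5):.0f} ft")  # ~104 ft
```

If the bottom freeway is depressed below grade, as is typical in a Texas stack, you can subtract that depth from the total; the top flyover still lands right around the hundred-foot mark.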

The Dallas High Five is probably the most famous interchange in Texas with its cool nickname, but it doesn’t stand alone. There are quite a few five-level stacks around the state and even a couple that qualify as six-level stacks with flyovers connecting to other highways. My friend Brian, better known as the Texas Highway Man, documents a lot of new construction in Texas, including this replacement of an old cloverleaf crossing with a five-level stack in San Antonio. These flyovers will be higher than a twelve-story building when they’re done. The frontage roads for this new interchange use a pretty innovative concept. Four partial roundabouts morph into one funny-shaped roundabout that’s been lovingly nicknamed the “fidget spinner.”

Of course, Texas stacks don’t exist only in the Lone Star State. The Big I is another famous interchange in Albuquerque decorated with a tumbleweed snowman each winter. The Judge Harry Pregerson Interchange in Los Angeles gets its fifth level not from frontage roads but from high-occupancy vehicle lanes. Plus, it has a railroad at the lowest level, which I always appreciate. Not just because I like trains, but also because it’s a reminder that these artfully sculpted ribbons of concrete carefully woven together represent a tremendous investment of public money, our money, into a way of getting people from A to B that has a lot of downsides. Everyone has different thoughts about what a city should look like, but there’s a growing recognition that the way we prioritize motor vehicle traffic in the US may not have been the best path forward. And so, I admit that my ideal city has a lot fewer of these towering interchanges that kind of stand as a testament to a transportation network that doesn’t necessarily reflect our highest values and aspirations. But, I still find them pretty impressive in their own right, and whenever I’m in a new city, I try to plan my driving to hit those tallest ramps at the top of the stack to get a bigger, if momentary, perspective on the built environment. It’s always a nice reminder of our capacity for grand designs and ambitious projects, even if they might not always be the best solutions.

August 20, 2024 /Wesley Crump

How French Drains Work

August 06, 2024 by Wesley Crump

[Note that this article is a transcript of the video embedded above.]

In February of 2017, one of the largest spillways in the world, the one at Oroville Dam in northern California, was severely damaged during releases from heavy rain. You might remember this. I made a video about it, and then another one about the impressive feat of rebuilding the structure. In the forensic report following the incident, one of the contributing causes identified in the failure was the drainage system below the spillway. Rather than being installed below the concrete, each drain protruded into it, reducing the thickness of the concrete and making it more prone to cracking. But why do you need drains below a spillway in the first place? Put simply: water doesn’t just flow on the surface of the earth. It also flows through the soil and rock below it. Water that gets underneath a structure creates pressure that can lift and move it. That’s especially true when the water is flowing. Dam engineers deal with the challenge in two ways: make concrete structures like spillways massive (so gravity holds them in place) and use drains to relieve that pressure, giving the water a way out.
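The uplift problem described above is just hydrostatics: pressure grows with water head as p = ρ·g·h. Here’s a minimal sketch with illustrative numbers (not taken from the Oroville forensic report) showing how a modest head of water can exceed the self-weight of a fairly thick concrete slab, which is exactly why the drains matter.

```python
# Hydrostatic uplift, p = rho * g * h, compared to the self-weight of a slab.
# All numbers are illustrative, not from the Oroville report.

RHO_WATER = 1000.0     # kg/m^3
RHO_CONCRETE = 2400.0  # kg/m^3, typical for reinforced concrete
G = 9.81               # m/s^2

def uplift_kpa(head_m: float) -> float:
    """Uplift pressure (kPa) from head_m meters of water head under a slab."""
    return RHO_WATER * G * head_m / 1000.0

def slab_weight_kpa(thickness_m: float) -> float:
    """Downward pressure (kPa) from a concrete slab's own weight."""
    return RHO_CONCRETE * G * thickness_m / 1000.0

# 3 m of water head beneath a half-meter-thick spillway chute slab:
print(f"uplift:      {uplift_kpa(3.0):.1f} kPa")       # ~29.4 kPa
print(f"slab weight: {slab_weight_kpa(0.5):.1f} kPa")  # ~11.8 kPa
# Uplift wins, so the slab needs anchors, more mass, or (cheapest) drains.
```

With those numbers, the water pushes up more than twice as hard as the slab pushes down, so either the structure gets much heavier or the drains give that pressure somewhere to go.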

Even though we depend on it to live, water is the enemy of all kinds of structures. Pressure is far from the only problem it causes. Most of us have come face to face with it in some way or other. Water causes some soils to expand and contract. It freezes, promotes rot, erodes, and corrodes, wreaking all kinds of havoc on the things we build. On the surface, water is relatively easy to manage through channels and curbs and slopes. Below the ground, things get much more challenging. Subsurface drainage is a really interesting challenge, and it applies to everything from simple landscaping at your house to the biggest structures on Earth, and there are a lot of things that can go wrong if they’re not designed correctly. I’m Grady, and this is Practical Engineering. Today, we’re talking about French Drains.

The idea of a subsurface drain is really pretty simple. And I built a model here in the garage to show you how they work. This is just an acrylic box with a hole at the bottom. I filled the box with sand to simulate soil. And I left a small area of gravel in front of the hole. A few strategically-placed dye tablets will help with the visualization. When I turn on the rainfall simulator, watch what happens. Water percolating into the subsurface continues flowing within the sand. It moves toward the gravel, eventually flowing into the holes between the stones and out of the model. (Don’t pay attention to those dye traces on the left. Turns out there was a small leak in the box that was acting as a… secondary outlet to my drain). When the rain is over, the subsurface water continues to flow until the soil mostly dries out.

This is a very simple model of what’s often referred to as a French drain. It’s not from France but named after Henry French, an American farmer, lawyer, politician, and inventor, whose 1859 book Farm Drainage cataloged and described many of the practices being used around the world. Funny enough, he was explicit that he didn’t invent these drains, claiming “no great praise of originality in what is here offered to the public.” Still, I have to admit, after reading his book, I understand why he became the namesake of the drains he made famous. The man had a way with words:

“The art of removing superfluous water from land must be as ancient as the art of cultivation; and from the time when Noah and his family anxiously watched the subsiding of the waters into their appropriate channels to the present, men must have felt the ill effects of too much water, and adopted means, more or less effective, to remove it.”

Well before we worried about draining subsurface water to protect buildings and structures, farmers were doing it in one way or another to keep their fields from getting so soggy that it stunted crops and bogged down agricultural equipment. In fact, “tile drain” is another common term for subsurface drains because clay tiles were used to hold the drains open. And there are plenty of fields still drained using clay tiles today. But French pointed out that rocks sometimes work just as well:

“Providence has so liberally supplied the greater part of New England with stones, that it seems to the most inexperienced person to be a work of supererogation, almost, to manufacture tiles or any other draining material for our farms.”

He was mostly right, and gravel-filled trenches are used all over the place for simple and non-critical applications. The problem with rocks is that they clog up. You can kind of see how sand migrated into the spaces between the gravel in my demo. Since it’s sand, it’s not really a problem, but if this were a finer-grained soil, it would eventually reduce the drain’s ability to transport water, slowing down the drainage process. Tiles provided the benefit of holding open a clear space for water to flow. Over time, perforated or slotted pipes began to replace tiles for use in drains. You’ve probably seen these before; there are a hundred different styles and materials. Rather than flowing in through the joints between the tiles, the water just comes into the holes in the pipe. But which way should the holes face? Turns out it’s a debate as old as pipes themselves among engineers and contractors, and there are strong opinions on both sides.

If the holes are on the top, water has to fill the gravel to the top of the pipe before it can get in and be carried away. If the holes are on the bottom, the flow path isn’t smooth, so the water flows slower and is less likely to wash away any soil or debris that gets inside. From my research, it seems like most of the manufacturers recommend holes down so the gravel envelope doesn’t have to be completely saturated before water can enter the pipe. I think, in practice, it’s really not too important, and actually, a lot of perforated pipes you can buy for drainage have holes all the way around so you don’t even have to think about it. That’s the best kind of decision, in my book. But, if it seems counterintuitive to you to orient the holes downward, I can demonstrate it in my model.

With a pipe in the middle of the gravel layer, I can turn on the rain again. Just like before, water makes its way through the soil toward the drain, and eventually out of the model. Let’s watch that sped up. When the rain is off, the soil continues draining until it’s no longer saturated. Hopefully it’s clear how beneficial this is. Without that drain, water will eventually leave the soil by flowing away or evaporating over time. But getting it out quickly, with a drain, gives it less opportunity to apply pressure to basement walls, freeze against a structure and cause long-term movement, swell the soils, or cause rot and corrosion.

I’m using sand in my model to speed up these simulations, so this envelope of small gravel with a pipe inside is working pretty well to keep the soil in place. But, somewhat inconveniently, most places we want to drain aren’t overlain by playground sand. They have finer-grained soils, including silt and clay. These small stones are holding back the sand, but tinier particles would just flow right through the cracks. That can lead to erosion over time as water dislodges and carries soil particles away through the drain. Watch what happens when I try my French Drain model with large stones between the sand and the outlet. You can see the turbid water coming through the drain, indicating that soil particles are making their way out. And if you watch closely on the right side, you can see where they’re coming from. Eventually, enough sand washes through the rocks to create a sinkhole, and the rest of the water bursts through. Made a HECK of a mess (pardon my French drain). I’ve talked about internal erosion and sinkholes in a previous video, so check that one out if you want more details. This erosion can also result in clogging if the soil particles move into the gravel and pipe. In fact, clogging is the biggest problem with subsurface drains, so properly designed ones usually have some kind of filter.

The design you’re probably most familiar with if you’ve seen or installed a french drain yourself uses geotextile fabric. These are permeable sheets that have a wide variety of applications: separating different layers of soil or rock, protecting against erosion, adding reinforcement to backfill, and filtering soil particles out of flowing water. A typical french drain design uses geotextile fabric around the gravel envelope to keep the fines from migrating in. It’s sometimes known as a pipe-within-a-pipe. But geotextile has some limitations. It’s easy to damage during installation. It’s pretty much impossible to repair or replace once it’s in place. And it also gets clogged up. It’s just a thin mesh of fibers, after all, so once soil particles get stuck, they can quickly lead to a decrease in permeability and efficiency. But there is another option for filtration, and it’s most commonly used on dams.

It is hard to overstate the importance of properly filtered drains for dams. If you don’t believe me, take it from the Federal Emergency Management Agency in their 360-page report, Filters for Embankment Dams: Best Practices for Design and Construction. If that’s not enough, try the Bureau of Reclamation in their 400-page report, Drainage for Dams and Associated Structures. A civil engineer could spend an entire career just thinking about subsurface drains, and for good reason. Lots of high-profile dam failures have directly resulted from a lack of drains or ones that weren’t designed well, including the Oroville Spillway incident I mentioned. For embankment dams that are built from compacted soil, any movement of those soil particles can spell demise. And if you think about all the ways that water is terrible for structures, you can imagine how hard it is to design a structure whose literal job is to hold it back. That’s why they use filters of a different design. You can see it in bold right here in this FEMA status report: “It’s the policy of the National Dam Safety Review Board that geotextiles should not be used in locations that are critical to the safety of the dam.”

Instead, they use sand. Just like the gravel in my demonstration lets the water through while holding back the sand particles, sand can hold back smaller particles of silt and clay, acting as a filter. But it’s a little more complicated than that. Every soil consists of a variety of sizes of particles. I can show that pretty easily, again using sand as an example. I have a collection of sieves with different sizes of holes, each one finer than the one above. I put my sand in at the top. Then give it a little shake (a little razzle-dazzle). And when I open it back up, the sand is all sorted out. If you weigh out the fraction that got caught in each sieve and plot that on a graph, you get something like this: a grain size distribution curve, also called the soil’s gradation. Soils can have a wide variety of gradations. And it’s super important to understand in this case, because before you can design a filter, you have to know what you’re trying to filter out. Once you know the base soil’s grain size distribution, there are a number of engineering methods to find a material that will both allow water to flow while still holding the soil back. And in a lot of cases, that just happens to end up being some variation on the sand we’re used to using in concrete and sandboxes and demonstrations about french drains.
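Both the sieve analysis and the filter check can be sketched in a few lines of Python. This is a simplification: the sieve weights are made-up demo numbers, and the check uses the classic Terzaghi-style rules of thumb (filter D15 no more than about 4 times the base soil’s d85 for retention, and at least about 4 times the base soil’s d15 for permeability) rather than the full FEMA and Reclamation design procedures mentioned above.

```python
# Sketch of a sieve analysis (grain size distribution) and a simplified
# Terzaghi-style filter compatibility check. Demo numbers are hypothetical.

def percent_passing(sieve_mm, retained_g):
    """Cumulative percent passing for each sieve size, largest sieve first."""
    total = sum(retained_g)
    passing, cumulative = [], 0.0
    for grams in retained_g:
        cumulative += grams
        passing.append(100.0 * (1 - cumulative / total))
    return dict(zip(sieve_mm, passing))

def terzaghi_ok(d15_filter, d85_base, d15_base, factor=4.0):
    """True if the filter both retains the base soil and drains freely."""
    retains = d15_filter <= factor * d85_base  # fine enough to hold soil back
    drains = d15_filter >= factor * d15_base   # coarse enough to pass water
    return retains and drains

# Hypothetical demo sand: sieve opening (mm) -> grams caught on that sieve.
sizes = [2.0, 1.0, 0.5, 0.25, 0.125]
caught = [50.0, 150.0, 400.0, 300.0, 100.0]
print(percent_passing(sizes, caught))

# Filter check: silty base soil (d15 = 0.01 mm, d85 = 0.1 mm)
# against a filter sand with D15 = 0.3 mm.
print(terzaghi_ok(0.3, d85_base=0.1, d15_base=0.01))  # True
```

The real design methods add more criteria and depend on the base soil category, but the core idea is the same: pick the filter’s gradation so its pores are smaller than the particles it has to catch and larger than the water paths it has to keep open.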

Actually, for dams, you often can get either the filtration you need or the capacity to let water through, but not both in the same material. So lots of dams use two-stage filters. The first stage filters the base soil material. The second stage filters the first stage, but lets water flow more freely. And then, you put a perforated pipe in the middle to get the water out of the drain as quickly as possible. So they look basically identical to the demonstration I built: sand, then gravel, then pipe.

As for dealing with the water once it’s out of the ground, there are really just two options. The easiest is to simply release it by gravity to the surface at some low point. But if you don’t have a low point on the surface nearby, the other alternative is to pump it. If you have a basement at your house, there’s a good chance you have a sump, which is just a low spot for drainage to collect, and if you have a sump, it’s a REALLY GOOD idea to have a sump pump, to move that water out and somewhere outside your house.

Of course, there’s a lot more to this. Dams have all kinds of drainage features depending on their design. Concrete dams often include a gallery or tunnel with vertical drains into the foundation. Embankment dams often feature a large internal drain called a chimney filter to keep water moving through cracks or pores from carrying soil along with it. And it’s not just dams. Plenty of structures, like retaining walls, rely on good subsurface drainage for protection against all the bad things that water does, not to mention their widespread use in agriculture. There are lots of interesting designs and maybe even more proprietary products on the market all trying to accomplish those two main tasks: get the water out without getting the soil out too. In the end, it’s all the same engineering whether you’re trying to protect a multi-million dollar structure or just keep your basement dry. I think Mr. French put it best:

“Indeed, the importance of this subject of drainage, seems all at once to have found universal acknowledgement throughout our country, not only from agriculturists, but from philosophers and men of general science.”

I don’t think anyone could reasonably call me a philosopher, but I do love drains, and I hope you agree that, from dams to fields to foundations of houses, they are pretty important.

French drains are one of those topics that can be hard to sell in a pitch meeting, right? No studio executive would be like, “Yes, this is a million dollar idea!” But the thing I love about this channel is that it’s created a passionate community around seemingly mundane things like subsurface drains. TV used to be like that too: something for everyone. I loved the old History and Discovery channel shows. Now it’s all converged into reality shows and reruns, and I’ve found that pretty much everything I watch these days is done by passionate independent producers. If you feel the same way, I have a recommendation for you: The Getaway by my friend Sam at Wendover Productions.

It’s a gameshow with a hilarious premise, which is that all of the contestants (who are all big YouTubers, by the way) are snitches, but each one thinks they’re the only one. And it just leads to all these very funny situations where everyone is trying to secretly sabotage the contests. Plus the behind-the-scenes cuts to the producers trying to keep all the confusion under control are wonderful. It’s such a great twist on a game show, and it’s one of those creative experiments that only works because it’s independently produced. The chaos of it is what makes it great, and that’s why it’s only available on Nebula.

I talk about Nebula a lot. It’s a streaming service built by and for independent creators, and it’s growing super fast. After the major overhaul of the home page, making it easier to find new stuff to love, we’ve leaned into producing really good original content, like The Getaway; basically allowing your favorite creators to make bigger budget videos without the fear of having it flop on YouTube’s algorithm. That means you get more creative, interesting, and thoughtful videos. My videos go live on Nebula before they come out here, and right now, a subscription is 40% off at the link in the description.

Plus, if you already have a subscription, now you can gift one to a friend. We have annual gift cards now. Give someone you love a year’s worth of thoughtful videos, podcasts, and classes from their favorite creators. It’s 40 percent off either way at nebula.tv/practicalengineering for yourself or gift.nebula.tv/practical-engineering for a friend. Thank you for watching, and let me know what you think!
