Practical Engineering

What Really Happened During the Yellowstone Park Flood?

August 16, 2022 by Wesley Crump

[Note that this article is a transcript of the video embedded above.]

Every year, a deluge of tourists stream into Yellowstone National Park, America’s first and possibly most famous national park, and (I would argue) one of the most beautiful and geographically rich places on earth. But this past June of 2022, many of those tourists, along with some of the permanent residents of the area, found themselves at ground zero of a natural disaster. Torrential rainfall in Wyoming and Montana brought widespread flooding to the streams and rivers that flow through this treasured landscape and beyond. Homes, bridges, roadways, and utilities were swept away and over 10,000 people were evacuated. As of this video’s production, the National Park Service is still picking up the pieces and deciding how to restore the damaged infrastructure within the park, but while the NPS is busy with that monumental task, I wanted to share the engineering details we already know about what happened during the flood, how they might rebuild the roads and bridges stronger than before, and why they might not want to. I’m Grady and this is Practical Engineering. In today’s episode, we’re talking about the 2022 Montana and Yellowstone floods.

If you didn’t grow up with posters of the Old Faithful geyser on your classroom walls and watching Yogi Bear raiding picnic baskets, that’s okay. I can give you a quick tour. Yellowstone National Park celebrates its 150th birthday this year, having been established in March of 1872. The park covers the northwestern corner of Wyoming and extends into Montana to the north and Idaho to the west. It’s a big place, roughly half the area of Wales, if that’s a helpful equivalency for those more familiar with the metric system. And there really is a lot to see. There are geysers here, here, and here where hot water and steam are ejected from the earth at regular or irregular intervals. In fact, half of the world’s geysers are located in the park. There are hot springs, vents, and mudpots here, here, and here. There is a massive natural lake that freezes over each winter here. Waterfalls here and here. Plus mountains, valleys, wolves, bears, bison, and lots more spread throughout the entire park.

A series of roadways connects the five park entrances to the various attractions, lodges, campsites, and of course, their respective parking lots. Indeed, for better or for worse, the park service estimates that 98% of visitors never get more than a half mile away from their car. We bucked that trend during our visit in 2019, but only for a single hike. Otherwise we stayed on the beaten path along with the roughly 3 to 4 million other visitors per year that cram into the same 1% of the park’s total area.

Here’s why that’s important to the story: Many of the most visited areas of Yellowstone are along the rivers and streams that run through the park, largely due to the unmistakable beauty of those rivers and streams as they flow into and over the striking geologic features. However, that proximity of development to the watercourses in the park became a serious and nearly deadly complication this June. On the night of the 12th into the morning of the 13th, an enormous storm system dropped rain across nearly the entire Yellowstone area and large parts of Montana to the north. Some areas saw more than 4 inches or 100 millimeters of rain in less than 24 hours. What’s worse is that a lot of those inches and millimeters fell on top of snow-covered ground, rapidly melting the snowpack and exacerbating runoff. These so-called “rain-on-snow” events have a long history of contributing to floods, and the 2017 Oroville Dam spillway failure that I’ve also covered on the channel was partly a result of rain-on-snow flooding.

All this rain and snowmelt concentrated in the streams and rivers that flow through the park. The US Geological Survey has several stream gages spread throughout the park and southern Montana, so we can take a look at the data to see exactly what happened. And the National Park Service posted an album of aerial photos on their Flickr page so we can compare the streamflow records to the damage on the ground.
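
If you’d like to explore those records yourself, the USGS publishes them through a public web service. Here’s a minimal Python sketch of the kind of query you could make; I’m using the daily-values endpoint with parameter 00060 (discharge in cubic feet per second) and site 06191500, which I believe is the long-running Yellowstone River gage at Corwin Springs, Montana, just outside the north entrance. Treat the site number as an assumption and swap in whichever gage you’re curious about.

```python
# Minimal sketch: pull June 2022 daily streamflow from the USGS NWIS service
# and report the peak. Site 06191500 is (I believe) the Yellowstone River at
# Corwin Springs, MT -- verify the site number before relying on it.
import json
import urllib.parse
import urllib.request

params = urllib.parse.urlencode({
    "format": "json",
    "sites": "06191500",      # assumed gage: Yellowstone R. at Corwin Springs
    "parameterCd": "00060",   # discharge, cubic feet per second
    "startDT": "2022-06-10",
    "endDT": "2022-06-16",
})
url = "https://waterservices.usgs.gov/nwis/dv/?" + params

with urllib.request.urlopen(url) as resp:
    data = json.load(resp)

# The JSON nests each site/parameter as a "timeSeries" with a list of values.
points = data["value"]["timeSeries"][0]["values"][0]["value"]
peak = max(points, key=lambda p: float(p["value"]))
print(f"Peak daily discharge: {float(peak['value']):,.0f} cfs on {peak['dateTime']}")
```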

A few places on the edge of the storm only saw a small spike in streamflow. For example, the Firehole River that carries water from Old Faithful only went up by about a foot and a half (or 45 centimeters). That river comes together with the Gibbon River along the West Entrance Road, where, again, the increase in streamflow wasn’t overwhelming. But near the northern border of the park, things were much more serious. The river in the Lamar Valley, sometimes called America’s Serengeti for the huge populations of bison and other large animals, came up nearly 9 feet or about 3 meters, briefly surpassing the “moderate flood” stage, which is the level at which the National Weather Service expects damage to buildings and infrastructure to begin. At locations where the valley narrows, the torrent of water eroded and destabilized the river bank, threatening, and in some cases destroying the adjacent roadway. The Soda Butte Picnic Area was hit the hardest in this part of the park.

The Gardner River at the north entrance of the park came up about 2 feet (60 centimeters) at the stream gage, but that number doesn’t quite capture the devastation. A good portion of the flood damage in the park happened along a single stretch of road where the Gardner River created massive washouts and rockslides. In many places, the road was washed away entirely where the river altered its course to flow where the roadway once was.

Many of these streams flow together into the Yellowstone River, which runs through southern Montana, and flooding continued along this river out of the park. One employee housing structure fell completely into the river and floated away. The USGS estimated that the Yellowstone River exceeded the 500-year flood stage nearly all the way to Billings, wreaking havoc on the communities along the river. I’ve talked about this “blank-year” flood in a previous video, but I’ll explain it briefly here. Engineers can look at historical data to estimate a relationship between a flood’s magnitude and its likelihood of happening in a given year. The 500-year flood is just a point on this line. Obviously this is not an exact science (for a bunch of reasons), but it’s helpful for engineers, actuaries, and planners to think of flood magnitude in terms of its probability. Even though the name implies it can only happen once every five hundred years, the actual definition is a flood magnitude with a 0.2 percent chance of being exceeded in a given year.
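
That 0.2% chance compounds over multiple years, which is where the intuition usually breaks down. Here’s a quick sketch of the arithmetic:

```python
# Chance of seeing at least one "500-year" flood over different time windows.
# The annual exceedance probability is 1/500 = 0.2%.
annual_p = 1 / 500

for years in (1, 30, 100, 500):
    # P(at least one) = 1 - P(none in any year), assuming independent years
    p = 1 - (1 - annual_p) ** years
    print(f"{years:>3} years: {p:6.1%} chance of at least one exceedance")

# Note the result for 500 years: about 63%, not 100%. The name describes an
# average recurrence interval, not a schedule -- two such floods can happen
# in back-to-back years.
```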

With this widespread and tremendous flooding, more than 10,000 people were evacuated from Yellowstone National Park. Although the National Weather Service had rain in the forecast, there was no expectation of such significant rainfall, forcing employees to scramble overnight to close roads and get people out of harm’s way. Remarkably, not a single person was injured or killed in Yellowstone as a result of the flooding. Also incredibly, on July 2 (only two-and-a-half weeks after the flood occurred), the park announced the north loop was back open to vehicular traffic. As of this video, the only major parts of Yellowstone that are still closed are the two northern entrances and their respective roads leading into the park. This is due in large part to the fact that there were already roadway contractors working on other projects when the floods happened. We don’t have all the details yet, but it’s likely the Federal Highway Administration was able to amend one of those contracts to get help repairing some of the flood damages expeditiously. 

Speaking of those damages, we still don’t know their full extent. The Park Service has a lot of work ahead of them to inspect the condition of backcountry bridges, trails, campsites, and park infrastructure. Over $60 million in “quick release” emergency funds have already been released to help with emergency repairs, and some news agencies have speculated that the total repairs will cost up to a billion dollars based on costs of similar repair projects at national parks.

The highest priority repairs will be those along the northern entrances to the park where the rivers changed their courses into roadways. It’s not just the park that is affected by those closures but also the communities outside the park that depend on seasonal tourism. Damage in these areas will also be the most challenging to repair, likely requiring completely new roadway alignments that will come with environmental and archaeological studies, public feedback, permits, geotechnical studies, and careful design all before construction begins.
As an example, the Yellowstone River Bridge replacement project started planning and design in 2019 and was set to start construction this year until floods delayed the project, so that’s a roughly 4-year pre-construction phase. Some people might call this unnecessary bureaucracy and red tape, and certainly the communities that depend on Yellowstone traffic will be hoping for much speedier temporary repairs to these roadways. But, many might also consider this careful planning and design as good stewardship for one of the most beautiful places on earth. Hasty engineering of large infrastructure can be extremely damaging to natural systems like those in Yellowstone, and you don’t want to invest millions of dollars into repairs that might be subject to similar flooding in the future. After all, we build parks (and roads to parks) to get closer to the natural environment and all its wildness, and there’s almost nothing more natural or wild than a flood.


You Spend More on Rust Than Gasoline (Probably)

August 02, 2022 by Wesley Crump

[Note that this article is a transcript of the video embedded above.]

In July of 1995, Folsom Lake, a reservoir created by Folsom Dam in Northern California, reached its full capacity as snow continued to melt in the upstream Sierra. With the power plant shut down for maintenance, the dam’s operator needed to open one of the spillway gates to maintain appropriate flow in the river below. As the gate began to rise, one side suddenly collapsed and swung open, allowing an uncontrolled torrent of water to flow past the gate down the spillway. With no way to control the flow, the water level of Folsom Lake began to drop… and drop and drop. By the time the deluge had slowed enough that operators could block the opening, nearly half the water stored in Folsom Lake had been lost.

Forensic investigation of the failure revealed that the gate malfunctioned because of corrosion of its pivot mechanism, called the trunnion, creating excessive friction. Essentially, the gate was stuck at its hinges. When the hoist tried to raise it, instead of pivoting upwards, the struts buckled, causing the gate to collapse. This gate operated flawlessly for 40 years before the failure in 1995. However, corrosion is an insidious issue. Because it occurs gradually, it’s hard to know when to sound the alarms. But, there are alarms to sound!

It’s been estimated that we lose roughly two-and-a-half trillion dollars per year globally because of the collective corrosion of the things we make and build. That is a colossal cost for a simple chemical reaction, and there’s an entire field of engineering dedicated to grappling with the problem. So, this is the first in a series of videos on corrosion engineering. Make sure you subscribe to catch them all. You probably don’t have a line item in your household budget for rust, but you might add one after this video. I’m Grady, and this is Practical Engineering. In today’s episode we’re talking about corrosion engineering for infrastructure.

It will come as no surprise to you that we build a lot of stuff out of metal. Entire periods of human civilization are named after the kinds of metals we learned to use, like the bronze age and the following iron age. These days nearly every human-made object is made at least partly of metal or in a metallic machine, from devices and vehicles to the infrastructure we use every day, including bridges, pipelines, sewers, pumps, tanks, gates, and transmission towers. Metals are particularly useful for so many applications, and we humans have invented a plethora of processes (like smelting, refining, and alloying) to assemble metals in various ways according to our needs. But, mother nature is resolved to dismantle (in due course) the materials we create through a process called corrosion. It seems so self-evident that structures deteriorate over time that it might not seem worth the fuss to worry about why. But, infrastructure is expensive and we all pay for it in some way or another, so we need it to last as long as possible. Not only that, but the failure of infrastructure has consequences for life safety and the environment as well, so keeping corrosion in check is big business. But what is corrosion anyway?

You’re here for engineering, not chemistry, so I’ll keep this brief. Corrosion is an electrochemical descent into entropy: a way for mother nature to convert a refined metal into a more stable form (usually an oxide). Corrosion requires four things to occur: an anode (that’s the corroding metal), a cathode (the metal that doesn’t corrode), a path for electrical current between the two, and an electrolyte (typically water or soil) to complete the circuit. And the anode and cathode can even be different areas of the same piece of metal with slightly different electrical charges. The combination of these elements is a corrosion cell, and the processes that corrode metals in nature are nearly identical to those used in batteries to store electricity. In short, corrosion is a redox (that is, reduction-oxidation) reaction, which means electrons are transferred, in this case from the metal in question to a more stable (and usually much less useful) material called an oxide. For corroded iron or steel, we call the resulting oxide rust.

Here’s a little model bridge I made from steel wires in a bath of aerated salt water. I added a little bit of hydrogen peroxide to speed up the process so you could see it more clearly on camera. This timelapse ran for a few days, and the corrosion is hard to miss. Of course, we don’t keep our bridges in aquariums full of salt water and hydrogen peroxide, but we do expose our infrastructure to a huge variety of conditions and configurations that create many forms of corrosion.

You’re probably familiar with uniform corrosion that happens on the surface of metal, like the beautiful green patina of copper oxides and other corrosion compounds covering the Statue of Liberty. But corrosion takes many forms, and corrosion engineers have to be familiar with all of them. These engineers know the common design pitfalls that exacerbate corrosion like not including drainage holes, leaving small gaps in steel structures, and mixing different types of metals. Corrosion can occur from the atmosphere or simply by allowing dissimilar metals to contact one another, called galvanic corrosion. Even using an ordinary steel bolt on a stainless steel object can lead to degradation over time. Corrosion can happen in crevices, pits, or between individual grains of the metal’s crystalline structure. Even concrete structures are vulnerable to corrosion of the steel reinforcement embedded within. When rebar rusts, it expands in volume, creating internal stresses that lead to spalling or worse.

Just as there are lots of kinds of corrosion, there are also many, many professionals with careers dedicated to the problem. After all, the study of corrosion and its prevention is a topic that combines various fields of chemistry, material science, and structural engineering. There’s even a major professional organization, the AMPP (Association for Materials Protection and Performance), that offers training and certifications, develops standards, and holds annual conferences for professionals involved in the fight against corrosion. Those professionals employ a myriad of ways to protect structures against this insidious force, which I’ll cover in this series.

One of the simplest tools in the toolbox is just material selection. Not all metals corrode at the same rate or in the same conditions, and some barely corrode at all. Gold, silver, and platinum aren’t just used in jewelry because they’re pretty. These so-called noble metals are also prized because they aren’t very reactive to atmospheric conditions like moisture and oxygen. But, you won’t see many bridges built from gold, both because it’s too expensive and too soft.

Steel is the most common metal used in structures because of its strength and cost. It simply consists of iron and carbon. Steel is easy to make, easy to machine, easy to weld, and quite strong, but it’s also one of the materials most susceptible to corrosion. I’ve got another demonstration set up here in my garage. This is a tank full of salt water, a bubbler to keep the water oxygenated, and a few bolts made from different materials. I’ll let the time lapse run, and let you guess which bolt is made from steel. It doesn’t take long at all for that characteristic burnt orange iron oxide to show up. Even the steel bolt to the left that has a protective coating of zinc is starting to rust after a day or two of this harsh treatment. That human-made protective layer on the galvanized bolt gives a hint about why the other ones shown are able to avoid corrosion in the saltwater. Unlike iron oxide that mostly flakes and falls off, there are some oxides that form a durable and protective film that keeps the metal from corroding further. This process is called passivation. Metals that passivate are corrosion resistant precisely because they’re so reactive to water and air.

In my demo I included several metals that undergo passivation, including an aluminum bolt (or aluminium for the non-North Americans), which is typically quite corrosion resistant in air, but struggled against the saltwater. I also included a bronze bolt, which is an alloy of copper and (in this case) silicon. Finally, I included two types of stainless steel, created by adding large amounts, sometimes as much as 10%, of chromium and nickel to steel. There are two major types of stainless steel, called 304 and 316 in the US. 316 is more resistant to saltwater environments, but I didn’t really notice a difference between the two over the duration of my test.

I should also note that there are even steel alloys whose rust is protective! Weathering steel (sometimes known by its trade name of Corten steel) is a group of alloys that are naturally resilient against rust because of passivation. A special blend of elements, including manganese, nickel, silicon, and chromium, doesn’t keep the steel from rusting, but it allows the layer of rust to stay attached, forming a protective layer that significantly slows corrosion. If you keep an eye out, you’ll see weathering steel used in many structural applications. One of my favorite examples is the Pennybacker Bridge outside of Austin. The U.S. Steel Tower, the tallest building in Pittsburgh, Pennsylvania, was famously designed to incorporate Corten steel in the building’s facade and structural columns. Rather than fireproof the columns with a concrete coating, the engineers elected to make them hollow and fill them with fluid so the Corten steel could remain exposed as a showcase of the material. Corten steel is in wide use today. Architects love the oxidized look, engineers love that it’s just as strong as mild steel and almost as cheap, and owners love not having to paint it on a regular schedule. That saves a lot of cost. In fact, the cost of corrosion is the main point I want to express in this video.

In 1998, the Federal Highway Administration conducted a 2-year study on the monetary impacts of corrosion across nearly every industry sector, from infrastructure and transportation to production and manufacturing. They found that the annual direct costs of corrosion in the U.S. made up an astronomical $276 billion, over three percent of the entire GDP. Assuming we still spend roughly as much today, that amounts to over 1,400 dollars per person per year, more than the average American spends on gasoline! Of course, you don’t get a monthly rust bill. Corrosion costs show up in increased taxes to pay for infrastructure; increased rates for water, sewer, electricity, and natural gas; increased costs of goods; and shorter lifespans for the metal things you buy (especially vehicles). But corrosion has costs that go even beyond money.
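
As a back-of-the-envelope check on that per-person figure, here’s my own arithmetic, scaling the study’s roughly three percent share of GDP to recent values. The GDP and population inputs are rough numbers I’ve assumed, not figures from the study:

```python
# Rough per-capita corrosion cost, assuming the ~3.1%-of-GDP share from the
# 1998 study still holds. Inputs are approximate 2021 values, my assumption.
gdp = 23.3e12             # US GDP, roughly $23.3 trillion
corrosion_share = 0.031   # direct-cost share of GDP from the 1998 study
population = 332e6        # US population, roughly 332 million

annual_cost = gdp * corrosion_share
per_person = annual_cost / population
print(f"Estimated direct cost: ~${annual_cost / 1e9:,.0f} billion per year")
print(f"Per person: ~${per_person:,.0f} per year")  # comfortably over $1,400
```
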
In 2014, the City of Flint, Michigan began using water from the Flint River as its main source of drinking water to save money. The river water had a higher chloride concentration than the previous supply sourced from Lake Huron, making it more corrosive. Many cities add corrosion inhibitors to their water supply to prevent decay of pipe walls over time, but the City of Flint decided against it, again to save on costs. The result was that water in the city’s distribution system began leaching lead from aging pipes, exposing residents to this extremely dangerous heavy metal and sparking a water crisis that lasted for 5 years. A public health emergency, nearly 80 lawsuits (many of which are still ongoing), government officials fired and in some cases criminally charged, and upwards of 12,000 kids exposed to elevated levels of lead all resulted from poor management of corrosion. Sadly, it’s just a single example in a long line of infrastructure problems caused by corrosion. Metals are so necessary and important to modern society that we’ll never escape the problem, but the field of corrosion engineering continues to advance so that we can learn more about how to manage it and mitigate its incredible cost.


What Happens When a Reservoir Goes Dry?

July 19, 2022 by Wesley Crump

[Note that this article is a transcript of the video embedded above.]

In June of 2022, the level in Lake Mead, the largest water reservoir in the United States, formed by the Hoover Dam, reached yet another all-time low of 175 feet or 53 meters below full, a level that hasn’t been seen since the lake was first filled in the 1930s. Rusted debris, sunken boats, and even human remains have surfaced from beneath the receding water level. And Lake Mead doesn’t stand alone. In fact, it’s just a drop in the bucket. Many of the largest water reservoirs in the western United States are at critically low storage with the summer of 2022 only just getting started. Lake Powell, upstream of Lake Mead on the Colorado River, is at its lowest level on record. Lake Oroville (of enormous spillway failure fame) and Lake Shasta, two of California’s largest reservoirs, are at critical levels. The combined reservoirs in Utah are below 50% full. Even many of the westernmost reservoirs here in Texas are very low going into summer.

People use water at more or less a constant rate, and yet mother nature supplies it in unpredictable sloshes of rain or snow that can change with the seasons and often have considerable dry periods between them. If the sloshes get too far apart, we call it a drought. And at least one study has estimated that the past two decades have been the driest period in more than a thousand years for the southwestern United States, leading to a so-called “mega-drought.” Dams and reservoirs are one solution to this tremendous variability in natural water supply. But what happens when they stop filling up or (in the case of one lake in Oklahoma) never fill up in the first place? I’m Grady, and this is Practical Engineering. On today’s episode we’re talking about water availability and water supply storage reservoirs.

The absolute necessity of water demands that city planners always assume the worst case scenario. If you have a dry year (or even a dry day), you can’t just hunker down until the rainy weather comes back. So the biggest question when developing a new supply of water is the firm yield. That’s the maximum amount of water the source will supply during the worst possible drought. Here’s an example to make this clearer:

Imagine you’re the director of public works for a new town. To keep your residents hydrated and clean, you build a pumping station on a nearby river to collect that water and send it to a treatment plant where it can be purified and distributed. This river doesn’t flow at a constant rate. There’s lots of flow during the spring as mountain snowpack melts and runs off, but the flow declines over the course of the summer once that snow has melted and rain showers are more spread out. In really dry years, when the snowpack is thin, the flow in the river nearly dries up completely. In other words, the river has no firm yield. It’s not a dependable supply of water in any volume. Of course, there is water to be used most of the time, but most of the time isn’t enough for this basic human need. So what do you do? One option is to store some of that excess water so that it can keep the pumps running and the taps flowing during the dry times. But, the amount of storage matters.

A clearwell at a water treatment plant or an elevated water tower usually holds roughly one day’s worth of supply. Those types of tanks are meant to smooth out variability in demands over the course of a day (and I have a video on that topic), but they can’t do much for the reliability of a water source. If the river dries up for more than one day at a time, a water tower won’t do much good. For that, you need to increase your storage capacity by an order of magnitude (or two). That’s why we build dams to create reservoirs that, in some cases, hold trillions of gallons or tens of trillions of liters at a time, incredible (almost unimaginable) volumes. You could never build a tank to hold so much liquid, but creating an impoundment across a river valley allows the water to fill the landscape like a bathtub. Dams take advantage of mother nature’s topography to form simple yet monumental water storage facilities.

Let’s put a small reservoir on your city’s river and see how that changes the reliability of your supply. If the reservoir is small, it stays full for most of the year. Any water that isn’t stored simply flows downstream as if the reservoir wasn’t even there. But, during the summer, as flows in the river start to decrease, the reservoir can supplement the supply by making releases. It’s still possible that in those dry years, you won’t have a lot of water stored for the summer, but you’ll still have more than zero, meaning your supply has a firm yield, a safe amount of water you can promise to deliver even under the worst conditions, roughly equal to the average flow rate over the course of a dry year.

Now let’s imagine you build a bigger dam to increase the size of your reservoir so it can hold more than just a season’s worth of supply. Instead of simply making up a deficit during the driest few months, now you can make up the deficit of one or more dry years. The firm yield of your water source goes up even further, approaching the long-term average of river flows, and completely eliminating the idea of a drought by converting all those inconsistent sloshes of rain and snow into a perfectly constant supply. Beyond this, any increase in reservoir capacity doesn’t contribute to yield. After all, a reservoir doesn’t create water, it just stores what’s already there. 
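
If you want to play with the firm yield concept, it boils down to a simple mass-balance simulation: march through an inflow record, store what you can, spill what you can’t, and find the largest constant demand that never empties the reservoir. Here’s a minimal sketch with invented monthly inflows; this is an illustration of the concept, not how a real yield study is done:

```python
# Firm yield by simulation: the largest constant demand a reservoir of a given
# capacity can meet through every month of the inflow record without going dry.
# Inflow numbers are invented for illustration (volume units per month).

def can_supply(inflows, capacity, demand):
    storage = 0.0  # start empty: a conservative assumption
    for inflow in inflows:
        storage = min(storage + inflow, capacity)  # spill anything over the top
        storage -= demand
        if storage < 0:
            return False  # reservoir went dry, so this demand isn't "firm"
    return True

def firm_yield(inflows, capacity, tol=0.01):
    lo, hi = 0.0, max(inflows)
    while hi - lo > tol:  # bisect on the demand rate
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if can_supply(inflows, capacity, mid) else (lo, mid)
    return lo

# Two wet years followed by a two-year drought:
wet = [90, 120, 60, 10, 5, 0, 0, 5, 30, 50, 40, 80]
dry = [40, 50, 20, 5, 0, 0, 0, 0, 10, 20, 20, 30]
inflows = wet * 2 + dry * 2  # mean inflow is about 28.5 per month

for capacity in (50, 200, 800, 3000):
    print(f"capacity {capacity:>4}: firm yield ≈ {firm_yield(inflows, capacity):5.1f} per month")
# Yield climbs with capacity, then flattens near the record's average inflow:
# beyond that point, extra storage can't create water that was never there.
```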

Of course, dams do more than merely store water for cities that need a firm supply for their citizens. They also store water for agriculture and hydropower that have more flexibility in their demand. Reservoirs serve as a destination for recreation, driving massive tourism economies. Some reservoirs are built simply to provide cooling water for power plants. And, many dams are constructed larger than needed for just water conservation so they can also absorb a large flood event (even when the reservoir is full). Every reservoir has operating guidelines that clarify when and where water can be withdrawn or released and under what conditions and no two are the same. But, I’m explaining all this to clarify one salient point: an empty reservoir isn’t necessarily a bad thing.

Dams are expensive to build. They tie up huge amounts of public resources. They are risky structures that must be vigilantly monitored, maintained, and rehabilitated. And in many cases, they have significant impacts on the natural environment. Put simply, we don’t build dams bigger than what’s needed. Empty reservoirs might create a negative public perception. Dried up lake beds are ugly, and the “bathtub ring” around Lake Mead is a stark reminder of water scarcity in the American Southwest. But, not using the entire storage volume available can be considered a lack of good stewardship of the dam, and that means reservoirs should be empty sometimes. Why build it so big if you’re not going to use the stored water during periods of drought? Storage is the whole point of the thing… except there’s one more thing to discuss:

Engineers and planners don’t actually know what the worst case scenario drought will be over the lifetime of a reservoir. In an ideal world, we could look at thousands of years of historical streamflow records to get a sense of how long droughts can last for a particular waterbody. And in fact, some rivers do have stream gages that have been diligently collecting data for more than a century, but most don’t. So, when assessing the yield of a new water supply reservoir, planners have to make a lot of assumptions and use indirect sources of information. But even if we could look at a long-term historical record as the basis of design, there’s another problem. There’s no rule that says the future climate on earth will look anything like the past one, and indeed we have reason to believe that the long-term average streamflows in many areas of the world - along with many other direct measures of climate - are changing. In that case, it makes sense to worry that reservoirs are going dry. Like I said, reservoirs don’t create water, so if the total amount delivered to the watershed through precipitation is decreasing over time, so will a reservoir’s firm yield.

That brings me to the question of the whole video: what happens when a reservoir runs out of water? It’s a pretty complicated question, not only because water suppliers and distributors are relatively independent of each other and decentralized (capable of making very different decisions in the face of scarcity), but also because the effects happen over a long period of time. Most utilities maintain long-term plans that look far into the future for both supply and demand, allowing them to develop new supplies or implement conservation measures well before the situation becomes an emergency for their customers. Barring major failures in government or public administration, you’re unlikely to turn on your tap someday and not have flowing water. In reality, water availability is mostly an economic issue. We don’t so much run out as we just use more expensive ways to get it. Utilities spend more money on infrastructure like pipelines that bring in water from places with greater abundance, wells that can take advantage of groundwater resources, or even desalination plants that can convert brackish sources or even seawater into a freshwater source. Alternatively, utilities might invest in advertising and various conservation efforts to convince their customers to use less. Either way, those costs get passed down to the ratepayers and beyond.

For some, like those in cities, the higher water prices might be worth the cost to live in a climate that would otherwise be inhospitable. For others, especially farmers, the increased cost of water might erase their margins, forcing them to let fields fallow temporarily or for good. So, while drying reservoirs might not constitute an emergency for most individuals, the impacts trickle down to everyone through increased rates, increased costs of food, and a whole host of other implications. That’s why many consider what’s happening in the American southwest to be a quote-unquote “slow-moving trainwreck.”

In 2019, all the states that use water from the Colorado River signed a drought contingency plan that involves curtailing use, starting in Arizona and Nevada. Those curtailments will force farmers to tap into groundwater supplies which are both expensive and limited. Eventually, irrigated farming in Arizona and Nevada may become a thing of the past. There’s no question that the climate is changing in the American Southwest, as years continue to be hotter and drier than any time in recorded history. It can be hard to connect cause and effect for such widespread and dramatic shifts in long-term weather patterns, but I have one example of an empty reservoir where there’s no question about why it’s dry.
In 1978, the US Army Corps of Engineers completed Optima Lake Dam across the Beaver River in Oklahoma. The dam is an earth embankment 120 feet (or 37 meters) high and over 3 miles or 5 kilometers long. The Beaver River in Oklahoma had historically averaged around 30 cubic feet or nearly a cubic meter per second of flow and the river even had some major floods, sending huge volumes of water downstream. However, during construction of the dam, it became clear that things were rapidly changing. It turns out that most of the flows in the Beaver River were from springs, areas where groundwater seeps up to the surface. Over the 1960s and 70s, pumping of groundwater for cities and agriculture reduced the level of the aquifer in this area, slashing streamflow in the Beaver River as it did. The result was that when construction was finished on this massive earthen dam, the reservoir never filled up. Now Optima Lake Dam sits mostly high and dry in the Oklahoma Panhandle, never having reached more than 5 percent full, as a monument to bad assumptions about the climate and a lesson to engineers, water planners, and everyone about the challenges we face in a drier future.


How Do You Steer a Drill Below The Earth?

July 05, 2022 by Wesley Crump

[Note that this article is a transcript of the video embedded above.]

In December 2019, the City of Fort Lauderdale, Florida experienced a series of catastrophic ruptures in a critical wastewater transmission line, releasing raw sewage into local waterways and neighborhoods. Recognizing the need for improvements to their aging infrastructure, the City embarked on a plan to install a new pipeline to carry sewage from the Coral Ridge Country Club pumping station across 7 miles (or 12 kilometers) to the Lohmeyer Wastewater Treatment Plant. But just drawing a line on the map hides the enormous complexity of a project like this. Installing an underground pipeline through the heart of a major urban area while crossing three rivers is not a simple task.

Underground utilities are usually installed by a technique known as trenching. In other words, we excavate a trench down from the surface, place the line, backfill the soil, and repair whatever damage to the streets and sidewalks remains. That type of construction is profoundly disruptive, requiring road closures, detours, and pavement repairs that never quite seem as nice as the original. Trenches are also dangerous for the workers inside, so they have to be supported to prevent collapse. Beyond the human risk, in sensitive environmental areas like rivers and streams, trenching is not only technically challenging but practically unachievable because of the permits required. In fact, trenching in urban areas to install pipelines these days is for the birds. When the commotion of construction must be minimized, there are many trenchless technologies for installing pipes below the ground. One of those methods helped Fort Lauderdale get a 7-mile-long sewer built in less than a year and a half, and is used across the world to get utility lines across busy roadways and sensitive watercourses. I’m Grady and this is Practical Engineering. On today’s episode, we’re talking about horizontal directional drilling.

If you’ve ever seen one of these machines on the side of the road, you’ve seen a trenchless technology in action. Although there are quite a few ways to install subsurface pipelines, telecommunication cables, power lines, and sewers without excavating a trench, only one launches lines from the surface. That means you’re much more likely to catch a glimpse. Like laparoscopic surgery for the earth, horizontal directional drilling (or HDD) doesn’t require digging open a large area like a shaft or a bore pit to get started. Instead, the drill can plunge directly into the earth’s surface. From there, horizontal directional drilling is pretty straightforward, but it’s not necessarily straight. In fact, HDD necessarily uses a curved alignment to enter the earth, travel below a roadway or river, and exit at the surface on the other side. Let me show you how it works and at the end, we’ll talk about a few of the things that can go wrong.

The first step in an HDD installation is to drill a pilot hole, a small diameter borehole that will guide the rest of the project. A drill rig at the surface has all the tools and controls that are needed. These rigs can be tiny machines used to get a small fiber-optic line under a roadway or colossal contraptions capable of drilling large-diameter boreholes for thousands of feet at a time. As such, many of the details of HDD vary across projects, but the basic steps and equipment are all the same.

As the drill bit advances through the earth, the rig adds more and more segments of pipe to lengthen the drill string. Through this pipe, drilling fluid is pumped to the end of the string. Drilling fluid, also known as mud or slurry, serves several purposes in an HDD project. First, it helps keep the drill bit lubricated and cool, reducing wear and tear on equipment and minimizing the chances of a tool breaking and getting stuck in the hole. Next, drilling fluid helps carry away the excavated soil or rock, called the cuttings, and clear them from the hole. Finally, drilling fluid stabilizes and seals the borehole, reducing the chance of a collapse. 

I have here an acrylic box partly full of sand, a setup you’re probably quite familiar with if you follow my channel. Turns out a box of sand can show a lot of different phenomena in construction and civil engineering. Compared to soils that hold together like clay, sand is the worst case scenario when it comes to trying to keep a borehole from collapsing. If I pull away this support, the simulated borehole face caves in no time. If I add groundwater to the mix, the problem is even worse. Pulling away the support, the wall of my borehole doesn’t stand a chance. Let me show you how drilling fluid solves this problem.

I’m mixing up a handcrafted artisanal batch of drilling mud, a slurry of water and bentonite powder. This is a type of clay created by volcanic ash that swells and stays suspended when mixed with water. It’s pretty gloopy stuff, so it gets used in cosmetics and even winemaking, but it’s also the most common constituent in drilling fluids. If I add the slurry to one side of the demo, you can see how the denser fluid displaces the groundwater. It’s not the most appetizing thing I’ve ever put on camera, but watch what happens when I remove the rigid wall. The drilling fluid is able to support the face of the sand, preventing it from collapsing. In addition to supporting the sand, the drilling fluid seals the surface of the borehole to reduce migration of water into or out of the interface. In most HDD operations, the drilling fluid flows in through the drill string and back out of the borehole, carrying the cuttings along toward the entry location where it is stored in a tank or containment pit for later disposal or reuse.

So far HDD follows essentially the same steps as any other drilling into the earth, but that first ‘D’ is important. Horizontal directional drilling means we have to steer the bit. The drill string has to enter the subsurface from above, travel far enough below a river or road to avoid impacts, evade other subsurface utilities or obstacles below the ground, and exit the subsurface on the other side in the correct location. I don’t know if you’ve ever tried to drill something, but so far when I do it, I’ve never been able to curve the bit around objects. So how is it possible in horizontal directional drilling?

There are really two parts to steering a drill string. Before you can correct the course, you need to know where you are in the first place, and there are a few ways to do it. One option is a walkover locating device that can read the position and depth of a drill bit from the surface. A transmitter behind the bit in the drill string sends a radio signal that can be picked up by a handheld receiver. Other options include wire-line or gyro systems that use magnetic fields or gyroscopes to keep track of the bit's location as it travels below the surface. Once you know where the bit is, you can steer it to where you want it to go.

I’ve made up a batch of agar, which is a translucent gel made from the cell walls of algae. I tried this first in the same acrylic box, but the piping hot jelly busted a seam and came pouring out into my bathtub, creating a huge mess. So, you’ll have to excuse the smaller glassware demo. My simulated drill string is just a length of wire. There are two things to keep in mind about directional drilling: (1) Although they seem quite rigid, drill pipes are somewhat flexible at length. If I take a short length of this wire and try to bend it, it’s pretty difficult, but a longer segment deflects with no problem. And, (2) you don’t have to continuously rotate the drill string in order to advance the borehole. You can just push on it, forcing the bit through the soil. 

My wire pushes through the agar without much force at all, and a drill string can be advanced through the soil in a similar way, especially when lubricated with water or drilling fluid. The real trick for steering a drill string is the asymmetric bit. Watch what happens when I put a bend on the end of my wire and advance it through the agar. It takes a curved path, following the direction of the bend. If I rotate the wire and continue advancing, I can change the direction of the curve. The model drill string is biased in one direction because of the asymmetry, and I can take advantage of that bias to steer the bit. I can steer the string left, then rotate and advance again to steer the bit to the right. I’m a little bit clumsy at this, but with enough control and practice, I could steer this wire to any location within the agar, avoid obstacles, and even have it exit at the surface wherever I wanted.

This is exactly how many horizontal directional drills work. The controls on the rig show the operator which way the bit is facing. The drill string can be rotated to any angle (called clocking), then advanced to change the direction of the borehole. Sometimes a jet nozzle at the tip of the bit sprays drilling fluid to help with drilling progress. If the nozzle is offset from the center, it can help create a steering bias like the asymmetric bit. Just like the Hulk’s secret is that he’s always angry, a directional drill string’s secret is that it’s always curving. The rig operator’s only steering control is the direction the drill string curves.

And hey, if that sounds like something you’d like to try for yourself, my friend Dan Shiffman over at the Coding Train YouTube channel built a 2D horizontal directional drilling simulator. This is an open-source project, so you can contribute features yourself, but it’s also really fun if you just want to play a few rounds. If you’re into coding or you’re wanting to get started, there is no better way than working through all the incredible and artistic examples Dan comes up with for his coding challenges. Go check out his video on HDD after this one.

Once the drill string is headed in the right direction, it can just be continuously rotated to keep the bit moving in a relatively straight line. The pilot hole for an HDD project is just an exercise in checking the location and adjusting the clock position of the drill string over and over until the drill string exits on the other side, hopefully in exactly the location you intended. But, not all soil conditions allow for a drill string to simply be pushed through the subsurface.
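
To get a feel for why “always curving” is enough control, here’s a toy 2D model of that pilot-hole process. It’s my own invented sketch with made-up numbers, not how a real rig plans its path: the string advances along arcs of fixed curvature, and the only control input is which way the next arc bends.

```python
# Toy 2D directional-drilling model. The bit always follows an arc of fixed
# curvature; steering means choosing which way the arc bends before each
# advance -- a 2D stand-in for clocking the drill string. Numbers are invented.
import math

class DrillString:
    def __init__(self, heading_deg=-15.0, curvature=0.002):
        self.x, self.y = 0.0, 0.0                 # entry point at the surface
        self.heading = math.radians(heading_deg)  # entry angle below horizontal
        self.curvature = curvature                # radians of turn per unit length

    def advance(self, length, steer):
        """Advance the bit; steer = +1 curves up, -1 curves down."""
        for _ in range(int(length)):  # integrate in unit-length steps
            self.heading += steer * self.curvature
            self.x += math.cos(self.heading)
            self.y += math.sin(self.heading)

# Enter the ground, dip under the crossing, and curve back up to daylight.
drill = DrillString()
drill.advance(130, steer=+1)  # build angle from -15 degrees up to horizontal
drill.advance(100, steer=-1)  # ease deeper to pass beneath the crossing
drill.advance(100, steer=+1)  # level back out at depth
drill.advance(195, steer=+1)  # keep curving up until the bit daylights
print(f"exit point: x = {drill.x:.0f}, y = {drill.y:.1f} (y = 0 is the surface)")
```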

Rocky conditions, in particular, make steering a drill rig challenging. An alternative to simply ramming the bit through the soil is to use a downhole hydraulic motor. Also known as mud motors, these devices convert the hydraulic energy from the drilling fluid being pumped through the string to rotate a drill bit that chews through soil and rock. This allows for faster, more efficient drilling without having to rotate the whole drill string. The housing of the mud motor is bent to provide steering bias, and the drill string can be clocked to change the direction of the borehole.

Once the pilot hole exits on the other side, it has to be enlarged to accommodate the pipe or duct. That process is called reaming. A reamer is attached to the drill string at the exit hole and pulled through the pilot hole toward the drill rig to widen it. Depending on the size of the pipe to be installed, contractors may ream a hole in multiple steps. The final reaming is combined with the installation of the pipeline. This step is called the pullback. The pipe to be installed in the borehole is lined up on rollers behind the exit pit. The end of the pipe is attached to the reaming assembly, and the whole mess is pulled with tremendous force through the borehole toward the rig. Finally, it can be connected at both ends and placed into service.

That’s how things work when everything goes right, but there are plenty of things that can go wrong with horizontal directional drilling too. Parts of the drill string can break and get stuck in the pilot hole. Drilling can inadvertently impact other subsurface utilities or underground structures. The pipeline can get stuck or damaged on pullback. Or, the borehole can collapse. 

The controversial Mariner East II pipeline in Pennsylvania experienced a litany of environmental problems during its construction between 2017 and 2022. Most of those problems happened on HDD segments of the line and involved inadvertent returns of drilling fluid. That’s the technical term for the situation when drilling fluid exits a borehole at the surface instead of circulating back to the entrance pit. The inadvertent returns on the Mariner East II line created water quality issues in nearby wells, led to sinkholes in some areas, and spilled drilling fluid into sensitive environmental areas. The pipeline owner was fined more than $20 million over the course of construction due to violations of their permits, and to date the project remains mired in legal battles and extreme public opposition.

In the case of Mariner East II, most of the drilling fluid spills were at least partially related to the difficult geology in Pennsylvania. Clearly HDD isn’t appropriate for every project. But in most cases, trenchless technologies are the environmentally superior way to install subsurface utilities because they minimize surface disruptions, both to the people in urban areas and to sensitive habitats around rivers and wetlands.


4 Myths About Construction Debunked

June 21, 2022 by Wesley Crump

[Note that this article is a transcript of the video embedded above.]

Construction is something you probably either love or hate, depending on your commute or profession. Obviously, as a civil engineer, it’s something I think a lot about, and over the past 6 years of reading emails and comments from people who watch Practical Engineering, I know that parts of heavy construction are consistently misunderstood. Also, I talk a lot about failures and engineering mistakes in my videos because I think those stories are worth discussing, but if that’s all you ever hear about the civil engineering and construction industries, you can be forgiven for having an incomplete perspective of how things really work. So I combed through YouTube comments and emails over the past few years and pulled together a few of the most common misconceptions. I’m Grady and this is Practical Engineering. In today’s episode we’re debunking some myths about construction.

Myth: Construction Workers Just Stand Around

If you’re one of those people who hate construction, this is probably a frustrating image: one guy running the excavator and everyone else just standing around watching. It seems like a familiar scene, especially when the project’s schedule is dragging along. But, looks can be deceiving. Think of it this way: contractors are running an extremely risky business on relatively thin margins. In most cases, they already know how much money they’ll be paid for a project, so the only way to make a profit is to carefully manage expenses. And what’s their biggest expense? Labor! A worker standing around with nothing to do is the first thing to get cut from a project. Individual laborers might be paid hourly, but their employers are paid by the contract and have every incentive to get the job done as quickly and efficiently as possible. So why do we see workers standing around? There are a few reasons.

Firstly, construction is complicated. Honestly, it’s a logistical nightmare to get materials, subcontractors, tools, equipment, and workers in the right place at the right time. Almost every task is sequential, which means anything that doesn’t line up perfectly affects the schedule of everything else. Construction is a hurry-up-and-wait operation, and the waiting is often easier to spot than the hurrying. Most of the folks you see on a construction site, whether they’re standing around or not, have been or will be hustling for most of the day, which leads me to my second point: construction is hard work.

Anyone working in the trades will tell you that it’s a physically demanding job. You can’t just show up at 6AM, run a shovel for 10 hours, go home, and do it again the next day. You need breaks if you’re working hard. Standing around is often as simple as that: workers resting from a difficult task. Plus a person with a shovel isn’t that useful when you have a tracked excavator on site that can do the work of 20. So, the laborers you see outside of machines are often doing jobs that are adjacent to the actual work like running tools, directing traffic, or documenting. That leads me to my third point: not everyone on a construction site is a tradesperson.

Keeping an eye on things is an actual job, and in some cases it is more than one. Inspectors are often on site to make sure a contractor doesn’t misinterpret a design or build something incorrectly. An engineer may be on site to answer questions or check on progress. And trust me, you don’t want us anywhere near the cab of a crane or excavator. Safety spotters are sometimes required to keep workers out of dangerous situations. Plus, foremen and supervisors are on site to direct their crews. These folks are doing necessary jobs that might look just like standing around if you’re not familiar with the roles.

Lastly, construction is often out in the open unlike many other jobs. Confirmation bias makes it easy to pass by a construction site in a car and notice the people who aren’t actively performing a task while ignoring the ones who are. If those construction workers stepped into any office building, they might see you hanging around the water cooler talking about your favorite YouTube channels and start a rumor that office workers are so lazy.

Myth: Ancient Roman Roads and Concrete Were Better

I made an entire video comparing “Roman concrete” to its modern equivalent, but I still get emails and comments all the time about the arcane secrets possessed by the ancient Romans that have since been lost to the sands of time. It’s not true, really. I mean, the ancient Roman concrete used in some structures did have some interesting and innovative properties, and the Romans did invest significantly in the durability of their streets and roads. But, I think a Roman engineer would be astounded to learn that most modern highways, before being replaced, handle hundreds of thousands of trucks that can weigh upwards of 80,000 pounds. And, I think a Roman engineer might wet their toga if they were to see a modern concrete-framed skyscraper. There are a few reasons why it seems that the Romans had us outclassed when it comes to structural longevity.

First there’s survivorship bias. We only see the structures that lasted these past 2,000 years and not the vast majority of buildings and facilities, which were destroyed in one way or another. Second, there's the climate. I haven’t personally been to the parts of the world surrounding the Mediterranean Sea, but I hear most of them are quite nice. Cycles of freezing and thawing are absolutely devastating to almost every part of the constructed environment. The ancient Romans were in an area particularly well-suited to making long-lasting concrete structures, especially compared to the frozen wastelands that some of Earth’s other denizens call home. Finally, there’s just a difference in society and government. Ancient Rome was wildly different from modern countries in a lot of ways, but particularly in how much it was willing to spend on infrastructure and how it was willing to treat laborers. Modern concrete mixes and roadway designs are far superior to those of ancient Rome, but our collective willingness to spend money on infrastructure is different too.

I think a lot of the feedback I get on Roman construction is based on the extremely pervasive sentiment that “we just don’t build stuff like we used to.” It’s an easy shortcut to equate quality with longevity, especially for infrastructure projects where we aren’t directly involved in managing the costs. I regularly have people tell me that we shouldn’t use reinforcing steel in concrete, because when it corrodes, it decreases the lifespan (which is completely true). But also, unreinforced concrete is essentially just brick. And not to disparage masonry, but there’s a lot it can’t do in structural engineering.

A lot of people even go so far as to accuse engineers of using planned obsolescence - the idea that we design things with an intentionally limited useful life. And I don’t know anything about consumer goods or devices, but at least in civil engineering, those people are exactly right. We always think about and make conscious decisions regarding lifespan during the design of a project. But it’s not to be nefarious or create artificial job security. It’s because, in simplistic terms, the capital cost of a construction project and its lifespan exist on either side of a spectrum, and engineers (by necessity) have to choose where a project sits between the two. Will you build a bridge that’s inexpensive, but will have to be replaced in 25 years, or will you spend twice the money for more concrete and more steel to make it last for 50? We make this decision constantly when we pay for things in our own lives, choosing between alternatives that have various costs and benefits. But it’s much more complicated to draw that line as the steward of tax dollars for an entire population.

That’s why engineering exists in the first place. With an unlimited budget, my 2-year-old could design a bridge that carries monster trucks over the English Channel for a million years. Engineers compare the relative costs and benefits of design decisions from start to finish to meet project requirements, protect public safety, and do so with the limited available resources. Part of that is evaluating alternatives like the cheap bridge versus the expensive bridge, plus their long-term maintenance and replacement costs, to see which one can best meet the project goals. In that case, planned obsolescence means being a good steward of public money (which is always limited) by not gold-plating where it’s not necessary so that funds can be used where they’re needed most.
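
Here’s what that comparison can look like in practice: a simplified life-cycle cost sketch with invented numbers, weighing a cheap bridge replaced every 25 years against a pricier one replaced every 50, both discounted to present value over a 100-year horizon. Real analyses also fold in maintenance, risk, and salvage value; this only shows the skeleton of the decision.

```python
# Simplified life-cycle cost comparison over a 100-year horizon. All costs,
# lifespans, and the discount rate are invented for illustration.

def present_value_of_ownership(build_cost, lifespan, horizon=100, discount=0.03):
    """Present value of building now plus replacing every `lifespan` years."""
    return sum(build_cost / (1 + discount) ** year
               for year in range(0, horizon, lifespan))

cheap = present_value_of_ownership(build_cost=10e6, lifespan=25)
durable = present_value_of_ownership(build_cost=20e6, lifespan=50)

print(f"Cheap bridge, replaced every 25 years:   ${cheap / 1e6:.1f}M present value")
print(f"Durable bridge, replaced every 50 years: ${durable / 1e6:.1f}M present value")
# At a 3% discount rate the cheap option wins on paper (~$18.1M vs ~$24.6M),
# which is why the discount rate itself is one of the most consequential
# assumptions in this kind of comparison.
```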

Myth: Lowest Bidder = Lowest Quality

There's a story about legendary astronaut John Glenn being asked by a reporter what it felt like to be inside the Mercury capsule on top of an Atlas LV-3B rocket before takeoff. He reportedly said he felt exactly how one would feel sitting on top of two million parts - all built by the lowest bidder on a government contract. And indeed, most construction projects are contracted using bids, and in many places, regulations require that public entities award the contract to the lowest responsible bidder. Those rules are in place to make sure that the taxpayer is getting the most value for their money. But that doesn't necessarily mean our infrastructure projects suffer in quality as a result.

Most construction projects are bid using a set of drawings and a book of specifications that include all the detail necessary to build them. An engineer, architect, or both has gone to great lengths to draw and specify exactly what a contractor should build, often to the tiniest details about products, testing, and procedures. You can see for yourself; just google your city or state, plus “standard specifications,” and scroll through what you find to get a sense of how detailed contract documents can be. We go to that level of detail in defining the project before construction so that it can be let for bidding with the confidence that an owner will end up with essentially the same product at the end of construction, no matter which contractor wins the job.

Bidding on contracts is a tough way to win work, by the way. Imagine if on January 1st, your employer gave you a list of all the tasks that needed to be completed by the end of the year, and you had to guess how many hours it would take. And, if you guessed a higher number than your coworker, you got fired. And if you guessed lower than the actual number of hours it took, too bad, you only got paid for the hours you guessed. It might incentivize you to look for innovative ways to get your job done more efficiently, but (admittedly) it might also encourage you to cut corners and ignore opportunities to add value where it's not explicitly required.

Many public entities are moving away from the lowest-bidder model toward types of procurement that let them recognize and reward measures of value beyond cost alone, like innovation, schedule, and past experience. These alternative delivery methods can help foster a more collaborative relationship between the owner, contractor, and designer, making the construction process smoother and more efficient. But the lowest-bidder model is still used around the world because it generally rewards efficient use of public funds. After all, John Glenn did make it safely to space, became the first American to orbit the earth, and came back with no issues on those two million parts provided by the lowest bidders.

Myth: Foundations Must Go To Bedrock

If you've ever played Minecraft, you know that at a certain depth below the ground, you reach an impenetrable layer of voxels called bedrock. And indeed, in most parts of the world, geologic layers do get firmer and more stable the farther down you go. Engineers often take advantage of this fact to secure tall buildings and major structures using deep foundation systems like driven or drilled piles. "Bedrock" is such a familiar concept that it's easy to look at the world through Minecraft-colored glasses and assume there's (always and everywhere) some firm layer below - but not too far from - the surface of the earth, and that all tall buildings and structures must sit atop it. But the real world is a little more complicated than that. Different geologic layers may be considered bedrock, depending on whether you're a well driller, foundation designer, pile driver, or geology textbook author. There's no strict definition of bedrock, and there's a vast spectrum of soil and rock properties that can make for stable foundations depending on the loading and environmental conditions.

Especially in engineering, there isn't always a firm geologic layer at a reasonable depth below the surface to which our buildings and structures can be attached. And even if there is, it may not be the most cost-effective way to meet design requirements. There may be shallow foundation concepts that are appropriate (and much cheaper) depending on the situation. There's a famous parable about a wise man who built his house on the rock, but not every wise man can afford a piece of property on the rocky side of town, especially in today's real estate market. Civil engineers don't always have the luxury of founding structures on the most stable of subgrades, so we've come up with foundations that keep structures secure on sand, silt, clay, and even floating on water. When the rain comes down, and the streams rise, and the winds blow and beat against our structures, they almost always remain standing no matter what the geology is below.

June 21, 2022 /Wesley Crump

The Bizarre Paths of Groundwater Around Structures

June 07, 2022 by Wesley Crump

[Note that this article is a transcript of the video embedded above.]

In 2015, an unusual incident happened on the construction site for a sewage lift station in British Columbia, Canada. WorkSafeBC, the provincial health and safety agency, posted a summary of the event on YouTube. A steel caisson had been installed to hold back soil so the lift station could be constructed. One worker on the site was suddenly pulled into a sinkhole when the bottom of the caisson blew out. The cause of the incident was related to groundwater within the soils below the site. We don't all have to live in fear of the ground opening up below our feet, but engineers who design subsurface structures do have to consider the impact that groundwater can have. The solutions to subsurface problems are almost always hidden from public view, so you might never even know they're there. This video is intended to shed some light on those invisible solutions (including what could have been done to prevent that incident in BC). I'm Grady and this is Practical Engineering. In today's episode, we're talking about how groundwater affects structures.

Groundwater has always been a little mysterious to humanity since it can’t easily be observed. It also behaves much differently than surface waters like rivers and oceans, sometimes defying expectations, as I’ve shown in a few of my previous videos. One of the most important places where groundwater shows up in civil engineering is at a dam. That’s because groundwater flows from high pressure to low pressure, and a dam, at its simplest, is just a structure that divides those two conditions. And what do you know, I’ve got an acrylic box in my garage full of sand to show these concepts in real life.

You can imagine this soil sits below the base of a dam, and I can adjust the water levels on either side of the structure to simulate how groundwater will flow. Blue dye placed in the sand helps show the direction and speed of water movement below the surface. A higher level on the upstream side creates pressure, driving water in the subsurface below the dam to the opposite end of the model. I’ll be the first to say it: this is not the most mind-blowing revelation. You probably could have predicted it without the fancy model. But to a civil engineer, this is not an inconsequential phenomenon, and for a couple of reasons. 

First, water seeping below a dam can erode soil particles away, a phenomenon called piping. Obviously, you don’t want part of your structure’s foundation to be stolen from underneath it, and piping can create a positive feedback loop where failure progresses rapidly. I have a whole video on piping that you can check out after this one. The second negative effect of groundwater is less obvious. In fact, until around the 1920s, dam engineers didn’t even take it into account (leading to the demise of many early structures in history).

The engineering of a dam is largely an exercise in resisting hydrostatic pressure. Water in the reservoir applies an enormous force to the upstream face of a dam, and if not designed properly, that force can cause the dam to slide downstream or overturn. The hydrostatic force is actually pretty simple to approximate. Pressure in a fluid increases with depth, so you get a triangular distributed load. Once you know that load, you can design a structure to resist it, and there are a lot of ways to do that. One of the most common types of dam just uses its own weight for stability. Gravity dams are designed to be heavy enough that hydrostatic forces can’t slide them backwards or turn them over. But, to the dismay of those early engineers, pressure from the reservoir is not the only destabilizing force on a dam.
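
That triangle is easy to put numbers to. Here's a quick sketch with an assumed 30-meter reservoir depth (no particular dam in mind):

```python
# Hydrostatic load on a dam: pressure grows linearly with depth, so the
# load diagram is a triangle. The 30 m depth is an assumed example, not
# any particular dam.

rho = 1000.0  # density of water, kg/m^3
g = 9.81      # gravitational acceleration, m/s^2
H = 30.0      # reservoir depth, m

pressure_at_base = rho * g * H          # Pa
force_per_meter = 0.5 * rho * g * H**2  # N per meter of dam width
lever_arm = H / 3                       # resultant acts a third of the way up

print(f"Pressure at base: {pressure_at_base / 1000:.0f} kPa")
print(f"Resultant force: {force_per_meter / 1e6:.1f} MN per meter of width")
print(f"Acting {lever_arm:.0f} m above the base")
```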

Take a look at this pipe I’ve included in the model that shows the water level between the two boundaries. If the base of a structure was below the water level shown here, the groundwater would be applying pressure to the bottom, counteracting its weight. We call this uplift pressure. Remember that the only reason gravity dams stay put is because of their weight, so you can see how having an unanticipated force effectively subtracting some of that weight would be a bad thing. Many concrete gravity dams have failed because this uplift force was neglected by engineers, including the St. Francis Dam in California that killed more than 400 people when it collapsed in 1928. Many consider this to be the worst American civil engineering disaster of the 20th century.
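
You can see the effect in a back-of-the-envelope sliding check. This is a deliberately stripped-down version of a real stability analysis, with invented numbers throughout:

```python
# A deliberately simplified sliding check for a gravity dam, per meter
# of width. Every value is a made-up illustration, and real analyses
# check overturning and stress distributions too.

W = 40.0         # self-weight of the dam, MN/m
U = 12.0         # uplift force from groundwater pressure, MN/m
H_thrust = 15.0  # horizontal hydrostatic thrust from the reservoir, MN/m
mu = 0.7         # friction coefficient on the foundation interface

fos_ignoring_uplift = mu * W / H_thrust
fos_with_uplift = mu * (W - U) / H_thrust

print(f"Factor of safety ignoring uplift: {fos_ignoring_uplift:.2f}")
print(f"Factor of safety with uplift:     {fos_with_uplift:.2f}")
```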

Unlike the hydrostatic force of a reservoir, uplift pressure from groundwater is a much more complicated force to characterize. It exists in the interface between the structure and its foundation, in the cracks and pores of the underlying soil, and even within the joints of the concrete structure itself. The flow of groundwater is affected by soil properties, the geometry of the dam, the water levels upstream and downstream, and even the subsurface features. How these factors affect the uplift pressure can be pretty challenging to predict. But engineers do have to predict it. After all, we can’t build a dam, measure the actual uplift force, and add weight if necessary. It’s gotta work the first time.

One way to characterize groundwater flow around structures is the flow net. This is a graphical tool used by engineers to estimate the volume and pressure of seepage in the subsurface. In simple terms, you divide the flow area into a curvilinear grid, where one axis represents pressure and the other represents flow. If this looks familiar, you might notice that a flow net is essentially a 2D solution to the Laplace equation, which also applies to other areas of physics including heat flow and magnetic fields. Developing flow nets is as much an art as a science, so it's probably a good thing that groundwater problems are mostly solved using software these days. But, we can still use flow nets to demonstrate a few of the ways engineers combat this nefarious uplift force on gravity dams. And one common idea is a cutoff wall.
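
If you're curious what that software is doing under the hood, here's a toy version: a finite-difference relaxation of the Laplace equation, with assumed heads on either side of an imaginary dam and no-flow boundaries elsewhere. It's the numerical skeleton, not a real seepage model:

```python
# A toy seepage model: relax the Laplace equation on a grid of soil with
# fixed heads at the surface on either side of an imaginary dam sitting
# at the midpoint of the top boundary. A real model would treat the
# dam's footprint as a no-flow boundary and handle layering; this is
# just the numerical skeleton.

import numpy as np

ny, nx = 20, 60
h = np.full((ny, nx), 6.0)  # initial guess at hydraulic head, m

for _ in range(5000):
    # Each interior cell relaxes toward the average of its neighbors.
    h[1:-1, 1:-1] = 0.25 * (h[:-2, 1:-1] + h[2:, 1:-1] +
                            h[1:-1, :-2] + h[1:-1, 2:])
    h[-1, :] = h[-2, :]    # impermeable base: no-flow boundary
    h[:, 0] = h[:, 1]      # no-flow at the left edge
    h[:, -1] = h[:, -2]    # no-flow at the right edge
    h[0, :nx // 2] = 10.0  # reservoir head upstream of the dam
    h[0, nx // 2:] = 2.0   # tailwater head downstream

print(np.round(h[::5, ::10], 1))  # contours of h are the equipotentials
```

Contour the resulting array of heads and you get the equipotential lines of a flow net; the flow lines cross them at right angles.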

If water flowing below a dam causes so many problems, why not just create a vertical wall to cut it off? We do it all the time. But, how deep does it need to be? Some dams might have a convenient geological layer into which a cutoff can be terminated, creating an impenetrable envelope to keep seepage out. But, many don't. Cutoff walls can still reduce the volume of flow and the pressure, even if seepage can still make its way underneath. Let's take a look at the model to see why. I've added a vertical wall of acrylic below the upstream face of my dam, and we'll see how it affects the flow. The groundwater flow lines adjust to go under the wall and back up to the other side of the model. If you look closely you'll see a slight decrease in the uplift measurement pipe below the dam. The only thing I changed between this model and the last one was adding the cutoff wall. So why would the pressure decrease on the downstream side?

The flow of groundwater is described with a fairly simple formula known as Darcy’s law. Besides the permeability of the soil, the only other factor controlling the speed water flows is the hydraulic gradient, which consists of the difference in pressure over the length of a flow path. By adding a cutoff wall, I didn’t change the difference in pressure between one side of the model and the other, but I did increase the length of the flow path water had to take below the dam, reducing the hydraulic gradient. I can sketch a flow net over the model to make this clearer. The black lines are equipotentials; they connect areas of equal pressure. The blue lines show the directions of flow. Without a cutoff, the flow paths are shorter, and thus the equipotential lines are closer together. With the cutoff wall, the equipotential lines are spread out. That means both the volume of seepage and the uplift pressure at the base of the structure have been reduced.
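
Darcy's law is simple enough to show in a few lines. In this sketch (with assumed values throughout), the cutoff wall never touches the head difference; it just stretches the flow path, and the gradient and velocity fall with it:

```python
# Darcy's law with and without a cutoff wall. The wall doesn't change
# the head difference; it lengthens the flow path, which lowers the
# gradient and the seepage velocity. All values are assumed.

k = 1e-4         # permeability of a sandy foundation, m/s
head_diff = 8.0  # upstream water level minus downstream, m

paths = {
    "no cutoff": 40.0,               # m, straight under the dam
    "10 m cutoff": 40.0 + 2 * 10.0,  # down one face of the wall and back up
}

for label, length in paths.items():
    gradient = head_diff / length
    velocity = k * gradient  # Darcy velocity, m/s
    print(f"{label}: gradient {gradient:.3f}, velocity {velocity:.2e} m/s")
```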

Cutoff walls on dams have a long history of use, and nearly all large gravity dams have at least some kind of cutoff. It can be as simple as excavating a wide area of the dam’s foundation before starting on construction, and that’s a popular choice because it gives engineers a chance to observe the subsurface conditions and make sure there are no faults or problems before the dam gets built. Another option is to excavate a deep trench and fill it with grout, concrete, or a slurry of impermeable clay. For smaller or temporary structures, sheet piles can be driven into the subsurface to create a cutoff. One final option is to inject high pressure grout to create an impenetrable curtain below the dam.

The other way to deal with seepage and uplift pressure is drains. Drains installed below a dam do two important jobs. First, they filter seepage using sand and gravel so that soil particles can't be piped out from the foundation. Second, they relieve uplift pressure by removing the water. Let's see how this works in my model. Upstream of my uplift monitor, I've added a hole through the back of the model with a tube to drain seepage out. Instead of flowing all the way downstream, now some of the seepage flows up to and through the drain, and you can see this in the streamlines of dye flowing in the subsurface. Again, the effect is subtle, but the uplift pressure monitor is showing a slight decrease in pressure compared to the original configuration. There is less pressure on the base of the dam than there would be without the drain. Plotting a flow net over the model, you can see why it behaves this way. The drain relieves the uplift on the base by creating an area of low pressure below the dam. You can also note that the drain actually increases the hydraulic gradient by shortening the flow paths, so there's more seepage happening than there would be without the drain. However, because the drains are installed with filters to reduce the chance of piping, that additional seepage is often worth the decrease in uplift pressure.

Many concrete dams include a row of vertical drains into the foundation, and some even use pumps to depress the groundwater level further, minimizing the uplift. I can simulate this by lowering the downstream level as if a pump was removing the water. Watch how the flow lines adjust when I make this change in the model. Like drains, these relief wells create more seepage below a dam because of the greater difference in pressure between the two sides, but they can significantly reduce the uplift pressure and thus increase a structure’s stability.

I've been using dams as the main example of managing groundwater flow, but lots of other structures have similar issues. Retaining walls and temporary shoring systems have to contend with groundwater too, including caissons: watertight chambers sunk into the earth to hold back soil during construction. Remember the worker I mentioned in the intro? He was on a site near a caisson. It's typical to dewater a structure like this, meaning the water is pumped out, creating a dry area for construction crews to work. Let's take a look at how this works in the model. I'm simulating the act of pumping water out of the caisson by draining the model at the bottom of the structure. When a caisson is dewatered, it is essentially working like a dam, separating an area of high pressure from an area of low pressure within only a short distance. And, as you know, distance matters when it comes to groundwater, because the shorter the flow paths, the greater the hydraulic gradient, and thus the higher the volume and velocity of seepage.

If you look closely, you can see the sand boiling up as the seepage exits the soil into the bottom of the caisson. This elevated pressure in the subsurface and high velocity of flow mean that the soil particles themselves aren't being strongly held together. All it takes is a little agitation for the soil to liquefy and flow into the bottom of the caisson, creating a sinkhole that can easily swallow anything at the surface. One way of mitigating this hazard is dewatering the soil outside the caisson. Construction crews use well points, small, evenly spaced wells and pumps, to draw water out of the soil so it can't seep to areas of lower pressure. Caissons can also be driven deeper into the subsurface, creating a condition similar to a cutoff wall on a dam. They can even go deep enough to reach an impermeable layer, creating a better seal that prevents water from flowing in through the bottom.
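
There's a threshold behind that boiling called the critical hydraulic gradient: the point where upward seepage pressure cancels the buoyant weight of the soil. Here's a rough check with assumed numbers; for most sands the critical gradient lands near 1.0:

```python
# The "quick condition" check: boiling starts when the upward hydraulic
# gradient exceeds the critical gradient, which is about 1.0 for most
# sands. The caisson numbers are invented for illustration.

gamma_sat = 19.5  # saturated unit weight of sand, kN/m^3
gamma_w = 9.81    # unit weight of water, kN/m^3

i_critical = (gamma_sat - gamma_w) / gamma_w

head_diff = 4.0    # m of head across the caisson bottom after dewatering
path_length = 3.5  # m, the short seepage path around the caisson tip
i_exit = head_diff / path_length

verdict = "boiling likely" if i_exit > i_critical else "stable"
print(f"Critical gradient: {i_critical:.2f}")
print(f"Exit gradient:     {i_exit:.2f} -> {verdict}")
```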

Thankfully for the worker in BC, his colleagues were able to rescue him before he was consumed by the earth. Next time you see a dam, retaining wall, caisson, or any other subsurface construction, there’s a good chance that engineers have had to consider how groundwater will affect the stability. Even though you’d never know they’re there, some combination of drains and cutoffs were probably installed to keep the structure (and the people around it) safe and sound.

June 07, 2022 /Wesley Crump

How We Track COVID-19 (And Other Weird Stuff) In Sewage

May 17, 2022 by Wesley Crump

[Note that this article is a transcript of the video embedded above.]

When the COVID-19 pandemic was just getting started in early 2020, every major city, state health department, and federal agency involved built out data dashboards you could access online to check case counts and trends. Public health officials could constantly be heard asking everyone to “flatten the curve,” that curve being a graph of infection rates over time. But how do you get such a graph? By and large, our measure of the pandemic came through individual case counts confirmed with laboratory testing and reported to a data clearinghouse like the local public health department or the CDC. There was a lot of confusion about testing, positivity rates, how that information applied to the greater population, and how it could be used to implement measures to slow the spread of disease. The limitations of individual testing data - including test shortages, reporting delays, and unequal access to healthcare - made public health decisions extremely challenging. Much of the controversy surrounding mask mandates and stay-at-home orders was provoked by the disconnect between what we could reliably measure and the reality of the pandemic on the ground. Public health officials were constantly on the lookout for more indicators that could help inform decisions and manage the spread of disease.

One of these measures didn't really show up in the online data dashboards, but it was, and continues to be, used as a broad measure of infection rates in cities. It's a topic combining public health, epidemiology, and infrastructure that didn't get much coverage in the news. And there are both some interesting privacy implications and some really fascinating applications on the horizon. I'm Grady and this is Practical Engineering. In today's episode, we're talking about wastewater surveillance for public health.

If you are unfamiliar with the inner workings of a modern municipal wastewater collection system, boy do I have the playlist for you. But, if you don't want to watch 5 of my other videos before you watch this one, I can give you a one-sentence rundown: Wastewater flows in sewers, primarily via gravity, combining and concentrating as it continues to a treatment plant where a number of processes are used to rid it of concomitant contaminants so it can be reused or discharged back into the environment. Just like a watershed is the area of land that drains to a specific part of a river or stream, a "sewershed" isn't an outhouse but an area of a city that drains to a specific wastewater treatment plant. The largest sewersheds can include hundreds of thousands, or even millions, of people, all of whose waste flows to a single facility designed to clean it up.

Wastewater treatment plants regularly collect samples of incoming sewage to characterize various constituents and their strengths. After all, you have to know what's in the wastewater to track whether or not it's been sufficiently removed at the other end of the plant. In the early days of sewage treatment, sampling consisted only of measuring the basic contaminants such as nutrients and suspended solids. But, as our testing capabilities increased, it slowly became easier and less expensive to measure other impurities, sometimes known as contaminants of emerging concern. These included pharmaceuticals, pesticides, personal care products, and even illicit drugs. It didn't take too long to realize that tracking these contaminants was not only a tool for wastewater treatment but also a source of information about the community within the sewershed, the gathering of which is a notoriously difficult challenge in the field of public health.

Rather than coordinating expensive and arduous survey campaigns (where many people aren't truthful anyway) or jumping through the hoops of privacy laws to gather information from healthcare providers, we can just take a sample of sludge from the bottom of a clarifier, send it off to a lab, and roughly characterize, in hours or days, the dietary habits, pharmaceutical use, and even cocaine consumption of a specific population of people. If you're a public health researcher or public official, that is a remarkable capability. To quote one of the research papers I read, "Wastewater is a treasure trove of biological and chemical information."

Think about all the stuff that gets washed down the drain and all the things you consume that might create unique metabolites that find their way out the other side of your excretory system. Although wastewater surveillance is a relatively new field of study, we're already able to measure licit and illicit drugs, cleaning and personal care products, and even markers of stress, mental health, and diet. That's a lot of useful information that can be used to monitor public health, but one particular wastewater constituent took center stage starting in early 2020. Of course, for decades, we've tracked pathogens in wastewater to make sure they aren't released into the environment in treatment plant effluent, but the COVID-19 pandemic created a vacuum of information on virus concentrations that had never been experienced before.

We realized early in the pandemic that the SARS-CoV-2 virus is shed in the feces of most infected people. Even before widespread tests for the virus were available, many public health agencies were sampling the wastewater in their communities as a way to track the changes in infection rates over time. Realizing the importance of coordinating all these separate efforts, many countries created national tracking systems to standardize the collection and reporting of virus concentrations in sewage. In the US, the CDC launched the National Wastewater Surveillance System in September of 2020, complete with its own logo and trademarked title. Let me know if I should try to license this design for my merch store.

Individual communities can collect and test wastewater for SARS-CoV-2, and then submit the data to the CDC for a process called normalization. Virus concentrations go up and down with infections, but they also go up and down with dilution from non-sewage flows and changes in population (for example, in sewersheds with large event venues or seasonal tourism). Normalization helps correct for these factors so that comparisons of virus loads between and among communities are more meaningful.
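
As a hedged sketch of the idea (not the CDC's actual procedure, which I won't try to reproduce here), normalization might look something like converting a raw concentration into a per-person load using the plant's flow and the sewershed's population. Every name and number below is hypothetical:

```python
# A hypothetical normalization: convert a raw virus concentration into a
# per-person daily load using plant flow and sewershed population. The
# function name, inputs, and numbers are all invented; the CDC's actual
# procedure involves more corrections than this.

def normalized_load(concentration_copies_per_L, flow_L_per_day, population):
    """Estimated viral load per person per day in a sewershed."""
    total_copies_per_day = concentration_copies_per_L * flow_L_per_day
    return total_copies_per_day / population

small_town = normalized_load(5e4, 8e6, 20_000)
big_city = normalized_load(2e4, 400e6, 1_000_000)

print(f"Small town: {small_town:.1e} copies per person per day")
print(f"Big city:   {big_city:.1e} copies per person per day")
```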

There are some serious benefits from tracking COVID-19 infections using wastewater surveillance. It’s a non-intrusive way to monitor health that’s relatively impartial to differences in access to healthcare or even whether infections are symptomatic or not. Next, it is orders of magnitude less expensive than testing individuals. Nearly 80% of US households are served by a municipal wastewater collection system, so you can get a much more comprehensive picture of a population for just the cost of a laboratory test. It can also provide an earlier indicator of changes in community-wide infection rates. Individual tests can have delays and miss asymptomatic infections, and hospitalization counts come well after the onset of infection, so wastewater surveillance can provide the first clue of a COVID-19 spike, sometimes by several days. Finally, now that vaccination programs are widespread and there is significantly less testing being carried out, wastewater surveillance is a great tool to keep an eye out for a resurgence in COVID-19 infections, and it can even be used to monitor for new variants.

Of course, wastewater surveillance has some limitations too, the biggest one being accuracy. The science is still relatively new, and there are lots of confounding variables to keep in mind. In addition to changes in dilution from other wastewater flows and sewershed population mentioned before, the quantity of viruses shed varies significantly between individuals and even over the course of any one infection. Right now, wastewater surveillance just isn't accurate enough to provide a count of infected individuals within a population, so it's mostly useful in tracking whether infections are increasing or decreasing and by what magnitude.

There are also some ethical considerations to keep in mind. That term “surveillance” should at least prick up your ears a little bit. Monitoring the constituents in wastewater at the treatment plant averages the conditions for a large population, but what if samples were taken from a lift station that serves a single apartment complex, school, or office building? What if a sample was taken from a manhole on the street right outside your house? Could the police department use the data to deploy more officers to neighborhoods where illicit drugs are found in the sewage? Could a city or utility provider sell wastewater data to private companies for use in research or advertising? That’s a lot of hypotheticals, but I wouldn’t be surprised to see a Black Mirror episode where some tech company provides free trash and sewer service just to collect and sell the data from each household. If you wanted to open a new coffee shop, how much would you pay to learn which parts of town have the highest concentrations of caffeine in the sewage? Maybe it would be called Brown Mirror.


The truth is that public health professionals have put a tremendous amount of thought into the ethics and privacy concerns of wastewater surveillance, but (as with any new field of science), there are still a lot of questions to be answered. One of those questions is what comes next in this burgeoning field, where public health researchers have access to a literal stream of data. There are many measures of public health that can be valuable to policy makers and health officials, including stress levels, changes in mental health, and the prevalence of antimicrobial-resistant bacteria (one of the greatest human health challenges of our time). Of course, all the work that went into standardizing and building out capabilities for tracking infections will certainly give us a leg up on resurgences of COVID-19 or, heaven forbid, any future new virus. My weather report already has a lot more information than it did 20 years ago, including pollen counts of various allergy-inducing tree species, air pollution levels, and UV-ray strength. We might soon see infection rates of the various diseases that spread through community populations to help individuals, planners, and public officials make better-informed decisions about our health. Sewers were one of the earliest and most impactful public health advances in urban areas, and it's exciting that we're still finding new ways to use them to that end.

May 17, 2022 /Wesley Crump

How Wells & Aquifers Actually Work

May 03, 2022 by Wesley Crump

[Note that this article is a transcript of the video embedded above.]

It is undoubtedly unintuitive that water flows in the soil and rock below our feet. A 1904 Texas Supreme Court case famously noted that the movement of groundwater was so “secret, occult and concealed” that it couldn’t be regulated by law. Even now, the rules that govern groundwater in many places are still well behind our collective knowledge of hydrogeology. So it’s no surprise that misconceptions abound around water below the ground. And yet, roughly half of all drinking water and irrigation water used for crops comes from underneath the surface of the earth. You can’t really look at an aquifer, but you can look at a model of one I built in my garage. And at the end of the video, I’ll test out one of the latest technologies in aquifer architecture to see if it works. I’m Grady and this is Practical Engineering. In today’s episode, we’re talking about groundwater and wells.

Not all water that falls as precipitation runs off into lakes and rivers. Some of it seeps down into the ground through the spaces between soil and rock particles. Over time, this infiltrating water can accumulate into vast underground reservoirs. A common misconception about groundwater is that it builds up in subterranean caverns or rivers. Although they do exist in some locations, caves are relatively rare. Nearly all groundwater exists within geologic formations called aquifers that consist of sand, gravel, or rock saturated with water just like a sponge. It just so happens you’re watching the number one channel on the internet about dirt, and there are a lot of interesting things I can show you about how aquifers behave.

I built this acrylic tank in my garage to illustrate some of the more intriguing aspects of groundwater engineering. I can fill it up with sand and add blue dye to create two-dimensional scenarios of various groundwater conditions. It also has ports in the back that I can open or close to drain various sections of the model. And, on both sides, there’s a separation that simulates a boundary condition on the aquifer. Water can flow through these dividers along their height. Most of the shots you’ll see of this have been sped up because, compared to surface water, groundwater flows quite slowly. Depending on the size of soil or rock particles, it can take a very long time for water to make its way through the sinuous paths between the sediments. The property used to characterize this speed is called hydraulic conductivity, and you can look up average values for different types of soil online, if you’re curious to learn more. In fact, different geologic layers affect the presence and movement of groundwater more than any other factor, which is why there is so much variability in groundwater resources across the world.
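
To give you a feel for just how slow "quite slowly" can be, here's a back-of-the-envelope travel-time estimate using Darcy's law. The conductivities are ballpark textbook values, and the gradient and porosity are assumptions I picked for illustration:

```python
# Order-of-magnitude travel times for groundwater moving 100 m through
# different materials. Conductivities are textbook ballpark values; the
# gradient and porosity are assumptions picked for illustration.

materials = {
    "gravel": 1e-2,  # hydraulic conductivity, m/s
    "sand": 1e-4,
    "silt": 1e-7,
    "clay": 1e-9,
}
gradient = 0.01   # head loss per meter of flow path
porosity = 0.3    # the fraction of volume water actually moves through
distance = 100.0  # m

for name, k in materials.items():
    seepage_velocity = k * gradient / porosity  # m/s
    years = distance / seepage_velocity / (3600 * 24 * 365)
    print(f"{name:>6}: about {years:,.2f} years to travel {distance:.0f} m")
```

Gravel works out to a few days; clay works out to tens of millennia. That enormous spread is why geology dominates everything else in groundwater problems.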

Like all fluids, groundwater flows from areas of high pressure toward areas of low pressure. To demonstrate this, I can set the left boundary level a little higher than the one on the right. This creates a pressure differential across the model so water flows from left to right through the sand. I added dye tablets at a few spots so you can see the flow. This is a simple example because the pressure changes linearly through a consistent material, but any change in these conditions can add a lot of complexity. In purely mathematical terms, you can consider this model a 2D vector field because the groundwater can have a different velocity - that is direction and speed - at any point in space. Because of this, there are a lot of really neat analogies between groundwater and other physical phenomena. My friend Grant of the 3Blue1Brown YouTube channel has an excellent video on vector field mathematics if you want to explore them further after this.

We often draw a bright line between groundwater and surface water resources like rivers and lakes because they behave so differently. But water is water. It’s all part of the hydrologic cycle, and many surface waters have a nexus with groundwater resources, meaning that changes in groundwater may impact the volume and quality of surface water resources and vice versa. Let me show you an example. In the center of my model, I’ve made a cross section of a river. The drain at the bottom of the channel simulates water flowing along the channel, in this case leaving my model. If I turn on the pumps to simulate a high water table in the aquifer, the groundwater seeps into the river channel and out of the model. The dye traces show you how the groundwater moves over time. If you encounter a situation like this in real life, you might see small springs, wet areas of the ground, and (during the winter) even icicles along slopes where the groundwater is becoming surface water before your eyes.

Likewise, surface water in a river can flow into the earth to recharge a local aquifer. I’ve reconfigured my model so the pump is putting water into the river and the outer edges of the reservoir are drained, simulating a low water table. Some of the water in the river flows back out of the model through the overflow drain, showing that while not all the water in a river seeps into the ground, some does. You can see the dye traces moving from the river channel into the aquifer formation, transforming from surface water into groundwater as it does. As you can see, surface water resources are often key locations where underground aquifers are recharged.

This is all fun and interesting, but much of groundwater engineering has more to do with how we extract this groundwater for use by humans. That's the job of a well, which, at its simplest, is just a hole into which groundwater can seep from the surrounding soil. Modern wells utilize sophisticated engineering to provide a reliable and long-lasting source of fresh water. The basic components are pretty consistent around the world. First, a vertical hole is bored into the subsurface using a drill rig. Steel or plastic pipe, called casing, is placed into the hole to provide support so that loose soil and rock can't fall into the well. A screen is attached at the depth where water will be withdrawn, creating a path into the casing. Once both the casing and screen are installed, the annular space between them and the borehole must be filled. Where the well is screened, this space is usually filled with gravel or coarse sand called the gravel pack. This material acts as a filter to keep fine particles of the aquifer formation from entering the well through the screen. The space along the unscreened casing is usually filled with clay, which swells to create an impermeable seal so that shallow groundwater (which may be lower quality) can't travel along the annular space into the screen.

Wells use pumps to deliver water that flows into the casing up to the surface. Shallow wells can use jet pumps that draw water up using suction like a straw. But, this method doesn’t work for deeper wells. When you drink through a straw, you create a vacuum, allowing the pressure of the surrounding atmosphere to push your beverage upward. However, there’s only so much atmosphere available to balance the weight of a fluid in a suction pipe. If you could create a complete vacuum in a straw, the highest you could draw a drink of water is around 10 meters or 33 feet. So, deeper wells can’t use suction to bring water to the surface. Instead, the pump must be installed at the bottom of the well so that it can push water to the top. Some wells use submersible pumps where the motor and pump are lowered to the bottom. Others use vertical turbine pumps where only the impellers sit at the bottom driven by a shaft connected to a motor at the surface.
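
That 10-meter limit falls straight out of the physics; here's the arithmetic:

```python
# Why suction tops out around 10 meters: even a perfect vacuum only lets
# atmospheric pressure push the water column up until its weight balances.

P_atm = 101_325.0  # standard atmospheric pressure, Pa
rho = 1000.0       # density of water, kg/m^3
g = 9.81           # m/s^2

max_lift = P_atm / (rho * g)
print(f"Theoretical maximum suction lift: {max_lift:.1f} m")
# Real jet pumps manage noticeably less than this theoretical limit,
# since the vacuum is imperfect and friction takes its share.
```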

All that pumping does a funny thing to an aquifer. I can show you what I mean in the model. As water is withdrawn from the aquifer, it lowers the level near the well. The further away from the well you go, the less influence it has on the level in the aquifer. Over time, pumping creates a cone of depression around the well. This is important because one well’s cone of depression can affect the capacity of other wells and even impact nearby springs and rivers if connected to the aquifer. Engineers use equations and even computer models to estimate the changes in groundwater level over time, based on pumping rate, recharge, and local geology.
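
One of the classic tools for that estimate is the Thiem equation for steady-state drawdown around a well in a confined aquifer. It's a simplification (real aquifers are rarely so tidy), and every value below is assumed for illustration:

```python
# The Thiem equation: steady-state drawdown around a well pumping from a
# confined aquifer, s = Q / (2*pi*T) * ln(R / r). A simplification of
# messy reality, with every value below assumed for illustration.

import math

Q = 0.02   # pumping rate, m^3/s
T = 0.005  # aquifer transmissivity, m^2/s
R = 500.0  # radius of influence where drawdown is taken as zero, m

for r in [1, 10, 50, 100, 250]:
    s = Q / (2 * math.pi * T) * math.log(R / r)
    print(f"Drawdown {s:5.2f} m at {r:>3} m from the well")
```

Plot drawdown against distance and the cone of depression appears: steep right at the well, flattening out toward the edge of its influence.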

One fascinating aspect of deeper aquifers is that they can be confined. My model isn’t quite sophisticated enough to show this well, but I can draw it for you. A common situation is that an aquifer exists at an angle to the ground surface. It can recharge in one location, but becomes confined by a less permeable geologic layer called an aquitard. Water flowing into a confined aquifer can even build up pressure, so that when you tap into the layer with a well, it flows readily to the surface (called an artesian well). It can happen in oil reservoirs as well, which is why you occasionally see oil wells blow out.

A part of the construction of wells that I didn’t mention yet is the top. A well creates a direct path for water to come out of an aquifer, and if not designed, constructed, and maintained properly, it can also be a direct path into the aquifer for contaminants on the surface. In my model, I can simulate this by dropping some dye into the well to represent an unwanted chemical spilled at the surface. Say some rainwater enters too, washing the contaminant through the well into the aquifer. Now, as groundwater naturally moves in the subsurface, it carries a plume of contamination along as well. You can see how this small spill could spread out in an aquifer, contaminating other wells and ruining the resource for everyone. So, wells are designed to minimize the chances of leaks. The uppermost section of the annular space is permanently sealed, usually with cement grout. In addition, the casing is often extended above the surface with a concrete pad extending in all directions to prevent damage or infiltration to the well.

We've been talking so much about how to get water out of an aquifer, but there are some times when we want to do the reverse. Injection wells are nothing new; deep below ground can be a convenient and out-of-the-way place to dispose of unwanted fluids including sewage, mining waste, saltwater, and CO2. But until recently, it hasn't been a place to store a fluid with the intent of taking it back out at a later date. Aquifer Storage and Recovery or ASR is a relatively new technology that can help smooth out variability in water resources where the geology makes it possible. Large-scale storage of water is mostly restricted to surface reservoirs formed by dams, which are expensive and environmentally unfriendly to construct. With enough pressure, water can be injected through a well into an aquifer. You can see on my model that introducing water to the well causes the level in the aquifer to rise over time. Eventually, this water will flow away, but (as I mentioned) groundwater movement is relatively slow. In the right aquifer, you won't lose too much water before the need to withdraw it comes again.

Taking advantage of the underutilized underground seems obvious, but there are some disadvantages too. You need a Goldilocks formation where water won't flow away too fast, but is also not so tight that it takes super-high pressure for injection. You also need a geologic formation that is chemically compatible with the injected water to avoid unwanted reactions and bad tastes. Of course, you always have costs, and ASR systems can be expensive to operate because the water has to be pumped twice - once on the way in and again on the way out.

Finally, you can have issues with speed. In many places, the surplus water that needs to be stored comes during a flood - massive inflows that arrive over the course of a few hours or days. A dam is a great tool to capture floodwaters in a reservoir for later use. Injection wells, on the other hand, move water into aquifers too slowly for that. They’re more appropriate where surplus water is available for long durations. For example, one of the few operating ASR projects is right here in my hometown of San Antonio. When water demands fall below the permitted withdrawals from our main water source, the Edwards Aquifer, we take the surplus and pump it into a different aquifer. If demands rise above the permitted withdrawals, we can make up the difference from the ASR.

You can add more injection wells to increase the speed of recharge, but above a certain pressure, some funny things start to happen: underground formations break apart and erode in a phenomenon called hydraulic fracturing or just fracking. Breaking apart underground formations of rock and soil has been a boon for the oil and gas industry. But, just like that Texas groundwater in 1904, the regulation of fracking is mired in confusion and controversy, in no small part because it happens below the surface of the earth, hidden from public view. I’ll save those details for a future video.

May 03, 2022 /Wesley Crump

The Engineering Behind Russia's Deadlocked Pipeline: Nord Stream 2

April 19, 2022 by Wesley Crump

[Note that this article is a transcript of the video embedded above.]

Since 2011, the Russian energy corporation Gazprom and a group of large investors have been working on one of the longest and highest capacity offshore gas pipelines in the world. The Nord Stream 2 is a pair of large-diameter natural gas pipelines running along the bottom of the Baltic Sea from the Russian coast near St Petersburg to the northern coast of Germany near Greifswald. Planning, design, and construction of Nord Stream 2 was mired in political controversy not only because of climate-related apprehensions over new fossil fuel infrastructure but also over concerns that the pipeline could be used as a geopolitical weapon by Russia against other European countries. Still, construction began in 2016 and finished 5 years later at the end of 2021.

As the German government worked toward certifying the pipeline to begin operation, Russia launched a military invasion of Ukraine. This unjustified and unconscionable attack on a sovereign nation has received widespread international condemnation followed up with a litany of sanctions on Russia and its most senior leaders. Part of the response included Germany halting the certification of this divisive, ten-billion-dollar megaproject. As of this video’s production, the invasion of Ukraine is ongoing and future international relations between Russia and most of the developed world are unlikely to improve any time soon.

The U.S. put sanctions on the company in charge of the pipeline and its senior officers. The project’s website has been taken offline, and most of the employees have been fired or quit. These circumstances raise plenty of questions: How do you install a pipeline at the bottom of the Baltic Sea? Why is this line so important to geopolitics? And what does the future hold for what may be the world’s most controversial infrastructure project? I’m Grady, and this is Practical Engineering. In today's episode, we’re talking about the Nord Stream 2 pipeline.

Like its predecessor, Nord Stream, the goal of the Nord Stream 2 pipeline is to provide a direct connection between the vast reserves of natural gas in Russia and the energy-hungry markets of Europe. With a length of 1,230 kilometers or 764 miles each, the twin pipes pass through the territorial waters or exclusive economic zones of five countries: the two landfall nations of Russia and Germany as well as Finland, Sweden, and Denmark. Also like its predecessor, the Nord Stream 2 is owned by a subsidiary of Gazprom (a Russian-state-owned enterprise and one of the largest companies in the world) and financed by a coterie of other international oil and gas firms. The project has a long, complex, and controversial history. This video is meant to highlight the engineering details of the project, but in this case, the politics can't be ignored. I'll do my best to hit the high points, but check out some of the more comprehensive journalism on the subject before you form any strong opinions.

Even before construction began, Nord Stream 2 had some massive obstacles to overcome. The Baltic is one of the world's most polluted seas, and all the countries around it have a vested interest in making sure those conditions don't worsen. Pipeline construction can create harmful levels of underwater noise, affect fisheries, disrupt water quality, and even impact the cultural heritage of shipwrecks along the seafloor. Each country along the route imposed strict environmental requirements before construction permits would be issued. The planning phase for the pipeline involved detailed underwater surveys of the seabed to help choose the most feasible route along the way. This survey also helped identify unexploded ordnance from World Wars 1 and 2. Where possible, the pipeline was routed around these munitions, but in some cases they had to be detonated in place. When this was done, the contractors used bubble curtains around each explosion to mitigate the noise impacts on marine life.

The logistics of producing so much pipe were also a huge challenge. The pipe sections used for the Nord Stream 2 were about 1150 mm or 45 inches in diameter and 12 meters or 40 feet long. They started out as steel plates that were rolled into pipe sections, welded, stretched, beveled, and inspected for quality. An interior epoxy anti-friction coating was applied to minimize the pressure losses in the extremely long line. Then an exterior coating was applied to protect against corrosion in the harsh saltwater environment. And the entire project required the manufacture of more than 200,000 of these pipe sections. That's an average production rate of nearly 100 pipe sections per day spread between three suppliers.
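
Those figures hold up to a quick sanity check using only the numbers above:

```python
# A sanity check on the stated pipe logistics, using only figures from
# the project description above.

line_length_m = 1_230_000  # each of the twin lines
section_length_m = 12
n_lines = 2

sections_needed = n_lines * line_length_m / section_length_m
print(f"Sections needed: {sections_needed:,.0f}")  # about 205,000

production_years = sections_needed / 100 / 365  # at ~100 sections per day
print(f"At 100 sections/day: {production_years:.1f} years of production")
```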

Each pipe section was transported by rail to a port in Finland or Germany to receive another exterior coating, this time of concrete. This concrete weight coating was applied to increase the pipeline’s stability on the seabed. Doubling the weight of each pipe from 12 to 24 metric tons, the concrete would help resist the buoyancy and underwater currents that could move the line over time. It also provided mechanical protection during handling, transport, pipelay, and for long-term exposure along the seabed. After weight coating, the pipes were shipped to storage yards along the coast where they would eventually be transported by ship to large pipelay vessels working in the Baltic Sea.

These pipelay vessels were floating factories employing hundreds of workers each, and the Nord Stream 2 project had up to 5 working simultaneously. On the largest vessels, the basic process for pipelaying was first to weld two pipe sections together to create what's called a double-joint. These welds got a detailed inspection, and if they passed, the double-joint moved to a central assembly line to be connected to the main pipe string. There you got more welding and inspection. If everything checked out, a heat-shrink sleeve was placed around each weld, and then polyurethane foam was poured into a mold between the concrete coatings to further protect against corrosion while allowing the pipe string to flex during placement. Once complete, the vessel could advance a little further along the route while lowering the pipeline into its final position. This was a 24/7 operation and some of these pipelay vessels could complete 3 kilometers in a day.

In many locations, they could just lay pipe directly on the seabed. It was smooth enough to keep the line from deflecting too much and soft enough to avoid damage to the pipes. However, that wasn't the case along the entire route. In some shallow waters where the pipelines were exposed to hydrodynamic forces like waves and currents, the lines were placed in excavated trenches and backfilled. There were also many areas along the route that were rugged enough to create free spans of unsupported pipeline. Fallpipe vessels were deployed ahead of the pipe installation to fill depressions with rock and gravel to provide a smoother path along the seabed for the line. Finally, at locations where the Nord Stream 2 lines would cross other subsea utilities like power cables, telecommunications cables, and other pipelines, rock mattresses were installed to protect each utility at the intersection.

Each end of the pipeline came with a tremendous amount of infrastructure as well. At the German landfall, the pipe was tunneled onshore to the receiving station. This facility includes shut down valves, filters, preheaters, and pressure reduction equipment to allow gas to be delivered into the European natural gas grid. Both facilities also included equipment for Pipeline Inspection Gauges (also known as PIGs). These devices are launched from Russia into each pipeline, pushed along by the gas pressure for the entire 1,200 kilometer journey. The PIGs scan for problems like corrosion or mechanical damage and collect data that can be downloaded when they reach the end of the line in Germany.

Installing multiple sections of pipe simultaneously sped up construction of the line, but it created a serious challenge as well. How do you connect segments of pipe that have already been installed along the seabed? That's the job of maybe the most impressive operation of the entire project: the above-water tie-in or AWTI. The separate sections of pipeline were carefully installed on the seabed so their ends overlapped. When it came time to tie them together, divers first attached buoyancy tanks to each end to make them easier to lift. Then davit cranes along the side of the tie-in vessel attached to each pipe and lifted their ends above the waterline. These ends were left without a concrete weight coating so they would be lighter and could be cut to the exact length needed. The pipes were cut and beveled, welded, tested, and coated for corrosion protection. Finally, the tie-in vessel could lay the complete pipe back down on the seafloor, forming a small horizontal arc off the main alignment where divers removed the buoyancy tanks and detached the cranes. The Nord Stream 2 required several above-water tie-ins during construction. It seems simple enough, but each one took about three weeks to complete. The final AWTI was completed in September 2021, marking the end of construction of the Nord Stream 2.

Although Europe is in the midst of a major transition away from fossil fuels to renewable sources, the demand for natural gas is still high and expected to remain that way for the foreseeable future. In addition, Germany is planning to shutter the last 3 of its nuclear plants by the end of 2022, using natural gas as a bridge toward the expansion of wind and solar. With gas demands remaining consistently high, many fear that the Nord Stream and Nord Stream 2 pipelines put Russia in a position to exert political influence over its European neighbors. Nord Stream 2 would also allow more Russian gas to bypass Ukraine, depriving it of the transit fees it gets from gas lines through its borders.

As early as 2016, politicians in various countries around the world were coming out in opposition to the project. The U.S. played a large role in trying to delay or stop Nord Stream 2 altogether with sanctions on the ships involved in construction plus a host of Russian companies while carefully avoiding serious impacts to the contractors of its German ally. U.S. President Biden waived those sanctions in mid-2021 in a bid to improve US-German relations, but the Russian invasion of Ukraine changed everything. The U.S. immediately reimposed the sanctions and Germany froze certification of the project. The Nord Stream 2 company has been mostly silent so far, but there aren’t many good outcomes of spending $10B on design and construction of a pipeline that can’t be used. Most news sources appear to agree that they are completely insolvent and have fired all their employees. In addition, most of the non-Russian companies involved in the project have already written off their investments and walked away.

This simple pipeline highlights the tremendous complexity of infrastructure and geopolitics. It can be extremely difficult for a normal citizen to know what they stand to gain or lose from a project like this. We want cheap energy. We want warm homes during the winter. But we don’t want the global climate to change. And we definitely don’t want an unpredictable and misguided authoritarian leader to hold a major portion of Europe’s gas supplies hostage for political gains. In some ways, Putin’s invasion of Ukraine simplified these complex issues because it gave Germany and the US no choice but to kill the project. There’s a lot of uncertainty right now with how the conflict will end and what the world will look like when the dust settles. But it seems doubtful now that the Nord Stream 2 - this incredible achievement of engineering, logistics, and maritime construction - will ever be anything more than an empty tube of steel and concrete at the bottom of the Baltic Sea (and maybe that’s for the best). Thank you for watching and let me know what you think.


April 19, 2022 /Wesley Crump

What Sewage Treatment and Brewing Have in Common

April 05, 2022 by Wesley Crump

[Note that this article is a transcript of the video embedded above.]

I’m on a mission to show the world how engrossing human management of sewage can be, and in fact, we’ve followed the flow of domestic wastewater through sewers, lift stations, and primary treatment in previous videos on this channel. If you’ve watched those videos or others I’ve made, you know I like to build scale demonstrations of engineering principles. I did some testing for the next step of wastewater treatment to see if I could make it work, and the results were just… bad. Even with the blue dye disguising the disgustingness of this demo, operating a small-scale wastewater treatment plant in my garage is probably the most misguided thing I’ve ever done for a video. So I got to thinking about other ways humans co-opt microorganisms to convert a less desirable liquid into a better one, and there is one obvious equivalent: making alcoholic drinks. So I’ve got a couple of gallons of apple cider, a packet of yeast, and a big glass vessel called a carboy. Even if you don’t imbibe, whether by law or by choice, I promise you’ll enjoy seeing the similarities and differences between cleaning up domestic wastewater and the ancient art form of fermenting beverages. I’m Grady, and this is Practical Engineering. In today’s episode, we’re talking about secondary wastewater treatment… and a little bit about homebrewing too.

You probably don't think about cellular biology when you consider civil engineers, even though we're made of cells just like everyone else. We're associated more with steel, concrete, and earthwork. But, the engineers who design wastewater treatment plants, and the operators who run them, have to know a lot about microbes. Here's why: The worst part about sewage isn't the solids. (They can be pretty easily removed in settling basins, as I've shown in a previous video). It's not even the pathogens - dangerous organisms that can make us sick. (Those can be eliminated using disinfection processes like UV light or chlorine). The worst part about sewage is the nutrients it contains. I'm talking about organic material, nitrogen, phosphorus, and other compounds. You can't just release this stuff into a creek, river, or ocean because the microbes already in the destination water, like bacteria and algae, will respond by increasing their population beyond what the ecosystem would ever see under natural conditions. As they do, they use up all the oxygen dissolved in the water, ruining the habitat and killing fish and other wildlife. Nutrient pollution is one of the most severe and challenging environmental issues worldwide, so one of the most critical jobs wastewater plants do is clean nutrients out of the water before it can be discharged. But, because they are dissolved into solution at the molecular scale, nutrients are much harder to separate from sewage than other contaminants.

Like domestic wastewater, making a fermented beverage starts with a liquid full of dissolved nutrients that we want to convert into something better. In this case, the nutrients are sugars that we’re trying to convert into alcohol. I should point out that making cider is technically not brewing since there’s no heat used to extract the sugars. But, the fermentation process we’re talking about in this video is the same, no matter whether you’re making beer, wine, or even distilled spirits. It all starts out with some kind of sugary liquid. The way we measure the nutrient concentration in brewing is pretty simple: dissolved sugars increase the density, or specific gravity, of the liquid. This glass tool is called a hydrometer, and it floats upright when suspended in a liquid. Just like a ship sits a little higher in seawater than it does in freshwater, a hydrometer floats to a different height depending on the density of the fluid. The more sugar, the higher the hydrometer rises.

On the other hand, characterizing the strength of sewage is equally important but a little more complicated. For one, not all nutrients change the density of the fluid equally. But more importantly, there are a lot more of them than just sugar, and they can all exist at different strengths. So rather than try and separate all that complexity, we usually measure what matters most: how much dissolved oxygen organisms would steal from the water as they break down the nutrients within a sewage sample. The technical term for this is Biochemical Oxygen Demand or BOD. In general terms, treatment plant operators measure the amount of oxygen dissolved in a sewage sample before setting it aside for a 5-day period. During that time, critters in the sample will eat up some of the nutrients, robbing the dissolved oxygen as they do. The difference in oxygen before and after the five days is the BOD. Once you know your initial concentration of nutrients, whether sugars or… other stuff… you can work on a way to get them out of there.
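Here’s that arithmetic in a nutshell. The numbers below are hypothetical, and a real lab procedure involves careful dilution, seeding, and temperature control, but the core calculation is just a subtraction scaled by the dilution:

```python
# A minimal sketch of the BOD5 arithmetic (hypothetical numbers).
# Samples are diluted so the microbes don't exhaust the oxygen entirely.

def bod5(do_initial_mg_l, do_final_mg_l, dilution_factor):
    """Biochemical Oxygen Demand over 5 days, in mg/L."""
    return (do_initial_mg_l - do_final_mg_l) * dilution_factor

# A sample diluted 1:50, starting at 8.5 mg/L dissolved oxygen and ending
# at 4.2 mg/L after five days in the dark at 20 degrees Celsius:
print(bod5(8.5, 4.2, 50))  # ~215 mg/L, in the range of raw domestic sewage
```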

In both sewage and brewage, we expropriate tiny biological buddies for this purpose. In other words, we use them to our advantage. Wastewater treatment plants rely primarily on bacteria with some protozoa as well. There are a myriad of secondary treatment processes used around the world, but one is more common than all the rest, and it has the best name too: activated sludge. After the primary treatment of removing solids from the flow, wastewater passes into large basins where enormous colonies of microorganisms are allowed to thrive. At the bottom of the basins are diffusers that bubble prodigious quantities of air up through the sewage, dissolving as much oxygen as possible into the liquid and maximizing the microorganisms’ capacity to consume organic material. This combination of wastewater and biological mass is known as mixed liquor, but that’s just a coincidence in this case. Either way, you definitely don’t want to be drinking too much of it.

Fermentation of an alcoholic beverage - the process where sugars are converted to ethanol - works a little bit differently. First, the microorganisms doing the work in fermentation are yeast. These are single-cell organisms from the fungus kingdom, in some ways similar but in many ways quite unlike the bacteria and protozoa in a wastewater treatment plant. In fact, brewers work pretty hard to keep equipment clean and sanitized so that bacteria can’t colonize the brew. The foam you see in the carboy before I filled it with apple juice is a no-rinse sanitizer meant to kill unwanted microorganisms before pitching the wanted ones in. The yeast themselves will even take advantage of the antimicrobial effects of the very ethanol they produce.

Another difference between the processes is air. Except at the very beginning, when the yeast are first expanding their population, fermentation is an anaerobic process. That means it happens in the absence of oxygen. A wastewater treatment plant adds air to speed up the process. However, yeast exposed to oxygen stop producing alcohol, so the vessel is usually sealed to minimize the chances of that. The bubbles you see are carbon dioxide that the yeast create in addition to the ethanol. An airlock device lets the carbon dioxide vent so it can’t build up pressure without letting airborne contaminants inside. As the sugars are converted and CO2 gas leaves the vessel, the density of the liquid drops, and that change can be measured using a hydrometer. My cider started at a specific gravity of 1.06 and fermented down to 1.00, meaning it has an alcohol content of around 8% by volume. However, just like the outflow from an activated sludge basin, it’s not quite ready to drink.
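If you want to check my math on that, homebrewers use a simple empirical approximation to convert the drop in specific gravity into alcohol by volume. The 131.25 multiplier is a rule-of-thumb constant, so treat this as a sketch rather than lab-grade chemistry:

```python
# Rule-of-thumb homebrewing approximation: ABV ~= (OG - FG) * 131.25,
# where OG and FG are the original and final specific gravities.

def abv_percent(original_gravity, final_gravity):
    return (original_gravity - final_gravity) * 131.25

print(abv_percent(1.06, 1.00))  # ~7.9%, the "around 8 percent" quoted above
```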

Once the microorganisms have done their job and the liquid is nearly free of nutrients or sugars, you need to get them out. In both brewing and wastewater treatment, that usually happens through settling. I have a separate video that goes into more detail about this process, but the basics are pretty simple. Most solid particles, including microorganisms, are denser than water and thus will sink. But, they sink slowly, so you have to keep the liquid still for this type of separation to work well. Wastewater treatment plants use settling tanks called clarifiers that send the mixed liquor slowly from the center outward so that it drops the sludge of microorganisms to the bottom as it does, leaving clear effluent to pass over a weir around the perimeter to leave the tank. Similarly, you can see a nice layer of mostly dead yeast on the bottom of my fermentation vessel, typically called the lees or trub. Homebrewers use a process called racking, which is just siphoning the liquid from the fermentation vessel while leaving the solids behind.

In both cases, these microorganisms are not all dead. That’s where the “activated” in activated sludge comes from. A rotating arm in the clarifier pushes the sludge to a center hopper. From there, it is collected and returned to the aeration chamber to seed the next colony that will treat new wastewater entering the tanks. Of course, not all that sludge is needed, so the rest must be discarded, creating a whole separate waste disposal challenge (but that’s a topic for another video). Similarly, the yeast at the bottom of my fermenter are not all dead and can be reused in another batch. Commercial breweries and homebrewers alike often use yeast over and over again. However, they mutate pretty quickly because of their short lifetimes, so the flavor can drift over time.

At this point, both the wastewater and my hard cider are quote-unquote nutrient-free. They are generally ready to be safely released into a nearby watercourse and my tummy, respectively. However, there are some final tasks that may be wanted or needed in both cases. As you can see, my hard cider doesn’t look quite like what you would buy in a can or bottle at the grocery store. I’m not going to carbonate it in this video, but that is an extra step that many cidermakers and most beer brewers take. I will add an enzyme that helps clear up the haze from the unfiltered apple juice. It doesn’t make it taste any different, but it does look a lot nicer.

Like the finishing steps of homebrewing, many wastewater plants use tertiary treatment processes to target other pollutants the bugs couldn’t get. Depending on where the effluent is going, standards might require more purification than primary and secondary treatment can achieve on their own. In fact, wastewater treatment plants have been experiencing a relatively dramatic shift over the past few decades as they treat sewage less like a waste product and more like an asset. After all, raw sewage is 99.9 percent water, and water is a valuable resource to cities. In places with water scarcity, it can be cost-effective to treat municipal wastewater beyond what would typically be required so that it can be reused instead of discarded.


A few places across the world have potable reuse (also known as toilet-to-tap) where sewage is cleaned to drinking water quality standards and reintroduced to the distribution system. Wichita Falls, Texas and the International Space Station are notable examples. However, most recycled water isn’t meant for human consumption. Plenty of uses don’t require potable water, including industrial processes and the irrigation of golf courses, athletic fields, and parks. Many wastewater treatment plants are now considered water reclamation plants because, instead of discharging effluent to a stream or river, they pump it to customers that can use it, hopefully reducing demands on the potable water supply as a result. In many countries, purple pipes are used to distinguish non-potable water distribution systems, helping to prevent cross-connections. And sometimes you’ll see signs like this one to prevent people from getting sick. [Drink] On the other hand, Practical Engineering’s “Effervescent Effluent,” when enjoyed responsibly, is perfectly safe to drink. Cheers!

April 05, 2022 /Wesley Crump

The Most Mindblowing Infrastructure in My City

March 25, 2022 by Wesley Crump

[Note that this article is a transcript of the video embedded above.]

I’m standing in front of a pair of water towers near my house in San Antonio. If you’ve seen my video on water towers, or you just know about how they work, they might look a little odd to you. It’s not only unusual that there are two tanks right next to each other, but that they’re completely different heights. This difference is a little hint that there’s something more interesting below the surface here. Some engineering achievements are visibly remarkable. It’s easy to celebrate the massive projects across the world: the Hoover Dams and the Golden Gate Bridges. But it’s just as easy to overlook the less notable infrastructure all around us that makes modern life possible. If you’ve seen any of my videos, you know that I think structures hidden in plain sight are just as worthy of celebration. In fact, I think infrastructure is so remarkable, I wrote a book about it that you can preorder starting today. I can’t tell you how excited I am to announce this project, but first let me tell you a little bit about these water towers and a few other things in San Antonio too. I’m Grady and this is Practical Engineering. In today’s episode, I misguidedly chose the coldest day of the year to film my first on-location video here in my home city to talk about a few of my favorite parts of the constructed environment.

Luckily the drone footage was taken on a sunnier day. You may have guessed already that these two towers aren’t connected to the same water distribution system. If they were, water would just drain out of the upper tank and overflow the lower one. San Antonio actually has a second system that takes recycled water from sewage treatment plants and delivers it to golf courses, parks, and commercial and industrial customers throughout the city. Treated wastewater isn’t clean enough to drink, but it’s more than clean enough to water the grass or use in a wide variety of industrial processes. So, instead of discarding it, we treat it as an asset, delivering it to customers that can use it. That reduces the demand on the potable water supply (which is scarce in this part of Texas). Some people call this the purple pipe system, because recycled water pipes have a nice lavender shade to differentiate them and prevent cross connections. San Antonio actually has one of the largest recycled water delivery systems in the country, and this water tower is one of the many tanks they use to buffer the supply and demand of recycled water around town.

Not too far from the two towers is this unofficial historic landmark of San Antonio. It may just look like a simple concrete wall, but Olmos Dam is one of the most important flood control structures in the city. This structure was originally built in 1927 after a massive flood demolished much of downtown. A roadway along the top of the dam had electric lights and was a popular driving destination with nice views. The roadway has since been replaced by a more hydraulically-efficient curved crest. I have a special connection to this dam because I worked as an intern on a rehabilitation project at the engineering firm hired to design the repairs. The project involved the installation of about 70 post-tensioned anchors to stabilize the dam against extreme loads from flooding. Each anchor was drilled through the structure and grouted into the rock below. Then a massive hydraulic jack was used to tension the strands and lock each anchor off at the top, stitching the dam to its foundation like gigantic steel rubber bands. The contractor even had to use a special drill rig to fit under this highway bridge. San Antonio is in the heart of flash flood alley in Texas, named because of the steep, impermeable terrain and intense storms we get. Olmos Dam helped protect downtown from many serious floods in its hundred-year lifetime. But, it’s not the only interesting flood control structure in town.

I’m here at the Flood Control Tunnel Inlet Park, one of the best-named parks in the City if you ask me. And below my feet is one of the most interesting infrastructure projects in all of San Antonio. These gates might not look too interesting at first glance, but during a flood, water in the San Antonio River flows into ports of this inlet structure instead of continuing downstream toward downtown. From this inlet, the floodwaters pass down a vertical shaft more than a hundred feet (or 30 meters) below the ground. The tunnel at the bottom of the shaft runs for about 3 miles (or 5 kilometers) below downtown to the south, allowing floodwaters to bypass the most vulnerable developed areas and saving hundreds of millions of dollars in property damages from flooding.

When in use, the floodwaters from the tunnel flow back up a vertical shaft and come out here at the Flood Control Tunnel Outlet on the Mission Reach of the San Antonio River. Under normal conditions, there are pumps that can recirculate river water through the tunnel, keeping things from getting stagnant and providing a fresh supply of water to flow through the downtown riverwalk. This part of the San Antonio River south of downtown is one of my favorite places because it’s a perfect example of how urban and natural areas can coexist.

When you consider infrastructure and construction, you might think about concrete, steel, and hard surfaces. But this part of the San Antonio River was included in one of the largest ecosystem restoration projects in the US. Before the project, this was your typical ugly, channelized, urban river, but now it’s been converted back to a much more natural state with native vegetation and its original meandering path. But, the project didn’t only improve the habitat along the river. It also included recreational improvements to make this stretch a destination for residents and tourists. For example, these grade control structures help keep the river from eroding downward, but they also feature canoe chutes so you can paddle the river without interruptions. There are several new parks along the river, including Confluence Park, home to this beautiful pavilion made of concrete petals. Most importantly, there is a continuous dedicated hike-and-bike trail along the entire stretch.

Everyone knows about the Alamo because of the famous battle, but there are actually 5 Spanish missions established in the early 1700s along the San Antonio River. The sites together are now a historic National Park and a UNESCO World Heritage site. You can tour the missions to learn about the history of Spanish colonialism and interwoven cultures of Spain and the Indigenous people of Texas and Mexico. The Mission Reach trail provides a connection to all the missions and a bunch of other interesting destinations along the river, including parks, public art, and my favorite spots: the historic and modern water control infrastructure projects.

So far all the structures I’ve shown you have been water-related. That’s my professional background, but we could do similar deep dives just here in San Antonio about the power grid, highways, bridges, telecommunications, and even construction projects. And, preferably on warmer days, we could do similar field guides in every urban area around the world. In fact, that’s the premise of my new book, Engineering in Plain Sight: An Illustrated Field Guide to the Constructed Environment. I’ve been working so hard on this project for the past two years, and I’m thrilled to finally tell you about it.

Just like there are written guides to birds, rocks, and plants, Engineering in Plain Sight is a field guide to infrastructure that provides colorful illustrations and accessible explanations of nearly every part of the constructed world around us. It’s essentially 50 new Practical Engineering episodes crammed between two covers. Imagine if you could look at the world through the eyes of the engineers who designed the infrastructure you might not even be noticing in your everyday life. I wrote this book with the goal of transforming your perspective of the built environment, and I think once you read it, you’ll never look at your city the same again. You can explore it like an encyclopedia - picking pages in no order. Or treat the sights of your city’s infrastructure like a treasure hunt and try to collect them all.

The book comes out in August, but I would love if you preorder your copy right now because, in the world of books, presales are the best way to get the attention of bookstores and libraries. If you preorder directly from the publisher, you’ll get a discount off the regular price, and you can preorder signed copies directly from my website that come with an exclusive enamel pin as a gift. Preordering is the only way to get your hands on this custom pin that was designed by the book’s illustrator.

Use the link in the description to find all the preorder locations. And one more thing: between now and when the book publishes, I’m going to be posting some short explainers about interesting infrastructure on all my social media channels, and I want to encourage you to do the same. I’ll be sending 5 signed copies of my new book to my favorite social media posts about infrastructure that use the hashtag #EngineeringInPlainSight. Check out the link below for more information. And from the bottom of my heart, thank you for watching and let me know what you think!

March 25, 2022 /Wesley Crump

How to Clean Sewage with Gravity

March 01, 2022 by Wesley Crump

[Note that this article is a transcript of the video embedded above.]

This is the Stickney Water Reclamation Plant in Chicago, the largest wastewater treatment plant in the world. It serves more than two million people in the heart of the Windy City, converting all their showers, flushes, and dirty dishwater, plus the waste from countless commercial and industrial processes into water safe enough to discharge into the adjacent canal which flows eventually into the Mississippi River. It all adds up to around 700 million gallons or two-and-a-half billion liters of sewage each day, and the plant can handle nearly double that volume on peak days. That’s a lot of Olympic-sized swimming pools, and in fact, the aeration tanks used to biologically treat all that sewage almost look like something you might do a lap or two in (even though there are quite a few reasons you shouldn’t). However, flanking those big rectangular basins are rows of circular ponds and smaller rectangular basins that have a simple but crucial responsibility in the process of treating wastewater. We often use chemicals, filters, and even gigantic colonies of bacteria to clean sewage on such a massive scale, but the first line of defense in the fight against dirty water is usually just gravity. I’m Grady, and this is Practical Engineering. In today’s episode, we’re talking about settlement for water and wastewater treatment.

This video is part of a series on municipal wastewater handling and treatment. Rather than put out a single video overview of treatment plants (which many other channels have already masterfully done), we’re taking a deep dive into a few of the most interesting parts of converting sewage into clean water. Check out the wastewater playlist linked in the card above if you want to learn more.

The job of cleaning water contaminated by grit, grime, and other pollutants is really a job of separation. Water gets along with nearly every substance on earth. That’s why it’s so useful for cleaning and a major part of why it does such a good job carrying our wastes away from homes and businesses in sewers. But once it reaches a wastewater treatment plant, we need to find a way to separate the water from the wastes it carries so it can be reused or discharged back into the environment. Some contaminants chemically dissolve into the water and are difficult to remove at municipal scales. Others are merely suspended in the swift and turbulent flow and will readily settle out if given a moment of tranquility. That’s the trick that wastewater treatment engineers use as the first step in cleaning wastewater.

Once sewage passes through a screen to filter out sticks and rags, the first step at a wastewater treatment plant, called primary treatment, is the simple process of slowing the flow down to allow time for suspended solids to settle out. How do you create such placid conditions from a constant stream of wastewater? You can’t tell people to stop flushing or showering to slow down the flow. Velocity and volumetric flow are related by a single parameter: the cross-sectional area. If you increase this area without changing the flow, the velocity goes down as a result. Basins used for sedimentation are essentially just enormous expansion fittings on the end of the pipe, dramatically increasing the area of flow so the velocity falls to nearly zero. But just because the sewage stream is now still and serene doesn’t mean impurities and contaminants instantly fall to the bottom. You’ve got to give them time.
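That inverse relationship is just the continuity equation, Q = V × A: for a fixed flow rate, velocity scales down as flow area scales up. A quick sketch with made-up numbers:

```python
# Continuity: flow rate Q = velocity V * cross-sectional area A.
# Made-up numbers comparing a sewer pipe to a settling basin.

q = 0.5            # flow rate, m^3/s
pipe_area = 0.3    # m^2, roughly a 0.6-meter-diameter pipe flowing full
basin_area = 40.0  # m^2, a basin cross-section 10 m wide by 4 m deep

print(q / pipe_area)   # ~1.7 m/s in the pipe - swift and turbulent
print(q / basin_area)  # ~0.0125 m/s in the basin - nearly still
```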

How much time is a pretty important question if you’re an engineer because it affects the overall size of the basin, and thus it affects the cost. Particles falling out of suspension quickly reach a terminal velocity, just like a skydiver falling from a plane. That maximum speed is largely a function of each particle’s size, and I have a demonstration here in my garage to show you how that works. I think it’s intuitive that larger particles fall through a liquid faster than smaller ones. Compare me dropping a pebble to a handful of sand. The pebble reaches the bottom in an instant, while the smaller particles of sand settle out more slowly. Wastewater contains a distribution of particles from very small to quite large, and ideally we want to get rid of them all. 

As an example, I have two colors of sand here. I sifted the white sand through a fine mesh, discarding the smaller particles and keeping the large ones. I sifted the black sand through the same mesh, this time keeping the fine particles and discarding the ones retained by the sieve. After that, I combined both sands to create a gray mixture, and we’ll see what happens when we put it into a column of water. This length of pipe is full of clean water, and I’m turning it over so the mixture is at the top. Watch what happens as the sand settles to the bottom of the pipe. You can see that, on the whole, the white sand reaches the bottom faster, while the black sand takes longer to settle. The two fractions that were previously blended together separate themselves again just from falling through a column of water.
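The size effect in this demo has a classic formula behind it, which the next paragraph alludes to: Stokes’ law, the terminal velocity of a small sphere settling through a still fluid. Here’s a sketch with assumed properties for sand in water:

```python
# Stokes' law: v = g * (rho_particle - rho_water) * d^2 / (18 * mu).
# Only valid for slow, laminar settling (fine particles), which is the
# regime that matters most in a sedimentation basin.

G = 9.81            # gravity, m/s^2
RHO_WATER = 1000.0  # density of water, kg/m^3
MU = 0.001          # dynamic viscosity of water at ~20 C, Pa*s

def settling_velocity(diameter_m, rho_particle=2650.0):  # ~2650 for quartz sand
    return G * (rho_particle - RHO_WATER) * diameter_m**2 / (18 * MU)

print(settling_velocity(0.0001))   # 0.1 mm grain: ~0.009 m/s
print(settling_velocity(0.00001))  # 0.01 mm grain: ~0.00009 m/s
```

Notice the diameter is squared: a particle ten times smaller settles a hundred times slower, which is exactly why the black sand lags so far behind the white.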

Of course, physicists have used sophisticated fluid dynamics with partial differential equations to work out the ideal settling velocity of any size of spherical particle in a perfectly still column of water based on streamlines, viscosity, gravity, and drag forces. But, we civil engineers usually just drop them in the water and time how quickly they fall. After all, there’s hardly anything ideal about a wastewater treatment plant. As water moves through a sedimentation basin and individual particles fall downward out of suspension, they take paths like the ones shown here. Based on this diagram, you would assume that depth of the basin would be a key factor in whether or not a particle reaches the bottom or passes through to the other side. Let me show you why settling basins defy your intuitions with just a tiny bit of algebra.

You’ve got a particle coming in on the left side of the basin. It has a vertical velocity - that’s how fast it settles - and a horizontal velocity - that’s how fast the water’s moving through the basin. If the time it takes to fall the distance D to the bottom is shorter than the time it takes to travel the length L of the basin, the particle will be removed from the flow. Otherwise it will stay in suspension past the settling basin. That’s what we don’t want. As I mentioned, the speed of the water is the flow rate divided by the cross-sectional flow area - that’s the basin’s width times its depth. Since both the time it takes for a particle to travel the length of the basin and the time it takes to settle to its bottom are a function of the basin’s depth, that term cancels out, and you’re left with only the basin’s length times width (in other words, its surface area). That’s how we measure the efficiency of a sedimentation basin. Divide the flow rate coming in by the surface area, and you get a speed that we call the overflow or surface loading rate. All the particles that settle faster than the overflow rate will be retained by the sedimentation basin, regardless of its depth.
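Here’s that algebra as a sketch you can play with. The depth never shows up in the removal check, only the surface area. All numbers are invented:

```python
# Overflow (surface loading) rate: v_o = Q / (L * W).
# Any particle that settles faster than v_o gets captured, no matter
# how deep the basin is.

q = 0.5        # inflow, m^3/s
length = 30.0  # basin length, m
width = 10.0   # basin width, m

overflow_rate = q / (length * width)  # ~0.0017 m/s
print(overflow_rate)

# Settling speeds for a coarse, medium, and fine particle, m/s:
for v_settle in [0.009, 0.002, 0.0001]:
    verdict = "captured" if v_settle >= overflow_rate else "passes through"
    print(v_settle, verdict)
```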

Settlement is a cheap and efficient way to remove a large percentage of contaminants from wastewater, but it can’t remove them all. There are a lot more steps that follow in a typical wastewater treatment plant, but in addition to being the first step of the process, settlement is also usually the last one as well. Those circular ponds at the Stickney plant in Chicago are clarifiers used to settle and collect the colonies of bacteria used in the secondary treatment process. Clarifiers are just settlement basins with mechanisms to automatically collect the solids as they fall to the bottom. The water from the secondary treatment process, called mixed liquor, flows up through the center of the clarifier and slowly makes its way to the outer perimeter, dropping particles that form a layer of sludge at the bottom. The clarified water passes over a weir so that only a thin layer farthest from the sludge can exit the basin. A scraper pushes the sludge down the sloped bottom of the clarifier into a hopper where it can be collected for disposal.

Settlement isn’t only used for wastewater treatment. Many cities use rivers and lakes as sources of fresh drinking water, and these surface sources are more vulnerable to contamination than groundwater. So, they go through a water purification plant before being distributed to customers. Raw surface water contains suspended particles of various materials that give water a murky appearance (called turbidity) and can harbor dangerous microorganisms. The first step in most drinking water treatment plants is to remove these suspended particles from the water. But unlike the larger solids in wastewater, suspended particles creating turbidity in surface water don’t readily settle out. Because of this, most treatment plants use chemistry to speed up the process, and I have a little demo of that set up here in the studio.

I have two bottles full of water that I’ve vigorously mixed with dirt from my backyard. One will serve as the control, and the other as a demonstration. The reason these tiny soil particles remain suspended without settling is that they carry an electrical charge. Therefore, each particle repels its neighbors, fighting the force of gravity, and preventing them from getting too close to one another. Chemical coagulants neutralize the electric charges so fine particles no longer repel one another. Additional chemicals called flocculants bond the particles together into clumps called flocs. As the flocs of suspended particles grow, they eventually become heavy enough to settle out, leaving clarified water at the top of the bottle. Treatment plants usually do this in two steps, but the pool cleaner I’m using in the demo does both at once. It’s a pretty dramatic difference if you ask me. In a clarifier, this sludge at the bottom would be pumped to a digester or some other solids handling process, and the clear water would move on to filtration and disinfection before being pumped into the distribution system of the city.


Our ability to clean both drinking water and wastewater at the scale of an entire city is one of the most important developments in public health. Sedimentation is used not only in water treatment plants but also ahead of pumping stations to protect the pumps and pipes against damage, in canals to keep them from silting up, in fish hatcheries, mining, farming, and a whole host of other processes that create or rely on dirty water. The science of settlement and sedimentation impacts our lives in a significant way, and hopefully learning a little bit about it helps you recognize the brilliant engineering keeping our water safe.

March 01, 2022 /Wesley Crump

What Really Happened During the 2003 Blackout?

February 15, 2022 by Wesley Crump

[Note that this article is a transcript of the video embedded above.]

On August 14, 2003, a cascading failure of the power grid plunged more than 50 million people into darkness in the northeast US and Canada. It was the most significant power outage ever in North America, with an economic impact north of ten billion dollars. Calamities like this don’t happen in a bubble, and there were many human factors, political aspects, and organizational issues that contributed to the blackout. But, this is an engineering channel, and a bilateral task force of energy experts from the US and Canada produced this in-depth 240-page report on all of the technical causes of the event that I’ll try to summarize here. Even though this is kind of an older story, and many of the tough lessons have already been learned, it’s still a nice case study to explore a few of the more complicated and nuanced aspects of operating the electric grid, essentially one of the world’s largest machines. I’m Grady, and this is Practical Engineering. Today, we’re talking about the Northeast Blackout of 2003.

Nearly every aspect of modern society depends on a reliable supply of electricity, and maintaining this reliability is an enormous technical challenge. I have a whole series of videos on the basics of the power grid if you want to keep learning after this, but I’ll summarize a few things here. And just a note before we get too much further, when I say “the grid” in this video, I’m really talking about the Eastern Interconnection that serves the eastern two-thirds of the continental US plus most of eastern Canada. 

There are two big considerations to keep in mind concerning the management of the power grid. One: supply and demand must be kept in balance in real-time. Storage of bulk electricity is nearly non-existent, so generation has to be ramped up or down to follow the changes in electricity demands. Two: In general, you can’t control the flow of electric current on the grid. It flows freely along all available paths, depending on relatively simple physical laws. When a power provider agrees to send electricity to a power buyer, it simply increases the amount of generation while the buyer decreases their own production or increases their usage. This changes the flow of power along all the transmission lines that connect the two. Each change in generation and demand has effects on the entire system, some of which can be unanticipated. 

Finally, we should summarize how the grid is managed. Each individual grid is an interconnected network of power generators, transmission operators, retail energy providers, and consumers. All these separate entities need guidance and control to keep things running smoothly. Things have changed somewhat since 2003, but at the time, the North American Electric Reliability Council (or NERC) oversaw ten regional reliability councils who operated the grid to keep generation and demands in balance, monitored flows over transmission lines to keep them from overloading, prepared for emergencies, and made long-term plans to ensure that bulk power infrastructure would keep up with growth and changes across North America. In addition to the regional councils, there were smaller reliability coordinators who performed the day-to-day grid management and oversaw each control area within their boundaries.

August 14th was a warm summer day that started out fairly ordinarily in the northeastern US. However, even before any major outages began, conditions on the electric grid, especially in northern Ohio and eastern Michigan, were slowly degrading. Temperatures weren’t unusual, but they were high, leading to an increase in electrical demands from air conditioning. In addition, several generators in the area weren’t available due to forced outages. Again, not unusual. The Midwest Independent System Operator (or MISO), the area’s reliability coordinator, took all this into account in their forecasts and determined that the system was in the green and could be operated safely. But, three relatively innocuous events set the stage for what would follow that afternoon.

The first was a series of transmission line outages outside of MISO’s area. Reliability coordinators receive lots of real-time data about the voltages, frequencies, and phase angles at key locations on the grid. There’s a lot that raw data can tell you, but there’s also a lot it can’t. Measurements have errors and uncertainties, and they aren’t always perfectly synchronized with each other. So, grid managers often use a tool called a state estimator to process all the real-time measurements from instruments across the grid and convert them into the likely state of the electrical network at a single point in time, with all the voltages, current flows, and phase angles at each connection point. That state estimation is then used to feed displays and make important decisions about the grid.
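To give a feel for what a state estimator actually computes, here’s a toy version. Real estimators solve a nonlinear AC power-flow model iteratively, but the core idea is weighted least squares: find the one state that best explains all the noisy, redundant measurements. Everything here is invented for illustration:

```python
import numpy as np

# Toy weighted least-squares state estimation: find the state x such that
# the measurements z are best explained by z ~= H @ x.

H = np.array([[1.0,  0.0],    # each row maps the hidden state (think bus
              [0.0,  1.0],    # voltages/angles) to one measurement
              [1.0, -1.0]])
z = np.array([1.02, 0.98, 0.05])  # noisy, redundant meter readings
W = np.diag([1.0, 1.0, 0.5])      # how much we trust each meter

# Normal equations: x_hat = (H^T W H)^-1 H^T W z
x_hat = np.linalg.solve(H.T @ W @ H, H.T @ W @ z)
print(x_hat)

# Residuals that stay stubbornly large mean the inputs are inconsistent -
# the "I can't find a state that matches" failure described below.
print(z - H @ x_hat)
```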

But, on August 14th, MISO’s state estimator was having some problems. More specifically, it couldn’t converge on a solution. The state estimator was saying, “Sorry. All the data that you’re feeding me just isn't making sense. I can’t find a state that matches all the inputs.” And the reason it was saying this is that twice that day, a transmission line outside MISO’s area had tripped offline, and the state estimator didn’t have an automatic link to that information. Instead it had to be entered manually, and it took a bunch of phone calls and troubleshooting to realize this in both cases. So, starting around noon, MISO’s state estimator was effectively offline.

Here’s why that matters: The state estimator feeds into another tool called a Real-Time Contingency Analysis or RTCA that takes the estimated state and does a variety of “what ifs.” What would happen if this generator tripped? What would happen if this transmission line went offline? What would happen if the load increased over here? Contingency analysis is critical because you have to stay ahead of the game when operating the grid. NERC guidelines require that each control area manage its network to avoid cascading outages. That means you have to be okay, even during the most severe single contingency, for example, the loss of a single transmission line or generator unit. Things on the grid are always changing, and you don’t always know what the most severe contingency would be. So, the main way to ensure that you’re operating within the guidelines at any point in time is to run simulations of those contingencies to make sure the grid would survive. And MISO’s RTCA tool, which was usually run after every major change in grid conditions (sometimes several times per day), was offline on August 14th up until around 2 minutes before the start of the cascade. That means they couldn’t see their vulnerability to outages, and they couldn’t issue warnings to their control area operators, including FirstEnergy, the operator of a control area in northern Ohio including Toledo, Akron, and Cleveland.
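Conceptually, an RTCA run is just a loop: remove one element at a time, re-solve the flows, and flag anything that would overload. The sketch below uses a crude stand-in for the re-solve step (surviving lines pick up the lost flow in proportion to their capacity, where real tools re-solve the full network equations), and all the megawatt numbers are invented:

```python
# Toy N-1 contingency screen for parallel import paths into a load center.
# Capacities and flows in MW are invented; real RTCA re-solves the grid.

capacity = {"Line A": 950, "Line B": 1300, "Line C": 1200}
flow = {"Line A": 600, "Line B": 900, "Line C": 850}

def n_minus_1(capacity, flow):
    for outage in capacity:
        survivors = {k: c for k, c in capacity.items() if k != outage}
        total_cap = sum(survivors.values())
        for name, cap in survivors.items():
            # crude redistribution of the lost line's flow
            new_flow = flow[name] + flow[outage] * cap / total_cap
            status = "OVERLOAD" if new_flow > cap else "ok"
            print(f"lose {outage}: {name} at {new_flow:.0f}/{cap} MW {status}")

n_minus_1(capacity, flow)
```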

That afternoon, FirstEnergy was struggling to maintain adequate voltage within their area. All those air conditioners use induction motors that spin a magnetic field using coils of wire inside. Inductive loads do a funny thing to the power on the grid. Some of the electricity used to create the magnetic field isn’t actually consumed, but just stored momentarily and then returned to the grid each time the current switches direction (that’s 120 times per second in the US). This causes the current to lag behind the voltage, reducing its ability to perform work. It also reduces the efficiency of all the conductors and equipment powering the grid because more electricity has to be supplied than is actually being used. This concept is kind of deep in the weeds of electrical engineering, but we normally simplify things by dividing bulk power into two parts: real power (measured in Watts) and reactive power (measured in var). On hot summer days, grid operators need more reactive power to balance the increased inductive loads on the system caused by millions of air conditioners running simultaneously.
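In numbers: if the current lags the voltage by a phase angle φ, the apparent power S = V × I splits into real power P = S·cos φ and reactive power Q = S·sin φ. A minimal sketch with a hypothetical inductive load:

```python
import math

# Splitting apparent power into real (W) and reactive (var) components
# for an inductive load where current lags voltage by phi degrees.

def power_split(v_rms, i_rms, phi_deg):
    s = v_rms * i_rms                        # apparent power, VA
    p = s * math.cos(math.radians(phi_deg))  # real power: does actual work
    q = s * math.sin(math.radians(phi_deg))  # reactive power: sloshes back and forth
    return p, q

# Hypothetical air-conditioner-like load: 240 V, 20 A, current lagging 30 degrees.
p, q = power_split(240, 20, 30)
print(f"P = {p:.0f} W, Q = {q:.0f} var")  # ~4157 W and ~2400 var
```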

Real power can travel long distances on transmission lines, but it’s not economical to import reactive power from far away because transmission lines have their own inductance that consumes the reactive power as it travels along them. With only a few running generators within the Cleveland area, FirstEnergy was importing a lot of real power from other areas to the south, but voltages were still getting low on their part of the grid because there wasn’t enough reactive power to go around. Capacitor banks are often used to help bring current and voltage back into sync, providing reactive power. However, at least four of FirstEnergy’s capacitor banks were out of service on the 14th. Another option is to over-excite the generators at nearby power plants so that they create more reactive power, and that’s just what FirstEnergy did.

At the Eastlake coal-fired plant on Lake Erie, operators pushed the number 5 unit to its limit, trying to get as much reactive power as they could. Unfortunately, they pushed it a little too hard. At around 1:30 in the afternoon, its internal protection circuit tripped and the unit was kicked offline - the second key event preceding the blackout. Without this critical generator, the Cleveland area would have to import even more power from the rest of the grid, putting strain on transmission lines and giving operators less flexibility to keep voltage within reasonable levels. 

Finally, at around 2:15, FirstEnergy’s control room started experiencing a series of computer failures. The first thing to go was the alarm system designed to notify operators when equipment had problems. This probably doesn’t need to be said, but alarms are important in grid operations. People in the control room don’t just sit and watch the voltage and current levels as they move up and down over the course of a day. Their entire workflow is based on alarms that show up as on-screen or printed notifications so they can respond. All the data was coming in, but the system designed to get an operator’s attention was stuck in an infinite loop. The FirstEnergy operators were essentially driving on a long country highway with their fuel gauge stuck on “full,” not realizing they were nearly out of gas. With MISO’s state estimator out of service, Eastlake 5 offline, and FirstEnergy’s control room computers failing, the grid in northern Ohio was operating on the bleeding edge of the reliability standards, leaving it vulnerable to further contingencies. And the afternoon was just getting started.

Transmission lines heat up as they carry more current due to resistive losses, and that is exacerbated on still, hot days when there’s no wind to cool them off. As they heat up, they expand in length and sag lower to the ground between each tower. At around 3:00, as the temperatures rose and the power demands of Cleveland did too, the Harding-Chamberlin transmission line (a key asset for importing power to the area) sagged into a tree limb, creating a short-circuit. The relays monitoring current on the line recognized the fault immediately and tripped it offline. Operators in the FirstEnergy control room had no idea it happened. They started getting phone calls from customers and power plants saying voltages were low, but they discounted the information because it couldn’t be corroborated on their end. By this time their IT staff knew about the computer issues, but they hadn’t communicated them to the operators, who had no clue their alarm system was down.
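That temperature-sag relationship is worth a quick back-of-the-envelope check. A conductor only stretches a tiny fraction of a percent as it heats, but nearly all of that slack turns into sag. Here’s a rough parabolic-cable sketch; the span, slack, and expansion coefficient are assumed values:

```python
import math

# Parabolic cable approximation: arc length S ~= span + 8*D^2/(3*span),
# so sag D = sqrt(3 * span * (S - span) / 8). Assumed values below.

span = 300.0    # meters between towers
alpha = 2.3e-5  # thermal expansion of aluminum, per degree C
arc0 = 301.0    # unheated conductor length, m (1 m of built-in slack)

def sag(arc_length):
    return math.sqrt(3 * span * (arc_length - span) / 8)

for dt in [0, 20, 40]:  # conductor temperature rise, degrees C
    arc = arc0 * (1 + alpha * dt)
    print(f"+{dt:2d} C: sag ~ {sag(arc):.1f} m")
# A 40-degree rise adds only ~28 cm of wire but well over a meter of sag -
# straight toward whatever is growing below the line.
```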

With the loss of Harding-Chamberlin, the remaining transmission lines into the Cleveland area took up the slack. The current on one line, the Hanna-Juniper, jumped from around 70% up to 88% of its rated capacity, and it was heating up. About half an hour after the first fault, the Hanna-Juniper line sagged into a tree, short circuited, and tripped offline as well. The FirstEnergy IT staff were troubleshooting the computer issues, but still hadn’t notified the control room operators. The staff at MISO, the reliability coordinator, with their state estimator issues, were also behind on realizing the occurrence and consequences of these outages. 

FirstEnergy operators were now getting phone call after phone call, asking about the situation while being figuratively in the dark. Call transcripts from that day tell a scary story.

“[The meter on the main transformer] is bouncing around pretty good. I’ve got it relay tripped up here…so I know something ain't right,” said one operator at a nearby nuclear power plant.

A little later he called back: “I’m still getting a lot of voltage spikes and swings on the generator… I don’t know how much longer we’re going to survive.”

A minute later he calls again: “It’s not looking good… We ain’t going to be here much longer and you’re going to have a bigger problem.”

An operator in the FirstEnergy control room replied: “Nothing seems to be updating on the computers. I think we’ve got something seriously sick.”

With two key transmission lines out of service, a major portion of the electricity powering the Cleveland area had to find a new path into the city. Some of it was pushed onto the less efficient 138 kV system, but much of it was being carried by the Star-South Canton line which was now carrying more than its rated capacity. At 3:40, a short ten minutes after losing Hanna-Juniper, the Star-South Canton line tripped offline when it too sagged into a tree and short-circuited. It was actually the third time that day the line had tripped, but it was equipped with circuit breakers called reclosers that would energize the line automatically if the fault had cleared. But, the third time was the charm, and Star-South Canton tripped and locked out. Of course, FirstEnergy didn’t know about the first two trips because they didn’t see an alarm, and they didn’t know about this one either. They had started sending crews out to substations to get boots on the ground and try to get a handle on the situation, but at that point, it was too late.

With Star-South Canton offline, flows in the lower capacity 138 kV lines into Cleveland increased significantly. It didn’t take long before they too started tripping offline one after another. Over the next half hour, sixteen 138 kV transmission lines faulted, all from sagging low enough to contact something below the line. At this point, voltages had dropped low enough that some of the load in northern Ohio had been disconnected, but not all of it. The last remaining 345 kV line into Cleveland from the south came from the Sammis Power Plant. The sudden changes in current flow through the system now had this line operating at 120% of its rated capacity. Seeing such an abnormal and sudden rise in current, the relays on the Star-Sammis line assumed that a fault had occurred and tripped the last remaining major link to the Cleveland area offline at 4:05 PM, only an hour after the first incident. After that, the rest of the system unraveled.

With no remaining connections to the Cleveland area from the south, bulk power coursing through the grid tried to find a new path into this urban center. First, overloads progressed northward into Michigan, tripping lines and further separating areas of the grid. Then the area was cut off to the east. With no way to reach Cleveland, Toledo, or Detroit from the south, west, or north, a massive power surge flowed east into Pennsylvania, New York, and then Ontario in a counter-clockwise path around Lake Erie, creating a major reversal of power flow in the grid. All along the way, relays meant to protect equipment from damage saw these unusual changes in power flows as faults and tripped transmission lines and generators offline.

Relays are sophisticated instruments that monitor the grid for faults and trigger circuit breakers when one is detected. Most relaying systems are built with levels of redundancy so that lines will still be isolated during a fault, even if one or more relays malfunction. One type of redundancy is remote backup, where separate relays have overlapping zones of protection. If the closest relay to the fault (called Zone 1) doesn’t trip, the next closest relay will see the fault in its Zone 2 and activate the breakers. Many relays have a Zone 3 that monitors even farther along the line.
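Here’s a toy version of that zone logic. A distance relay estimates the apparent impedance from the voltage and current at its terminal; a genuine fault makes the line look electrically short. The reach settings, delays, and line impedance below are all invented:

```python
# Toy distance (impedance) relay. Apparent impedance |Z| = |V| / |I|;
# the smaller Z looks, the closer the apparent fault.

ZONES = [
    ("Zone 1", 0.8, 0.0),  # reaches 80% of the line, trips instantly
    ("Zone 2", 1.2, 0.3),  # reaches past the far end, trips after 0.3 s
    ("Zone 3", 2.0, 1.0),  # deep backup, trips after 1.0 s
]
LINE_IMPEDANCE = 50.0  # ohms for the full line (invented)

def relay_decision(v_volts, i_amps):
    z = v_volts / i_amps
    for name, reach, delay_s in ZONES:
        if z < reach * LINE_IMPEDANCE:
            return f"{name} pickup ({z:.0f} ohms), trip after {delay_s} s"
    return f"no trip ({z:.0f} ohms looks like normal load)"

print(relay_decision(200_000, 8_000))  # 25 ohms: close-in fault, Zone 1
print(relay_decision(200_000, 2_500))  # 80 ohms: heavy loading picked up by Zone 3
print(relay_decision(200_000, 1_000))  # 200 ohms: normal load, no trip
```

The middle case is the whole story of the cascade: extreme but non-faulted loading drives the apparent impedance low enough that Zone 3 can’t tell it apart from a distant fault.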

When you have a limited set of information, it can be pretty hard to know whether a piece of equipment is experiencing a fault and should be disconnected from the grid to avoid further damage or just experiencing an unusual set of circumstances that protection engineers may not have anticipated. That’s especially true when the fault is far away from where you’re taking measurements. The vast majority of lines that went offline in the cascade were tripped by Zone 3 relays. That means the Zone 1 and 2 relays, for the most part, saw the changes in current and voltage on the lines and didn’t trip because they didn’t fall outside of what was considered normal. However, the Zone 3 relays - being less able to discriminate between faults and unusual but non-damaging conditions - shut them down. Once the dominos started falling in the Ohio area, it took only about 3 minutes for a massive swath of transmission lines, generators, and transformers to trip offline. Everything happened so fast that operators had no opportunity to implement interventions that could have mitigated the cascade.

Eventually enough lines tripped that the outage area became an electrical island separated from the rest of the Eastern Interconnection. But, since generation wasn’t balanced with demands, the frequency of power within the island was completely unstable, and the whole area quickly collapsed. In addition to all of the transmission lines, at least 265 power plants with more than 508 generating units shut down. When it was all over, much of the northeastern United States and the Canadian province of Ontario were completely in the dark. Since there were very few actual faults during the cascade, reenergizing happened relatively quickly in most places. Large portions of the affected area had power back on before the end of the day. Only a few places in New York and Toronto took more than a day to have power restored, but still the impacts were tremendous. More than 50 million people were affected. Water systems lost pressure, forcing boil-water notices. Cell service was interrupted. All the traffic lights were down. It’s estimated that the blackout contributed to nearly 100 deaths.

Three trees and a computer bug caused a major part of North America to completely grind to a halt. If that’s not a good example of the complexity of the power grid, I don’t know what is. If you had asked anyone working in the power industry on August 13 whether the entire northeast US and Canada would suffer a catastrophic loss of service the next day, they would have said no way. People understood the fragility of the grid, and there were even experts sounding alarms about the impacts of deregulation and the vulnerability of transmission networks, but this was not some big storm. It wasn’t even a peak summer day. It was just a series of minor contingencies that all lined up just right to create a catastrophe.


Today’s power grid is quite different than it was in 2003. The bilateral report made 46 recommendations about how to improve operations and infrastructure to prevent a similar tragedy in the future, many of which have been implemented over the past nearly 20 years. But, it doesn’t mean there aren’t challenges and fragilities in our power infrastructure today. Current trends include more extreme weather, changes in the energy portfolio as we move toward more variable sources of generation like wind and solar, growing electrical demands, and increasing communications between loads, generators, and grid controllers. Just a year ago, Texas saw a major outage related to extreme weather and the strong nexus between natural gas and electricity. I have a post on that event if you want to take a look after this. I think the 2003 blackout highlights the intricacy and interconnectedness of this critical resource we depend on, and I hope it helps you appreciate the engineering behind it. Thank you for reading and let me know what you think.

February 15, 2022 /Wesley Crump

Can You Pump Sewage?

February 01, 2022 by Wesley Crump

[Note that this article is a transcript of the video embedded above.]

The Crossness Pumping Station in London has ornate architecture and elaborate ironwork which belie its original, somewhat disgusting purpose: to lift raw sewage from London’s southern outfall, the lowest point in one of London’s biggest sewers, up to the ground surface where it could be discharged directly into the Thames River. Of course, we don’t typically release raw sewage into waterways anymore, and Crossness has long been decommissioned after newer treatment works were built in the 1950s. It’s now in the process of being restored as a museum you can visit to learn more about the fascinating combined history of human waste and Victorian engineering. But even though we have more sophisticated ways to treat wastewater before discharging it into streams and rivers, there’s one thing that hasn’t changed. We still use gravity as the primary way of getting waste to flow away from homes and businesses within the sewers belowground. And eventually, we need a way to bring that sewage back up to the surface of the earth. But that’s not as easy as it sounds. I’m Grady, and this is Practical Engineering. Today, we’re talking about sewage lift stations.

I have a post all about the engineering of sewers, and today we’re following that wastewater one more step on its smelly journey through a typical city. You can go check that out after this if you want to learn more, but I’ll summarize it quickly here. Most sewers flow by gravity from each home or business toward a wastewater treatment plant. They’re installed as pipes, but sewers usually flow only partly full like smelly water slides or underground creeks and rivers. This is convenient because we don’t have to pay a monthly gravity bill, and it almost never gets knocked out during a thunderstorm. It’s a free and consistent force that compels sewage downward. But, because Earth’s gravity only pulls in one direction, sewers must always slope, meaning they often end up well below the ground surface, especially toward their downstream ends. And that can be problematic. Here’s why.

Sewers are almost always installed in open excavations also known as trenches. This might seem obvious, but the deeper a trench must be dug, the more difficult, dangerous, disruptive, and ultimately expensive construction becomes. In some cases, it just stops being feasible to chase the slope of a sewer farther and farther below the ground surface. A good alternative is to install a pumping station that can lift raw sewage from its depths back closer to the surface. Lift stations can be small installations designed to handle a few apartment complexes or massive capital projects that pump significant portions of a city's total wastewater flow. A typical lift station consists of a concrete chamber called a wet well. Sewage flows into the wet well by gravity, filling it over time. Once the sewage reaches a prescribed depth, a pump turns on, pushing the wastewater into a specialized sewer pipe called a force main. You always want to keep the liquid moving swiftly in pipes to avoid the solids settling out, so this intermittent operation makes sure that there are no slow trickles during off-peak hours. The sewage travels under pressure within the force main to an uphill manhole where it can continue its journey downward via gravity once again.
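The start/stop control is simple hysteresis: let the well fill to a high level, pump it down to a low one, and repeat. Here’s a toy simulation of that logic in action; the dimensions and flow rates are made up:

```python
# Toy wet-well simulation. The pump starts at a high level and stops at a
# low one, so the force main sees strong intermittent flows instead of a
# slow trickle. All numbers are made up.

AREA = 6.0              # wet well plan area, m^2
START, STOP = 2.0, 0.5  # pump-on and pump-off levels, m
INFLOW = 0.010          # gravity inflow, m^3/s
PUMP = 0.040            # pump discharge, m^3/s

level, pumping = 1.0, False
for t in range(0, 1200, 60):  # simulate 20 minutes in 1-minute steps
    if level >= START:
        pumping = True
    elif level <= STOP:
        pumping = False
    net = INFLOW - (PUMP if pumping else 0.0)
    level = max(level + net * 60 / AREA, 0.0)
    print(f"t={t:4d} s  level={level:4.2f} m  pump={'on' if pumping else 'off'}")
```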

Another important location for lift stations is at the end of the line. Once wastewater reaches its final destination, there are no magical underground sewage outlets. Septic systems get rid of wastewater through leach fields that infiltrate the subsurface, but they’re designed for individual buildings and aren’t feasible on a city scale. That would require enormous areas of land to get so much liquid to soak into the soil, not to mention the potential for contamination of the groundwater. Ignoring, for now, the fact that we need to clean it up first, we still need somewhere for our sewage to go. In most cases, that’s a creek, river, or the ocean, meaning we need to lift that sewage up to the surface of the earth one last time. Rather than build wastewater treatment plants in underground lairs like stinky superheroes so we only pump clean water, it’s much easier just to lift the raw sewage up to the surface to be treated and eventually discharged. That means we have to send some pretty gross stuff (sewage) through some pretty expensive and sophisticated pieces of machinery (the pumps), and that comes with some challenges.

We often think of sewage as its grossest constituents: human excrement, you know, poop. But, sewage is a slurry of liquids and solids from a wide variety of sources. Lots of stuff ends up in our wastewater stream, including soil, soap, hair, food, wipes, grease, and trash. These things may make it down the toilet or sink drain and through the plumbing in your house, but in the sewer system, they can conglomerate into large balls of grease, rags, and other debris (sometimes called “pig tails” or “fatbergs” by wastewater professionals). In addition, with many cities putting efforts into conserving water, the concentration of solids in wastewater is trending upward. Conventional pumps handle liquids just fine, but adding solids to the stream increases the challenge of lifting raw sewage.

Appropriately sized centrifugal pumps can handle certain types and sizes of suspended solids just fine. Sewage pumps are designed for the extra wear and tear. The impellers have fewer vanes to avoid snags and the openings are larger so that solids can freely move through them. Different manufacturers have proprietary designs to minimize obstructions to the extent possible, but no sewage pump is clog-proof. Especially with today’s concentrated wastewater full of wipes that have been marketed as flushable, clogs in lift stations can be a daily occurrence. Removing a pump, clearing it of debris, and replacing it is a dirty and expensive job (especially if you have to do it frequently). Most lift stations have an alarm when the level gets too high, but if a clog doesn’t get cleared fast enough, raw sewage can back up into houses and businesses or overflow the wet well, potentially exposing humans and wildlife to dangerous biohazards.

A seemingly obvious solution to the problem of clogging is to use a screen in the lift station wet well to prevent trash from reaching the pumps. But, screens have a limitation: they can clog up too. By adding a screen, you’ve traded pump maintenance for another kind of maintenance: removing and hauling away debris. Smaller lift stations with bar or basket screens can get away with maybe a once-a-week visit from a crew to clean them. Larger pump stations often feature automatic systems that can remove solids from the screen into a dumpster that can be hauled to a landfill every so often.

Sometimes using a screen is an effective way to protect against clogging, but it’s not always convenient, especially because it creates a separate waste stream to manage. For example, if a lift station is in a remote location where it’s inconvenient to send crews for service and maintenance, you might prefer that all the solids remain in the wastewater stream. After all, treatment plants are specifically designed to clean wastewater. They have better equipment and consistent staffing, so it often just makes sense to focus the investments of time and effort at the plant rather than at individual lift stations along the way. In these cases, there’s another option for minimizing pump clogs: grinding the solids into smaller pieces.

There’s a nice equivalent to a lift station grinder that can be found under the sinks of many North American homes: the garbage disposal. This common household appliance saves you the trouble and smell of putting food scraps into the wastebasket. It works like a centrifugal pump with a spinning impeller, but it also features a grinding ring consisting of sharp blades and small openings. As the impeller spins the solids, they scrape against the grinding ring, shearing into smaller pieces that can travel through the waste plumbing.

Some lift stations feature grinding pumps that are remarkably similar to under-sink garbage disposals. Others use standalone grinders that simply chew up the solids before they reach the pumps. Grinders are often required at medical facilities and prisons where fibrous solids are more likely to find their way into the wastewater stream. Large grinders are also used where storm drains and sewers are combined because those systems see heavier debris loads from rainwater runoff. A grinder is another expensive piece of equipment to purchase and maintain at a lift station, but it can offer better reliability, fewer clogs, and thus decreased maintenance costs.


Of course, clogging is not the only practical challenge of operating a sewage lift station. When you depend on electromechanical equipment to provide an essential service, you always have to plan for things to go wrong. Lift stations usually feature multiple pumps so that they can continue operating if one fails. They often have backup generators so that sewage can continue to flow even if grid power is lost. Another issue with lift stations is air bubbles getting into force mains and constricting the flow. Automatic air release valves can purge force mains of these bubbles, but venting sewer gas into populated areas isn’t usually a popular prospect. Although our urban lives depend on sewers to carry waste away before it can endanger public health, reminders that they exist are usually unwelcome. Hopefully this breaks that convention to help you understand a little about the challenges and solutions of managing wastewater to keep your city clean and safe.

February 01, 2022 /Wesley Crump

Why Buildings Need Foundations

January 04, 2022 by Wesley Crump

[Note that this article is a transcript of the video embedded above.]

When we bought our house several years ago, we fell in love with every part of it except one: the foundation. At 75 years old, those piers were just about finished holding this old house up. This year we finally bit the bullet and had them replaced. Any homeowner who’s had foundation work done can commiserate with us on the cost and disruption of a project like this. But homes aren’t the only structures with foundations. It is both a gravitational necessity and a source of job stability for structural and geotechnical engineers that all construction - great and small - sits upon the ground. And the ways in which we accomplish such a seemingly unexceptional feat are full of fascinating and unexpected details. I’m Grady and this is Practical Engineering. Today, we’re talking about foundations.

There’s really just one rule for structural and geotechnical engineers designing foundations: when you put something on the ground, it should not move. That seems like a pretty straightforward directive. You can put a lot of stuff on the ground and have it stay there. For example, several years ago I optimistically stacked these pavers behind my shed with the false hope that I would use them in a landscaping project someday, but their most likely future is to sit here in this shady purgatory for all of eternity. Unfortunately, buildings and other structures are a little different. Mainly, they are large enough that one part could move relative to the other parts, a phenomenon we call differential movement. When you move one piece of anything relative to the rest of it, you introduce stress. And if that stress is greater than the inherent strength of the thing, that thing will pull itself apart. It happens all the time, all around the world, including right here in my own house. When one of these piers settles or heaves more than the others, all the stuff it supports tries to move too. But doorframes, drywall, and ceramic tile work much better and last much longer when the surrounding structure stays put.

There are many kinds of foundations used for the various structures in our built environment, but before we dive into how they work, I think it will be helpful to first talk about what they’re up against, or actually down against. Of course, buildings are heavy, and one of the most important jobs of a foundation is to evenly distribute that weight into the subsurface as downward pressure. Soil isn’t infinitely strong against vertical loads. It can fail just like any other component of a structural system. When the forces are high enough to shear through soil particles, we call it a bearing failure. The soil directly below the load is forced downward, pushing the rest of the soil to either side, eventually bulging up around the edges.
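
To put rough numbers on bearing capacity, here’s a sketch of the classical textbook check (q_ult = c·Nc + γ·D·Nq + 0.5·γ·B·Nγ) for a strip footing, using the standard Reissner/Prandtl factors with Vesić’s Nγ. The soil properties and footing dimensions are hypothetical:

```python
import math

# Sketch of a classical bearing-capacity check for a strip footing.
# Soil properties and footing geometry are hypothetical placeholders.

phi = math.radians(30)  # soil friction angle
c = 5.0                 # kPa, cohesion
gamma = 18.0            # kN/m^3, soil unit weight
D = 1.0                 # m, footing embedment depth
B = 2.0                 # m, footing width

# Classical bearing capacity factors (Reissner/Prandtl; N_gamma per Vesic)
Nq = math.exp(math.pi * math.tan(phi)) * math.tan(math.pi/4 + phi/2) ** 2
Nc = (Nq - 1) / math.tan(phi)
Ng = 2 * (Nq + 1) * math.tan(phi)

q_ult = c * Nc + gamma * D * Nq + 0.5 * gamma * B * Ng
print(f"Ultimate bearing capacity ~ {q_ult:.0f} kPa "
      f"(allowable ~ {q_ult/3:.0f} kPa with a factor of safety of 3)")
```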

Even if the subsurface doesn’t full-on shear, it can still settle. This happens when the particles are compressed more closely together, and it usually takes place over a longer period of time. (I have a post all about settlement that you can check out after this.) So, job number 1 of a foundation is to distribute the downward force of a structure over a large enough area to reduce the bearing pressure and avoid shear failures or excessive settlement.

Structural loads don’t just come from gravity. Wind can exert tremendous and rapidly-fluctuating pressure on a large structure pushing it horizontally and even creating uplift like the wing of an airplane. Earthquakes also create loads on structures, shifting and shaking them with very little warning. Just like the normal weight of a structure, these loads must also be resisted by a foundation to prevent it from lifting or sliding along the ground. That’s job number 2.

Speaking of the ground, it’s not the most hospitable place for many building materials. It has bugs, like termites, that can eat away at wooden members over time, reducing their strength. It also has moisture that can lead to mold and rot. My house was built in the 1940s on top of cedar piers. This is a wood species that is naturally resistant to bugs and fungi, but not completely immune to them. So, job number 3 of a foundation is to resist the effects of long-term degradation and decay that come from our tiny biological neighbors.

Another problem with the ground is that soil isn’t really as static as we think. Freezing isn’t usually a problem for me in central Texas, but many places in the world see temperatures that rise and fall below the freezing point of water tens or hundreds of times per year. We all know water expands when it freezes, and it can do so with prodigious force. When this happens to subsurface water below a structure, it can behave like a jack to lift it up. Over time, these cycles of freeze and thaw can slowly shift or raise parts of a structure more than others, creating issues. Similarly, some kinds of soil expand when exposed to moisture. I also have a post on this phenomenon, so you have two to read after this one. Expansive clay soil can create the same type of damage as cycles of freeze and thaw by subtly moving a structure in small amounts with each cycle of wet and dry. So job number 4 of a foundation is to reach a deep enough layer that can’t freeze or that doesn’t experience major fluctuations in moisture content to avoid these problems that come with water in the subgrade below a structure.

Job number 5 isn’t necessarily applicable to most buildings, but there are many types of structures (like bridges and retaining walls) that are regularly subject to flowing water. Over time (or sometimes over the course of a single flood), that water can create erosion, undermining the structure. Many foundations are specifically designed to combat erosion, either with hard armoring or by simply being installed so deep into the earth that they can’t be undermined by quickly flowing water.

Job number 6 really applies to all of engineering: foundations have to be cost effective. Could the contractor who built my house in the 1940s have driven twice as many piers, each one to three times the depth? Of course he could have, but (with some minor maintenance and repairs), this one lasted 75 years before needing to be replaced. With the median length of homeownership somewhere between 5 and 15 years, few people would be willing to pay more for a house with 500 years of remaining life in the foundation than they would for one with 30. I could have paid this contractor to build me a foundation that will last hundreds of years... but I didn’t. Engineering is a job of balancing constraints, and many of the decisions in foundation engineering come down to the question of “How can we achieve all of the first 5 jobs I mentioned without overdoing it and wasting a bunch of money in the process?” Let’s look at a few ways.

Foundations are generally divided into two classes: deep and shallow. Most buildings with only a few stories, including nearly all homes, are built on shallow foundations. That means they transfer the structure’s weight to the surface of the earth (or just below it). Maybe the most basic of these is how my house was originally built. They cut down cedar trees, hammered those logs into the ground as piles, laid wooden beams across the top of those piers, and then built the rest of the house atop the beams. Pier and beam foundations are pretty common, at least in my neck of the woods, and they have an added benefit of creating a crawlspace below the structure in which utilities like plumbing, drains, and electric lines can be installed and maintained. However, all these individual, unconnected points of contact with the earth leave quite a bit of room for differential movement.

Another basic type of shallow foundation is the strip footing, which generally consists of a ribbon or strip of concrete upon which walls can sit. In some cases the floor is isolated from the walls and sits directly on a concrete slab atop the subgrade, but strip footings can also support floor joists, making room for a crawlspace below. For sites with strong soils, this is a great option because it’s simple and cheap, but if the subgrade soils are poor, strip footings can still allow differential movement because all the walls aren’t rigidly connected together. In that case, it makes sense to use a raft foundation - a completely solid concrete slab that extends across the entire structure. Raft foundations are typically placed directly on the ground (usually with some thickened areas to provide extra rigidity). They distribute the loads across a larger area, reducing the pressure on the subgrade, and they can accommodate some movement of the ground without transferring the movement into a structure, essentially riding the waves of the earth like a raft on the ocean (hence the name). However, they don’t have a crawlspace, which makes plumbing repairs much more challenging.

One issue with all shallow foundations is that you still need to install them below the frost line - that is the maximum depth to which water in the soil might freeze during the harshest part of the winter - in order to avoid frost heaving. In some parts of the contiguous United States, the frost line can be upwards of 8 feet or nearly two-and-a-half meters. If you’re going to dig that deep to install a foundation anyway, you might as well just add an extra floor to your structure below the ground. That’s usually called a basement, and it can be considered a building’s foundation (although the walls are usually constructed on a raft or strip footings as described above).

As a structure’s size increases, so do the loads it imposes on the ground, and eventually it becomes infeasible to rely only on soils near the surface of the earth. Tall buildings, elevated roadways, bridges, and coastal structures often rely on deep foundations for support. This is especially true when the soils at the surface are not as firm as the layers farther below the ground. Deep foundations almost always rely on piles, which are vertical structural elements that are driven or drilled into the earth, often down to a stronger layer of soil or bedrock, and there are way more types than I could ever cover in a single video. Piles not only transfer loads at the bottom (called end bearing), but they can also be supported along their length through a phenomenon called skin friction. This makes it possible for a foundation to resist much more significant loads - whether downward, upward or horizontal - within a given footprint of a structure.
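
Here’s a back-of-the-envelope sketch of those two load paths for a single drilled pile: end bearing at the tip plus skin friction along the shaft. The unit resistances below are hypothetical placeholders, not design values:

```python
import math

# Sketch of pile capacity as end bearing plus skin friction:
# Q_ult = q_base * A_base + f_skin * A_shaft. Values are hypothetical.

diameter = 0.6   # m, pile diameter
length = 20.0    # m, embedded length
q_base = 5000.0  # kPa, unit end-bearing resistance at the tip
f_skin = 50.0    # kPa, average unit skin friction along the shaft

A_base = math.pi * diameter ** 2 / 4   # tip area, m^2
A_shaft = math.pi * diameter * length  # shaft surface area, m^2

Q_end = q_base * A_base    # kN from end bearing
Q_skin = f_skin * A_shaft  # kN from skin friction
print(f"End bearing {Q_end:.0f} kN + skin friction {Q_skin:.0f} kN "
      f"= {Q_end + Q_skin:.0f} kN total")
```

Notice that for a long, slender pile like this one, the shaft can carry more load than the tip, which is why friction piles don’t necessarily need to reach bedrock at all.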

One of the benefits of driven piles is that you install them in somewhat the same way that they’ll be loaded in their final configuration. There’s some efficiency there because you can just stop pushing the pile into the ground once it’s able to resist the design loads. There’s a problem with this though. Let me show you what I mean. This hydraulic press has more than enough power to push this steel rod into the ground. And at first, it does just that. But eventually, it reaches a point where the weight of the press is less than the bearing capacity of the pile, and it just lifts itself up. Easy… (you might think). Just add more weight. But consider that these piles might be designed to support the weight of an entire structure. It’s not feasible to bring in or build some massive weight to react against just to drive a pile into the ground. Instead, we usually use hammers, which can deliver significantly more force to drive a pile with only a relatively small weight.

The problem with hammered piles is that the dynamic loading they undergo during installation is different from the static loading they see once in service. In other words, buildings don’t usually hammer on their foundations. For example, if a pile can withstand the force of a 5-ton weight dropped from 16 feet or 5 meters without moving, what’s the equivalent static load it can withstand? That turns out to be a pretty complicated question, and even though there are published equivalencies between static and dynamic loads, their accuracy can vary widely depending on soil conditions. That’s especially true for long piles where the pressure wave generated by a hammer might not even travel fast enough to load the entire member at the same moment in time. Static tests are more reliable, but also much more expensive because you either have to bring in a ton (or thousands of tons) of weight to put on top, or you have to build additional piles with a beam across them to give the test rig something to react against.
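
Those published equivalencies are the old pile-driving formulas. One of the earliest is the Engineering News formula, shown here as a sketch using the 5-ton, 16-foot example above. Note the mixed units (feet and inches), a quirk of the original 1888 formula, and remember that its predictions are notoriously scattered compared to real load tests:

```python
# Engineering News formula: allowable load = 2*W*H / (S + C),
# with W in pounds, H in feet, and S and C in inches (a historical
# quirk). A factor of safety of about 6 is already baked in.

W = 10000.0  # lb, ram weight (the 5-ton weight from the example)
H = 16.0     # ft, drop height
S = 0.5      # in, pile penetration ("set") per blow, hypothetical
C = 1.0      # in, empirical loss constant for a drop hammer

Q_allow = 2 * W * H / (S + C)  # lb
print(f"Allowable static load ~ {Q_allow/2000:.0f} tons")
```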

One interesting solution to this problem is called statnamic testing of piles. In this method, a mass is accelerated upward using explosives, creating an equal and opposite force on the pile to be tested. It’s kind of like a reverse hammer, except unlike a hammer where the force on the pile lasts only for a few milliseconds, the duration of loading in a statnamic test is often upwards of 100 or 200 milliseconds. That makes it much more similar to a static force on the pile without having to bring in tons and tons of weight or build expensive reaction piers just to conduct a test.
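
One way to see why that longer pulse behaves almost statically is to compare the load duration to the time a stress wave takes to run the length of the pile. The wave speed and pile length below are typical assumed values, not from any particular test:

```python
# Compare load durations to the stress-wave travel time in a pile.
# Wave speed and length are typical assumptions, not measured values.

wave_speed = 4000.0  # m/s, compression wave speed in a concrete pile
length = 30.0        # m, pile length

travel_time = length / wave_speed  # one-way wave travel time, s
hammer_pulse = 0.005               # s, a typical hammer blow
statnamic_pulse = 0.120            # s, a typical statnamic load

print(f"Wave travel time: {travel_time*1000:.1f} ms")
print(f"Hammer pulse ~{hammer_pulse/travel_time:.1f}x the travel time; "
      f"statnamic ~{statnamic_pulse/travel_time:.0f}x, so the whole pile "
      f"feels the load nearly uniformly")
```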

I’m only scratching the surface (or subsurface) of a topic that fills hundreds of engineering textbooks and the careers of thousands of contractors and engineers. If all the earth was solid rock, life would be a lot simpler, but maybe a lot less interesting too. If there are topics in foundations that you’d like to learn more about, add a comment or send me an email, and I’ll try to address them in a future post, but I hope this one gives you some appreciation of those innocuous bits of structural and geotechnical engineering below our feet.


January 04, 2022 /Wesley Crump

Rebuilding the Oroville Dam Spillways

December 21, 2021 by Wesley Crump

[Note that this article is a transcript of the video embedded above.]

In February 2017, the world watched as the main spillway on one of the largest dams in the world suffered a catastrophic failure, prompting a series of events that led to the evacuation of nearly 200,000 people downstream and hundreds of millions of dollars of damage to critical water infrastructure. I talked about the failure of the Oroville Dam spillway in California after the independent forensic team released their conclusions about why the structure failed, summarizing their 600-page report. Then, I got flooded with requests to cover the repairs, and I love a good construction project as much as anyone else. So how do you rebuild one of the biggest spillways in the world after a catastrophic failure knowing that the next winter flood season is right around the corner? The answer might surprise you. I’m Grady, and this is Practical Engineering. Today, we’re talking about rebuilding the Oroville Dam spillways.

Oroville Dam in northern California is the tallest dam in the United States. It was built in the 1960s, creating one of California’s keystone reservoirs to smooth out the tremendous variability in rain and snowfall from their climate of hot, dry summers and flood-prone winters. The dam itself is a massive earthen embankment. To the northwest is the main spillway, also known as the Flood Control Outlet or FCO spillway. At the top are radial gates to control the flow. They release water into the enormous concrete chute before it passes through gigantic dentates that disperse the flow as it crashes into the Feather River below. It’s nearly impossible to convey the scale of this structure, which could fit eight American football fields with room to spare or more than 150 tennis courts. Beyond is the emergency spillway, a concrete weir set a foot above the maximum operating level to provide a backup path for water to leave the reservoir during extreme flood events.

If you want more detail about the failure, I encourage you to go back and read my previous post after this. I do want to summarize the damages here because you can’t really grasp the magnitude of the reconstruction project without an appreciation for how profoundly this event ruined the spillways of Oroville Dam. All but the upper section of the main spillway chute was wholly destroyed. The flows that broke free from the chute scoured the hillside around and below the structure, washing away concrete and eroding an enormous chasm as deep as 100 feet or 30 meters in some places. At the emergency spillway, overflows had similarly scoured the hillside, creating erosional head cuts that traveled upstream, threatening the safety and stability of the structure and ultimately leading to the downstream evacuation. In total, more than a million cubic meters of soil and rock were stripped away, much of which was deposited into the Feather River below the dam. Both spillways were rendered totally incapable of safely discharging future flood flows from Lake Oroville.

Even before the event was over, the California Department of Water Resources, or DWR, was planning for the next flood season, which was right around the corner. Having the tallest dam in the United States sitting crippled and unable to pass flood flows safely with the rainy season only six months away just wasn’t an option. As soon as the extent of the situation was revealed, DWR began assembling a team and plotting the course for recovery. Rather than try to handle all the work internally, DWR contracted with a wide range of consultants from engineering firms across the country and partnered with federal agencies, namely the Corps of Engineers and Bureau of Reclamation, who both have significant knowledge and experience with major water resources projects. 

In March (less than a month after the incident started and well before it was close to over), DWR held an all-day workshop with the design and management teams to collaborate on alternatives for restoring the dam’s spillways, focusing on the main spillway. They were facing some significant challenges. With the next flood season quickly approaching, they had limited time for design, regulatory reviews, and construction. Steps that would typically take months or years needed to be compressed into weeks. On top of that, they were still in the midst of the spillway failure without a complete understanding of what had gone wrong, making it difficult to propose solutions that would avoid a similar catastrophe in the future. Although they had a laundry list of ideas, most fell into three categories nicknamed by the design team as “Use the Hole,” “Bridge the Hole,” or “Fill the Hole.”

“Use the hole” alternatives involved taking advantage of the scour hole and channels carved by the uncontrolled flows from the spillway. If they could protect the soil and rock from further erosion, these new landscape features could serve as the new path for water exiting the reservoir, eliminating the need for a replacement to the massive and expensive concrete chute. The engineering team built a scale model of the spillway at Utah State University as a design tool for providing hydraulic information. They constructed an alternative with a modified scour hole to see how it would perform when subjected to significant releases from the spillway. Sadly, the model showed enormous standing waves under peak flows, so this alternative was discarded as infeasible.

“Bridge the hole” alternatives involved constructing the spillway chute above grade. In other words, instead of placing the structure on the damaged soil and rock foundation, they could span the eroded valleys using aqueduct-style bridges. However, given the complexity of engineering such a unique spillway, the design team also ruled this option out. The time it would take for structural design just wouldn’t leave enough time for construction.

“Fill the hole” alternatives centered around replacing the eroded foundation material and returning the main spillway to its original configuration. There were a lot of advantages to this approach. It had the least amount of risk and the fewest unknowns about hydraulic performance, which had been proven through more than 50 years of service. This option also provided a place to reuse the scoured rock that had washed into the Feather River. Next, it had the lowest environmental impacts because no new areas of the site would be permanently impacted. And finally, it was straightforward construction - not anything too complicated - giving the design team confidence that contractors could accomplish the work within the available time frame.

Once a solution had been selected, the design team started developing the plans and specifications for construction. Over a hundred engineers, geologists, and other professionals were involved in designing repairs to the two spillways, many working 12-plus hour days, 6 to 7 days a week, on-site in portable trailers near the emergency spillway. Because many of the problems with the original spillways resulted from the poor conditions of underlying soil and rock, the design phase included an extensive geotechnical investigation of the site. At its peak, there were ten drill rigs taking borings of the foundation materials. The samples were tested in laboratories to support the engineering of the spillway replacements.

The design team elected to fill the scoured holes with roller-compacted concrete, a unique blend of the same essential ingredients of conventional concrete but with a lot less water. Instead of flowing into forms, roller compacted concrete, or RCC, is placed using paving equipment and compacted into place with vibratory rollers. The benefit of RCC was that it could be made on-site using materials mined near the dam and those recovered from the Feather River. It also cures quickly, reaching its full strength faster and with less heat buildup, allowing crews to place massive amounts of it on an aggressive schedule without worrying about it cracking apart from thermal effects. RCC is really the hero of this entire project. The design engineers worked hard to develop a mix that was as inexpensive as possible, using the rock and aggregates available on the site, while still being strong enough to carry the weight of the new spillway.

In the interest of time, California DWR brought on a contractor early to start building access roads and staging areas for the main construction project. They also began stabilizing the steep slopes created by the erosion to make the site safer for the construction crews that would follow. The main construction project was bid at the end of March with plans only 30% complete. This allowed the contractors to get started early to mobilize the enormous quantity of equipment, materials, and workers required for this massive undertaking. Having a contractor on the project early also allowed the design team to collaborate with the construction team, making it easier to assess the impact of design changes on the project’s costs and schedule.

Because the original spillway failed catastrophically, DWR knew that the entire main spillway would need to be rebuilt to modern standards. However, they didn’t have the time to do the whole thing before the upcoming flood season. DWR had developed an operations plan for Lake Oroville to keep the reservoir low and minimize the chance of spillway flows while the facilities were out-of-service for construction, but they couldn’t just empty the lake entirely. They still had to balance the purposes of the reservoir, including flood protection, hydropower generation, environmental flows, and the rights of water users downstream. The winter flood season was approaching rapidly, and there was still a possibility of a flood filling the reservoir and requiring releases. DWR needed a spillway that could function before November 2017 (a little more than six months from when the contractor was hired), even if it couldn’t function at its total original capacity.

In collaboration with the contractor, the design team decided to break up the repair project into two phases. Phase 1 would rush to get an operational spillway in place before the 2017-2018 winter flood season. The remaining work to complete the spillway would be finished ahead of the following flood season at the end of 2018. In addition to the repairs at the main spillway, engineers also designed remediations to the emergency spillway, including a buttress to the existing concrete weir, an RCC apron to protect the vulnerable hillside soils, and a cutoff wall to keep erosion from progressing upstream. To speed up regulatory approval, which can often take months under normal conditions, the California Division of Safety of Dams and the Federal Energy Regulatory Commission both dedicated full-time staff to review designs as they were produced, working in the same trailers as the engineers. The project also required an independent board of consultants to review designs and provide feedback to the teams. This group of experts met regularly throughout design and construction, and their memos are available online for anyone to peruse.

Phase 1 of construction began as the damaged spillway continued to pass water to lower the reservoir throughout the month of May. The contractor started blasting and excavating the slopes around the site to stabilize them and provide access to more crews and equipment. At the same time, an army of excavators began to remove the soil and rock that was scoured from the hillside and deposited into the Feather River. The spillway gates were finally closed for the season at the end of May, allowing equipment to mobilize to all areas of the site. They quickly began demolition of the remaining concrete spillway. Blasting also continued to stabilize the slopes by reducing their steepness in preparation for RCC placement and break up the existing concrete to be hauled away or reused as aggregate.

By June, all the old concrete had been removed, and crews were working to clean the foundation materials of loose rock and soil. The contractor worked to ensure that the foundation was perfectly clean of loose soil and dust that could reduce the strength of the bond between concrete and rock.

In July and August, crews made progress on the upper and lower sections of the spillway that hadn’t been significantly undermined. Because they didn’t have to fill in a gigantic scour hole in this area, crews could use conventional concrete to level and smooth the foundation, ensuring that the new structural spillway slab would be a consistent thickness across its entire width and length. Of course, I have to point out that the chute was not simply being replaced in kind. Deficiencies in the original design were a significant part of why the spillway failed in the first place. The new design of the structural concrete included an increase in the thickness of the slab, more steel reinforcement with an epoxy coating to protect against corrosion, flexible waterstops at the joints in the concrete to prevent water from flowing through the gaps, steel anchors drilled deep into the bedrock to hold the slabs tightly against their foundation, and an extensive drainage system. These drains are intended to relieve water pressure from underneath the structure and filter any water seeping below the slab so it can’t wash away soil and undermine the structure.

As the new reinforced concrete slabs and training walls were going up on the lower section of the chute, RCC was being placed in lifts into the scour hole at the center of the chute. This central scour hole was the most time-sensitive part of the project because there was just so much volume to replace. Instead of filling the scour hole AND building the new spillway slabs and walls on top during Phase 1, the designers elected to use the RCC as a temporary stand-in for the central portion of the chute during the upcoming flood season. The designs called for RCC to be placed up to the level of the spillway chute with formed walls, not quite tall enough for the total original capacity, but enough to manage a major flood if one were to occur.

By September, crews had truly hit their stride, producing and placing colossal amounts of concrete each day, slowly reconnecting the upper and lower sections of the chute across the chasm of eroded rock. Reinforced concrete slabs and walls continued to go up on both the upper and lower sections of the chute. With only a month before the critical deadline of November 1, the contractor worked around the clock to produce and place both conventional and roller-compacted concrete across the project site. By the end of the day on November 1st, Phase 1 of the massive reconstruction was completed on schedule and without a single injury. The spillway was ready to handle releases for the winter flood season if needed. Luckily, it wasn’t needed, but the work at Oroville Dam didn’t stop there.

Phase 2 began immediately, with the contractor starting to work on the parts of the project that wouldn’t compromise the dam’s ability to release flows during the flood season. That mainly involved a focus on the emergency spillway. Crews first rebuilt a part of the original concrete weir, making it stronger and more capable of withstanding hydraulic forces. They also installed a secant pile cutoff wall in the hillside well below the spillway. A secant pile wall involves drilling overlapping concrete piers deep into the bedrock. The purpose of the cutoff wall was to prevent erosion from traveling upstream and threatening the spillway structure. A concrete cap was added to the secant piles to tie them all together at the surface. Finally, roller compacted concrete was placed between the secant wall and the spillway to serve as a splash pad, protecting the vulnerable hillside from erosion if the emergency spillway were ever to be used in the future.

Once the flood season was over in May, DWR gave the contractor the go-ahead to start work back on the main spillway. There were two main parts of the project remaining. First, they needed to completely remove and replace the uppermost section of the chute and training walls. Except for the dentates at the downstream end, this was the only section of the original chute remaining after Phase 1. 

At the RCC section of the spillway, crews first removed the temporary training walls that were installed to allow the spillway to function at a reduced capacity during the prior flood season. They never even got to see a single drop of water, but at least the material was reused in batches of concrete for the final structure. Next, the contractor milled the top layer of RCC to make room for the structural concrete slab. They trenched drains across the RCC to match the rest of the spillway, and finally, they built the structural concrete slabs and walls to complete the structure. All this work continued through the summer and fall of 2018. On November 1st, construction hit a key milestone of having all the main spillway concrete placed ahead of the winter flood season. Although cleanup and backfill work would continue for the next several months, the spillway was substantially complete and ready to handle releases if it was needed. It’s a good thing too because a few months later, it was.

Crews continued cleaning up the site, working on the emergency spillway, and demobilizing equipment throughout the 2018-2019 flood season. In April 2019, heavy rain and snowfall filled Lake Oroville into the flood control zone, necessitating the opening of the spillway gates. For the first time since reconstruction, barely two years after this whole mess got started, the new spillway was tested. And it performed beautifully. I’m sure it was a tremendous relief and true joy for all of the engineers, project managers, construction workers, and the public to see that one of the most important reservoirs in the state was back in service. As of this writing, Oroville is just coming up from historically low levels resulting from a multi-year drought in California. It just goes to show the importance of engineering major water reservoirs like Oroville to smooth out the tremendous variability in rain and snowfall.

It’s easy to celebrate such an incredible engineering achievement of designing and constructing one of the largest spillway repair projects in the world without remembering what necessitated the project in the first place. The systemic failure of the dam owner and its regulators to recognize and address the structure’s inherent flaws came at a tremendous cost, both to those whose lives were put at risk as they evacuated their homes and to the taxpayers and ratepayers who will ultimately foot the bill for the more than a billion dollars spent on these repairs. Dam owners and regulators across the world have hopefully learned a hard lesson from Oroville, thanks in large part to those who shared their knowledge and experience of the event. I’d like to give them a shout out here, because this coverage wouldn’t have been possible without them.

California DWR’s commitment to transparency means we have tons of footage from the event and reconstruction. Engineers and project managers involved in the emergency and reconstruction shared their experiences in professional journals. Finally, my fellow YouTuber Juan Brown provided detailed and award-winning coverage of the project as a citizen journalist on his channel, Blancolirio, including regular overflights of Oroville Dam in his Mighty Luscombe. Go check out his playlist if you want to learn more. As I always say, this is only a summary, and it doesn’t include nearly the level of detail that Juan put into his reporting.

December 21, 2021 /Wesley Crump

Why Retaining Walls Collapse

December 07, 2021 by Wesley Crump

[Note that this article is a transcript of the video embedded above.]

In March of 2021, a long-running construction project on a New Jersey highway interchange ground to a halt when one of the retaining walls along the roadway collapsed. This project in Camden County, called the Direct Connection, was already 4 years behind schedule, and this failure set it back even further. As of this writing, the cause of the collapse is still under investigation, but the event brought into the spotlight a seemingly innocuous part of the constructed environment. I love innocuous parts of the constructed environment, and I promise by the end of this you’ll pay attention to infrastructure that you’ve never even noticed before. Why do we build walls to hold back soil, what are the different ways to do it, and why do they sometimes fall down? I’m Grady and this is Practical Engineering. Today, we’re talking about retaining walls.

The natural landscape is never ideally suited to construction as it stands. The earth is just too uneven. Before things get built, we almost always have to raise or lower areas of the ground first. We flatten building sites, we smooth paths for roads and railways, and we build ramps up to bridges and grade-separated interchanges. You might notice that these cuts and fills usually connect to the existing ground on a slope. Loose soil won’t stand on its own vertically. That’s just the nature of granular materials. The stability of a slope can vary significantly depending on the type of soil and the loading it needs to withstand. You can get many types of earth to hold a vertical slope temporarily, and it’s done all the time during construction, but over time the internal stresses will cause them to slump and settle into a more stable configuration. For long-term stability, engineers rarely trust anything steeper than 25 degrees. That means any time you want to raise or lower the earth, you need a slope roughly twice as wide as it is tall, which can be a problem.
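
The arithmetic behind that “twice as wide” rule of thumb is just trigonometry. A quick sketch, with a hypothetical grade change:

```python
import math

# A 25-degree slope needs a horizontal run of height / tan(25 deg),
# which works out to roughly 2.1 times the height.

height = 3.0  # m of grade change (hypothetical)
run = height / math.tan(math.radians(25))
print(f"A {height:.0f} m cut or fill at 25 degrees "
      f"spreads {run:.1f} m horizontally")
```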

Don’t tell them I said this, but slopes are kind of a waste of space. Depending on the steepness, it’s either inconvenient or entirely impossible to use sloped areas for building things, walking, driving, or even as open spaces like parks. In dense urban areas, real estate comes at a premium, so it doesn’t make sense to waste valuable land on slopes. Where space is limited, it often makes sense to avoid this disadvantage by using a retaining wall to support soil vertically.

When you see a retaining wall in the wild, the job of holding back soil looks effortless. But that’s usually only true because much of the wall’s structure is hidden from view. A retaining wall is essentially a dam, except instead of water, it holds back earth. Soil doesn’t flow as easily as water, but it is roughly twice as heavy. The force exerted on a retaining wall from that soil, called the lateral earth pressure, can be enormous. And that’s just from the weight of the soil itself. Add to that the forces we often apply from buildings, vehicles, or other structures on top of the backfill behind the wall. We call these surcharge loads, and they can increase the forces on a retaining wall even further. Finally, water can flow through or even freeze in the soil behind a retaining wall, applying even more pressure to its face.
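
For a feel of the magnitudes involved, here’s a sketch using Rankine’s active earth pressure theory, one common textbook way to estimate these loads. The soil properties, wall height, and surcharge are all hypothetical:

```python
import math

# Rankine active earth pressure sketch: Ka = tan^2(45 - phi/2);
# soil thrust = 0.5 * Ka * gamma * H^2 per meter of wall, plus
# Ka * q * H for a uniform surcharge. All inputs are hypothetical.

phi = math.radians(30)  # soil friction angle
gamma = 19.0            # kN/m^3, soil unit weight (roughly twice water)
H = 4.0                 # m, wall height
q = 10.0                # kPa, surcharge (say, traffic) on the backfill

Ka = math.tan(math.pi/4 - phi/2) ** 2
P_soil = 0.5 * Ka * gamma * H ** 2  # kN per meter of wall, soil weight
P_surcharge = Ka * q * H            # kN per meter of wall, surcharge

print(f"Ka = {Ka:.2f}")
print(f"Soil thrust ~{P_soil:.0f} kN/m plus surcharge ~{P_surcharge:.0f} kN/m")
```

That works out to roughly five tonnes of sideways force on every meter of a modest four-meter wall, before water pressure even enters the picture.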

Estimating all these loads and designing a wall to withstand them can be a real challenge for a civil engineer. Unlike most structures where loads are vertical from gravity, most of the forces on a retaining wall are horizontal. There are a lot of different types of walls that have been developed to withstand these staggering sideways forces. Let’s walk through a few different designs.

The most basic retaining walls rely on gravity for their stability, often employing a footing along the base. The footing is a horizontal member that serves as a base to distribute the forces of the wall into the ground. Your first inclination might be to extend the footing on the outside of the wall to lengthen the lever arm like an outrigger on a crane. However, it’s actually more beneficial for the footing to extend inward into the retained soil. That’s because the earth behind the wall sits atop the footing, which acts as a lever to keep the wall upright against lateral forces. Retaining walls that rely only on their own weight and the weight of the soil above them to remain stable are called gravity walls (for obvious reasons), and the ones that use a footing like this are called cantilever walls.
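
To make that lever-arm idea concrete, here’s a sketch of the overturning check for a cantilever wall: comparing moments about the toe from the lateral soil thrust against the restoring moments from the wall’s own weight and the soil resting on its heel. Every dimension and load is a hypothetical placeholder, and the geometry is deliberately simplified:

```python
# Simplified overturning check for a cantilever retaining wall.
# All geometry and loads are hypothetical placeholders.

H = 4.0            # m, retained height
heel = 2.0         # m, footing extension into the retained soil
gamma = 19.0       # kN/m^3, soil unit weight
P_lateral = 51.0   # kN/m, horizontal soil thrust (e.g., from Rankine)
W_concrete = 60.0  # kN/m, weight of the stem and footing
x_concrete = 0.8   # m, lever arm of the concrete weight about the toe
x_stem = 1.0       # m, assumed distance from the toe to the stem face

M_overturn = P_lateral * H / 3  # thrust resultant acts ~H/3 above the base

W_soil = gamma * H * heel       # weight of the soil block on the heel
x_soil = x_stem + heel / 2      # its lever arm about the toe
M_resist = W_concrete * x_concrete + W_soil * x_soil

print(f"Factor of safety against overturning = {M_resist/M_overturn:.1f}")
```

Run the same numbers without the heel and the soil sitting on it, and the factor of safety drops below one, which is exactly why the footing extends into the retained earth.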

One common type of retaining wall involves tying a mass of soil together to act as its own wall, retaining the unreinforced soil beyond. This was actually the subject of one of the very first engineering posts. It’s accomplished during the fill operation by including reinforcement elements between layers of soil, a technique called mechanically stabilized earth. The reinforcing elements can be steel strips or fabric made from plastic fibers called geotextile or geogrid. It is remarkable how well this kind of reinforcement can hold soil together.

Gravity walls and mechanically stabilized earth are effective retaining walls when you’re building up or out. In other words, they’re constructed from the ground up. But, excavated slopes often need to be retained as well. Maybe you’re cutting out a path for a roadway through a hillside or constructing a building in a dense urban area starting at the basement level. In these cases, you need to install a retaining wall before or during excavation from the top down, and there are several ways to go about it. Just like reinforcements hold a soil mass together in mechanically stabilized earth, you can also stitch together earth from the outside using a technique called soil nailing. First, an angled hole is drilled in the face of the unstable slope. Then a steel bar is inserted into the hole, usually with plastic devices called spiders to keep it centered. Cement grout is added to the hole to bond the soil nail to the surrounding earth.

Both mechanically stabilized earth and soil nails are commonly used on roadway projects, so it’s easy to spot them if you’re a regular driver. But don’t examine too closely until you are safely stopped. These walls are often faced with concrete, but the facings are rarely supporting much of the load. Instead, their job is to protect the exposed soil from erosion due to wind or water. In temporary situations, the facing sometimes consists of shotcrete, a type of concrete that can be sprayed from a hose using compressed air. For permanent installations, engineers often use interlocking concrete panels with a decorative pattern. These panels not only look pretty, but they also allow for some movement over time and for water to drain through the joints.

One disadvantage of soil nails is that the soil has to settle a little bit before the strength of each one kicks in. The nails also have to be spaced closely together, requiring a lot of drilling. In some cases it makes more sense to use an active solution, usually called anchors or tiebacks. Just like soil nails, anchors are installed in drilled holes at regular spacing, but you usually need a lot fewer of them. Also unlike soil nails, they aren’t grouted along their entire length. Instead, part of the anchor is installed inside a sleeve filled with grease, so you end up with a bonded length and an unbonded length. That’s because, once the grout cures, a hydraulic jack is used to tension each one. The unbonded length of the anchor acts like a rubber band to store that tension force. Once the anchor is locked off, usually using a nut combined with a wedge-shaped washer, the tension in the unbonded length applies a force to the face of the wall, holding the soil back. Anchored walls often have plates, bearing blocks, or beams called walers to distribute the tension force across the length of the wall.

One final type of retaining wall uses piles. These are vertical members driven or drilled into the ground. Concrete shafts are installed like massive fence posts using gigantic drill rigs. When they are placed in a row touching each other, they’re called tangent piles. Sometimes they are overlapped, called secant piles, to make them more watertight. In this case, the primary piles are installed without steel reinforcement, and before they cure too hard, secondary piles are drilled partially through the primary ones. The secondary piles have reinforcing steel to provide most of the resistance to earth pressure. Alternatively, you can use interlocking steel shapes called sheet piling. These are driven into the earth using humongous hammers or vibratory rigs. Pile walls depend on the resistance from the soil below to cantilever up vertically and resist the lateral earth pressure. The deeper you go, the more resistance you can achieve. Pile walls are often used for temporary excavations during construction projects because the wall can be installed first before digging begins, ensuring that the excavated faces have support for the entirety of construction.

All these types of retaining walls perform perfectly if designed correctly, but retaining walls do fail, and there are a few reasons why. One reason is simply underdesigning for lateral earth pressure. It’s not intuitive how much force soil can apply to a wall, especially because the slope is often holding itself up during construction. Earth pressure behind a wall can build gradually such that failure doesn’t even start until many years later. Lots of retaining walls are built without any involvement from an engineer, and it’s easy to underestimate the loads if you’re not familiar with soil mechanics. Most cities require that anything taller than around 4 feet or 1.2 meters be designed by a professional engineer.

As I mentioned, soil loads aren’t the only forces acting on walls. Some walls fail when unanticipated surcharge loads are introduced, like larger buildings or heavy vehicles driving too close to the edge. If you’re ever putting something heavy near a retaining wall, whether it’s building a new swimming pool or operating a crane, it’s usually best to have an engineer review the plan beforehand.

Water is another challenge with retaining walls. Not only does water pressure add to the earth pressure, in some climates it can freeze. When water freezes, it expands with a force that is nearly impossible to restrain, and you don’t want that happening to the face of a wall. Most large walls are built with drainage systems to prevent water from building up. Keep an eye out for holes through the face of the wall that can let water out, called weepholes, or pipes that collect and carry the water away.

Finally, soil can shear behind the wall, even completely bypassing the wall altogether. For tall retaining walls with poor soils, multiple tiers, or lots of groundwater, engineers perform a global stability analysis as a part of design. This involves using computer software that can compare the loads and strengths along a huge number of potential shearing planes to make sure that a wall won’t collapse. 

Look around and you’ll see retaining walls everywhere holding back slopes so we all have a little more space in our constructed environments. They might just look like a pretty concrete face on the outside, but now you know the important job they do and some of the engineering that makes it possible.


December 07, 2021 /Wesley Crump

What Really Happened at the Millennium Tower?

November 16, 2021 by Wesley Crump

[Note that this article is a transcript of the video embedded above.]

The Millennium Tower is the tallest residential building in San Francisco, with 58 stories above the ground and 419 luxury condominium units. The tower opened to residents in 2009, but even before construction was finished, engineers could tell that the building was slowly sinking into the ground and tilting to one side. How do engineers predict how soils will behave under extreme loading conditions, and what do you do when a skyscraper’s foundation doesn’t perform the way it was designed? Let’s find out. I’m Grady, and this is Practical Engineering. Today, we’re talking about the Millennium Tower in San Francisco.

Skyscrapers are heavy. That might seem self-evident, but it can’t be overstated in a story like this. An average single-story residential home is designed to apply a pressure to the subsurface of maybe 100 pounds per square foot of building footprint. That’s about 5 kilopascals, the pressure at the bottom of a knee-deep pool of water. With its concrete skeleton, the Millennium Tower was designed to impose a load of 11,000 pounds per square foot or 530 kilopascals to its foundation (about 100 times more than an average house). It would be impossible for just the ground surface to bear that much weight, especially in this case where the ground surface is a weak layer of mud and rubble placed during the City’s infancy to reclaim land from the bay.
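
Those figures are easy to sanity-check with a unit conversion (1 pound per square foot is about 0.0479 kilopascals), using the rounded numbers from this paragraph:

```python
# Sanity-checking the bearing pressure comparison in metric units.
PSF_TO_KPA = 0.04788  # 1 lb/ft^2 in kPa

house = 100.0    # psf, typical single-story home
tower = 11000.0  # psf, Millennium Tower design bearing pressure

print(f"House: {house*PSF_TO_KPA:.1f} kPa, tower: {tower*PSF_TO_KPA:.0f} kPa, "
      f"ratio ~{tower/house:.0f}x")
```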

That tremendous pressure is why most tall buildings use deep foundation systems. The Millennium Tower’s foundation consists of a 10-foot or 3-meter-thick concrete slab supported by 950 concrete friction piles driven into the subsurface to a depth of about 80 feet or 24 meters. Friction piles spread out the load of the building vertically, allowing much more of the underlying soils to be used to support the structure without becoming overwhelmed. The piles also allow the foundation to bear on stronger soils than those at the surface.

Driving the piles so deep allowed the building to bear not on the surface layer of artificial fill, or even on the soft underlying layer of mud, but rather on the dense sandy soil of the Colma Formation below. This is a fairly common design in San Francisco, with more than a dozen tall buildings in the downtown area utilizing a similar foundation system, including some nearly as large as this one. However, it’s not the dense sands causing problems for the Millennium Tower, but what’s underneath. Below the Colma Formation is a thick layer of Ice Age mud locally known as the Old Bay Clay. Thanks to the geologists for that name. When the building was designed, the project geotechnical engineers predicted that it would settle 4 to 6 inches (10 to 15 centimeters) over the structure’s entire lifetime, mainly from this layer of Old Bay Clay below the bottom of the piles. But even before construction was complete, the building had already settled more than that.

The ground below your feet may seem firm and stable, but when subjected to increased loading - and especially when the load is extreme like that of a concrete skyscraper - soil can compress in a process called consolidation. Essentially, the soil is like a sponge filled with water. An increased load will slowly squeeze the water out, allowing the grains to compress into the empty space. Settlement is usually a gradual process because it takes time for the water to find a path out from the soil matrix. But some things can accelerate the process, even if they’re not intentional.
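
Engineers usually estimate this kind of settlement with Terzaghi’s one-dimensional consolidation theory. Here’s a sketch of the shape of that calculation, with made-up soil parameters rather than the actual properties of the Old Bay Clay:

```python
import math

# Sketch of one-dimensional consolidation settlement (Terzaghi):
# settlement = H * Cc / (1 + e0) * log10(sigma_final / sigma_initial).
# All soil parameters are hypothetical, not the Old Bay Clay's.

H = 10.0        # m, thickness of the compressible clay layer
Cc = 0.4        # compression index (from lab consolidation tests)
e0 = 1.1        # initial void ratio
sigma0 = 200.0  # kPa, initial effective stress at mid-layer
d_sigma = 30.0  # kPa, added stress felt by the clay from the building

settlement = H * (Cc / (1 + e0)) * math.log10((sigma0 + d_sigma) / sigma0)
print(f"Predicted consolidation settlement ~ {settlement*100:.0f} cm")
```

The stress term is why dewatering matters so much: pumping down the groundwater raises the effective stress in the clay just like added building weight does, pushing the settlement further along that curve.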

The Millennium Tower was already designed to put more stress on the underlying Old Bay Clay than any other building in the area. However, construction of the tower’s basement also required the contractor to pump water out of the subsurface to keep the site dry. This is often done using vertical wells similar to the ones used for drinking water but usually not as deep. This deliberate and continuous dewatering of foundation soils accelerated the settlement. Then other construction projects nearby began, including the adjacent Transbay Transit Center, which required their own deep excavations and groundwater drawdowns. All these factors added up to a lot more settlement than was initially anticipated by the project’s geotechnical engineers. The result was that, by 2016 (when the public first learned about the issue), the building had already sunk more than 16 inches or 41 centimeters, triple the movement that was anticipated for its entire lifetime. Unfortunately, that settlement wasn’t happening evenly. Instead, the northwest corner had sunk a little lower than the rest of the foundation, causing the tower to tilt several inches in that direction.

The media had a field day reporting on the leaning tower of San Francisco, and accusations started flying about who was to blame and whether the City had covered up details about the building’s movement. The developer continued insisting that the building was safe, reiterating that all buildings settle over time, and the Millennium Tower was no different. But it definitely was different, at least in magnitude. With so much attention to the building, the City commissioned a panel of experts in 2017 to assess its safety both for everyday use and in the event of a strong earthquake. By that time, the building had settled another inch and was out-of-plumb by more than a foot or 30 centimeters. That’s not something you could notice by eye and was probably only discernible to the most perceptive residents, but it’s well beyond the 6 inches allowed by the building code. Even so, the panel found that the building was completely safe, and the settlement had not compromised its ability to withstand strong earthquakes. However, they cautioned that the movement hadn’t stopped, and further tilting may affect the building’s safety.

At the same time, and despite engineering assessments confirming the building’s safety, the condominium prices were plummeting. No one wanted to live in a building that was sinking into the ground with no sign of slowing down. It didn’t take long for lawsuits to be filed. By the end of it, just about every person and organization related in any way to the Millennium Tower was involved in at least one lawsuit, including individual residents, the homeowners association, the building developer, the Transbay Joint Powers Authority, and many others. In total, there were nine separate lawsuits involving around 400 individual parties. After many years of complex litigation, a comprehensive settlement (of the legal kind) was eventually reached through private mediation. The result was that no one took the blame for the building’s excessive movement, condo owners would be compensated for the loss of property values, and, most importantly, the building would be fixed.

During mediation, the retrofits to the building’s foundation to slow the sinking and “de-tilt” the tower were a big point of contention. One early plan was to install hundreds of micropiles (small diameter drilled piles) through the existing foundation down to bedrock. But the estimated cost for the repair was as much as 500 million dollars, more than the original cost of the entire building. It turns out it’s a lot easier to drill foundation piles before the building is built than afterward. The challenges associated with working below the building, like access, vibrations, noise, and lack of space, drove up the price, and the parties couldn’t agree to pay such a substantial cost. An unconventional alternative proposed by the developer’s engineer ended up resolving the dispute, and as of this writing, it is currently under construction.

The proposed fix to the Millennium Tower is to install piles along two sides of the building’s perimeter. That may seem kind of simple, but there is a lot of clever engineering involved to make it work. Fifty-two piles will be drilled along the north and west sides of the tower all the way down to bedrock. Unlike the original plan, these piles will be installed outside the building below the adjacent sidewalks, saving a significant amount on the construction cost. An extension to the building’s existing concrete slab will be installed around each pile but not rigidly attached to them. Instead, each pile will be sleeved through and extended above the concrete slab so that the building can move independently. The slab will be equipped with steel beams centered above each pile and anchored deep within the concrete. Finally, hydraulic jacks will be installed between each of the fifty-two piles and beams.

Once everything is installed, the contractor will use the hydraulic jacks to lift the building’s foundation, transferring about 20 percent of the load onto the new perimeter piles. That means each one will be carrying around 800,000 pounds or 360,000 kilograms. The goal of the upgrade is to remove weight from the clay soils below the building, transferring it to the stronger bedrock further below and thus slowing down the settlement. The design requires that the holes be overdrilled so that no part of the new piles can come into contact with the Old Bay Clay and put any weight on this weak subsurface layer. The annular space between each pile and the clay will be filled with low-strength material only after the hydraulic jacking operation is complete. Once the building is safely supported, each pile will be enclosed in a concrete vault below the ground, everything will be backfilled, and the sidewalks will be replaced. If all goes according to plan, the settlement on the north and west sides of the building will be completely arrested. With less load on the original foundation, the sinking of the other two sides will gradually slow to a stop, straightening the building back to its original plumbness, but just a couple of feet lower than where it started.

Of course, expensive and innovative construction projects rarely do go according to plan, and this one is no different. The City of San Francisco and the design engineers were carefully monitoring the building’s movement as construction of the retrofit got started in May 2021. It didn’t take long to notice an issue. The vibrations and disturbance of drilling through the Old Bay Clay were accelerating the settlement. The speed at which the building was tilting and sinking increased as the drilling continued. In August 2021, construction was halted to reassess the plan and find a way to install the foundation retrofit safely. As of this writing, crews are testing revised drilling procedures that they hope will reduce the disturbance to the clay layer so they can get those piles installed and the building supported as quickly as possible.

The story of the Millennium Tower is a fascinating case study in geotechnical engineering. Our ability to predict how soils will behave under new and extreme conditions isn’t perfect, especially when those soils are far below the surface, where we can only guess their properties and extents based on a few borehole samples. In addition, buildings don’t get built in a vacuum, and the tallest ones are usually at the center of dense urban areas. Soils don’t care about property lines, and you can end up with big problems by underestimating the impacts that adjacent projects can create. Most people will wonder why the building’s foundation didn’t just go to bedrock in the original design. The answer is the same reason my house doesn’t have piles to bedrock. No one likes to pay for things they don’t think are necessary. If those geotechnical and structural engineers could go back in time, I think they probably would go with a different foundation, but whether they could have reasonably predicted the performance of the original design with all the extra dewatering and adjacent construction is a more complicated question.

The Millennium Tower is also an interesting case study in the relationship between engineers and the media. The developer’s engineers and the City have shown through detailed modeling and investigation that the building is perfectly safe. And yet, the prices of those luxury condominiums plummeted with the frenzy of reporting about the settlement and tilting. Those prices depend not only on buyers’ confidence in the building’s safety but also on their willingness to be associated with a building that is regularly in the news. The value of the multimillion-dollar repair project will be not just to slow down the settlement but also to stem the flow of articles, news segments, memes, and tourists, and to keep this building from being remembered as the leaning tower of San Francisco.

I know I say this at the end of all my blog posts, but this is not the whole story. I did my best to summarize the high points, but there are many more details to this saga. I definitely encourage you to seek out those details before drawing any hard conclusions. It’s an excellent example of the challenges and complexity involved in large-scale engineering projects, the limitations and uncertainty in engineering practice, and the interconnectedness of regulations, engineering, and the media. I’ll be keeping an eye on the progress of the foundation retrofit. Thank you, and let me know what you think!


November 16, 2021 /Wesley Crump

Why SpaceX Cares About Dirt

November 02, 2021 by Wesley Crump

[Note that this article is a transcript of the video embedded above.]

Before the SpaceX South Texas launch facility at Boca Chica, near South Padre Island, supported crazy test launches of the Starship spaceflight program, it was just a pile of dirt. Contractors brought in truck after truck of soil, creating a massive mesa of more than 300,000 cubic yards or 230,000 cubic meters of earth. That’s a lot of Olympic-sized swimming pools, not that you’d want to go swimming in it. After nearly two years, they hauled most of that soil back off the site for disposal. It might seem like a curious way to start a construction project, but foundations are critically important. That’s true for roads, bridges, pipelines, dams, skyscrapers, and even futuristic rocket launch facilities. The Texas coastline is not known for its excellent soil properties, so engineers had to specify some extra work before the buildings, tanks, and launchpads could be constructed. Building that giant dirt pile was a clever way to prevent these facilities from sinking into the ground over time. Why do some structures sink, and what can we do to keep it from happening? I’m Grady and this is Practical Engineering. Today, we’re talking about soil settlement.

The Earth’s gravity accelerates us and everything else on our planet downward. To keep us from falling toward the center of the planet, we need an equal and opposite reaction to hold us in place. If you’re at the top of a skyscraper, your weight is supported by floor joists that transfer it to beams that transfer it to columns that transfer it downward into massive concrete piers, but eventually the force of you must be resisted by the earth. It’s ground all the way down. You might not think about the ground and its critical role in holding stuff up, but the job of a geotechnical engineer is to make sure that when we build stuff, the earth below is capable and ready to support that stuff for its entire lifespan.

Every step you take when walking along the ground induces stress into the subsurface. And every rocket launch facility you build on the Texas coastline does the same thing. This isn’t always a big deal. When constructing on bedrock, there’s a lot less to worry about, but much of the earth’s landscape consists of soil: granular compositions of minerals. Stress does a funny thing to soils. I mean, it does some funny things to all of us, but to soils too. At first consideration, you might not think there’s really much difference between rock and soil. After all, soil particles are just tiny rocks, and many sedimentary rocks are made from accumulated soil particles anyway. But, soil isn’t just particles. In between all those tiny grains are empty spaces we call pores, and those pores are often filled with water. Just like squeezing a sponge forces water out, introducing stress to a soil layer can do the same thing.

Over time, water is forced to exit the pore space of the soil and flow up and out. As the water departs, the soil compresses to take up the void left behind. This process is called consolidation. It’s not the only mechanism for settlement, but it is the main one, especially for soils made up of fine particles. Large-grained soils like sand and gravel interlock together and don’t really act like a sponge so much as a solid, porous object. To the extent they do consolidate, it happens almost immediately. After that, you can squeeze and squeeze, but nothing more happens. Fine-grained soils like clay and silt are different. Like sand or gravel, the particles themselves aren’t very compressible. However, unlike in coarse-grained soils, fine particles aren’t so much touching their neighbors as they are surrounded by a thin film of water. When you squish the soil, the tiny particles rearrange themselves to interlock, pressurizing the pore water and ultimately forcing it out. The more weight you add, the more stress goes into the subsurface, the more water is forced out of the pores, and thus the further the soil settles. Geotechnical laboratories perform these squeezing tests, called consolidation or oedometer tests, with much more scientific rigor.
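If you’re curious what that looks like in numbers, here’s a small Python sketch of the textbook one-dimensional formula for primary consolidation settlement of a normally consolidated clay. Every soil parameter below is an illustrative, made-up value, not data from a real site:

  import math

  # One-dimensional primary consolidation settlement (textbook formula):
  # s = H * Cc / (1 + e0) * log10(sigma_final / sigma_initial)
  H = 10.0         # thickness of the clay layer, meters
  e0 = 1.1         # initial void ratio (pore volume relative to solid volume)
  Cc = 0.35        # compression index, measured with a lab oedometer
  sigma_0 = 100.0  # initial effective vertical stress at mid-layer, kPa
  d_sigma = 50.0   # added stress from the new structure, kPa

  s = H * Cc / (1 + e0) * math.log10((sigma_0 + d_sigma) / sigma_0)
  print(f"Estimated settlement: {s:.2f} m")  # about 0.29 m for these values

Even a modest added load on a thick, soft clay layer can squeeze out a foot of settlement, which is why this calculation gets so much attention.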

This may seem obvious, but when we build stuff, we don’t want it to move. We want the number on that dial to stay the same for all of eternity, or at least until the structure reaches the end of its lifespan. That idea - that when you build something, it stays put - is geotechnical engineering in a nutshell. It encompasses the entirety of foundation design, from the simplest slabs of concrete for residential houses to the highly sophisticated substructures of modern bridges and skyscrapers. The way movement occurs also matters. It’s actually not such a big deal if settlement happens uniformly. After all, in many cases the movement is nearly imperceptible. I’m using a special instrument just so you can see it on camera. Many buildings can take a little movement without much trouble. But often, settlement doesn’t happen uniformly.

For one, structures don’t usually impose uniform loads. If everything we built was uniform in size and density, we might be okay, but that’s never the case. No matter what you’re constructing, you almost always have some heavy parts and other light parts that stress the soil differently. On top of that, the underlying geology isn’t uniform either. Take a look at any road cut to see this. The designers of the bell tower at the Pisa Cathedral in Italy famously learned this lesson the hard way. Small differences in the soils on either side of the tower caused uneven settlement. Geotechnical engineering didn’t exist as a profession in the 1100s, and the architects would have had no way of knowing that the sand layer below the tower was a little bit thinner on the south side than the north. It didn’t take long after construction started for the tower to begin its iconic lean. I should point out that there’s another soil effect that can cause the opposite problem. Certain types of soils expand when exposed to increased moisture, introducing further complications for a geotechnical engineer. I have a separate post on that topic, so check it out after this if you want to learn more.

Settlement made the tower of Pisa famous, but in most cases it just causes problems and costs a lot of money to fix. One of the most famous modern examples is the Millennium Tower in San Francisco, California. The 58-story building was constructed atop the soft, compressible fill and mud underlying much of the Bay Area. Engineers used a foundation of piles driven deep below the building to a layer of firmer sand, but it wasn’t enough. Only 10 years after construction, the northwest corner of the building had sunk more than 18 inches or 46 centimeters into the earth, causing the building to tilt. Over time, some of the building’s elements were damaged or broken, including the basement and the pavement surrounding the structure. As you would expect, there were enough lawsuits to fill an Olympic-sized swimming pool. The repairs to the building are in progress at an estimated cost of 100 million dollars, not to mention the who-knows-how-much in legal fees.

One of the most reliable ways to deal with settlement is just to make sure it happens during construction instead of afterward. As you build, you can account for minor deviations as they occur. Unfortunately, consolidation isn’t always a speedy process. The voids in clay soils are extremely small, so the path that water has to take to exit the soil matrix is long and winding. We call this winding quality tortuosity. Depending on the soils and loads applied, the consolidation process can take years to complete.
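To put rough numbers on “years,” here’s a sketch using Terzaghi’s classic one-dimensional consolidation theory, where the time to reach a given degree of consolidation is t = Tv × d² / cv. The values below are typical textbook numbers, not from any specific project:

  # Time to reach 90% consolidation per Terzaghi's one-dimensional theory.
  Tv_90 = 0.848  # dimensionless time factor for 90% average consolidation
  d = 5.0        # drainage path length, m (a 10 m clay layer draining from both faces)
  cv = 1.0       # coefficient of consolidation, m^2/year (typical of a soft clay)

  t_90 = Tv_90 * d**2 / cv
  print(f"Time to 90% consolidation: about {t_90:.0f} years")  # roughly 21 years

Two decades is a long time to wait before starting construction, which is exactly why engineers have tricks, described below, to move things along.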

It’s not a good idea to build a structure that will settle unevenly over the next several years. Hopefully it’s obvious that that’s bad design. So, we have a few options. One is to use a concrete slab that is stiff enough to distribute all the forces of the structure evenly and provide support no matter how nonuniformly the settlement occurs. These slabs are sometimes called raft foundations because they ride the soil like a raft in the ocean. Another option is to sink deep piles down to a firmer geologic layer or bedrock so that loads get transferred to material more capable of handling them. But both of those options can be quite expensive. A third option is simply to accelerate the consolidation process so that it’s complete by the end of construction.

One way to speed up consolidation in clay soils is to introduce a drainage system. The rate of settlement is mainly a function of how quickly water can exit the soil. In a clay layer, particularly a very thick one or one underlain by rock, the only way for water to leave is at the surface. That means water deep below the ground has to travel a long distance to get out. We can shorten that exit distance by introducing drains. This is often done using prefabricated vertical drains, called PVDs or wick drains. These plastic strips have grooves in which water can travel, and they can be installed by forcing them directly into the subsurface using heavy machinery. An anchor plate is attached, the drain is pressed into the soil to the required depth inside a steel tube called a mandrel, the mandrel is pulled out, and the material is cut. It all happens in quick succession, allowing close spacing of drains across a large area. The tighter the spacing, the shorter the distance water has to travel to escape. One of the other benefits here is that water often travels through soils faster horizontally than vertically, since geologic layers are usually deposited horizontally. That speeds up consolidation even more. Plotting displacement over time, the benefit of vertical drains is unmistakable.
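That square relationship between drainage distance and time is the whole reason wick drains pay off. Setting aside the finer points of radial flow toward a drain (which has its own theory), the scaling itself fits in a few lines, reusing the illustrative numbers from before:

  # Consolidation time scales with the square of the drainage path: t = Tv * d^2 / cv.
  Tv_90, cv = 0.848, 1.0          # time factor for 90% consolidation; cv in m^2/year
  for d in (5.0, 2.0, 1.0, 0.5):  # drainage path length in meters
      t = Tv_90 * d**2 / cv
      print(f"drainage path {d:4.1f} m -> about {t:5.2f} years to 90% consolidation")

Cutting the drainage path from 5 meters to half a meter takes the wait from two decades down to a couple of months.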


The second way we speed up consolidation is surcharge loading. This is applying stress to the foundation soils before construction to force the water out quickly. Like I described in the intro with SpaceX South Texas, it’s usually as simple as hauling in a huge volume of earth to be temporarily placed on site. The way this works is as straightforward as squeezing a sponge harder. It’s the equivalent of adding more weight to my acrylic oedometer, but it’s simpler just to show a graph. Let’s say you’re going to build a structure that will impose a stress on the subsurface. That stress corresponds to the consolidation marked by this red line. If you load the foundation soils with something heavier than your structure, that weight will be associated with a greater ultimate consolidation. It takes about the same time to reach a given percentage of consolidation in both cases, but you’re going to hit the target consolidation (the red line) much faster. In many cases, engineers will specify both wick drains and surcharging to consolidate the soil as quickly as possible so that construction can begin. Once you get rid of all the extra soil you brought in, you can start building on your foundation knowing that it’s not going to settle further over time.
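Here’s a sketch of the math behind that graph. With a surcharge, the ultimate settlement is larger, so the settlement your structure alone would cause corresponds to only a partial degree of consolidation, and partial consolidation is reached much sooner. The curve-fit formulas for Terzaghi’s time factor are standard textbook approximations; the settlement values are made up for illustration:

  import math

  def time_factor(U):
      # Standard approximations for Terzaghi's time factor at average degree of consolidation U.
      if U <= 0.6:
          return math.pi / 4 * U**2
      return 1.781 - 0.933 * math.log10(100 * (1 - U))

  s_design = 0.30     # settlement the structure alone would eventually cause, m
  s_surcharge = 0.50  # larger ultimate settlement under structure plus surcharge, m

  U_needed = s_design / s_surcharge  # only 60% consolidation reaches the "red line"
  Tv_with = time_factor(U_needed)
  Tv_without = time_factor(0.95)     # essentially complete consolidation, no surcharge

  print(f"Time factor with surcharge: {Tv_with:.2f}")
  print(f"Time factor without:        {Tv_without:.2f}")
  print(f"Surcharging reaches the target roughly {Tv_without / Tv_with:.0f}x sooner")

For these numbers, the surcharge hits the target settlement about four times faster, and a heavier surcharge buys even more speed.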

November 02, 2021 /Wesley Crump

What Really Happened At Edenville and Sanford Dams?

October 19, 2021 by Wesley Crump

[Note that this article is a transcript of the video embedded above.]

On May 18th, 2020, heavy rainfall in Michigan raised the level of Wixom Lake - a man-made reservoir impounded by Edenville Dam - higher than it had ever gone before. As the reservoir continued to rise the following day, the dam suddenly broke, sending a wall of water downstream. As it traveled along the Tittabawassee River, the flood wave reached and quickly overpowered the Sanford Dam downstream. The catastrophic failure of the two dams impacted more than 2,500 structures and caused more than 200-million-dollars in damage. The independent forensic team charged with investigating the event released an interim report on the failures in September 2021. The conclusions of the report include a discussion of a relatively rare phenomenon in earthen dams. Let’s walk through the investigation to try and understand what happened. I’m Grady, and this is Practical Engineering. Today, we’re talking about the failures of Edenville and Sanford Dams.

Edenville and Sanford Dams were two of four dams owned by Boyce Hydro Power along the Tittabawassee River in Michigan. The dams were built in the 1920s to generate hydroelectricity. Edenville Dam was constructed just upstream of the confluence with the Tobacco River. It was an earthfill embankment dam with two spillways and a powerhouse. The water impounded by the dam formed a reservoir called Wixom Lake, nearly the entire perimeter of which was surrounded by waterfront homes. State Highway 30 bisected the dam along a causeway, splitting the lake between the two rivers, with a small bridge to allow water to flow between the two sections of the reservoir. Sanford Dam downstream was a similar structure to Edenville, but not nearly as long. It consisted of an earthen embankment, a gated spillway, an emergency spillway, and a powerhouse for the turbines, generators, and other hydroelectric equipment.

Edenville Dam, in particular, had a long history of acrimony and disputes between the dam owner and regulatory agencies. Most dams that generate hydroelectricity in the US are subject to oversight by the Federal Energy Regulatory Commission (or FERC). But, Edenville Dam had its license to generate hydropower revoked in 2018 when the owner failed to comply with FERC’s safety regulations. Their report listed seven concerns, the most significant of which was that the dam didn’t have enough spillway capacity. As a result, if a severe storm were to come, the dam wouldn’t be able to release enough water to prevent the reservoir level from climbing above the top of the structure, overtopping it and likely causing it to fail. After losing the license to generate hydropower, jurisdiction over the dam fell to the State of Michigan, where disagreements about its structural condition, spillway capacity, and water levels in Wixom Lake continued.

The days before the failure had already been somewhat rainy, with small storms moving through the area. But heavy rain was in the forecast for May 18th. The deluge arrived early that morning, and it didn’t take long for the water levels in Wixom Lake to begin to rise. By 7 AM, operators at the dam had started opening gates on both spillways to release some of the floodwaters downstream. Gate operations continued throughout the day as the reservoir continued rising. At 3:30 PM, all six gates (three at each spillway) were fully opened. From then on, there was nothing more operators could do to get the floodwater out faster, and the level in Wixom Lake continued to creep upward. That night, the lake reached the highest level in its history, only about 4 feet or 1.2 meters below the top of the earthen dam.

At daybreak on May 19th, it was already clear that Edenville Dam was struggling under the enormous forces of the flood. Operators noticed severe erosion along the embankment near the east spillway, caused by swiftly flowing water in the reservoir. Regulators and dam personnel met to review the damage, and a contractor was brought in to deploy erosion control measures. And still, the water kept rising.

By 5 PM, Wixom Lake had risen to within around a foot (or 30 centimeters) of the top of the dam. As crews worked to mitigate the erosion problems in other places, eyewitnesses noticed a new depressed area on the far eastern end of the dam. This part of the embankment hadn’t been a significant point of focus during the flood because it wasn’t experiencing visible erosion, but it was apparent something serious had happened. Photos from a few hours earlier didn’t show anything unusual, but now the top of the embankment had sunk down nearly to the reservoir level. Eyewitnesses moved to the nearby electrical substation to get a better look at this part of the dam. Within only a few moments, the embankment failed. Lynn Coleman, a Michigander and one of the bystanders, caught the whole thing on camera.

Over the next two hours, all of Wixom Lake drained through the breach in the dam. Water rushing through the narrow gap in the causeway washed out the highway bridge, and all of the waterfront homes and docks around the entire perimeter of the lake were left high and dry. As the floodwaters rushed through the breach into the river, the level downstream in Sanford Lake rose rapidly. By 7:45 PM, the reservoir was above the dam’s crest, quickly eroding and breaching the structure. With the combined volumes of Wixom and Sanford Lakes surging uncontrolled down the Tittabawassee River, downstream communities including Sanford, Midland, and Saginaw were quickly inundated. Google Earth shows aerial imagery before, during, and after the flood, so you can really grasp the magnitude of the event. More than 10,000 people were evacuated, and flooding damaged more than 2,500 structures. Amazingly, no major injuries or fatalities were reported.

In their interim report on the event, the independent forensic team considered a broad range of potential explanations for what happened at Edenville Dam. Although the spillway for the dam was undersized per state regulations, this storm event didn’t completely overwhelm the structure. The level in Wixom Lake never actually went higher than the top of the embankment, so overtopping (one of the most common causes of dam failure, including the cascading loss of the downstream Sanford Dam) was eliminated as a possible cause of failure for Edenville Dam.

The team also looked at internal erosion, a phenomenon I’ve covered before that has resulted in many significant dam failures. Internal erosion involves water seeping through the soil and washing it away from the inside. However, this type of erosion usually happens over a longer time period than what was witnessed at Edenville Dam. No water seepage exiting the downstream face of the embankment or eroding soil was evident in the time leading up to the breach, ruling this mechanism out as the main cause of failure.

The forensic team determined that the actual cause of the failure was static liquefaction, a relatively unusual mechanism for an earthen dam. Soils are kind of weird, but don’t tell that to geotechnical engineers. Because they are composed of many tiny particles, they can behave like solids in some cases and liquids in others. Of course, most of our constructed environment depends on the fact that soils mainly behave like solids, providing support to the things we build on top of them.

Liquefaction happens when soil experiences an applied stress, like the shaking of an earthquake, that causes it to behave like a liquid, and it mostly happens in cohesionless soils - those where the grains don’t stick together, such as sand. When a body of cohesionless soil is saturated, water fills the pore spaces between each particle. When a load is applied, the water pressure within the soil increases, and if the water can’t flow out fast enough, it forces the particles of soil away from each other. A cohesionless soil’s strength is derived entirely from the friction between interlocking particles, so when those grains no longer interlock, the ground loses its strength. Some of the most severe damage from earthquakes comes from this near-instant transformation of underlying soils from solid to liquid. Buildings sink into the ground, sewer lines float to the surface, and roads crumble without underlying support.
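The mechanics behind that loss of strength fit in one equation: effective stress equals total stress minus pore water pressure. Here’s an illustrative Python sketch with typical values for a sand (not measurements from Edenville):

  import math

  # Effective stress: sigma_eff = sigma - u. For a cohesionless soil, shear
  # strength is tau = sigma_eff * tan(phi), so as pore pressure u climbs toward
  # the total stress sigma, the strength heads toward zero: liquefaction.
  sigma = 100.0           # total vertical stress, kPa
  phi = math.radians(33)  # friction angle typical of a medium sand

  for u in (40.0, 70.0, 95.0, 100.0):  # rising pore water pressure, kPa
      sigma_eff = max(sigma - u, 0.0)
      tau = sigma_eff * math.tan(phi)
      print(f"u = {u:5.1f} kPa -> effective stress {sigma_eff:5.1f} kPa, shear strength {tau:4.1f} kPa")

When the pore pressure reaches the total stress, the grains carry no load at all, and soil that was holding back a reservoir a moment earlier flows like a heavy liquid.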

Liquefaction typically requires cyclic loading, like during an earthquake, or extreme, sudden displacements to trigger the flow. Gradual increases in loading will normally just cause the water within the soil to flow out, equalizing the pore water pressure. But some soils can reach a point of instability and liquefy under sustained or gradually increasing loading in certain circumstances. This phenomenon is known as static liquefaction. A good analogy is the difference between glass and steel. Both materials have a linear stress-strain curve at first. In simple terms, the harder you push, the harder they push back. But both reach a point of peak strength, beyond which the material fails or deforms. Well-compacted sand is like steel. It fails with ductile behavior. If you stress it beyond its strength, it deforms, but the strength is still there. In other words, if you want to keep deforming it, you have to keep applying a force at its peak strength. On the other hand, loose sand is like glass. If you push it beyond its peak strength, it fails with brittle behavior, suddenly losing most of its strength.

The independent forensic team took samples of the soils within the Edenville Dam embankment and subjected them to testing to see if they were liquefiable. The tests showed the brittle collapse behavior necessary for static liquefaction. The team also reviewed construction records and photographs, in which no compaction equipment was seen. They concluded that as the level of Wixom Lake rose that fateful May evening, it increased the hydraulic load on the embankment, putting more stress on the earthen structure than it had ever been asked to withstand. In addition, the higher levels may have introduced water from the reservoir to permeable layers of the upper embankment (as evidenced by the depression that formed before the failure), increasing seepage and thus increasing the pore water pressure of the saturated, uncompacted, sandy soils within the structure. Eventually, the peak strength of the embankment soil was surpassed, and a brittle collapse resulted, liquefying enough soil to breach a downstream section of the dam. A few seconds later, lacking support from the rest of the structure, the dam’s upstream face collapsed, and all of Wixom Lake began rushing through.

Edenville Dam was built in the 1920s, before most of our current understanding of geotechnical engineering and modern dam safety standards existed. Most dams are earthen embankment dams, but modern ones are built much differently than this one was. Embankments are constructed slowly from the bottom up in individual layers called lifts. This lets you compact and densify every layer before moving upward, rather than just piling up heaps of loose material. We use gentle slopes on embankments to increase long-term stability since soils are naturally unstable on steep slopes. We have strict control over the type of soil used to construct the embankment, constantly testing to ensure the properties match or exceed the assumptions used during design. We often build embankments in multiple zones: the core is made of clay soils that are highly resistant to seepage, while the outer shells have less stringent specifications. We include rock riprap or other armoring on the upstream face so that waves and swift water in the reservoir can’t erode the vulnerable embankment. And we include drains that both relieve pressure so it can’t build up within the soil and filter the seepage to prevent it from washing away soil particles from inside or below the structure. Edenville Dam actually did have a primitive internal drainage system made from clay tiles, but many of the drains in the area of the failure appeared to be missing in a recent inspection.

Although it seems like an outlier, the story of Edenville and Sanford Dams is not an unusual one. There are a lot of small, old dams across the United States built to generate hydropower in a time before everyone was interconnected with power grids. Over time, the revenue that comes from hydropower generation gradually declines, while the maintenance costs for the facility and the danger the dam poses to the public both increase. However, the reservoir created by the dam is now a fixture of the landscape, elevating property values, creating communities and tourism, and serving as habitat for wildlife. You end up with a mismatch of value where most of the dam’s benefits accrue to people who bear no responsibility for its upkeep and no liability for the threat it poses to downstream communities. Even owners with the best intentions find themselves utterly incapable of good stewardship. Combine all that with the fact that regulatory authorities are often underfunded and lack the resources to keep a good eye on every dam under their purview, and you get a recipe for disaster. After all, there’s only so much you can do to compel an owner to embark on a multimillion-dollar rehabilitation project for an aging dam when they don’t have the money to do it and won’t derive any of the benefits as a result.

Since the failure, the dam owner Boyce Hydro has filed for bankruptcy protection, and the counties took control of the dams, working with a nonprofit coalition of community members and experts to manage repair and restoration efforts. Of course, there’s a lot more to this story than just the technical cause of the failure, and the final Independent Forensic Team report will take a deeper dive into all the human factors that contributed. They expect that report to be released later in 2021. Dams are inherently risky structures, and it’s unfortunate that we have to keep learning that lesson the hard way. Thank you for reading, and let me know what you think!

October 19, 2021 /Wesley Crump