Practical Engineering


What Really Happened with the Substation Attack in North Carolina?

January 17, 2023 by Wesley Crump

[Note that this article is a transcript of the video embedded above.]

At around 7PM on the balmy evening of Saturday, December 3, 2022, nearly every electric customer in Moore County, North Carolina was simultaneously plunged into darkness. Amid the confusion, the power utility was quick to discover the cause of the outage: someone or someones had assaulted two electrical substations with gunfire, sending a barrage of bullets into the high voltage equipment. Around 45,000 customers were in the dark as Duke Energy began work to repair the damaged facilities, but it wouldn’t be until Wednesday evening, four days after the shooting, that everyone would be back online. That meant schools were shuttered, local businesses were forced to close during the busy holiday shopping season, a curfew was imposed, and the county declared a state of emergency to free up resources for those affected. The attack came as other utilities around the United States were reporting assaults on electrical substations, including strikingly similar instances in Oregon and Washington. Let’s talk about what actually happened and try to answer the question of what should be done. We even have exclusive footage of the substations that I’m excited to show you. I’m Grady and this is Practical Engineering. In today’s episode, we’re talking about the Moore County substation attacks.

Right in the geographic center of North Carolina, Moore County is home to just under 100,000 people. The county is maybe most famous for the Pinehurst Resort, a historic golf course that has hosted the US Open on several occasions. It also sits near Fort Bragg, one of the largest Army bases in the world. And here’s an overlay of Moore County’s transmission grid. Taking a look at this layout will help us understand this event a little better. By the way, this information is not secret - it’s publicly available, at least for now, in a few locations including the Energy Information Administration website and OpenStreetMap, and I’ll discuss the implications of that later in the video.

Two 230 kilovolt (or kV) transmission lines come into Moore County from the southwest and connect to the West End Substation near Pinehurst. One of the lines terminates here while the other continues to the northwest, without making any other connections in Moore County. These two 230 kV lines are the only connection to the rest of the power grid in the area. At the West End Substation, two power transformers drop the voltage to 115kV. From there, two 115kV lines head out in opposite directions to form a loop around Moore County. Distribution substations, the ones with transformers that lower the voltage further to connect to customers, are mostly spread out along this 115kV loop. So, essentially, most of Moore County has two links to the area power grid, and both of them are at a single substation, West End. And you might be able to guess one of the two substations that was attacked that Saturday evening. Interestingly, the other substation attacked was here in Carthage. Just looking at a map of the transmission lines, it would be easy to assume that Carthage provides a second link to the 230 kV transmission grid, but actually, it’s just a distribution substation on the 115 kV loop. The 230 kV line passes right by it.


Duke Energy (the owner of the substations) hasn’t shared many details about the attack. In their initial press release, they simply stated that “several large and vital pieces of equipment were damaged in the event.” Those investigating the attack, including the FBI, are also keeping details close to their chest. Our drone photographer had to have a police escort just to get this footage. But, we can use photos and clips of the substations to hypothesize some details of the event. Just take what I say with a grain of salt, because the folks in charge haven’t confirmed many details. It really looks like the attacker or attackers were specifically targeting the transformers. These are typically the largest and most expensive pieces of equipment (and the hardest to replace) in a substation. They do the job of changing the voltage of electricity as needed to move power across the network. And, even more specifically, it looks like the attackers went after the thin metal radiators of the transformers. Just like the radiator in your car, these are used on transformers to dissipate the heat that builds up within the main tank. But unlike the coolant system in a car, wet-type power transformers are filled with oil. If all that oil drains out of the transformer tank, it can cause the coils to overheat or arc, leading to substantial permanent damage.

Disabling the transformers was presumably the goal of the attack, but obviously, with power transformers being both so important and so difficult to replace, they are almost always equipped with protective devices. We don’t have to do a deep dive into the classic Recommended Practice for the Protection of Transformers Used in Industrial and Commercial Power Systems, but it’s enough to say that utilities put quite a bit of thought into minimizing the chance that something unexpected, whether it’s a short circuit or a bullet, can cause permanent damage to a transformer. Sensors can measure oil pressure, gas buildup, liquid levels, and more to send alarms to the utility when an anomaly like an oil leak occurs. And, some protective devices can even trigger the circuit breakers to automatically disconnect the transformer before it sustains permanent damage.
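
To make that idea a little more concrete, here is a minimal sketch of what that kind of protective logic might look like, written in Python. The sensor names and thresholds are hypothetical, not taken from any real relay or from Duke Energy's equipment; real protection schemes use far more inputs and coordination.

```python
# Hypothetical sketch of transformer protective logic. Sensor names and
# thresholds are illustrative only; real schemes use many more inputs.

def evaluate_transformer(oil_level_pct, tank_pressure_kpa, dissolved_gas_ppm):
    """Return 'trip', 'alarm', or 'normal' from simple threshold checks."""
    # A rapid loss of oil (say, from a punctured radiator) is the most
    # serious condition: disconnect before the windings overheat or arc.
    if oil_level_pct < 60 or tank_pressure_kpa < 5:
        return "trip"    # open the circuit breakers automatically
    # Slower anomalies just raise an alarm so the utility can investigate.
    if oil_level_pct < 85 or dissolved_gas_ppm > 500:
        return "alarm"
    return "normal"

print(evaluate_transformer(oil_level_pct=55, tank_pressure_kpa=3, dissolved_gas_ppm=120))  # trip
```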

Whether it happened automatically or manually as a result of an alarm, the two 230 kV transformers in the West End substation were disconnected from the grid as a result of the shooting, and in doing so, the entire 115kV loop that goes around Moore County was de-energized, turning out the lights for the roughly 45,000 connected households and businesses. Aerial footage taken the day after the attacks shows the disconnect switches for the 230kV lines open, an easy visual verification that the transformers are de-energized. You can also see some disassembled radiators on site, presumably to replace the damaged ones on the transformers. It seems that the gunfire only damaged the transformer radiators, which is a good thing because those can usually be replaced and put back into service relatively easily. If the windings within the transformer itself were damaged, it would probably require replacement of the equipment. Transformers of this scale are rarely manufactured without an order, which means we don’t have a lot of spares sitting around, and the lead time can be months or years to get a new one delivered, let alone installed.

With at least three damaged transformers, the utility began working 24-hour shifts on a number of parallel repairs to restore power as quickly as possible. Again, they didn’t share many details of the restoration plans, so we can only talk about what we see in the footage. One of the more interesting parts of restoration involved bringing in this huge mobile substation. It seems that crews temporarily converted the Carthage substation so it could tap into the adjacent 230 kV line. The power passes through mobile circuit switches, a truck-mounted transformer, secondary circuit breakers, voltage regulators, and disconnects to feed the 115 kV loop. You can also see the cooling system of the mobile transformer is mounted at the back of the trailer to save space. With this temporary fix, and presumably some permanent repairs to the transformer radiators at West End, Duke Energy was able to restore service to all customers by the end of Wednesday, about 4 days after all this started. Knowing the extent of the damage, that’s an impressive feat! But they still have some work ahead of them. In this footage taken two weeks after the attack, you can see that one of the 230 kV transformers is back online while the other is still disconnected with all its radiators dismantled.

The FBI and local law enforcement are still working to find those involved in the incident, and there’s currently a $75,000 reward out for anyone who can help. Officials have stopped short of calling it an act of terrorism, presumably because we don’t know the motive of whoever perpetrated the act. The local sheriff said this person “knew exactly what they were doing,” and I tend to agree. It doesn’t take a mastermind to take some pot shots at the biggest piece of equipment in a switchyard, but this attack shows some sophistication. They targeted multiple locations, they specifically targeted transformers, and one of the substations they chose was critical to the distribution of power to nearly all of Moore County. It’s fairly safe to say that this person or persons had at least some knowledge about the layout and function of power infrastructure in the area… but that’s not necessarily saying much.

To an unknown but significant extent, power infrastructure gets its security through obscurity. It’s just not widely paid attention to or understood. But, almost all power infrastructure in the US is out in the open, on public display, a fact that is a great joy for people like me who enjoy spotting the various types of equipment. But, it also means that it’s just not that hard for bad actors to be deliberate and calculated about how and where to cause damage. With its sheer size and complexity, it would be impossible to provide physical security to every single element of the grid. But, protecting the most critical components, including power transformers, is prudent. That’s especially true for substations like West End that provide a critical link to the grid for a large number of customers. They already have a new gate up, but that’s probably just a start. I think it’s likely that ballistic resistant barriers will become more common at substations over time, and, of course, those added costs for physical security will be passed down to ratepayers in one way or another.


But it’s important to put this event in context as well. Attacks on the power grid are relatively rare, and they fall pretty low on the list of threats, even behind cybersecurity and supply chain issues. The number one threat to the grid in nearly every place in the US? The weather. If you experience an outage of any length, it's many times more likely to be Mother Nature than a bad actor with a gun. That’s not to say that there’s not room for improvement though, and this event highlights the need for making critical substations more secure and also making the grid more robust so that someone can’t rob tens of thousands of people of their lights, heat, comfort, and livelihood for four days with just a few well-placed bullets.


How Different Spillway Gates Work

January 03, 2023 by Wesley Crump

[Note that this article is a transcript of the video embedded above.]

In the heart of Minneapolis, Minnesota on the Mississippi River is the picturesque Upper Saint Anthony Falls Lock and Dam, which originally made it possible to travel upstream on the river past the falls starting in 1937. It’s a famous structure with a fascinating history, plus it has this striking overflow spillway with a stilling basin at the toe that protects the underlying sandstone from erosion. But there’s another dam just downstream that is a little less well-known and a little less scenic, aptly called the Lower Saint Anthony Falls Lock and Dam. Strangely, the spillway for the lower dam is less than half the width of the one above, even though they’re on the exact same stretch of the Mississippi River, subject to the same conditions and the same floods. That’s partly because, unlike its upstream cousin, the Lower Saint Anthony Falls dam is equipped with gates, providing greater control and capacity for the flow of water through the dam. In fact, dams all over the world use gates to control the flow of water through spillways.

If you ask me, there’s almost nothing on this blue earth more fascinating than water infrastructure. Plus I’ve always wanted to get a 3D printer for the shop. So, I’ve got the acrylic flume out, I put some sparkles in the water, and I printed a few types of gates so we can see them in action, talk about the engineering behind them, and compare their pros and cons. And I even made one type of gate that’s designed to raise and lower itself with almost no added force. But this particular type of gate was made famous in 2019, so we’ll talk about that too. I’m Grady, and this is Practical Engineering. On today’s episode, we’re talking about spillway gates.

Almost all dams need a way to release excess water when the reservoir is full. If you’ve ever tried to build an impoundment across a small stream or channel, you know how powerful even a small amount of flowing water can be. Modern spillways are often the most complex part of a dam because of the high velocities of flow. If not carefully managed, that fast-moving water can quickly tear a dam apart. The incredible damage at Oroville Dam in 2017 is a striking example of this. Although many dams use uncontrolled spillways where the water naturally flows through once the reservoir rises to a certain level, gated spillways provide more control over the flow, and so can allow us to build smaller, more cost-effective structures. There are countless arrangements of mechanical devices that have been used across the world and throughout history to manage the flow of water. But, modern engineering has coalesced around variations on only a few different kinds of gates. One of the simplest is the crest gate that consists of a hinged leaf on top of a spillway.

A primary benefit of the crest gate is that ice and debris flow right over the top, since there’s nothing for the flow to get caught on. Another advantage of crest gates is that they provide a lot of control over the upstream level, since they act like a weir with an adjustable top. So, you’ll often see crest gates used on dams where the upstream water level needs to be kept within a narrow range. For example, here in San Antonio we have the RiverWalk downtown. If the water gets too low, it won’t be very attractive, and if it gets too high, it will overtop the sidewalks and flood all the restaurants. So, most of the dams that manage the flow of water in the San Antonio River downtown use steel crest gates like this one. Just up the road from me, Longhorn Dam holds back Lady Bird Lake (formerly Town Lake) in downtown Austin. Longhorn Dam has vertical lift gates to pass major floods, but the central gates on the dam that handle everyday flows are crest gates. Finally, the dam that holds back Town Lake in Tempe, Arizona uses a series of crest gates that are lowered during floods.

Crest gates are attached to some kind of arm that raises or lowers the leaf as needed. Most use hydraulic cylinders like the one in Tempe Town Lake Dam. The ones here in San Antonio actually use a large nut on a long threaded rod like the emergency jack that comes in some cars. You might notice I’m using an intern with a metal hook to open and close the model crest gate, but most interns aren’t actually strong enough to hold up a crest gate at a real dam. In fact, one of the most significant disadvantages of crest gates is that the operators, whether hydraulic cylinders or something else, not only have to manage the weight of the gate itself but also the hydrostatic force of the water behind the gate, which can be enormous. Let’s do a little bit of quick recreational math to illustrate what I mean:

The gates at Tempe Town Lake are 32 meters or about 106 feet long and 6.4 meters or 21 feet tall. If the upstream water level is at the top of one of these gates, that means the average water pressure on the gate is around four-and-a-half pounds for every square inch or about 31,000 newtons for every square meter. Doesn’t sound like a lot, but when you add up all those square inches and square meters of such a large gate, you get a total force of nearly one-and-a-half million pounds or 660,000 kilograms. That’s the weight of almost two fully-loaded 747s, and by the way, Tempe Town Lake has eight of these gates. The hydraulic cylinders that hold them up have to withstand those enormous forces 24/7. That’s a lot to ask of a hydraulic or electromechanical system, especially because when the operation system fails on a crest gate, gravity and hydrostatic pressure tend to push the gate open, letting all the water out and potentially creating a dangerous condition downstream. The next kind of spillway gate solves some of these problems.
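
If you want to check that recreational math yourself, here it is written out as a short Python script. The gate dimensions come straight from the paragraph above; the only assumptions are fresh water and a water level right at the top of the gate.

```python
# Reproducing the rough hydrostatic calculation for one Tempe Town Lake gate.
# Gate dimensions come from the text; water is assumed fresh and level with
# the top of the gate.

RHO = 1000.0      # density of water, kg/m^3
G = 9.81          # gravitational acceleration, m/s^2

length = 32.0     # gate length, m (about 106 ft)
height = 6.4      # gate height, m (about 21 ft)

# Pressure increases linearly with depth, so the average pressure on the
# gate is the pressure at half the water depth.
avg_pressure = RHO * G * height / 2          # ~31,000 N/m^2 (about 4.5 psi)

# Total hydrostatic force = average pressure x wetted area of the gate.
force_n = avg_pressure * length * height     # newtons
force_lbf = force_n / 4.448                  # ~1.45 million pounds
equiv_mass_kg = force_n / G                  # ~660,000 kg

print(f"{avg_pressure:,.0f} N/m^2, {force_lbf:,.0f} lbf, {equiv_mass_kg:,.0f} kg")
```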

Radial crest gates, also known as Tainter gates, use a curved face connected to struts that converge downstream toward a hinge called a trunnion. A hoist lifts the gate using a set of chains or cables, and water flows underneath. My model being made from plastic means it kind of stays where it’s put due to friction, but full-scale radial gates are heavy enough to close under their own weight. That’s a good thing, because, unlike most crest gates, if the hoist breaks, the gate fails closed. The hoist is also mostly just lifting the weight of the gate itself, with the trunnion bearing the hydrostatic force of the water being held back. These features make radial gates so reliable that they’re used in the vast majority of gated spillways at large dams around the world. If you go visit a dam or see a swooping aerial shot of a majestically flowing spillway, there’s a pretty good chance that the water is flowing under a radial gate.

The trunnion that holds back all that pressure while still allowing the gate to pivot is a pretty impressive piece of engineering. I mean, it’s a big metal pin, but the anchors that hold that pin to the rest of the dam are pretty impressive. Water pressure acts perpendicular to a surface, so the hydrostatic pressure on a radial gate acts directly through this pin. That keeps the force off the hoist, providing low-friction movement. But it’s not entirely friction-free. In fact, the design of many older radial gates neglected the force of friction within the trunnion and needed retrofits later on. I mentioned the story of California’s Folsom Dam in a prior video. That one wasn’t lucky enough to get a structural retrofit before disaster struck in 1995. Operators were trying to raise one of the gates to make a release through the spillway when the struts buckled, releasing a wave of water downstream. Folsom Reservoir was half empty by the time they closed the opening created by the failed gate.

How did they do it? Stoplogs, another feature you’re likely to see on most large dams across the world. Just like all mechanical devices that could cause dangerous conditions and tremendous damage during a failure, spillway gates need to be regularly inspected and maintained. That’s hard to do when they’re submerged. The inspecting part is possible, but it’s hard to paint things underwater. In fact, it’s much simpler, safer, and more cost effective to do most types of maintenance in the dry. So we put gates on our gates. Usually these are simpler structures, just beams that fit into slots upstream of the main gate. Stoplogs usually can’t be installed in flowing water and are only used as a temporary measure to dewater the main gate for inspection or maintenance. I put some stoplog slots on my model so you can see how this works. I can drop the stoplogs into the slots one by one until they reach the reservoir level. Then I crack the gate open and the space is dewatered. You can see there’s still some leakage past the stoplogs, but that’s normal and those leaks can be diverted pretty easily. The main thing is that now the upstream face of the gate is dry so it can be inspected, cleaned, repaired, or repainted.

And if you look closely, it’s not just my model stoplogs that leak, but the gates too. In fact, all spillway gates leak at least a little bit. It’s usually not a big issue, but we can’t have them leaking too much. After all, there’s not much point in having a gate if it can’t hold back water. The steel components on spillway gates don’t just ride directly against the concrete surface of the spillway. Instead, they are equipped with gigantic rubber seals that slide on a steel plate embedded in the concrete. Even these seals have a lot of engineering in them. I won’t read you the entire Hydraulic Laboratory Report No. 323 - Tests for Seals on Radial Gates or the US Army Corps of Engineers manual on the Design of Spillway Tainter Gates, but suffice it to say, we’ve tried a lot of different ways to keep gates watertight over the years and have it mostly sealed up to a science now. Most gates use a j-bulb seal that’s oriented so that the water pressure from upstream pushes the seal against the embedded plate, making the gate more watertight. Different shapes of rubber seals can be used in different locations to allow all parts to move without letting water through where it’s not wanted.

In fact, there’s one more type of spillway gate I want to share where the seals are particularly important. Beartrap gates are like crest gates in that they have a leaf hinged at the bottom, but beartrap gates use two overlapping hinged leaves, and they open and close in an entirely different way. The theory behind a beartrap gate is that you can create a pressurized chamber between the two leaves. If you introduce water from upstream into this chamber, the resulting pressure will float the bottom leaf, pushing it upward. That, in turn, raises the upper leaf. The upstream water level rises as the gate goes up, increasing the pressure within the chamber between the gates. The two leaves are usually tied in a way that once fully open, they can be locked together. To lower the gates, the conduit to the upstream water is closed, and the water in the chamber is allowed to drain downstream, relieving the upward pressure on the lower leaf so it can slowly fall back to its resting position. It sounds simple in theory, but in practice this is pretty hard to get right.

I built a model of a bear trap gate that mostly works. If I open this valve on the upstream side, I subject the chamber to the upstream water pressure. In ideal conditions with no friction and watertight seals, this would create enough pressure to lift both leaves. In reality, it needs a little bit of help from the intern hook. But you can see that, as the water level upstream increases, the lower leaf floats upward as well. When the gates are fully opened, the leaves lock together to be self-supporting. Some old bear trap gates used air pressure in the chamber to give the gates a little bit of help going up. I tried that in my model and it worked like a charm. It took a few tries to figure out how much pressure to send, but eventually I got it down.

It’s not just my model bear trap gate that’s finicky, though. Despite the huge benefit of not needing any significant outside force to raise and lower the gates, this type of system has never been widely used. The chamber between the leaves is the perfect place for silt and sand to deposit. The gates are also quite difficult to inspect and maintain because you have to dewater the entire chamber and reroute flows. And because they were never widely used, there were never any off-the-shelf components, so anytime something needed to be fixed, it was a custom job. The world got to see a pretty dramatic example of the challenges associated with maintaining old bear trap gates in 2019 when one of the gates at Dunlap Dam near New Braunfels, Texas completely collapsed.

This dam was one of five on the Guadalupe River built in the 1930s to provide hydropower to the area. But over nearly a century that followed, power got a lot cheaper, and replacing old dams got a lot more expensive. Since the dam wasn’t built with maintenance in mind, it was nearly impossible to inspect the condition of the steel hinges of the gate. But that lack of surveillance caught up with the owner on the morning of May 14, 2019 when a security camera at the dam caught the dramatic failure of one of the gate’s hinges. The lake behind the dam quickly drained and kicked off a chain of legal battles, some of which are still going on today. Luckily, no one was hurt as a result of the failure. Eventually, the homeowners around the lake upstream banded together to tax themselves and rebuild the structure, a task that is nearly complete now more than three years later. Of course, there’s a lot more to this fascinating story, but it’s a great reminder of the importance of spillway gates in our lives and what can go wrong if we neglect our water infrastructure.


How This Bridge Was Rebuilt in 15 Days After Hurricane Ian

December 20, 2022 by Wesley Crump

[Note that this article is a transcript of the video embedded above.]

On September 28, 2022, Hurricane Ian made landfall on the western coast of Florida as a Category 4 storm, bringing enormous volumes of rainfall and extreme winds to the state. Ian was the deadliest hurricane to hit Florida since 1935. Over 100 people died as a result of flooding and over 2 million people lost power at some point during the storm. The fierce winds that sucked water out of Tampa Bay also forced storm surge inland on the south side of the hurricane, causing the sea to swell upwards of 13 feet or 4 meters above high tide. And that doesn’t include the height of the crashing waves. One of the worst hit parts of the state became a symbol for the hurricane’s destruction: the barrier island of Sanibel off the coast of Fort Myers. The island’s single connection to the mainland, the Sanibel Causeway, was devastated by Hurricane Ian to the point where it was completely impassable to vehicles. Incredibly, two weeks after hiring a contractor to perform repairs, the causeway was back open to traffic. But this fix might not last as long as you’d expect. How did they do it? And why can’t all road work be finished so quickly? Let’s discuss. I’m Grady, and this is Practical Engineering. In today’s episode, we’re talking about the Sanibel Causeway Hurricane Repairs.

What is a causeway anyway? You might think the only two options to get a road across a body of water are a bridge or a tunnel, but there’s actually a third option. You can build an embankment from compacted soil or rock that sits directly on the seabed and then construct a roadway on top of that. A path along reclaimed land like this is called a causeway, and the one between Fort Myers and Sanibel Island in Florida was first built in 1963 and rebuilt in 2007. But, a causeway has a major limitation compared to a bridge or tunnel: it doesn’t allow crossing of maritime traffic because it divides the waterway in two. So, the Sanibel Causeway has some bridges. And actually, for a structure called a causeway, it’s mostly bridges, three to be exact. Bridges C and B are long, multi-span structures that sit relatively low above the water. Bridge A, closest to the mainland, is a high-span structure to allow for tall sailboats to pass underneath. Islands 1 and 2 are the actual causeway parts of the causeway where the road sits at grade (or on the ground). Overall, the causeway is about 3 miles or 5 kilometers long, carries over three million vehicles a year, on average, and, critically, is the only way to drive a vehicle on or off Sanibel Island, which is home to about 6,000 people.

Each of the two causeway islands serves as a county park with beaches and places for fishing. The islands aren’t natural. They were built up in the 1960s by dredging sand and silt from the bay and piling it up above the water level. It’s pretty easy to see this on the aerial photos of the islands. They really are just slender stretches of dredged sediment sitting in the middle of the bay. But, they didn’t pile the sediment that high above the water. The top of the roadway along the islands is only around 7 feet or 2 meters above sea level. And here’s the thing about sand and silt. If you look at the range of earthen materials by particle size, the large ones like gravel and even coarse sand don’t erode quickly because they’re heavy, and the tiny ones like clay don’t erode quickly either because they’re sticky (they have cohesion), but right in the middle are the fine sands and silts that aren’t heavy or sticky, so they easily wash away. The storm surge and waves brought on by Hurricane Ian breached both of the causeway islands, violently eroding huge volumes of sand out to sea and leaving the roadways on top completely destroyed. But that wasn’t the only damage.

In between the island sections of roadway and the bridges are the approach ramps: compacted soil structures that transition from the low causeway islands up to and down from the elevated sections. Instead of using traditional earthen embankments as the approaches for each bridge, the 2007 project included retaining walls built using mechanically stabilized earth, or MSE. I have a few videos about how these walls work you can check out after this if you want to learn more. Basically, reinforcing elements within the soil allow the slopes to stand vertically on the bridge approaches, saving precious space on the small causeway islands and reducing the total load on the dredged sand below each approach. Concrete panels are used as a facing system to protect the vulnerable earthen structures from erosion. But, you know, these are meant to protect against rainfall and strong winds, not hurricane-force waves and a 10-foot storm surge. With the full force of Hurricane Ian bearing down on them, three of the causeway’s approach ramps were heavily damaged: the one on the mainland side and the ones on the north side of each causeway island. The bridges themselves largely withstood the hurricane with minimal damage, thanks to good engineering. But, with the approaches and causeway sections ruined, Sanibel Island was completely cut off from vehicle access, making rescue operations, power grid repairs, and resupplies practically impossible.

Within only a few days of the hurricane’s passing, state and county officials managed to pull together a procurement package to solicit a contractor for the repairs. On October 10, they announced their pick of Superior Construction and Ajax Paving and their target completion date of October 31st. Construction crews immediately sprang into action with a huge mobilization of resources, including hundreds of trucks, earth moving machines, cranes, barges, dredges, and more than 150 people. Major sections of the job were inaccessible by vehicle, so crews and equipment had to be ferried to various damaged locations along the causeway. The power was still out in many places, and cell phone and internet coverage were spotty. Even coordinating meals and places to sleep for the crew was a challenge.

For the most part, the repairs were earthwork projects, replacing the lost soil and sand along the causeway islands and bridge approaches. A lot of the material was dredged back from the seabed to rebuild each of the two islands, but over 2,000 loads of rock and 4,000 tons (3,600 metric tons) of asphalt were brought in from the mainland. Just coordinating that many crews and resources was an enormous challenge both for FDOT and the contractor. Both made extensive use of drones to track the quantities of materials being transported and placed and to keep an eye on the progress across the 3-mile-long construction site. Progress continued at a breakneck pace at each of the damaged areas of the causeway to bring the subgrade back up to the correct level. Once the eroded soil was replaced, all the damaged sections were paved with asphalt to provide a durable driving surface. With the incredible effort and hard work of the contractor and its crews, the designers, FDOT and their representatives, emergency responders, relief workers, and many more, the causeway was reopened to the public on October 19th, a short 15 days after the project started and well ahead of the original estimated completion date.

You might be wondering, “If they can fix a hurricane-damaged road in two weeks, why does the road construction along my commute last for years?” And it’s a good question, because you actually sacrifice quite a lot to get road work done so quickly. First, you sacrifice the quality of the work. And that's not a dig on the contractor, but a simple reality of the project. These temporary repairs aren’t built to last; they’re built to a bare minimum level needed to get vehicles safely across the bay. Look closely and you won’t see the conveniences and safety features of modern roadways like pavement markings and stripes, guard rails, or shoulders.                                                          

These embankments constructed as bridge approaches are also not permanent. Something happens when you make a big pile of soil like this (even if you do a good job with compaction and keeping the soil moisture content just right): it settles. Over time and under the weight of the embankment, the grains of soil compress together and force out water, causing the top of the embankment to sink. But the bridge sits on piles that aren’t subjected to these same forces. So, over time, you end up with a mismatch in elevation between the approach and bridge. If you’ve ever felt a bump going up to or off a bridge, you know what I mean. In fact, this is one of the many reasons why you might see a construction site sitting empty. They’re waiting for the embankments to settle before paving the roadway. Oftentimes, a concrete approach slab is used to try and bridge the gap that forms over time, but I don’t see any approach slabs in the photos of the repair projects. That means it’s likely these approaches will have to be replaced or repaired fairly soon. In addition, the slopes of the approaches are just bare soil right now, susceptible to erosion and weathering until they get protected with grass or hard armoring.

The other sacrifice you make for a fast-track project like this is cost. We don’t know the details of the contract right now, but just looking at all the equipment at the site, we know it wasn’t cheap. It’s expensive to mobilize and operate that much heavy equipment, and the rental fees come due whether they sit idle or not. It’s expensive to pay overtime crews to maintain double shifts. It’s expensive to get priority from material suppliers, equipment rentals, work crews, fuel, et cetera, especially in a setting like a hurricane recovery where all those things are already in exceptionally high demand. And, it’s expensive to keep people and equipment on standby so that they can start working as soon as the crew before them is finished. Put simply, we pay a major premium for fast-tracked construction and an even bigger one for emergency repairs where the conditions require significant resources under high demands.


Of course, it wasn’t just the roadways damaged on Sanibel Island. The power infrastructure and many many buildings were damaged or destroyed as well. And it wasn't just Sanibel Island affected, but huge swaths of coastal Florida too (including nearby Pine Island that had an emergency bridge project of its own). There’s a long way to go to restore not just the roadway to Sanibel Island, but also the island itself. And that will involve a lot of tough decisions about where, how much, and how strong to rebuild. After all, Sanibel is a barrier island, a constantly changing deposit of sand formed by wind and waves. These islands are critical to protecting mainland coasts by absorbing wave energy and bearing the brunt of storms. In fact, many consider barrier islands to be critical infrastructure, but development on the islands negates that critical purpose. That doesn’t mean the community doesn’t belong there; nearly every developed area is subject to disproportionate risk from some kind of destructive natural phenomenon. But it does obligate the planners and engineers involved in rebuilding to be thoughtful about the impacts hurricanes can have and how infrastructure can be made more resilient to them in the future.


What Is A Black Start Of The Power Grid?

December 06, 2022 by Wesley Crump

[Note that this article is a transcript of the video embedded above.]

November 1965 saw one of the most widespread power outages in North American history. On the freezing cold evening of the 9th, the grid was operating at maximum capacity as people tried to stay warm when a misconfigured relay tripped a breaker on a key transmission line. The loss of that single line cascaded into a loss of service for over 30 million people in the northeast US plus parts of Ontario in Canada. Restoring electricity to that many people is no simple task. In this case, the startup began with a little 12 megawatt gas generator in Southampton, New York. That’s about the capacity of four wind turbines, but it was enough to get power plants on Long Island back online, which were able to power up all of New York City, eventually returning service to all those 30 million people.

The grid is a little bit of a house of cards. It’s not necessarily flimsy, but if the whole thing gets knocked down, you have to rebuild it one card at a time and from the ground up. Restoring power after a major blackout is one of the most high stakes operations you can imagine. The consequences of messing it up are enormous, but there’s no way to practice a real-life scenario. It seems as simple as flipping a switch, but restoring power is more complicated than you might think. And I built a model power grid here in the studio to show you how this works. This is my last video in a deep dive series on widespread outages to the power grid, so go back and check out those other videos if you want to learn more. I’m Grady and this is Practical Engineering. In today’s episode we’re talking about black starts of the grid.

An ideal grid keeps running indefinitely. Maybe it sustains localized damage from lightning strikes, vehicle accidents, hurricanes, floods, and wayward squirrels, but the protective devices trigger circuit breakers to isolate those faults and keep them from disrupting the rest of the system. But, we know that no grid is perfect, and occasionally the damage lines up just right or the protective devices behave in unexpected ways that cascade into a widespread outage. I sometimes use the word blackout kind of freely to refer to any amount of electrical service disruption, but it’s really meant to describe an event like this: a widespread outage across most or all of an interconnected area. Lots of engineering, dedicated service from lineworkers, plenty of lessons learned from past mishaps, and a little bit of good fortune have all meant that we don’t see too many true blackouts these days, but they still happen, and they’re still a grid operator’s worst nightmare. We explored the extreme consequences that come from a large-scale blackout in a previous video. With those consequences in mind, the task of bringing a power grid back online from nothing (called a black start) is frightfully consequential with significant repercussions if things go wrong.

The main reason why black starts are so complicated is that it takes power to make power. Most large-scale generating plants - from coal-powered, to gas-powered, to nuclear - need a fair amount of electricity just to operate. That sounds counterintuitive, and of course configurations and equipment vary from plant to plant, but power generating stations are enormous industrial facilities. They have blowers and scrubbers, precipitators and reactors, compressors, computers, lights, coffee makers, control panels and pumps (so many pumps): lubrication pumps, fuel pumps, feedwater pumps, cooling water pumps, and much much more. Most of this equipment is both necessary for the plant to run and requires electricity. Even the generators themselves need electricity to operate.

I don’t own a grid scale, three-phase generator (yet), but I do have an alternator for a pickup truck, and they are remarkably similar devices. You probably already know that moving a conductor through a magnetic field generates a current. This physical phenomenon, called induction, is the basis for almost all electricity generation on the grid. Some source of motion we call the prime mover, often a steam-powered turbine, spins a shaft called a rotor inside a set of coils. But you won’t see a magnet on the rotor of a grid-scale generator, just like (if you look closely inside the case) you won’t see a magnet inside my alternator. You just see another winding of copper wire. Turns out that this physical phenomenon works both ways. If you put a current through a coil of wire, you get a magnetic field. If that coil is on a rotor, you can spin it like so.
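
If you like seeing the physics written down, the relationship being described here is Faraday's law of induction: the voltage induced in a coil is proportional to how quickly the magnetic flux through it changes, and to the number of turns in the coil.

```latex
% Faraday's law of induction for a coil with N turns
\mathcal{E} = -N \frac{\mathrm{d}\Phi_B}{\mathrm{d}t}
```

Spinning a magnetized rotor past a stationary coil is just a mechanical way of making that flux change over and over, sixty times per second on the North American grid.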

This is my model power plant. I got this idea from a video by Bellingham Technical College, but their model was a little more sophisticated than mine. Let me give you a tour. On the right we have the prime mover. Don’t worry about the fact that it’s an electric motor. My model power plant consumes more energy than it creates, but I didn’t want to build a mini steam turbine just for this demonstration. The thing that’s important is that the prime mover drives a 3-phase generator, in my case through this belt. And the generator you already saw is a car alternator that I “modified” to create alternating current instead of the direct current used in a vehicle. The alternator is connected to some resistors that simulate loads on the grid. And I have an oscilloscope hooked up to one of the phases so we can see the AC waveform. Yeah, all this is so we can just see that sine wave on the oscilloscope. It could have been a couple of tiny 3-phase motors; it could even have just been a signal generator. But, you guys love these models so I thought you deserved something slightly grander in scale. There are a few other things here too, including a second model power plant, but we’ll get to those in a minute.

The alternator I used in my model has two brushes of graphite that ride along the rotor so that we can supply current to the coil inside to create an electromagnet. This is called excitation, and it has a major benefit over using permanent magnets in a generator: it’s adjustable. Let’s power up the prime mover to see how it works. If there’s no excitation current, there’s no magnetic field, which means there’s no power. We’re just spinning two inert coils of wire right next to each other. But watch what happens when I apply some current to the brushes. Now the rotor is excited, and I have to say, I’m pretty excited too, because I can see that we’re generating power. As I increase the excitation current, we can see that the voltage across the resistor is higher, so we’re generating more power. Of course, this additional power doesn’t come for free. It also puts more and more mechanical load on the prime mover. You can see when I spin the alternator with no excitation current, it turns freely. But when I increase the current, it becomes more difficult to spin. Modern power plants adjust the excitation current in a generator to regulate the voltage of electricity leaving the facility, something that would be much harder to do in a device that used permanent magnets that don’t need electricity to create a magnetic field.

The power for the excitation system can come from the generator, but, like the other equipment I mentioned, it can’t start working until the plant is running. In fact, power plants often use around 5 to 10 percent of all the electricity they generate. That’s why a black start of a large power plant is often called bootstrapping, because the facility has to pick itself up by the bootstraps. It needs a significant amount of power both to start and maintain its own creation of power, and that poses an obvious challenge. You might be familiar with the standby generators used at hospitals, cell phone towers, city water pumps, and many other critical facilities where a power outage could have severe consequences. Lots of people even have small ones for their homes. These generators use diesel or natural gas for fuel and large banks of batteries to get started. Imagine the standby generator capacity that would be needed at a major power plant. Five percent of the nearest plant to my house, even at a quarter of its nameplate capacity, is 18 megawatts. That’s more than 100 of these.
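
Here's a rough back-of-the-envelope reading of those numbers. The nameplate capacity and the standby generator size are my own assumptions chosen to be consistent with the figures in the paragraph, not published values.

```python
# Back-of-envelope reading of the station-service numbers above.
# The nameplate and standby generator sizes are assumptions, not published values.

nameplate_mw = 1440                 # assumed nameplate capacity of a large plant
quarter_output_mw = nameplate_mw * 0.25        # running at a quarter of nameplate
station_service_mw = quarter_output_mw * 0.05  # ~5% consumed by the plant itself

standby_generator_mw = 0.150        # assume a ~150 kW commercial standby generator

print(station_service_mw)                          # 18.0 MW
print(station_service_mw / standby_generator_mw)   # 120 generators -> "more than 100"
```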

It’s just not feasible to maintain that amount of standby generation capacity at every power plant. Instead, we designate black start sources that can either spin up without support using batteries and standby devices or that can remain energized without a connection to the rest of the grid. Obviously, these blackstart power plants are more expensive to build and maintain, so we only have so many of them spread across each grid. Their combined capacity can only supply a small fraction of electricity demands, but we don’t need them for that during a blackout. We just need them to create enough power so that larger base load plants can spin up. Hydropower plants are often used as blackstart sources because they only need a little bit of electricity to open the gates and excite the generators to produce electricity. Some wind turbines and solar plants could be used as blackstart sources, but most aren’t set up for it because they don’t produce power 24-7.

But, producing enough power to get the bigger plants started is only the first of many hurdles to restoring service during a blackout. The next step is to get the power to the plants. Luckily, we have some major extension cords stretched across the landscape. We normally call them transmission lines, but during a blackout, they are cranking paths. That’s because you can’t just energize transmission lines with blackstart sources. First those lines have to be isolated so that you don’t inadvertently try to power up cities along the way. All the substations along a predetermined cranking path disconnect their transformers to isolate the transmission lines and create a direct route. Once the blackstart source starts up and energizes the cranking path, a baseload power plant can draw electricity directly from the line, allowing it to spin up.

One trick to speed up recovery is to blackstart individual islands within the larger grid. That provides more flexibility and robustness in the process. But it creates a new challenge: synchronization. Let’s go back to the model to see how this works. I have both generating stations running now, each powering their own separate grid. This switch will connect the two together. But you can’t just flip it willy nilly. Take a look at my oscilloscope and it’s easy to see that these two grids aren’t synchronized. They’re running at slightly different frequencies. If I just flip the switch when the voltage isn’t equal between the two grids, there’s a surge in current as the two generators mechanically synchronize. We’re only playing with a few volts here, so it’s a little hard to see on camera. If I flip the switch when the two generators are out of sync, they jerk as the magnetic fields equalize their current. If the difference is big enough, the two generators actually fight against each other, essentially trying to drive each other like motors. It’s kind of fun with this little model, but something like this in a real power plant would cause tremendous damage to equipment. So during a black start, each island, and in fact each individual power plant that comes online, has to be perfectly synchronized (and this is true outside of black start conditions as well).

I can adjust the speed of my motors to get them spinning at nearly the exact same speed, then flip the switch when the waveforms match up just right. That prevents the surges of power between the two systems at the moment they’re connected. You can see that the traces on the oscilloscope are identical now, showing that our two island grids are interconnected. One way to check this is to simply connect a light between the same phase on the two grids. If the light comes on, you know there’s a difference in voltage between them and they aren’t synchronized. If the light goes off and stays off, there’s no voltage difference, meaning you’re good to throw the breaker. Older plants were equipped with a synchroscope that would show both whether the plant was spinning at the same speed as the grid (or faster or slower) and whether the phase angle was a match. I bought an old one for this video, but it needs much higher voltages than I’m willing to play with in the studio, so let’s just animate over the top of it. Operators would manually bring their generators up to speed, making slight adjustments to match the frequency of the rest of the grid. But matching the speed isn’t enough, you also have to match the phase, so this was a careful dance. As soon as the synchroscope needle both stopped moving and was pointing directly up, the operator could close the breaker.
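
For anyone curious what that sync lamp is actually "seeing," here's a small Python sketch of two island grids running at slightly different frequencies. The 0.1 hertz offset and the voltage level are made-up numbers for illustration.

```python
# Two island grids at slightly different frequencies drift in and out of
# phase, so the voltage difference across a sync lamp swells and collapses.
# The 0.1 Hz offset and voltage level are illustrative, not measured values.
import math

F_A = 60.0        # Hz, island grid A
F_B = 60.1        # Hz, island grid B, running slightly fast
V_PEAK = 170.0    # volts, peak of a nominal 120 V RMS waveform

# Using sin(a) - sin(b) = 2*cos((a+b)/2)*sin((a-b)/2), the difference between
# the two waveforms is a ~60 Hz wave whose amplitude "beats" at 0.1 Hz.
def lamp_envelope(t):
    return abs(2 * V_PEAK * math.sin(math.pi * (F_A - F_B) * t))

for t in [0.0, 2.5, 5.0, 7.5, 10.0]:
    print(f"t = {t:4.1f} s  sync-lamp voltage envelope ~ {lamp_envelope(t):5.1f} V")

# The envelope hits zero every 10 seconds: the instant the waveforms line up
# and an operator (or a synchroscope pointing straight up) can close the breaker.
```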

During a black start, utilities can start restoring power to their customers, slowly matching generation capacity with demand as more and more power plants come online. Generally, the most critical loads will be prioritized during the recovery like natural gas infrastructure, communications, hospitals, and military installations. But even connecting customers adds complexity to restoration.

Some of our most power-hungry appliances only get more hungry the longer they’ve been offline. For example an outage during the summer means all the buildings are heating up with no access to air conditioning. When the power does come back on, it’s not just a few air conditioners ready to run. It’s all of them at once. Add that to refrigerators, furnaces, freezers, and hot water heaters, and you can imagine the enormous initial demand on the grid after an extended outage. And don’t forget that many of these appliances use inductive motors that have huge inrush currents. For example, here’s an ammeter on the motor of my table saw while I start it up. It draws a whopping 28 amps as it gets up to speed before settling down to 4 amps at no load. Imagine the demand from thousands of motors like this starting all at the exact same instant. The technical term for this is cold load pickup, and it can be as high as eight to ten times normal electrical demands before the diversity of loads starts to average out again, usually after about 30 minutes. So, grid operators have to be very deliberate about how many customers they restore service to at a time. If you ever see your neighbor a few blocks away getting power before you, keep in mind this delicate balancing act that operators have to perform in order to get the grid through the cold load pickup for each new group of customers that go online.
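
To put some hypothetical numbers on that balancing act, here's a tiny sketch of the arithmetic an operator faces when re-energizing a single distribution feeder. The feeder size and spare generation are invented; only the 8-10x multiplier comes from the paragraph above.

```python
# Rough illustration of cold load pickup on one distribution feeder.
# Feeder load and spare generation are hypothetical; the 8-10x multiplier
# is the range described above.

normal_feeder_load_mw = 5.0        # typical diversified demand on the feeder
cold_pickup_multiplier = 8         # low end of the 8-10x range

# Right after re-energizing, thermostat-driven loads all call for power at
# once, and motor inrush stacks on top, so the initial demand is much higher.
initial_demand_mw = normal_feeder_load_mw * cold_pickup_multiplier

spare_generation_mw = 20.0         # what the operator has online and uncommitted
print(initial_demand_mw)                             # 40.0 MW
print(initial_demand_mw <= spare_generation_mw)      # False: restore in smaller blocks
```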


The ability to black start a power grid quickly after a total collapse is so important because electricity is vital to our health and safety. After the 2003 blackout in the US, new reliability standards were issued, including one that requires grid operators to have detailed system restoration plans. That includes maintaining blackstart sources, even though it’s often incredibly expensive. Some standby equipment mostly does just that: stands by. But it still has to be carefully maintained and regularly tested in the rare case that it gets called into service. Also, the grid is incredibly vulnerable during a blackstart, and if something goes wrong, breakers can trip and you might have to start all over again. Utilities have strict security measures to try and ensure that no one could intentionally disable or frustrate the black start process. Finally, they do detailed analysis to make sure they can bring their grid up from scratch, including testing and even running drills to practice the procedures. All this cost and effort and careful engineering just to ensure that we can get the grid back up and running to power homes and businesses after a major blackout.


How Long Would Society Last During a Total Grid Collapse?

November 22, 2022 by Wesley Crump

[Note that this article is a transcript of the video embedded above.]

In February 2021, a winter storm that swept through Texas caused one of the most severe power crises in American history. The cold weather created shockingly high electricity demands as people tried to keep their homes warm. But it also caused problems with the power supply because power plants themselves and their supporting infrastructure weren’t adequately protected against freezing weather. The result was that Texas couldn’t generate enough power to meet demand. Instead, they would have to disconnect customers to reduce demand to manageable levels. But before grid operators could shed enough load from the system, the frequency of the alternating current dropped as the remaining generators were bogged down, falling below 59.4 hertz for over 4 minutes.

It might not seem like much, but that is a critical threshold in grid operations. It’s 1% below nominal. Power plants have relays that keep track of grid frequency and disconnect equipment if anything goes awry to prevent serious damage. If the grid frequency drops below 59.4 hertz, the clock starts ticking. And if it doesn’t return to the nominal frequency within 9 minutes, the relays trip! That means the Texas grid came within a bathroom break of total collapse. If a few more large power plants had tripped offline, or if too few customers had been shed from the system in time, it’s likely that the frequency would have continued to drop until every single generator on the grid was disconnected.
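
Here's a minimal sketch of that timer logic in Python, simplified from how real generator protection works (actual schemes use several thresholds with much shorter delays at lower frequencies). The only numbers taken from the text are the 59.4 hertz threshold and the 9-minute window.

```python
# Simplified under-frequency timer: below 59.4 Hz the clock runs; if the
# frequency doesn't recover within 9 minutes, protection trips the generator.
# Real relays use multiple thresholds with much shorter delays at lower frequencies.

THRESHOLD_HZ = 59.4          # 1% below the 60 Hz nominal
MAX_TIME_BELOW_S = 9 * 60    # 9 minutes

def should_trip(frequency_samples, dt_s=1.0):
    """frequency_samples: grid frequency readings taken every dt_s seconds."""
    time_below = 0.0
    for f in frequency_samples:
        if f < THRESHOLD_HZ:
            time_below += dt_s
            if time_below >= MAX_TIME_BELOW_S:
                return True      # disconnect the generator to protect it
        else:
            time_below = 0.0     # treat recovery above the threshold as a reset
    return False

# A stretch of a little over 4 minutes below the threshold, then recovery,
# roughly like the February 2021 event described above.
event = [59.3] * 260 + [59.9] * 120
print(should_trip(event))   # False, but uncomfortably close to the 540 s limit
```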

Thankfully, that nightmare scenario was avoided. Still, despite operators preventing a total collapse, the 2021 power crisis was one of the most expensive and deadly disasters in Texas history. If those four minutes had gone differently, it’s almost impossible to imagine how serious the consequences would be. Let’s put ourselves in the theoretical boots of someone waking up after that frigid February night in Texas, assuming the grid did collapse, and find out. I’m Grady, and this is Practical Engineering. In today’s episode, we’re talking about the impacts of blackouts on other infrastructure.

Every so often some loud noise wakes you from your sleep: a truck backfiring on the street outside, a baby crying, a cat knocking something off a shelf. But it’s a very different thing altogether to be awoken by silence, your unconscious mind telling you that the sounds you should be hearing are gone. It only takes a groggy minute to piece it together. The refrigerator is silent, no air is flowing through the heating register, the ceiling fan above your head is slowly coming to a stop. The power is out. You check your phone. It’s 4AM. Nothing you can really do but go back to sleep and hope they get it fixed by daylight.

Most of us have experienced a power outage at some point, but they’re usually short (lasting on the order of minutes or hours) and they’re mostly local (affecting a small area at a time). A wide area interconnection - that’s the technical term for a power grid - is designed that way on purpose. It has redundancies, multiple paths that power can take to get to the same destination, and power users and producers are spread out, reducing the chance that they could be impacted all at once. But having everyone interconnected is a vulnerability too, because if things go very wrong, everyone is affected. We’re in the midst of a deep dive series on wide scale outages to the power grid, and a mismatch between supply and demand (like what happened in Texas) is only one of the many reasons that could cause a major blackout. Natural disasters, engineering errors, and deliberate attacks can all completely collapse a grid, and - at least for the first few hours of an outage - you might not even know that what you’re experiencing is any more serious than a wayward tree branch tripping the fuse on the transformer outside your house.

You wake up 3 hours later, cold, sunlight peeking in through your bedroom window. The power is still off. You grab your cell phone to try and figure out what’s going on. It has a full battery from charging overnight, and you have a strong signal too. You try to call a friend, but the call won’t go through. You try a few more times, but still, nothing more than a friendly voice saying “All Circuits Are Busy.”

Information flows between people across the globe along a vast array of pathways, and they all use grid power to function. Fiber networks use switches and optical terminals distributed throughout the service area. Cable TV and DSL networks have nodes that each serve around 500 to 1,000 customers and require power. Cellular networks use base stations mounted on towers or rooftops. Major telecommunications facilities are usually on prioritized grid circuits and may even have redundant power feeds from multiple substations, but even during a blackout where the entire grid is completely disabled, you might still have service. That’s because most telecommunication facilities are equipped with backup batteries that can keep them running during a power outage for 4 to 8 hours. Critical facilities like cellular base stations and data centers often have an on-site backup generator. These generators have enough fuel to extend the resiliency beyond 24 to 48 hours. That said, major emergencies create huge demands on telecommunication services as everyone is trying to find and share information at once, so you might not be able to get through even if the services are still available. In the US, the federal government works with telecommunications providers to create priority channels so that 911 calls, emergency management communications, and other matters related to public safety can get through even when the networks are congested.

Since you’re trying to make a personal call and you aren’t enrolled in the Telecommunications Service Priority program, you’re not getting through. Just then, an emergency alert appears on your screen. It says that there’s a power grid failure and to prepare for an extended outage. The reality of the situation is just starting to set in. Since most people have a cell phone, wireless emergency alerts have become an important addition to the Emergency Alert System that connects various levels of government to TV, radio, satellite, and telephone companies to disseminate public warnings and alerts. During a blackout, sharing information isn’t just for likes on social media. It’s how we keep people safe, connect them with resources, and maintain social order. Two-way communications like cell phones and the internet might not last long during a grid outage, so one-way networks like radio and television broadcasts are essential to keep people informed. These facilities are often equipped with larger backup fuel reserves and even emergency provisions for the staff so that they can continue to operate during a blackout for weeks if necessary.

Jump ahead a couple of days. Your circumstances start to heavily dictate your experience. Even an outage of this length can completely upend your life if you, for example, depend on medication that must be refrigerated or electrically-powered medical equipment (like a ventilator or dialysis machine). But for many, a blackout on the order of a day or two is still kind of fun, a diversion from the humdrum of everyday life. Maybe you’ve scrounged together a few meals from what’s remaining in your pantry, enjoyed some candlelit conversations with neighbors, seen more stars in the night sky than you ever have in your life. But after those first 48 hours, things are starting to get more serious. You ponder how long you can stay in your home before needing to go out for supplies as you head into the kitchen to get a glass of water. You open the tap, and nothing comes out.

A public water supply is another utility highly dependent on a functioning electrical grid. Pumping, cleaning, and disinfecting water to provide a safe source to everyone within a city is a power-intensive ordeal. Water is heavy, after all, and just moving it from one place to another takes a tremendous amount of energy. Most cities use a combination of backup generators and elevated storage to account for potential emergencies. Those elevated tanks, whether they are water towers or just ground-level basins built on hillsides, act kind of like batteries to make sure the water distribution system stays pressurized even if pumps lose power. But those elevated supplies don’t last forever. Every state has its own rules about how much is required. In Texas, large cities must have at least 200 gallons or 750 liters of water stored for every connection to the system, and half of that needs to be in elevated or pressurized tanks so that it will still flow into the pipes if the pumps aren’t working. Average water use varies quite a bit by location and season, but that amount of storage is roughly enough to last a city two days under normal conditions. Combine the backup storage with the backup generation system at a typical water utility, and maybe they can stretch that to three or four days. Without a huge mobilization of emergency resources, water can quickly become the most critical resource in an urban area during a blackout. But don’t forget the related utility we depend on as well: sewage collection.
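If you’re curious where that two-day figure comes from, the arithmetic is simple. The storage number is the Texas rule quoted above; the daily demand is an assumed figure for stripped-down indoor use, so treat it as a ballpark.

```python
# Rough arithmetic behind the "about two days" estimate above.

storage_per_connection_gal = 200   # Texas minimum for large systems (from the text)
assumed_daily_use_gal = 100        # assumed essential indoor use per connection per day

days_of_supply = storage_per_connection_gal / assumed_daily_use_gal
print(f"Stored water lasts roughly {days_of_supply:.0f} days")   # ~2 days
```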

Lift stations that pump raw sewage and treatment plants that clean it to a level where it’s safe to release back into the environment are energy-intensive as well. Most states require that lift stations and treatment plants have backup power supplies or enough storage to avoid overflows during an outage, but usually those requirements are for short-term disruptions. When power is lost for more than a day or two, these facilities won’t be able to continue functioning without additional fuel and maintenance. Even in the best case scenario, that means raw wastewater in the sewers will have to bypass treatment plants and be discharged directly into waterways like rivers and oceans. In the worst case, sewers and lift stations will overflow, exposing the people within cities to raw sewage and creating a public health emergency.

Flash forward to a week after the start of the blackout, and any fun from the change of pace is long gone. You still keep your cell phone battery charged from your car, but you rarely get a signal and phone calls almost never connect. Plus, your car’s almost out of gasoline and the fuel at filling stations has long been sent to backup generators at critical facilities. You are almost certainly running low on food and water after a week, even if you’ve been able to share or barter with neighbors or visit one of the rare stores willing to open their doors and accept cash. By now, only the most prioritized facilities like hospitals and radio stations plus those with solar or wind charging systems still have a functioning backup power supply. Everything else is just dead. And now you truly get a sense of how complex and interconnected our systems of infrastructure are, because there’s almost nothing that can frustrate the process of restoring power more than a lack of power itself. Here’s what I mean:

Power plants are having trouble purchasing fuel because, without electricity to power data centers and good telecommunications, banks and energy markets are shut down. Natural gas compressors don’t have power, so they can’t send fuel to the plants. Railway signals and dispatch centers are down, so the coal trains are stopped. Public roadways are snarled because none of the traffic signals work, creating accidents and reducing the capacity at intersections. Even if workers at critical jobs like power plants, pipelines, and substations still have gas in their vehicles, they are having a really hard time actually getting to work. And even if they can get there, they might not know what to do. Most of our complicated infrastructure systems like oil and gas pipelines, public water systems, and the electrical grid are operated using SCADA - networked computers, sensors, and electronic devices that perform a lot of tasks automatically… if they have power. Even if you can get people to the valves, switches, pump stations, and tanks to help with manual operations, they might not know under which parameters to operate the system. The longer the outage lasts, the more reserves of water, fuel, food, medicine, and goods deplete, and the more systems break down. Each of these complicated systems is often extremely difficult to bring back online alone, and nearly impossible without the support of adjacent infrastructure.

Electricity is not just a luxury. It is a necessity of modern life. Even ignoring our own direct use of it, almost everything we depend on in our daily lives, and indeed the orderly conduct of a civil society, is undergirded by a functioning electrical grid. Of course, life as we know it doesn’t break down as soon as the lights go out. Having gone without power for three days myself during the Texas winter storm, I have seen first hand how kind and generous neighbors can be in the face of a difficult situation. But it was a difficult situation, and a lot of people didn’t come through on the other side of those three days quite as unscathed as I did.


Natural disasters and bad weather regularly create localized outages, but thankfully true wide-scale blackouts have been relatively few and far between. That doesn’t mean they aren’t possible, though, so it’s wise to be prepared. In general, preparedness is one of the most important roles of government, and at least in the US, there’s a lot we get right about being ready for the worst. That said, it makes sense for people to have some personal preparations for long-duration power outages too, and you can find recommendations for supplies to keep on hand at FEMA’s website. At both an institutional and personal level, finding a balance between the chance of disaster striking and the resources required to be prepared is a difficult challenge, and not everyone agrees on where to draw the line. Of course, the other kind of preparedness is our ability to restore service to a collapsed power grid and get everyone back online as quickly as possible. That’s called a black start, and it sounds simple enough, but there are some enormous engineering challenges associated with bringing a grid up from nothing. That’s the topic we’ll cover in the next Practical Engineering video, so make sure you’re subscribed so you don’t miss it. Thank you for watching, and let me know what you think.

November 22, 2022 /Wesley Crump

How Would a Nuclear EMP Affect the Power Grid?

November 08, 2022 by Wesley Crump


[Note that this article is a transcript of the video embedded above.]

Late in the morning of April 28, 1958, the USS Boxer aircraft carrier ship was about 70 miles off the coast of the Bikini Atoll in the Pacific Ocean. The crew of the Boxer was preparing to launch a high-altitude helium balloon. In fact, this would be the 17th high-altitude balloon to be launched from the ship. But this one was a little different. Where those first 16 balloons carried some instruments and dummy payloads, attached to this balloon was a 1.7 kiloton nuclear warhead, code named Yucca. The ship, balloon, and bomb were all part of Operation Hardtack, a series of nuclear tests conducted by the United States in 1958. Yucca was the first test of a nuclear blast in the upper limits of earth’s atmosphere. About an hour and a half after the balloon was launched, it reached an altitude of 85,000 feet or about 26,000 meters. As two B-36 Peacemaker bombers loaded down with instruments circled the area, the warhead was detonated.

Of course, the research team collected all kinds of data during the blast, including the speed of the shock wave, the effect on air pressure, and the magnitude of nuclear radiation released. But, from two locations on the ground, they were also measuring the electromagnetic waves resulting from the blast. It had been known since the first nuclear explosions that the blasts generate an electromagnetic pulse or EMP, mainly because it kept frying electronic instruments. But until Hardtack, nobody had ever measured the waves generated from a detonation in the upper atmosphere. What they recorded was so far beyond their expectations that it was dismissed as an anomaly for years. All that appears in the report is a casual mention of the estimated electromagnetic field strength at one of the monitoring stations being around 5 times the maximum limit of the instruments.

It wasn’t until 5 years later that the US physicist Conrad Longmire would propose a theory for electromagnetic pulses from high-altitude nuclear blasts that is still the widely accepted explanation for why they are orders of magnitude stronger than those generated from blasts on the ground. Since then, our fears of nuclear war have included not only the scenario of a warhead hitting a populated area, destroying cities and creating nuclear fallout, but also the possibility of one detonating far above our heads in the upper atmosphere, sending out an EMP strong enough to disrupt electronic devices and even take out the power grid. As with most weapons, the best and most comprehensive research on EMPs is classified. But, in 2019, a coalition of energy organizations and government entities called the Electric Power Research Institute (or EPRI) funded a study to try and understand exactly what could happen to the power grid from a high altitude nuclear EMP. It’s not the only study of its kind, and it’s not without criticism from those who think it leans optimistic, but it has the juiciest engineering details of all the research I could find. And the answers are quite a bit different than Hollywood would have you believe. This is a summary of that report, and it’s the first in a deep dive series of videos about large-scale threats to the grid. I’m Grady, and this is Practical Engineering. In today’s episode, we’re talking about the impact of a nuclear EMP on our power infrastructure.

A nuclear detonation is unwelcome in nearly every circumstance. These events are inherently dangerous and the physics of a blast go way beyond our intuitions. That’s especially true in the upper atmosphere where the detonation interacts with earth’s magnetic field and its atmosphere in some unique ways to create an electromagnetic pulse. An EMP actually has three distinct components, all formed by different physical mechanisms, that can have significantly different impacts here on Earth’s surface. The first part of an EMP is called E1. This is the extremely fast and intense pulse that immediately follows detonation.

The gamma rays released during any nuclear detonation collide with electrons, ionizing atoms and creating a burst of electromagnetic radiation. That’s generally bad on its own, but when detonated high in the atmosphere, earth’s magnetic field interacts with those free electrons to produce a significantly stronger electromagnetic pulse than if detonated within the denser air at lower altitudes. The E1 pulse comes and goes within a few nanoseconds, and the energy is somewhat jokingly referred to as DC to daylight, meaning it’s spread across a huge part of the electromagnetic spectrum.

The E1 pulse generally reaches anywhere within a line of sight of the detonation, and for a high-altitude burst, this can cover an enormous area of land. At the height of the Yucca test, that’s a circle with an area larger than Texas. A weapon at 200 kilometers in altitude could impact a significant fraction of North America. But, not everywhere within that circle experiences the strongest fields. In general, the further from the blast you are, the lower the amplitude of the EMP. But, because of earth’s magnetic field, the maximum amplitude occurs a little bit south of ground zero (in the northern hemisphere), creating this pattern called a smile diagram. But no one will be smiling to find out that they are within the affected area of a high altitude nuclear blast.
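If you want to see where footprints like that come from, here’s a quick line-of-sight estimate that treats earth as a smooth sphere and ignores the atmosphere entirely; it’s only meant to illustrate the geometry, not reproduce any weapons-effects model.

```python
import math

R_EARTH_KM = 6371.0

def footprint_radius_km(burst_altitude_km: float) -> float:
    """Distance to the horizon as seen from the burst altitude."""
    h = burst_altitude_km
    return math.sqrt(2 * R_EARTH_KM * h + h**2)

for h_km in (26, 200):   # ~85,000 ft (the Yucca test) and a 200 km burst
    r = footprint_radius_km(h_km)
    area = math.pi * r**2
    print(f"{h_km:>3} km burst: radius ~{r:,.0f} km, footprint ~{area:,.0f} square km")

# For comparison, Texas covers roughly 700,000 square kilometers.
```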

Although a weapon like this wouldn’t damage buildings, create nuclear fallout, be felt by humans, or probably even be visible to most, that E1 pulse can have a huge effect on electronic devices. You’re probably familiar with antennas that convert radio signals into voltage and current within a conductor. Well, for a strong enough pulse spread across a huge range of frequencies, essentially any metallic object will act like an antenna, converting the pulse into massive voltage spikes that can overwhelm digital devices. And, the E1 pulse happens so quickly that even devices meant to protect against surges may not be effective. Of course, with just about everything having embedded electronics these days, this has far reaching implications. But on the grid, there are really only a few places where an E1 pulse is a major concern. The first is with the control systems within power plants themselves. The second is communications systems used to monitor and record data to assist grid operators. The EPRI report focused primarily on the third hazard associated with an E1 pulse: digital protective relays.

Most folks have seen the breakers that protect circuits in your house. The electrical grid has similar equipment used to protect transmission lines and transformers in the event of a short circuit or fault. But, unlike the breakers in your house that do both the sensing for trouble and the circuit breaking all in one device, those roles are separate on the grid. The physical disconnecting of a circuit under load is done by large, motor controlled contactors quenched in oil or dielectric gas to prevent the formation of arcs. And the devices that monitor voltage and current for problems and tell the breakers when to fire are called relays. They’re normally located in a small building in a substation to protect them from weather. That’s because most relays these days are digital equipment full of circuit boards, screens, and microelectronics. And all those components are particularly susceptible to electromagnetic interference. In fact, most countries have strict regulations about the strength and frequency of electromagnetic radiation you can foist upon the airwaves, rules that I hope I’m not breaking with this device.

This is a pulse generator I bought off eBay just to demonstrate the weird effects that electromagnetic radiation can have on electronics. It just outputs a 50 MHz wave through this antenna, and you can see when I turn it on near this cheap multimeter, it has some strange effects. The reading on the display gets erratic, and sometimes I can get the backlight to turn on. You can also see the two different types of E1 vulnerabilities here. An EMP can couple to the wires that serve as inputs to the device. And an EMP can radiate the equipment directly. In both cases, this little device wasn’t strong enough to cause permanent damage to the electronics, but hopefully it helps you imagine what’s possible when high strength fields are applied to sensitive electronic devices.

The EPRI report actually subjected digital relays to strong EMPs to see what the effects would be. They used a Marx generator, which is a voltage-multiplying circuit, so I decided to try it myself. A Marx generator stores electricity in these capacitors as they charge in parallel. When triggered, the spark gaps connect all the capacitors in series to generate very high voltages, upwards of 80 or 90 kilovolts in my case. My fellow YouTube engineer Electroboom has built one of these on his channel if you want to learn more about them. Mine generates a high voltage spark when triggered by this screwdriver. Don’t try this at home, by the way. I didn’t design an antenna to convert this high voltage pulse into an EMP, but I did try a direct injection test. This cheap digital picture frame didn’t stand a chance. Just to clarify, this is in no way a scientific test. It’s just a fun demonstration to give you an idea of what an E1 pulse might be capable of.
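For a sense of the numbers, here’s the idealized Marx generator arithmetic. The stage count and charging voltage below are assumptions for illustration only; I’m not claiming they match the unit in the video.

```python
# Idealized Marx generator: N capacitors charge in parallel to V_charge, then
# the spark gaps switch them into series, stacking the voltages.

n_stages = 10        # assumed number of capacitor stages
v_charge_kv = 10.0   # assumed charging voltage per stage, in kilovolts

v_ideal_kv = n_stages * v_charge_kv
v_actual_kv = 0.85 * v_ideal_kv   # losses and stray capacitance reduce the real peak

print(f"Ideal output: {v_ideal_kv:.0f} kV; a realistic peak is closer to {v_actual_kv:.0f} kV")
```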

The E2 pulse is slower than E1 because it’s generated in a totally different way, this time from the interaction of gamma rays and neutrons. It turns out that an E2 pulse is roughly comparable to a lightning strike. In fact, many lightning strikes are more powerful than the E2 pulse that a high-altitude nuclear detonation could generate. Of course, the grid’s not entirely immune to lightning, but we do use lots of lightning protection technology. Most equipment on the grid is already hardened against some high voltage pulses such that lightning strikes don’t usually create much damage. So, the E2 pulse isn’t as threatening to our power infrastructure, especially compared to E1 and E3.

The final component of an EMP, called E3, is, again, much different from the other two. It’s really not even a pulse at all, because it’s generated in an entirely different way. When a nuclear detonation happens in the upper atmosphere, earth’s magnetic field is disturbed and distorted. As the blast dissipates, the magnetic field slowly returns to its original state over the course of a few minutes. This is similar to what happens when a solar storm disturbs earth’s magnetic field during a geomagnetic storm, and large solar events could potentially be a bigger threat to the grid than a nuclear EMP. In both cases, it’s because of the disturbance and movement of earth’s magnetic field. You probably know what happens when you move a magnetic field through a conductor: you generate a current. We call that coupling, and it’s essentially how antennas work. And in fact, antennas work best when their size matches the size of the electromagnetic waves.

For example, AM radio uses frequencies down to around 540 kilohertz. That corresponds to wavelengths that can be upwards of 1800 feet or 550 meters, big waves. Rather than serving as a place to mount antennas like FM radio or cell towers, AM radio towers are the antenna. The entire metal structure is energized! You can often tell an AM tower by looking at the bottom because they sit atop a small ceramic insulator that electrically separates them from the ground. As you can imagine, the longer the wavelength, the larger an antenna has to be to couple well with the electromagnetic radiation. And hopefully you see what I’m getting at. Electrical transmission and distribution lines often run for miles, making them the ideal place for an E3 pulse to couple and generate current. Here’s why that’s a problem.
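That wavelength figure falls straight out of the usual relationship between frequency and wavelength, lambda = c / f:

```python
C = 299_792_458   # speed of light in meters per second

def wavelength_m(freq_hz: float) -> float:
    return C / freq_hz

lam = wavelength_m(540e3)   # bottom of the AM broadcast band
print(f"540 kHz -> {lam:.0f} m (about {lam * 3.281:.0f} ft)")   # ~555 m, ~1,822 ft
```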

All along the grid we use transformers to change the voltage of electricity. On the transmission side, we increase the voltage to reduce losses in the lines. And on the distribution side, we lower the voltage back down to make it safer for customers to use in their houses and buildings. Those transformers work using electromagnetic fields. One coil of wire generates a magnetic field that passes through a core to induce current to flow through an adjacent coil. In fact, the main reason we use alternating current on the grid is because it allows us to use these really simple devices to step voltage up or down. But transformers have a limitation.
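As a quick refresher on the math, an ideal transformer’s voltages scale with the ratio of turns in its two coils. The numbers here are just an illustrative example, not values from any particular piece of grid equipment.

```python
# Ideal transformer relationship: V_secondary / V_primary = N_secondary / N_primary

v_primary_v = 7_200      # assumed distribution primary voltage
n_primary_turns = 1_800  # assumed turns on the primary winding
n_secondary_turns = 60   # assumed turns on the secondary winding

v_secondary_v = v_primary_v * n_secondary_turns / n_primary_turns
print(f"Secondary voltage: {v_secondary_v:.0f} V")   # -> 240 V, a typical service voltage
```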

Up to a certain point, most materials used for transformer cores have a linear relationship between how much current flows and the strength of the resulting magnetic field. But, this relationship breaks down at the saturation point, beyond which additional current won’t create much further magnetism to drive current on the secondary winding. An E3 pulse can induce a roughly DC flow of current through transmission lines. So you have DC on top of AC, which creates a bias in the sine wave. If there’s too much DC current, the transformer core might saturate when current moves in one direction but not the other, distorting the output waveform. That can lead to hot spots in the transformer core, damage to devices connected to the grid that expect a nice sinusoidal voltage pattern, and lots of other funky stuff.
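If it helps to picture that half-cycle saturation, here’s a toy numerical sketch. The tanh function is just a stand-in for a real core’s magnetization curve, and the bias value is arbitrary; it only shows the shape of the effect.

```python
import numpy as np

t = np.linspace(0, 2 * np.pi, 1000)
ac = np.sin(t)      # normal magnetizing current (normalized)
dc_bias = 0.6       # assumed quasi-DC current from an E3-like event

def core_flux(current):
    """Saturating magnetization curve: linear at first, flat past saturation."""
    return np.tanh(current)

normal = core_flux(ac)
biased = core_flux(ac + dc_bias)

# With the DC bias, the positive half-cycle is pushed into saturation and gets
# clipped much harder than the negative one, distorting the waveform.
print(f"flux swing, normal: {normal.min():+.2f} to {normal.max():+.2f}")
print(f"flux swing, biased: {biased.min():+.2f} to {biased.max():+.2f}")
```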

So what are the implications of all this? If the E1 pulse damages some relays, that’s probably not a big deal on its own. There are often redundant paths for current to flow in the transmission system. That’s why it’s called the grid. But the more equipment that goes offline and the greater the stress on the remaining lines, the greater the likelihood of a cascading failure or total collapse. EPRI did tests simulating a one megaton bomb detonated at 200 kilometers in altitude. They estimated that about 5% of transmission lines could have a relay that gets damaged or disrupted by the resulting EMP. That alone probably isn’t enough to cause a large-scale blackout of the power grid, but don’t forget about E3. EPRI found that the third part of an EMP could lead to regional blackouts encompassing multiple states because of transformer core saturation and imbalances between supply and demand of electricity. Their modeling didn’t lead to widespread damage to the actual transformers, and that’s a good thing because power transformers are large, expensive devices that are hard to replace, and most utilities don’t keep many spares sitting around. All that being said, their report isn’t without criticism, and many believe that an EMP could result in far more damage to electric power infrastructure.
When you combine the effects of the E1 pulse and the E3 pulse, it’s not hard to imagine how the grid could be seriously disabled. It’s also easy to see how, even if the real damages to equipment aren’t that significant, the widespread nature of an EMP, plus its potential impacts on other systems like computers and telecommunications, has the potential to frustrate the process of getting things back online. A multi-day, multi-week, or even multi-month blackout isn’t out of the question in the worst-case scenario. It’s probably not going to cause a Hollywood-style return to the Stone Age for humanity, but it is certainly capable of causing a major disruption to our daily lives. We’ll explore what that means in a future video.

November 08, 2022 /Wesley Crump

Endeavour's Wild Journey Through the Streets of Los Angeles

October 18, 2022 by Wesley Crump

[Note that this article is a transcript of the video embedded above.]

In May of 1992, the Space Shuttle Endeavour launched to low earth orbit on its very first flight. That first mission was a big one: the crew captured a wayward communications satellite stuck in the wrong orbit, attached a rocket stage, and launched it back into space in time to help broadcast the Barcelona Summer Olympics. Endeavour went on to fly 25 missions, spending nearly a year total in space and completing 4,671 trips around the earth. But even though the orbiter was decommissioned after its final launch in 2011, it had one more mission to complete: a 12 mile (or 19 kilometer) trip through the streets of Los Angeles to be displayed in the California Science Center. Endeavour’s 26th mission was a lot slower and a lot shorter than the previous 25, but it was still full of fascinating engineering challenges. This October marks the 10-year anniversary of the nearly 3-day trip, so let’s reminisce on this incredible feat and dive into what it took to get the orbiter safely to its final home. I’m Grady, and this is Practical Engineering. On today’s episode, we’re talking about the Space Shuttle Endeavour Transport project.

As midnight approached on October 11, 2012, the Space Shuttle Endeavour began its harrowing (if somewhat sluggish) journey from LAX airport to the California Science Center near downtown LA. Although Endeavour traveled into space 25 times, launched a number of satellites, visited Mir, helped assemble the International Space Station, and even repaired the Hubble Telescope, it was never designed to navigate the busy streets of an urban area. But, despite spending so much of its career nearly weightless, it was too heavy for a helicopter, and it couldn’t be dismantled without causing permanent damage to the heat tiles, so the Science Center decided to foot the roughly $10 million bill to move the shuttle overland. The chilly late night departure from the hangar at LAX was the start of the transport, but Endeavour’s journey to Exposition Park really started more than a year beforehand.

In April 2011, NASA awarded Endeavour to the California Science Center, one of only four sites to receive a retired shuttle. The application process leading up to the award and the planning and engineering that quickly followed were largely an exercise in logistics. You see, Endeavour is about 122 feet (37 meters) long with a 78 foot (24 meter) wingspan, and it stood 58 feet (18 meters) to the top of the vertical stabilizer during transport. It also weighs a lot, around 180,000 pounds (80,000 kg), about as much as a large aircraft. Transporting the shuttle through Los Angeles would not be a simple feat. So, the Science Center worked with a number of engineering firms in addition to their heavy transport contractor (many of whom offered their services pro bono) to carefully plan the operation.

The most critical decision to be made was what route Endeavour would take through the streets of LA. The Shuttle couldn’t fit through an underpass, which meant it would have to go over the 405, the only major freeway along its path. It also would face nearly countless obstacles on its journey, including trees, signs, traffic signals, and buildings. 78 feet is wider than most two-lane city streets, and there are a lot of paths in Los Angeles that a Space Shuttle could never traverse. And this isn’t a sleepy part of the city either. Exposition Park and the Science Center are just outside downtown Los Angeles. The engineering team looked at numerous routes to get the Shuttle to its destination, evaluating the obstacles along the way. They ultimately settled on a 12 mile (or 19 kilometer) path that would pass through Inglewood and Leimert Park.

On the NASA side, they had been stripping the Shuttle of the toxic and combustible fuel system used for the reaction control thrusters, along with explosive devices like hatch covers, to make the vehicle safe for display at a museum. With Endeavour attached to the top of a 747 jet, NASA made a series of low altitude flyovers around California to celebrate the shuttle’s accomplishments and retirement before landing and offloading the vehicle at LAX, a short distance but a long journey away from its final destination. Three weeks later, the last leg of that journey began.

For its ride, the shuttle would sit on top of the Overland Transporter, a massive steel contraption built by NASA in the 1970s to move shuttles between Palmdale and Edwards Air Force Base. Even though it was designed for the shuttle program, this was Endeavour’s first ride on the platform. Before this move, the transporter had been parked in the desert for the last 30 years since the last shuttle was assembled in Palmdale in 1985. Just like the Shuttle Carrier Aircraft, a modified Boeing 747 that ferried the shuttles on its back, and the main fuel tank that attached to the orbiter during launch, the overland transporter used ball mounts that fit into sockets on the shuttle’s underside (two aft and one forward). The contractor used four Self-Propelled Modular Transporters (or SPMTs) to support and move the shuttle. These heavy haul platforms have a series of axles and wheels, all of which can be individually controlled to steer left or right, crab sideways, or even rotate in place (all of which were needed to get this enormous spaceship through the narrow city streets). The SPMTs used for the Endeavour transport also included a hydraulic suspension that could raise or lower the Shuttle to keep it balanced on uneven ground and help avoid obstacles. Each of the four SPMTs could be electronically linked to work together as a single vehicle. An operator with a joystick walked alongside the whole assembly, controlling the move with the help of a team of spotters all around the vehicle. And yes, it was slow enough to walk next to it the entire trip.

About 6 hours into the move, the Shuttle pulled up to the shopping center at La Tijera and Sepulveda Eastway, the first of several stops to allow the public a chance to see the spectacle while also giving crews time to coordinate ahead of the move. Huge crowds gathered all along the route during the move, especially at these pre-planned stops. In fact, the transport project may be one of the most recorded events in LA history, a fact I’m sure gave a little bit of trepidation to the engineers and contractors involved in the project.

Even though this move was pretty unique, super heavy transport projects aren’t unusual. We move big stuff along public roadways pretty regularly when loads are quote-unquote “non-divisible” and other modes of transportation aren’t feasible. I won’t go into a full engineering lesson on roadway load limits here, but I’ll give you a flavor of what’s involved. Every area of pavement sustains a minute amount of damage every time a vehicle drives over it. Just like bending a paperclip over and over eventually causes it to break, even small deflections in asphalt and concrete pavements eventually cause them to deteriorate. Those tiny damages add up over time, but some are tinier than others. As you might expect, the magnitude of that damage is proportional to the weight of the vehicle. But, it’s not a linear relationship. The most widely used road design methodology estimates that the damage caused to a pavement is roughly proportional to the axle load raised to the power of 4. That means it would take thousands of passenger vehicles to create the same amount of damage to the pavement as a single fully-loaded semi truck. And it’s not just pavement. Heavy vehicles can cause embankments to fail and underground utilities like sewer and water lines to collapse.
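Here’s what that fourth-power relationship implies, using assumed axle loads just for illustration:

```python
# Pavement damage scales roughly with (axle load)^4 under the common
# "fourth power law" rule of thumb.

car_axle_lb = 2_000      # assumed axle load of a passenger car
truck_axle_lb = 18_000   # the standard 18,000 lb single axle used in US pavement design

relative_damage = (truck_axle_lb / car_axle_lb) ** 4
print(f"One loaded truck axle does roughly {relative_damage:,.0f} car axles' worth of damage")
# -> 6,561, i.e. thousands of cars per truck pass, as noted above
```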

Because heavy vehicles wear out roadways so quickly, states have load limits on trucks to try and maintain some balance between the benefits of the roadway to commerce and the cost of maintenance and replacement. If you want to exceed that limit, you have to get a permit, which can be a pretty straightforward process in some cases, or can require you to do detailed engineering analysis in others. Of course, nearly every state in the US has different rules, and even cities and counties within the states can have requirements for overweight vehicles. Most states also have exemptions to load limits for certain industries like agricultural products and construction equipment. But, curiously, no state has an exemption for space shuttles. So, in addition to picking a route through which the orbiter could fit, a big part of the Endeavour transport project involved making sure the weight of the shuttle plus the transporter plus the SPMTs wouldn’t seriously damage the infrastructure along the way. The engineering team prepared detailed maps of all the underground utilities that could be damaged or crushed by the weight of the orbiter, and roughly 2,700 steel plates borrowed from as far as Nevada and Arizona were placed along the route to distribute the load.

Another place where Endeavour’s weight was a concern was the West Manchester Boulevard bridge over the 405. Around 6:30 PM, 19 hours into the move, Endeavour pulled up to the renowned Randy’s Donuts, its astronomically large donut a perfect prop for photos of such an enormous spacecraft. Photographers had a field day, and they had time to line up their shots perfectly because there was plenty of work to be done to prepare for the next leg. The shuttle’s permit wouldn’t allow it to be carried over the bridge using the four heavy SPMTs. Instead, they would have to lift it off the transporters and lower it onto a lightweight dolly to get over the 405. The SPMTs were sent over the bridge one at a time ahead of the shuttle. Then, longtime donor and Science Center partner Toyota got a chance to shine. The dolly was attached to a stock Toyota Tundra pickup truck that slowly pulled the shuttle across the bridge. Toyota got a nice commercial out of it, and that pickup still sits outside the Science Center as part of a demonstration about leverage (although sadly, it was broken when I was there). By midnight, the shuttle was over the bridge and crews were working to reconnect the SPMTs so that the journey could continue.

Through the night, Endeavour continued its trip eastward, passing the Inglewood City Hall. By 9:30 the next morning, the shuttle had reached its next stop, The Forum arena, where it was greeted by a marching band and speeches by former astronauts. But even though the shuttle was stopped, the crews supporting the move (both ahead of and behind the orbiter) continued working diligently. During preparation for the move, the engineers in charge had used a mobile laser-scanner along the route to create a 3D point cloud of everything that could be in the way. Rather than use crews of surveyors to walk the route and document potential collision points, which would have taken months, they used a digital model of the shuttle to perform clash detection on a computer. This effort allowed the engineering team to optimize the path of the shuttle and avoid as many traffic signals, light poles, street signs, and parking meters as possible. In some cases, the Shuttle would have to waggle down the street to clear impediments on either side, sometimes with inches to spare. The collision detection also helped engineers create a list of all the facilities that would need to be temporarily removed along the way by the Shuttle delivery team. Armies of workers ahead of the move used that list to dismantle and lay obstacles down, and armies of workers behind the move could immediately reassemble them to minimize disruptions, outages, and street closures.
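Conceptually, that clash detection boils down to distance checks between a 3D model of the load and the scanned point cloud. Here’s a minimal sketch of the idea using made-up data; it’s not the project team’s actual workflow or software.

```python
import numpy as np
from scipy.spatial import cKDTree

# Random stand-ins for a street-level laser scan and a point cloud sampled
# from the shuttle model, in meters.
rng = np.random.default_rng(0)
street_scan = rng.uniform(low=[0, -15, 0], high=[500, 15, 20], size=(100_000, 3))
shuttle_model = rng.uniform(low=[100, -12, 0], high=[140, 12, 18], size=(5_000, 3))

clearance_m = 0.5   # assumed minimum allowable gap

tree = cKDTree(street_scan)
distances, _ = tree.query(shuttle_model)   # nearest scanned point to each model point
clashes = shuttle_model[distances < clearance_m]

print(f"{len(clashes)} model points come within {clearance_m} m of a scanned obstacle")
```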

By around noon on Saturday (36 hours into the move), the Shuttle had reached one of the most challenging parts of the route: Crenshaw Drive. This narrow path has apartment buildings tight to the street, narrow straits for an overland space shuttle. Endeavour’s next stop was scheduled for 2PM at Baldwin Hills Crenshaw Plaza, only about 3 miles or 5 kilometers away. But, as the shuttle continued its northward crawl, it encountered several unexpected obstacles, mainly tree branches that had been assumed to be out of the way. By 5PM, the shuttle was still well south of the party as chainsaw crews worked to clear the path, but event organizers decided to go ahead with the performances. The Mayor took the stage to welcome Endeavour to Los Angeles, but the shuttle was still too far away to be seen.

Later that night, Endeavour finally made the difficult turn onto Martin Luther King Jr. Boulevard for its final eastward trek, dodging trees all along the way. The trees were probably the most controversial part of the entire shuttle move project, with around 400 needing to be cut down along the route (often in the median between travel lanes). Many in the affected communities felt that having a space shuttle in their science museum wasn’t worth the cost of those trees, several of which were decades old. To try and make up for the loss, the Science Center pledged to replace all the trees that were removed two-to-one and committed to maintain the new trees for at least two years, all at a cost of about $2 million. But the tall pines along MLK Boulevard were planted in honor of the famed civil rights leader and deemed too important to remove. Instead, the shuttle zigzagged its way between the trees on its way to the Science Center. 

Endeavour continued inching eastward toward Exposition Park on the last leg of its journey, facing a few delays from obstacles, plus a hydraulic leak on one SPMT. But, by noon that Sunday, the shuttle was making its turn into Exposition Park to a crowd of cheering spectators. It hadn’t hit a single object along the way. With an average speed of about 2 miles or 3 kilometers per hour, on par with the rest of LA’s traffic, the orbiter was nearing the end of its voyage and achieving the dream of any multi-million dollar engineering project: to come in only 15 hours behind schedule. By the end of the day on Sunday, the shuttle was safely inside its new home at the California Science Center.
It took only a few weeks for the center to open the space to the public, and 10 years later, you can still go visit Endeavour today (and you should!). Here’s a dimly lit picture of the channel’s editor (and my best friend) Wesley and me visiting in 2018. The shuttle sits on top of four seismic isolators on pipe support columns so that it can move freely during an earthquake. But the current building is only meant to be temporary. The Shuttle’s final resting place, the Samuel Oschin Air and Space Center, broke ground earlier this year. Eventually, Endeavour will be moved the short distance and placed vertically, poised for launch complete with boosters and main fuel tank in celebration of all 26 of its missions: 25 into space and 1 through the streets of Los Angeles.

October 18, 2022 /Wesley Crump

What's the Difference Between Paint and Coatings?

October 04, 2022 by Wesley Crump

[Note that this article is a transcript of the video embedded above.]

There’s a popular myth I’ve heard about several bridges (including the Golden Gate Bridge in San Francisco and the Forth Bridge in eastern Scotland): that crews paint the structure continuously from end to end. Once they finish at one end, they just start back up on the other. It’s not exactly true (at least for any structures I’m familiar with), but if you drive over any steel bridges regularly, it might seem like the painting never quite ends. That’s because, despite its ease of fabrication, relatively low cost, and incredible strength, steel has a limitation that we’re all familiar with: rust. Steel corrodes when exposed to the elements, especially when the elements include salty sea air.

I’m doing a deep dive series into corrosion engineering. We’ve talked about the tremendous cost of rust and how different materials exhibit corrosion, we’ve talked about protecting against rust using dissimilar metals like zinc and aluminum, and now I want to show you the other major weapon in the fight against rust. If you’ve ever thought, “This channel is so good, he could make it interesting to watch paint dry…” well, let’s test it out. I have the rustomatic 3000 set up for another corrosion protection shootout, plus a bunch of other cool demos as well. I’m Grady and this is Practical Engineering. On today’s episode we’re talking about high performance coatings systems for corrosion protection.

You might have noticed a word missing from that episode headline: “paint.” Of course, paint and coatings get used interchangeably, even within the industry, but there is a general distinction between the two. The former has the sole purpose of decoration. For example, nearly everyone has painted the walls of a bedroom to improve the way it looks. Coatings, on the other hand, are used for protection. They look like paint on the surface, but their real purpose is to provide a physical barrier between the metal and the environment, reducing the chance that it will come into contact with oxygen and moisture that lead to corrosion. Combined with cathodic protection (which I covered in a previous video), a coating system properly applied and well maintained can extend the lifespan of a steel structure pretty much indefinitely. Although paint and coatings often include similar ingredients, are applied in the same way, and usually look the same in the end, there are some huge differences as well, the biggest one being the consequences if things go wrong.

There are definitely right ways and wrong ways to paint a bedroom, but generally, the risk of messing it up is pretty small. Sometimes the color is not quite right or the coverage isn’t perfect, but those are pretty easy to fix. In the worst scenario, it’s only a few hundred dollars and a couple of days’ work to completely redo it. Not true with a coating system on a major steel structure. Corrosion is the biggest threat to many types of infrastructure, and if the protection system fails, the structure can fail too. It’s not just money on the line, either. It’s also the environment and public safety. Pipelines can leak or break, and bridges can collapse. Finally, it’s often no simple matter to reapply a coating system, because many structures are difficult to access and disruptive to shut down. Applying protective coatings is something you only want to do once every so often (ideally every 25 to 50 years for most types of infrastructure). That’s why the materials and methods used to apply them are so far beyond what we normally associate with painting and why the systems are often called “high-performance” coatings.

Let me show you what I mean. These are the standard US federal government specifications used in Department of Defense projects. We’re in Division 9, which is finishes, and if I scroll down, you can see we have a totally different document for paints and general coatings than the one used for high-performance coatings. There’s even a more detailed spec used for critical steel structures. If you take a peek into this specification, you’ll see that a significant portion of the work isn’t the coating application itself, but the preparation of the steel surface beforehand. It’s estimated that surface prep makes up around 70% of the cost of a coating system and that 80% of coating failures can be attributed to inadequate surface preparation. That’s why most coating projects on major steel structures start with abrasive blasting.

The process of shooting abrasive media through a hose at high pressure, often known as sandblasting, is usually the quickest and most cost-efficient way to clean steel of surface rust, old coatings, dirt, and contaminants, and cleanliness is essential for good adhesion of the coating. But, abrasive blasting does more than just clean; it roughens. Most high performance coatings work best on steel that isn’t perfectly smooth. The roughness, also known as the surface profile, gives the coating additional surface area for stronger adhesion. In fact, let’s just take a look at a random product data sheet for a high-performance primer, and you can see right there that the manufacturer recommends blast cleaning with a profile of 1.5 mils. That means the difference between the major peaks and valleys along the surface should be around one and a half thousandths of an inch or about 40 microns. It also means we need a way to measure that tiny distance in the field (in other words, without the help of scanning electron microscopy) to make sure that the steel is in the right condition for the best performance of the coating, and there are a few ways to do that.

One method uses a stylus with a sharp point that is drawn across the surface of the steel. The trace can be stored by a computer and the profile is the distance between the highest peak and lowest valley. Another option is just to use a depth micrometer with a sharp point that will project into the valleys to get a measure of the profile. Finally, you can use replica tape that has a layer of compressible foam. I have an example of several grit blasted surfaces here, and I can apply a strip of the replica tape. When I burnish the tape against the steel surface, the foam compresses to form an impression of the peaks and valleys. Here’s what that looks like in a cross-section view. When the tape is removed, we can measure its new thickness, subtract the thickness of the plastic liner, and get a measure of the surface profile. Here’s how the foam looks after burnishing on a relatively smooth surface and a very rough one. I used my depth micrometer to measure a profile of about 1 mil or 25 microns for the smooth surface and about 2.5 mil or 63 microns on the rough one.
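The replica tape arithmetic is about as simple as it gets; the liner thickness below is a typical value for this kind of tape, but treat it as an assumption rather than a spec:

```python
# The micrometer reads compressed foam plus the incompressible plastic liner,
# so subtract the liner to get the surface profile.

liner_mil = 2.0     # assumed thickness of the plastic backing
reading_mil = 4.5   # example micrometer reading on a blasted surface

profile_mil = reading_mil - liner_mil
print(f"Surface profile: {profile_mil:.1f} mils (about {profile_mil * 25.4:.0f} microns)")
# -> 2.5 mils, roughly what the rough sample measured above
```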

Just to demonstrate the importance of surface preparation, I’m going to do a little coating of my own here in my garage. I’ve got four samples of steel here: two I’ve roughened up using a flap disc on a grinder (in lieu of sand blasting), and two I’ve sanded to a fairly smooth surface. They aren’t mirror surfaces, but the surface profile is much lower than that of the roughened samples. I also have some oil and I’ll spread a thin coat on one of the rough samples and one of the smooth ones. I wiped the oil off with a paper towel, but no soap. So now we have all the phases of youth here: smooth and clean, rough and clean, rough and oily, and smooth and oily. I’ll coat one side of all four samples using this epoxy product, leaving the other sides exposed. Notice how the wet paint doesn’t even want to stick to the dirty surfaces, but it eventually does lay down. I put two coats on each sample, and now it’s into the rustomatic 3000, the silliest machine I’ve ever built. I go into more detail on this in the cathodic protection video if you want to learn more, but essentially it’s going to dip these samples in saltwater, let them dry, take a photo, and do it all over again roughly every 5 minutes to stress test these steel samples. We’ll leave it running for a few weeks and come back to see how the samples hold up against corrosion.

There are countless types of coating systems in use around the world to protect steel against corrosion. The chemistry and availability of new and more effective coatings continue to evolve, but there is somewhat of an industry standard system used in infrastructure projects that consists of three coats. The first coat, called the primer, is used to adhere strongly to the steel and provide the first layer of protection. Sometimes the primer coat includes particles of zinc metal. Just like using a zinc anode to provide cathodic protection, a zinc-rich prime coat can sacrifice itself to protect steel from corrosion if any moisture gets through. Next the midcoat provides the primary barrier to moisture and air. Epoxy is a popular choice because it adheres well and lasts a long time. Epoxy often comes in two parts that you have to mix together, like the product I used on those steel samples. But, epoxy has a major weakness: UV rays. So, most coating systems use a topcoat of polyurethane whose main purpose is to protect the epoxy midcoat from being damaged by the rays of the sun. It’s often clear to visible light, but ultraviolet light is blocked so it can’t damage the lower coats.

The coating manufacturer provides detailed instructions on how to apply each coating and under what environmental conditions it can be done. They’ve tested their products diligently and they don’t want to pay out warranties if something goes wrong, so coating manufacturers go to a lot of trouble to make sure contractors use each product correctly. They often have to wait for clear or cool days before coating to make sure each layer meets the specifications for humidity and temperature. Even the applied thickness of the product can affect a coating’s performance. A coating that is too thin may not provide enough of a barrier, and one that is too thick may shrink and crack. Manufacturers often give a minimum and maximum thickness of the coating, both before and after it dries. Wet film thickness can be measured using one of these little gauges. I just press it into the wet paint, and the highest-numbered tooth that picks up some of the coating indicates the thickness. Dry film thickness can also be measured in the field for quality control using a magnetic probe.
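One common rule of thumb (not something from the spec or data sheet shown earlier) ties wet and dry film thickness together through the coating’s volume solids, since only the solids remain after the solvent evaporates. Here’s a quick sketch with assumed numbers:

```python
volume_solids = 0.65    # assumed 65% volume solids for an epoxy midcoat
target_dft_mil = 5.0    # assumed specified dry film thickness

required_wft_mil = target_dft_mil / volume_solids
print(f"To end up with {target_dft_mil} mils dry, apply about {required_wft_mil:.1f} mils wet")
# -> roughly 7.7 mils of wet film
```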

Of course, once the coating is applied and dry, it has to be inspected for coverage. Coatings are particularly vulnerable to damage since they are so thin, and defects (called holidays) can be hard to spot by eye. Holiday detecting devices are used by coating inspectors to make sure there are no uncovered areas of steel. Most of them work just like the game Operation, but with higher voltage and fancier probes. If any part of the probe touches bare metal, an alarm will sound, notifying the inspector of even the tiniest pinhole or air bubble in the coating so it can be repaired. Once the system passes the quality control check, the structure can be put into service with the confidence that it will be protected from corrosion for decades to come.

Let’s check in on the rustomatic 3000 and see how the samples did. Surprisingly, you can’t see much difference in the time lapse view. I let these samples run for about 3 weeks, and the uncoated steel underwent much more corrosion than the coated area of each square. I also have dried salt deposits all over my shop now. But, the real difference was visible once the samples were cleaned up. I used a pressure washer to blast off some of the rust, and this was enough to remove the epoxy coating on all the samples except the rough and clean one. That sample took a little more effort to remove the coating. At first glance, the coating appears to have protected all the samples against this corrosion stress test, but if you look around the edges, the difference becomes obvious.

The rough and clean sample had the least intrusion of rust getting under the edges of the coating, and you can see that nearly the entire coated area is just as it was before the test. The smooth and clean sample had much more rust under the edges of the coating, which you can see in these semicircular areas protruding into the coated area. Similarly, the roughened yet oily sample had those semicircular intrusions of rust all around the perimeter of the coated area. The smooth and oily sample was, as expected, the worst of them all. Lots of corrosion got under the coating on all sides, including a huge area along nearly the entire bottom of the coated area. It’s not a laboratory test, but it is a conspicuous example of the importance of surface preparation when applying a coating for corrosion protection.

Like those samples, I’m just scratching the surface of high performance coating systems in this video. Even within the field of corrosion engineering, coatings are a major discipline with a large body of knowledge and expertise spread across engineers, chemists, inspectors, and coatings contractors, all to extend the lifespan and safety of our infrastructure.

October 04, 2022 /Wesley Crump

What Really Happened at the New Harbor Bridge Project?

September 20, 2022 by Wesley Crump

[Note that this article is a transcript of the video embedded above.]

In July of 2022, the Texas Department of Transportation issued an emergency suspension of work on the half-finished Harbor Bridge project in Corpus Christi, citing serious design flaws that could cause the main span to collapse if construction continues. The bridge is a high-profile project and, when constructed, might briefly be the longest cable-stayed bridge in North America. It’s just down the road from me, and I’ve been looking forward to seeing it finished for years. But, it’s actually not the first time this billion dollar project has been put on hold. In a rare move, TxDOT released not only their letters to the bridge developer, publicly castigating the engineer and contractor, but also all the engineering reports with the details of the alleged design flaws. It’s a situation you never want to see, especially when it’s your tax dollars paying for the fight. But it is an intriguing look into the unique challenges in the design and construction of megaprojects. Let’s take a look at the fascinating engineering behind this colossal bridge and walk through the documents released by TxDOT to see whether the design flaws might kill the project altogether. I’m Grady and this is Practical Engineering. In today’s episode, we’re talking about the New Harbor Bridge project in Corpus Christi, Texas. 

By the way, my new book comes out November 1st. Stay tuned to the end for a sneak preview.

Corpus Christi is a medium-sized city located on the gulf coast of south Texas. But even though the city is well down the list of the largest metropolitan areas in the state, it has one of the fastest growing cargo ports in the entire United States. The Port of Corpus Christi is now the third largest in the country by tonnage, due primarily to the enormous exports of crude oil and liquefied natural gas. But there are a couple of limitations to the port that are constraining its continued growth. One is the depth and width of the ship channel which is currently in the process of being deepened and widened. Dredging soil from the bottom of a harbor is an engineering marvel in its own right, but we’ll save that for another video. The second major limitation on the port is the harbor bridge.

Built in 1959, this bridge carries US Highway 181 over the Corpus Christi ship channel, connecting downtown to the north shore area. When it was constructed, the Harbor Bridge was the largest project ever to be constructed by the Texas Highway Department, later known as TxDOT. It was the pinnacle of bridge engineering and construction for the time, allowing the Army Corps of Engineers to widen the channel below so that the newest supertanker ships of the time could enter the port. The Harbor Bridge fueled a new wave of economic growth in the city, and it’s still an impressive structure to behold… if you don’t look too closely. Now, more than 60 years later, the bridge is a relic of past engineering and past needs. The Harbor Bridge has endured a tough life above the salty gulf coast, and the cost to keep corrosion from the bay at bay has increased substantially year by year. The bridge also lacks pedestrian and bicycle access, meaning the only way across the ship channel is in a watercraft or a motor vehicle (which is not ideal). Finally, the bridge is a bottleneck on the size of ships that can access the port, keeping them from entering or exiting fully-loaded and creating an obstacle to commerce within Corpus Christi. So, in 2011 (over a decade ago, now), the planning process began for a taller and wider structure.

The New Harbor Bridge project includes six-and-a-half miles (or about ten kilometers) of new bridge and roadway that will replace the existing Harbor Bridge over the Corpus Christi ship channel. And here’s a look at how the two structures compare. The new bridge will allow larger ships into the port with its 205 feet (or 62 meters) of clearance above the water. The bridge is being built just a short distance inland from the existing Harbor Bridge, which is a good thing for us because the Port Authority wouldn’t give us permission to cross the old bridge with a drone. The old bridge will eventually be demolished at the end of construction. The project also requires lots of roadway reconfigurations in downtown Corpus Christi that will connect the new bridge to the existing highway. The crown jewel will be the cable-stayed main span, supported by two impressive pylons on either side of the ship channel and stretching 1,661 feet or 506 meters. The bridge will feature 3 lanes of traffic each way plus a bicycle and pedestrian shared use path with a belvedere midspan that will give intrepid ramblers an impressive view of Corpus Christi Bay.

The project was procured as a design-build contract awarded to a joint venture between Dragados USA and Flatiron Construction, two massive construction companies, with a huge group of subcontractors and engineers to support the project. Design-build (or DB for those in the industry) really just means that the folks who design it and the folks who build it are on the same team and work (hopefully) in collaboration to deliver the final product. That’s a good thing in a lot of ways, and design-build contracts on large projects often end up moving faster and being less expensive than similar jobs that follow the traditional design-bid-build model where the owner hires an engineer to develop designs and then bids the designs out to hire a separate qualified contractor. When an engineer and contractor work together to solve problems collaboratively, you often end up with innovative approaches and project efficiencies that wouldn’t be possible otherwise. You also don’t have to wait for all the engineering to be finished before starting construction on the parts that are ready, so the two phases can overlap somewhat. However, as we’ll see, DB contracts come with some challenges too. When the engineer and contractor are in cahoots (legally speaking), the owner of the project is no longer in the middle, and so has less control over some of the major decisions. Also, DB contracts force the engineer and contractor to make big decisions about the project very early in the design process, sometimes before they’ve even won the job, which reduces the flexibility for changes as the project matures.

Construction on the New Harbor Bridge project started in 2016 with an original completion date of 2020. But, another bridge halfway across the country would soon throw the project into disarray. In March of 2018, a pedestrian bridge at Florida International University in Miami collapsed during construction, killing six people and injuring ten more. After an extensive investigation, the National Transportation Safety Board put most of the blame for the bridge collapse on a miscalculation by the engineer, FIGG, the same engineer hired by Flatiron and Dragados to design the New Harbor Bridge project in Texas. I should note that FIGG disputes the NTSB’s assessment and has released their own independent analysis pinning the blame for the incident on improper construction. Nevertheless, the FIU collapse led TxDOT to consider whether FIGG was the right engineer for the job.

In November of 2019, TxDOT asked the DB contractor to suspend design of the bridge so they could review the NTSB findings and conduct a safety review. And only a few months later, TxDOT issued a statement that they had requested their contractor to remove and replace FIGG Bridge Engineers from the design of the main span bridge. That meant a new engineering firm would have to review the FIGG designs, recertify all the engineering and calculations, and take responsibility for the project as the engineer of record. Later that year, FIGG would be fired from another cable-stayed bridge project in Texas, and in 2021 they were debarred by the Federal Highway Administration from bidding on any projects until 2029. It took about six months for the New Harbor Bridge DB contractor to procure a new engineer for the main span. The contractor said it expected no major changes to the existing design.

Construction on the project forged ahead through most of this shakeup with steady progress on both of the approach bridges that lead to the main span. These are impressive structures themselves with huge columns supporting each span above. The bridge superstructure consists of two rows of segmental box girders, massive elements that are precast from concrete at a site not far from the bridge. For each approach, these segments are lifted and held in place between the columns using an enormous self-propelled gantry crane. Once all the segments within a span are in place, steel cables called tendons are run through sleeves cast into the concrete and stressed using powerful hydraulic jacks. When the post-tensioned tendons are locked off, the span is then self-supporting and the crane can be moved to the next set of columns. This segmental construction is an extremely efficient way to build bridges. It’s used all over the world today, but it actually got its start right here in Corpus Christi. The JFK Memorial Causeway bridge was replaced in 1973 to connect Corpus Christi to North Padre Island over the Laguna Madre. It was the first precast segmental bridge constructed in the US. And if you’re curious, yes qualified personnel can get inside the box girders. It’s a convenient way to inspect the structural members to make sure the bridge is performing well over the long term. The Harbor Bridge project will include locked entryways to the box girders and even lights and power outlets within.

Work on the main span bridge didn’t resume until August of 2021, nearly 2 years after TxDOT first suspended the design of this part of the project. And by the end of 2021, both pylons were starting to take shape above the ground. Early this year, the contractor mobilized two colossal crawler cranes to join the tower cranes already set up at both the main span pylons. These crawlers were used to lift the table segments where the bridge superstructure connects to the approaches. The next step in construction is to begin lifting the precast box girder sections into place while crews continue building the pylons upward toward their final height. Rather than doing the entire span at once, these segments will be lifted into place using a balanced cantilever method, where each one is connected to the bridge from the pylon outward.

But that probably won’t happen anytime soon. TxDOT suspended construction on the main span in July and has carried on a very public feud with the contractor ever since, one that is far from resolved. During the shakeup with FIGG, TxDOT hired their own bridge engineer to review the designs and inform their decision that ultimately ended with FIGG fired from the project. When the DB contractor hired a new engineer to recertify the bridge designs, TxDOT kept their independent engineer to review the new designs. Unfortunately, many of the flaws identified in the FIGG design persisted into the current design of the bridge. In April of 2022, TxDOT issued the contractor a notice of nonconforming work. This is a legal document in a construction project used to let a contractor know that something they built doesn’t comply with the terms of the contract. And when that happens, it is the contractor’s job to fix the nonconforming work at their own cost. The notice included the entire independent review report and a summary table of 23 issues that TxDOT said reflected breaches of the contract, and it required their contractor to submit a schedule detailing the plan to correct the nonconforming work. But the contractor didn’t provide that schedule, or at least not to TxDOT’s standards. So, in July, TxDOT sent another letter invoking a clause in the contract that lets them immediately suspend work in an emergency situation that could cause danger to people or property, citing five serious issues with the design of the main span. So let’s take a look at them.

The first two of the alleged flaws are related to the capacity of the foundation system that supports each of the two pylons. Each tower sits on top of an enormous concrete slab or cap that covers roughly the area of two basketball courts and is 18 feet or 5-and-a-half meters thick. Below that slab are drilled shaft piles, each one about 10 feet or 3 meters in diameter and 210 feet or 64 meters deep. The most critical loads on the pylons are high winds that push the bridge and towers horizontally. You might not think that wind is powerful enough to affect a structure of this size, but don’t forget that Corpus Christi is situated on the gulf coast and regularly subject to hurricane-force winds. The independent reviewer estimated that, under some loading conditions, many of the piles holding a single tower would see demands more than 20% beyond their capacity. In other words, they would fail. The primary design error identified in the analysis was that the original engineer had assumed that the pile cap, that concrete slab between the tower and the piles, was perfectly rigid in the calculations.

All of engineering involves making simplifying assumptions during the design process. Structures are complicated, soils are variable, loading conditions are numerous. So, to make the process simpler, we neglect factors that aren’t essential to the design. And with a pile cap that is thicker than most single-story buildings are tall, you might think it’s safe to assume that the concrete isn’t going to flex much. But, we’re talking about extreme loads. When you take into account the flexibility of the pile cap, you find out that the stresses from the pylon aren’t distributed to each pile evenly. Instead, some become overloaded, and you end up with a foundation that the design reviewer delicately labeled as “exceedingly deficient to resist design loadings.”
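
To make that concrete, here’s a minimal sketch of the idea: a single row of piles modeled as springs under a cap, loaded by a vertical force and an overturning moment at the center. Treating the cap as an elastic beam instead of a rigid block changes how the load splits between the piles. Every number here is invented for illustration; this is not the bridge’s actual geometry or the reviewer’s analysis.

```python
import numpy as np

def pile_reactions(EI_cap, n_piles=7, spacing=9.0, k_pile=2.0e9, P=2.0e8, M=1.5e9):
    """Pile reactions (N, compression positive) for a single row of piles under a
    cap modeled as a beam on springs, loaded by a force P and moment M at the center."""
    L = spacing
    ndof = 2 * n_piles                        # [deflection, rotation] at each pile node
    K = np.zeros((ndof, ndof))
    ke = (EI_cap / L**3) * np.array([         # standard Euler-Bernoulli beam element
        [ 12.0,   6*L,   -12.0,   6*L  ],
        [ 6*L,  4*L**2,  -6*L,  2*L**2 ],
        [-12.0,  -6*L,    12.0,  -6*L  ],
        [ 6*L,  2*L**2,  -6*L,  4*L**2 ]])
    for e in range(n_piles - 1):              # one element between adjacent pile nodes
        d = [2*e, 2*e + 1, 2*e + 2, 2*e + 3]
        K[np.ix_(d, d)] += ke
    for i in range(n_piles):                  # each pile acts as a vertical spring
        K[2*i, 2*i] += k_pile
    F = np.zeros(ndof)
    mid = n_piles // 2
    F[2*mid] = -P                             # vertical load (downward)
    F[2*mid + 1] = M                          # overturning moment from wind
    u = np.linalg.solve(K, F)
    return -k_pile * u[0::2]                  # spring forces = pile reactions

EI = 5.0e11                                   # N*m^2, made-up cap flexural stiffness
flexible = pile_reactions(EI)
rigid = pile_reactions(EI * 1e6)              # a "rigid" cap is just a very stiff beam
for i, (rf, rr) in enumerate(zip(flexible, rigid)):
    print(f"pile {i}: rigid cap {rr/1e6:7.1f} MN   flexible cap {rf/1e6:7.1f} MN")
```

Run it with the very stiff cap and the reactions spread out almost linearly across the row; run it with a realistic stiffness and the piles nearest the load pick up a disproportionate share, which is the crux of the reviewer’s concern.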

The next critical design problem identified is related to the delta frame structures that transfer the weight of the bridge’s superstructure into each cable stay. These delta frames connect to the box girders below the bridge deck using post-tensioned tendons. But, these tendons can’t be used to resist shear forces, those sliding forces between the girders and delta frames. For those forces, according to the code, you need conventional steel reinforcement through this interface. Without it, a crack could develop, and the interface could shear apart.
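
For a sense of what that check looks like, here’s a hedged sketch using the generic shear-friction model, where interface capacity comes from reinforcement crossing the joint (roughly Vn = mu * Avf * fy). The demand, friction coefficient, and steel strength below are made-up placeholders, not values from the project’s calculations or from any specific design code.

```python
# Generic shear-friction check at a girder/delta-frame interface. All numbers are
# hypothetical placeholders, not values from the project's calculations or code.

def shear_friction_capacity_kN(Avf_mm2, fy_MPa, mu):
    """Nominal interface shear capacity (kN) from reinforcement crossing the joint:
    Vn = mu * Avf * fy (ignoring code-specific caps and strength reduction factors)."""
    return mu * Avf_mm2 * fy_MPa / 1000.0

demand_kN = 3000.0     # assumed factored shear demand across the interface
mu = 1.0               # assumed friction coefficient for an intentionally roughened joint
fy = 420.0             # MPa, assumed yield strength of mild reinforcement

# With no mild steel crossing the interface, shear friction provides nothing:
print(shear_friction_capacity_kN(0.0, fy, mu))            # -> 0.0 kN

# Steel area needed for the interface to carry the assumed demand by shear friction:
Avf_required = demand_kN * 1000.0 / (mu * fy)              # mm^2
print(f"required interface reinforcement: about {Avf_required:.0f} mm^2")
```

The point is simply that with zero mild steel across the interface, this particular load path contributes nothing, no matter how much post-tensioning is present.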

The fourth issue identified is related to the bearings that transfer the weight of the bridge deck near each pylon. The independent reviewer found that, under some load conditions, the superstructure could lift up rather than pushing down on the tower. That would not only cause issues with the bearings themselves, which need to resist movement in some directions while allowing it in others; it would also cause loads to redistribute, reducing the stiffness of the bridge, which depends on a rigid connection at each tower.

The final issue identified, and the most urgent, is related to the loads during construction of the bridge. Construction is a vulnerable time for a bridge like this, especially before the deck is connected between the pylons and the first piers of the approaches. The contractor is planning to lift derrick cranes onto the bridge deck that will be used to hoist the girder segments into place and attach them to each cable stay. TxDOT and their independent reviewer allege that the bridge isn’t strong enough to withstand these forces during construction and will need additional support or more reinforcement.

For the contractor’s part, they have denied that there are design issues and issued a statement to the local paper saying that they were “confident in the safety and durability of the bridge as designed.” In their letter to TxDOT, they cite their disagreements with the conclusions of the independent design reviewer and accuse TxDOT of holding back the results of the review while allowing them to continue with construction and ignoring attempts to resolve the differences. Because of TxDOT’s directive to suspend the work, they have already started demobilizing at the main span, reassigning crews, and reallocating resources. In August, TxDOT sent another letter notifying the contractor of a default in the contract and giving them 15 days to respond.

It’s hard to overstate the disruption of suspending work in this way. Construction projects of this scale are among the most complicated and interdependent things that humans do. They don’t just start and stop on a dime, and these legal actions will have implications for thousands of people working on the New Harbor Bridge project. The rental fees for those two crawler cranes alone are probably in the tens of thousands of dollars per day. Add up all the equipment and labor on a job this size, and you can see that the stakes are incredibly high when interrupting an operation like this. It’s never a good sign when the insurance company is cc’ed on the letter.

If the bridge design is truly flawed (and clearly TxDOT thinks that it is since they are sharing the evidence publicly), it’s a good thing that they stopped the work so the issues can be addressed before they turn into a dangerous situation for the public. But it also raises the question of why these concerns were handled in a way that let the contractor keep working even when TxDOT knew there were issues. Megaprojects like this are immensely complex, and their design and construction rarely go off without at least a few complications. There just isn’t as much precedent for the engineering or construction. But, we have processes in place to account for bumps in the road (and even bumps in the bridge deck). Those processes include thorough quality control on designs before construction starts.

So who’s at fault here? Is it the DB contractor for designing a bridge with apparently serious flaws, and then recertifying that design with a completely new engineering team? Or is it TxDOT for failing to catch the alleged errors (or at least failing to stop the work) until the very last minute, after hundreds of millions of taxpayer dollars have already been spent on construction that may now have to be torn down and rebuilt? The simple answer is probably both, but it’s a question that is far from settled, and the battle is sure to be dramatic for those who follow infrastructure, if not discouraging for those who pay taxes. The design issues are serious, but they’re not insurmountable, and I think TxDOT will almost certainly see the project through to completion in one way or another. Some work may have to be replaced while other parts of the project may be fine after retrofits. The best case scenario for everyone involved is for TxDOT to repair their relationship with their contractor and get the designs fixed instead of firing them and bringing on someone new. In the industry, they call that stepping into a dead man’s shoes, and there won’t be many companies jumping at the chance to take over this controversial job halfway through construction.
Two things are for sure (as they almost always are in projects of this magnitude): The bridge is going to cost more than we expected, and it’s going to take longer to build than the current estimated completion date in 2024. There’s actually another, much longer, cable-stayed bridge racing to finish construction in the US and Canada between Detroit, Michigan and Windsor, Ontario. Barring any major issues, it is currently scheduled to be complete by the end of 2024 and will probably now beat the Corpus Christi project. Every single person who crosses over either one of these bridges, once they’re complete, will do so as an act of trust in the engineers who designed them and the agencies who oversaw the projects. So, I’m thankful that TxDOT is at least being relatively transparent about what’s happening behind the scenes to make sure the New Harbor Bridge is safe when it’s finished. As someone who lives in south Texas, I’m proud to have this project in my backyard, and I’m hopeful that these issues can be resolved without too much impact to the project’s schedule or cost. The latest headlines make it seem like things are headed in that direction. Until then, if you’re in Corpus Christi crossing the ship channel, as you drive over the aging but still striking (and still standing) old Harbor Bridge, you’ll have a really nice view of an impressive construction site and what was almost the nation’s longest cable-stayed bridge.

September 20, 2022 /Wesley Crump

These Metals Destroy Themselves to Prevent Rust

September 06, 2022 by Wesley Crump

[Note that this article is a transcript of the video embedded above.]

This is the old Howard Frankland Bridge that carries roughly 180,000 vehicles per day across Old Tampa Bay between St. Petersburg and Tampa, Florida. A replacement for the bridge is currently under construction, but the Florida Department of Transportation almost had to replace it decades earlier. The bridge first opened for traffic in 1960, but by the mid-1980s it was already experiencing severe corrosion to the steel reinforcement within the concrete members. After less than 30 years of service, FDOT was preparing to replace the bridge, an extremely expensive and disruptive endeavor. But, before embarking on a replacement project, they decided to spend a little bit of money on a test, a provisional retrofit to try and slow down the corrosion of steel reinforcement within the bridge’s substructure. Over the next two decades, FDOT embarked on around 15 separate corrosion protection projects on the bridge. And it worked! The Howard Frankland Bridge lasted more than 60 years in the harsh coastal environment before needing to be replaced, kept in working condition for a tiny fraction of the cost of replacing it in the 1980s.

The way that bridge in Tampa was protected involves a curiously simple technique, and I’ve built a ridiculous machine in my garage so we can have a corrosion protection shootout and see how it measures up. I’m Grady and this is Practical Engineering. In today’s episode, we’re talking about cathodic protection for corrosion control.

Of all the structural metals in use today, most applications in infrastructure consist of mild steel (just plain old iron and carbon). There are so many applications where steel infrastructure comes into contact with moisture, including bridges, spillway gates, water tanks, and underground pipelines. That means there are so many opportunities for rust to deteriorate the constructed environment. We’re in the middle of a deep dive series on rust, and in the previous video about corrosion, I talked about its astronomical cost, which equates to roughly $1,400 per person per year, just in the United States alone. Of course, we could build everything out of stainless steel, but it’s about 5 times as expensive for the raw materials, and much more difficult to weld and fabricate than mild steel. Instead, it’s usually more cost effective to protect that mild steel against corrosion, and there are a number of ways to do it. Paint is an excellent way to create a barrier so that moisture can’t reach the metal, and I’ll cover coatings in a future video. But, there are some limitations to paint, including that it’s susceptible to damage and it’s not always possible to apply (like for rebar inside concrete). That’s where cathodic protection comes in handy.

Let me introduce you to what I am calling the Rustomatic 3000, a machine you’re unlikely to ever need or want. It consists of a tank full of salt water, and a shaft on a geared servo. These plastic arms lower steel samples down into the saline water and then lift them back up so the fan can dry them off, hopefully creating some rust in the process. Corrosion is an electrochemical process. That just means that it’s a chemical reaction that works like an electrical circuit. The two individual steps required for corrosion (called reduction and oxidation) happen at separate locations. This is possible because electrons can flow through the conductive metal from areas of low electric potential (called anodes) to those of high potential (called cathodes). As the anode loses electrons, it corrodes. This reaction is even possible on the same piece of metal because different parts of the material may have slightly different charges that drive the corrosion cell.

However, you can create a much larger difference in electric potential by combining different metals. This table is called the galvanic series, and it shows the relative inertness or nobility (in other words, resistance to corrosion) of a wide variety of metals. When any two of these materials are joined together and immersed in an electrolyte, the metal with lesser nobility will act as the anode and undergo corrosion. The more noble metal becomes the cathode and is protected from corrosion.
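
Here’s a rough sketch of how that selection logic plays out in code. The potentials are ballpark seawater values I’ve pulled together for illustration; real galvanic series tables vary with the specific alloy, temperature, and water chemistry.

```python
# Approximate galvanic potentials in seawater (volts vs. a reference electrode).
# These are ballpark figures for illustration only.
GALVANIC_POTENTIAL_V = {
    "magnesium":      -1.6,
    "zinc":           -1.0,
    "aluminum alloy": -0.9,
    "mild steel":     -0.6,
    "copper":         -0.35,
    "stainless 316":  -0.1,   # passive condition
}

def galvanic_couple(metal_a, metal_b):
    """Return (anode, cathode): the less noble (more negative) metal is the anode."""
    if GALVANIC_POTENTIAL_V[metal_a] < GALVANIC_POTENTIAL_V[metal_b]:
        return metal_a, metal_b
    return metal_b, metal_a

for candidate in ["magnesium", "zinc", "aluminum alloy", "copper"]:
    anode, cathode = galvanic_couple("mild steel", candidate)
    verdict = "protects the steel" if cathode == "mild steel" else "corrodes the steel faster"
    print(f"steel + {candidate:<14} -> anode: {anode:<14} ({verdict})")
```

Couple steel to magnesium, zinc, or aluminum and the steel becomes the cathode; couple it to copper and the steel corrodes even faster.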

You can see that steel sits near the bottom of the galvanic table, meaning it is less noble and more prone to corrosion. But, there are a few metals below it, including some commonly available ones like aluminum, zinc, and magnesium. And wouldn’t you know it, I have some pieces of aluminum, zinc, and magnesium here in my garage that I attached to samples of mild steel in this demo. We can test out the effects of cathodic protection in the Rustomatic 3000. Each time the samples are lifted to dry, the Arduino controlling the whole operation triggers a couple of cameras to take a photo. One of the samples is a control with no anode, then the other three have anodes attached consisting of magnesium, aluminum, and zinc from left to right. I’ll set this going and come back to it in a few minutes your time, three weeks my time.

One application of cathodic protection you might be familiar with is galvanizing, which involves coating steel in a protective layer of zinc. The coating acts kind of like a paint to physically separate the steel from moisture, but it also acts as a sacrificial anode because it is electrically coupled to the metal. Galvanizing steel is relatively inexpensive and extremely effective at protecting against corrosion, so nearly all steel structures exposed to the environment have some kind of zinc coating, including framing for buildings, handrails, stairs, cables, sign support structures, and more. Most outdoor-rated nails and screws are galvanized. You can even get galvanized rebar for concrete structures, and there are applications where it is worth the premium to extend the lifespan of the project.

But because it’s normally a factory process that involves dipping assemblies into gigantic baths of molten zinc, you can’t really re-galvanize parts after the zinc has corroded to the point where it’s no longer protecting the steel. Also, in aggressive environments like the coast or cold places that use deicing salts, a thin zinc coating might not last very long. In many cases, it makes more sense to use an anode that can be removed and replaced, like I’ve done in my demonstration here. Cathodic protection anodes like this are used on all kinds of infrastructure projects, especially those that are underground or underwater.

I let this demonstration run for 3 weeks in my garage. Each cycle lasted about 5 minutes, meaning these samples were dipped in salt water just about 6,000 times. And here’s a timelapse of those entire three weeks. Correct me if you find something better, but I think this might be the highest quality time lapse video of corrosion that exists on the internet.

It’s actually really pretty, but if you’re the owner of a bridge or pipeline that looks like this sample on the left, you’re going to be feeling pretty nervous. You can see that the unprotected steel rusts far faster than the other three and the rust attacks the sample much more deeply. The sample with the magnesium looks like it was most protected from corrosion, but watch the anode. It’s nearly gone after just those three weeks, and that makes sense. It’s the least noble metal on the galvanic series by a long shot. The samples with aluminum and zinc anodes do experience some surface corrosion, but it’s significantly less than the control.

In fact, this is exactly how the lifespan of the Howard Frankland bridge in Tampa was extended for so long. Zinc was applied around the outside of concrete girders and in jackets around the foundation piles, then coupled to the reinforcing steel within the concrete so it would act as a sacrificial anode, significantly slowing down the corrosion of the vital structural components.

Here’s a closeup of each sample after I took them down from the Rustomatic 3000, and you can really see how dramatic the difference is. The pockets of rust on the unprotected steel are so thick compared to the minor surface corrosion experienced by the samples with magnesium, aluminum, and zinc anodes. The anodes went through some pretty drastic changes themselves. After scraping off the oxides, the zinc anode is nearly intact, and you can even see some of the original text cast into the metal. The aluminum anode corroded pretty significantly, but there is still a lot of metal left. On the other hand, there’s hardly anything left of the magnesium anode after only three weeks. And here’s a look at the metal after I wire brushed all the rust off each sample. The difference in roughness is hard to show on camera, but it was very dramatic to the touch. There’s no question that the samples with cathodic protection lost much less material to corrosion over the duration of the experiment.

There’s actually one more trick to cathodic protection used on infrastructure projects. Rather than rely on the natural difference in potential between different materials, we can introduce our own electric current to force electrons to flow in the correct direction and ensure that the vulnerable steel acts as the cathode in the corrosion cell. This process is called impressed current cathodic protection. In many places, pipelines are legally required to be equipped with impressed current cathodic protection systems to reduce the chance of leaks which can create huge environmental costs. The potential between the pipe and soil is usually only a few volts, around that of a typical AA battery, but the current flow can be in the tens or hundreds of amps. If you look along the right-of-way for a buried pipeline, especially at road crossings, you can often see the equipment panels that hold rectifiers and test stations for the underground cathodic protection system. The Howard Frankland bridge also had some impressed current systems in addition to the passive protection to further extend its life, proving a valuable lesson we learn over and over again.
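
To put some rough numbers on that, here’s a back-of-the-envelope sketch of the power and energy involved in running one of those rectifiers. The voltage, current, and electricity price are assumptions for illustration only.

```python
# Rough power and energy for one impressed-current rectifier. Voltage, current,
# and electricity price are assumed, order-of-magnitude values.
volts = 4.0                   # V, pipe-to-soil driving potential
amps = 100.0                  # A, rectifier output current
price_per_kwh = 0.12          # USD, assumed electricity rate

power_kw = volts * amps / 1000.0
annual_kwh = power_kw * 24 * 365
print(f"rectifier output: {power_kw:.2f} kW")
print(f"annual energy: {annual_kwh:,.0f} kWh (~${annual_kwh * price_per_kwh:,.0f} per year)")
```

A few hundred dollars a year in electricity is cheap insurance compared to digging up a leaking pipeline, which gets at the broader lesson.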


The maintenance and rehabilitation of existing facilities is almost always less costly, uses fewer resources, and is less environmentally disruptive than replacing them. You don’t need a civil engineer to tell you that an ounce of prevention is worth a pound of cure (or whatever the metric equivalent of that is). It’s true for human health, and it’s true for infrastructure. Making a structure last as long as possible before it needs to be replaced isn’t just good stewardship of resources. It’s a way to keep the public safe and prevent environmental disasters too. Corrosion is one of the primary ways that infrastructure deteriorates over time, so cathodic protection systems are an essential tool for keeping the constructed environment safe and sound.

September 06, 2022 /Wesley Crump

What Really Happened During the Yellowstone Park Flood?

August 16, 2022 by Wesley Crump

[Note that this article is a transcript of the video embedded above.]

Every year, a deluge of tourists stream into Yellowstone National Park, America’s first and possibly most famous national park, and (I would argue) one of the most beautiful and geographically rich places on earth. But this past June of 2022, many of those tourists, along with some of the permanent residents of the area, found themselves at ground zero of a natural disaster. Torrential rainfall in Wyoming and Montana brought widespread flooding to the streams and rivers that flow through this treasured landscape and beyond. Homes, bridges, roadways, and utilities were swept away and over 10,000 people were evacuated. As of this video’s production, the National Park Service is still picking up the pieces and deciding how to restore the damaged infrastructure within the park, but while the NPS is busy with that monumental task, I wanted to share the engineering details we already know about what happened during the flood, how they might rebuild the roads and bridges stronger than before, and why they might not want to. I’m Grady and this is Practical Engineering. In today’s episode, we’re talking about the 2022 Montana and Yellowstone floods.

If you didn’t grow up with posters of the Old Faithful geyser on your classroom walls and watching Yogi Bear raid picnic baskets, that’s okay. I can give you a quick tour. Yellowstone National Park celebrates its 150th birthday this year, having been established in March of 1872. The park covers the northwestern corner of Wyoming and extends into Montana to the north and Idaho to the west. It’s a big place, roughly half the area of Wales, if that’s a helpful equivalency for those more familiar with the metric system. And there really is a lot to see. There are geysers here, here, and here where hot water and steam are ejected from the earth at regular or irregular intervals. In fact, half of the world’s geysers are located in the park. There are hot springs, vents, and mudpots here, here, and here. There is a massive natural lake that freezes over each winter here. Waterfalls here and here. Plus mountains, valleys, wolves, bears, bison, and lots more spread throughout the entire park.

A series of roadways connects the five park entrances to the various attractions, lodges, campsites, and of course, their respective parking lots. Indeed, for better or for worse, the park service estimates that 98% of visitors never get more than a half mile away from their car. We bucked that trend during our visit in 2019, but only for a single hike. Otherwise we stayed on the beaten path along with the roughly 3 to 4 million other visitors per year that cram into the same 1% of the park’s total area.

Here’s why that’s important to the story: Many of the most visited areas of Yellowstone are along the rivers and streams that run through the park, largely due to the unmistakable beauty of those rivers and streams as they flow into and over the striking geologic features. However, that proximity of development to the watercourses in the park became a serious and nearly deadly complication this June. On the night of the 12th into the morning of the 13th, an enormous storm system dropped rain across nearly the entire Yellowstone area and large parts of Montana to the north. Some areas saw more than 4 inches or 100 millimeters of rain in less than 24 hours. What’s worse is that a lot of those inches and millimeters fell on top of snow-covered ground, rapidly melting the snowpack and exacerbating runoff. These so-called “rain-on-snow” events have a long history of contributing to floods, and the 2017 Oroville Dam spillway failure that I’ve also covered on the channel was partly a result of rain-on-snow flooding.

All this rain and snowmelt concentrated in the streams and rivers that flow through the park. The US Geological Survey has several stream gages spread throughout the park and southern Montana, so we can take a look at the data to see exactly what happened. And the National Park Service posted an album of aerial photos on their Flickr page so we can compare the streamflow records to the damage on the ground.

A few places on the edge of the storm only saw a small spike in streamflow. For example, the Firehole River that carries water from Old Faithful only went up by about a foot and a half (or 45 centimeters). That river comes together with the Gibbon River along the West Entrance Road, where, again, the increase in streamflow wasn’t overwhelming. But near the northern border of the park, things were much more serious. The river in the Lamar Valley, sometimes called America’s Serengeti for the huge populations of bison and other large animals, came up nearly 9 feet or about 3 meters, briefly surpassing the “moderate flood” stage, which is the level at which the National Weather Service expects damage to buildings and infrastructure to begin. At locations where the valley narrows, the torrent of water eroded and destabilized the river bank, threatening, and in some cases destroying the adjacent roadway. The Soda Butte Picnic Area was hit the hardest in this part of the park.

The Gardner River at the north entrance of the park came up about 2 feet (60 centimeters) at the stream gage, but that number doesn’t quite capture the devastation. A good portion of the flood damage in the park happened along a single stretch of road where the Gardner River created massive washouts and rockslides. In many places, the entire road has been completely washed away where the river altered its course to flow through where the road once was.

Many of these streams flow together into the Yellowstone River, which runs through southern Montana, and flooding continued along this river out of the park. One employee housing structure fell completely into the river and floated away. The USGS estimated that the Yellowstone River exceeded the 500-year flood stage nearly all the way to Billings, wreaking havoc on the communities along the river. I’ve talked about this “blank-year” flood in a previous video, but I’ll explain it briefly here. Engineers can look at historical data to estimate a relationship between a flood’s magnitude and its likelihood of happening in a given year. The 500-year flood is just a point on this line. Obviously this is not an exact science (for a bunch of reasons), but it’s helpful for engineers, actuaries, and planners to think of flood magnitude in terms of its probability. Even though the name implies it can only happen once every five hundred years, the actual definition is a flood magnitude with a 0.2 percent chance of being exceeded in a given year.
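
If you want to see why the “once every five hundred years” reading is misleading, the arithmetic is quick. Here’s a short sketch; the planning horizons are just examples.

```python
# A 500-year flood has a 0.2% chance of being equaled or exceeded in any single year.
# The chance of seeing at least one over a longer horizon is 1 - (1 - p)^years.
annual_exceedance = 0.002

for years in (1, 30, 100, 500):
    p_at_least_once = 1 - (1 - annual_exceedance) ** years
    print(f"chance of at least one '500-year' flood in {years:3d} years: {p_at_least_once:.1%}")
```

Over a 500-year span, the chance of seeing at least one such flood is only about 63 percent, and over a typical 30-year mortgage it’s still around 6 percent.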

With this widespread and tremendous flooding, more than 10,000 people were evacuated from Yellowstone National Park. Although the National Weather Service had rain in the forecast, there was no expectation of such significant rainfall, forcing employees to scramble overnight to close roads and get people out of harm’s way. Remarkably, not a single person was injured or killed in Yellowstone as a result of the flooding. Also incredibly, on July 2 (only two-and-a-half weeks after the flood occurred), the park announced the north loop was back open to vehicular traffic. As of this video, the only major parts of Yellowstone that are still closed are the two northern entrances and their respective roads leading into the park. This is due in large part to the fact that there were already roadway contractors working on other projects when the floods happened. We don’t have all the details yet, but it’s likely the Federal Highway Administration was able to amend one of those contracts to get help repairing some of the flood damages expeditiously. 

Speaking of those damages, we still don’t know their full extent. The Park Service has a lot of work ahead of them to inspect the condition of backcountry bridges, trails, campsites, and park infrastructure. Over $60 million in “quick release” emergency funds has already been made available to help with urgent repairs, and some news agencies have speculated that the total repairs will cost up to a billion dollars based on costs of similar repair projects at national parks.

The highest priority repairs will be those along the northern entrances to the park where the rivers changed their courses into roadways. It’s not just the park that is affected by those closures but the communities outside the park that depend on seasonal tourism. Repairs in these areas will also be the most challenging and difficult to complete, likely requiring completely new roadway alignments that will come with environmental and archaeological studies, public feedback, permits, geotechnical studies, and careful design all before construction begins.
As an example, the Yellowstone River Bridge replacement project started planning and design in 2019 and was set to start construction this year until floods delayed the project, so that’s a roughly 4-year pre-construction phase. Some people might call this unnecessary bureaucracy and red tape, and certainly the communities that depend on Yellowstone traffic will be hoping for much speedier temporary repairs to these roadways. But, many might also consider this careful planning and design as good stewardship for one of the most beautiful places on earth. Hasty engineering of large infrastructure can be extremely damaging to natural systems like those in Yellowstone, and you don’t want to invest millions of dollars into repairs that might be subject to similar flooding in the future. After all, we build parks (and roads to parks) to get closer to the natural environment and all its wildness, and there’s almost nothing more natural or wild than a flood.

August 16, 2022 /Wesley Crump

You Spend More on Rust Than Gasoline (Probably)

August 02, 2022 by Wesley Crump

[Note that this article is a transcript of the video embedded above.]

In July of 1995, Folsom Lake, a reservoir created by Folsom Dam in Northern California, reached its full capacity as snow continued to melt in the upstream Sierra. With the power plant shut down for maintenance, the dam’s operator needed to open one of the spillway gates to maintain appropriate flow in the river below. As the gate began to rise, one side suddenly collapsed and swung open, allowing an uncontrolled torrent of water to flow past the gate down the spillway. With no way to control the flow, the water level of Folsom Lake began to drop… and drop and drop. By the time the deluge had slowed enough that operators could block the opening, nearly half the water stored in Folsom Lake had been lost.

Forensic investigation of the failure revealed that the gate malfunctioned because of corrosion of its pivot mechanism, called the trunnion, creating excessive friction. Essentially, the gate was stuck at its hinges. When the hoist tried to raise it, instead of pivoting upwards, the struts buckled, causing the gate to collapse. This gate operated flawlessly for 40 years before the failure in 1995. However, corrosion is an insidious issue. Because it occurs gradually, it’s hard to know when to sound the alarms. But, there are alarms to sound!

It’s been estimated that we lose roughly two-and-a-half trillion dollars per year globally because of the collective corrosion of the things we make and build. That is a colossal cost for a simple chemical reaction, and there’s an entire field of engineering dedicated to grappling with the problem. So, this is the first in a series of videos on corrosion engineering. Make sure you subscribe to catch them all. You probably don’t have a line item in your household budget for rust, but you might add one after this video. I’m Grady, and this is Practical Engineering. In today’s episode we’re talking about corrosion engineering for infrastructure.

It will come as no surprise to you that we build a lot of stuff out of metal. Entire periods of human civilization are named after the kinds of metals we learned to use, like the bronze age and the following iron age. These days nearly every humanmade object is made at least partly of metal or in a metallic machine, from devices and vehicles to the infrastructure we use everyday, including bridges, pipelines, sewers, pumps, tanks, gates, and transmission towers. Metals are particularly useful for so many applications, and we humans have invented a plethora of processes (like smelting, refining, and alloying) to assemble metallic molecules in various ways according to our needs. But, mother nature is resolved to dismantle (in due course) the materials we create through a process called corrosion. It seems so self-evident that structures deteriorate over time that it might not seem worth the fuss to worry about why. But, infrastructure is expensive and we all pay for it in some way or another, so we need it to last as long as possible. Not only that, but the failure of infrastructure has consequences to life safety and the environment as well, so keeping corrosion in check is big business. But what is corrosion anyway?

You’re here for engineering, not chemistry, so I’ll keep this brief. Corrosion is an electrochemical descent into entropy: a way for mother nature to convert a refined metal into a more stable form (usually an oxide). Corrosion requires four things to occur: an anode (that’s the corroding metal), a cathode (the metal that doesn’t corrode), a path for the electrical current between the two, and an electrolyte (typically water or soil) to complete the circuit. And the anode and cathode can even be different areas of the same piece of metal with slightly different electrical charges. The combination of these elements is a corrosion cell, and the processes that corrode metals in nature are nearly identical to those used in batteries to store electricity. In short, corrosion is a redox (that is, reduction-oxidation) reaction, which means electrons are transferred, in this case from the metal in question to a more stable (and usually much less useful) material called an oxide. For corroded iron or steel, we call the resulting oxide rust.

Here’s a little model bridge I made from steel wires in a bath of aerated salt water. I added a little bit of hydrogen peroxide to speed up the process so you could see it clearer on camera. This timelapse ran for a few days, and the corrosion is hard to miss. Of course, we don’t keep our bridges in aquariums full of salt water and hydrogen peroxide, but we do expose our infrastructure to a huge variety of conditions and configurations that create many forms of corrosion.

You’re probably familiar with uniform corrosion that happens on the surface of metal, like the beautiful green patina of copper oxides and other corrosion compounds covering the Statue of Liberty. But corrosion takes many forms, and corrosion engineers have to be familiar with all of them. These engineers know the common design pitfalls that exacerbate corrosion like not including drainage holes, leaving small gaps in steel structures, and mixing different types of metals. Corrosion can occur from the atmosphere or simply by allowing dissimilar metals to contact one another, called galvanic corrosion. Even using an ordinary steel bolt on a stainless steel object can lead to degradation over time. Corrosion can happen in crevices, pits, or between individual grains of the metal’s crystalline structure. Even concrete structures are vulnerable to corrosion of the steel reinforcement embedded within. When rebar rusts, it expands in volume, creating internal stresses that lead to spalling or worse.

Just as there are lots of kinds of corrosion, there are also many, many professionals with careers dedicated to the problem. After all, the study of corrosion and its prevention is a topic that combines various fields of chemistry, material science, and structural engineering. There’s even a major professional organization, the AMPP or Association for Materials Protection and Performance, which offers training and certifications, develops standards, and holds annual conferences for professionals involved in the fight against corrosion. Those professionals employ a myriad of ways to protect structures against this insidious force, and I’ll cover them in this series.

One of the simplest tools in the toolbox is just material selection. Not all metals corrode at the same rate or in the same conditions, and some barely corrode at all. Gold, silver, and platinum aren’t just used in jewelry because they’re pretty. These so-called noble metals are also prized because they aren’t very reactive to atmospheric conditions like moisture and oxygen. But, you won’t see many bridges built from gold, both because it’s too expensive and too soft.

Steel is the most common metal used in structures because of its strength and cost. It simply consists of iron and carbon. Steel is easy to make, easy to machine, easy to weld, and quite strong, but it’s also one of the materials most susceptible to corrosion. I’ve got another demonstration set up here in my garage. This is a tank full of salt water, a bubbler to keep the water oxygenated, and a few bolts made from different materials. I’ll let the time lapse run, and let you guess which bolt is made from steel. It doesn’t take long at all for that characteristic burnt orange iron oxide to show up. Even the steel bolt to the left that has a protective coating of zinc is starting to rust after a day or two of this harsh treatment. That humanmade protective layer on the galvanized bolt gives a hint about why the other ones shown are able to avoid corrosion in the saltwater. Unlike iron oxide that mostly flakes and falls off, there are some oxides that form a durable and protective film that keeps the metal from corroding further. This process is called passivation. Metals that passivate are corrosion resistant precisely because they’re so reactive to water and air.

In my demo I included several metals that undergo passivation, including an aluminum bolt (or aluminium for the non-North-Americans), which is typically quite corrosion resistant in air, but struggled against the saltwater. I also included a bronze bolt, which is an alloy of copper and (in this case) silicon. Finally, I included two types of stainless steel, created by adding large amounts of chromium (at least 10 percent or so) and often nickel to steel. There are two major types of stainless steel, called 304 and 316 in the US. 316 is more resistant to saltwater environments, but I didn’t really notice a difference between the two over the duration of my test.

I should also note that there are even steel alloys whose rust is protective! Weathering steel (sometimes known by its trade name of Corten Steel) is a group of alloys that are naturally resilient against rust because of passivation. A special blend of elements, including manganese, nickel, silicon, and chromium, doesn’t keep the steel from rusting, but it allows the layer of rust to stay attached, forming a protective layer that significantly slows corrosion. If you keep an eye out, you’ll see weathering steel used in many structural applications. One of my favorite examples is the Pennybacker bridge outside of Austin. The U.S. Steel Tower, the tallest building in Pittsburgh, Pennsylvania, was famously designed to incorporate corten steel in the building’s facade and structural columns. Rather than fireproof the columns with a concrete coating, the engineers elected to make them hollow and fill them with fluid so the corten steel could remain exposed as a showcase of the material. Corten steel is in wide use today. Architects love the oxidized look, engineers love that it’s just as strong as mild steel and almost as cheap, and owners love not having to paint it on a regular schedule. That saves a lot of cost. In fact, the cost of corrosion is the main point I want to express in this video.

In 1998, the Federal Highway Administration conducted a 2-year study on the monetary impacts of corrosion across nearly every industry sector, from infrastructure and transportation to production and manufacturing. They found that the annual direct costs of corrosion in the U.S. made up an astronomical $276 billion, over three percent of the entire GDP. Assuming we still spend roughly as much today (adjusted for inflation), that amounts to over 1,400 dollars per person per year, more than the average American spends on gasoline! Of course, you don’t get a monthly rust bill. Corrosion costs show up in increased taxes to pay for infrastructure; increased rates for water, sewer, electricity, and natural gas; increased costs of goods; and shorter lifespans for the metal things you buy (especially vehicles). But corrosion has costs that go even beyond money.
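
Before we leave the dollar figures behind, here’s roughly how that per-person number shakes out. The inflation factor and population below are my own approximations, so treat this as ballpark arithmetic rather than the study’s methodology.

```python
# Rough per-capita arithmetic. The inflation factor and population are approximate
# assumptions, not figures from the study itself.
cost_1998_usd = 276e9                 # FHWA study: annual direct cost of corrosion, 1998 dollars
inflation_1998_to_2022 = 1.8          # approximate CPI ratio (assumed)
us_population = 333e6                 # approximate

cost_today = cost_1998_usd * inflation_1998_to_2022
per_person = cost_today / us_population
print(f"~${cost_today / 1e9:.0f} billion per year, or roughly ${per_person:,.0f} per person")
```

That lands right around the per-person figure quoted above.
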
In 2014, the City of Flint, Michigan, began using water from the Flint River as their main source of drinking water to save money. The river water had a higher chloride concentration than the previous supply sourced from Lake Huron, making it more corrosive. Many cities add corrosion inhibitors to their water supply to prevent decay of pipe walls over time, but the City of Flint decided against it, again to save on costs. The result was that water in the city’s distribution system began leaching lead from aging pipes, exposing residents to this extremely dangerous heavy metal and sparking a water crisis that lasted for 5 years. A public health emergency, nearly 80 lawsuits (many of which are still ongoing), government officials fired and in some cases criminally charged, and upwards of 12,000 kids exposed to elevated levels of lead all resulted from poor management of corrosion. Sadly, it’s just a single example in a long line of infrastructure problems caused by corrosion. Metals are so necessary and important to modern society that we’ll never escape the problem, but the field of corrosion engineering continues to advance so that we can learn more about how to manage it and mitigate its incredible cost.

August 02, 2022 /Wesley Crump

What Happens When a Reservoir Goes Dry?

July 19, 2022 by Wesley Crump

[Note that this article is a transcript of the video embedded above.]

In June of 2022, the level in Lake Mead, the largest water reservoir in the United States, formed by the Hoover Dam, reached yet another all-time low of 175 feet or 53 meters below full, a level that hasn’t been seen since the lake was first filled in the 1930s. Rusted debris, sunken boats, and even human remains have surfaced from beneath the receding water level. And Lake Mead doesn’t stand alone. In fact, it’s just a drop in the bucket. Many of the largest water reservoirs in the western United States are at critically low storage with the summer of 2022 only just getting started. Lake Powell, upstream of Lake Mead on the Colorado River, is at its lowest level on record. Lake Oroville (of spillway failure fame) and Lake Shasta, two of California’s largest reservoirs, are at critical levels. The combined reservoirs in Utah are below 50% full. Even many of the westernmost reservoirs here in Texas are very low going into summer.

People use water at more or less a constant rate and yet, mother nature supplies it in unpredictable sloshes of rain or snow that can change with the seasons and often have considerable dry periods between them. If the sloshes get too far apart, we call it a drought. And at least one study has estimated that the past two decades have been the driest period in more than a thousand years for the southwestern United States, leading to a so-called “mega-drought.” Dams and reservoirs are one solution to this tremendous variability in natural water supply. But what happens when they stop filling up or (in the case of one lake in Oklahoma), what happens when they never fill up in the first place? I’m Grady, and this is Practical Engineering. On today’s episode we’re talking about water availability and water supply storage reservoirs. 

The absolute necessity of water demands that city planners always assume the worst case scenario. If you have a dry year (or even a dry day), you can’t just hunker down until the rainy weather comes back. So the biggest question when developing a new supply of water is the firm yield. That’s the maximum amount of water the source will supply during the worst possible drought. Here’s an example to make this clearer:

Imagine you’re the director of public works for a new town. To keep your residents hydrated and clean, you build a pumping station on a nearby river to collect that water and send it to a treatment plant where it can be purified and distributed. This river doesn’t flow at a constant rate. There’s lots of flow during the spring as mountain snowpack melts and runs off, but the flow declines over the course of the summer once that snow has melted and rain showers are more spread out. In really dry years, when the snowpack is thin, the flow in the river nearly dries up completely. In other words, the river has no firm yield. It’s not a dependable supply of water in any volume. Of course, there is water to be used most of the time, but most of the time isn’t enough for this basic human need. So what do you do? One option is to store some of that excess water so that it can keep the pumps running and the taps flowing during the dry times. But, the amount of storage matters.

A clearwell at a water treatment plant or an elevated water tower usually holds roughly one day’s worth of supply. Those types of tanks are meant to smooth out variability in demands over the course of a day (and I have a video on that topic), but they can’t do much for the reliability of a water source. If the river dries up for more than one day at a time, a water tower won’t do much good. For that, you need to increase your storage capacity by an order of magnitude (or two). That’s why we build dams to create reservoirs that, in some cases, hold trillions of gallons or tens of trillions of liters at a time, incredible (almost unimaginable) volumes. You could never build a tank to hold so much liquid, but creating an impoundment across a river valley allows the water to fill the landscape like a bathtub. Dams take advantage of mother nature’s topography to form simple yet monumental water storage facilities.

Let’s put a small reservoir on your city’s river and see how that changes the reliability of your supply. If the reservoir is small, it stays full for most of the year. Any water that isn’t stored simply flows downstream as if the reservoir wasn’t even there. But, during the summer, as flows in the river start to decrease, the reservoir can supplement the supply by making releases. It’s still possible that in those dry years, you won’t have a lot of water stored for the summer, but you’ll still have more than zero, meaning your supply has a firm yield, a safe amount of water you can promise to deliver even under the worst conditions, roughly equal to the average flow rate over the course of a dry year.

Now let’s imagine you build a bigger dam to increase the size of your reservoir so it can hold more than just a season’s worth of supply. Instead of simply making up a deficit during the driest few months, now you can make up the deficit of one or more dry years. The firm yield of your water source goes up even further, approaching the long-term average of river flows, and completely eliminating the idea of a drought by converting all those inconsistent sloshes of rain and snow into a perfectly constant supply. Beyond this, any increase in reservoir capacity doesn’t contribute to yield. After all, a reservoir doesn’t create water, it just stores what’s already there. 
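
The storage-versus-yield relationship described above is easy to play with in a simple simulation. Here’s a toy sketch: step through a sequence of annual inflows, try a constant annual demand, and call the largest demand that survives the driest stretch the firm yield. The inflows and capacities are invented, and real yield studies use much finer time steps plus losses like evaporation.

```python
# Toy firm-yield simulation: annual time steps, a reservoir that starts empty
# (a conservative simplification), and no evaporation or other losses.
def survives(inflows, capacity, demand):
    """True if a reservoir of this capacity can meet a constant annual demand."""
    storage = 0.0
    for q in inflows:
        storage += q - demand              # take the year's inflow, deliver the demand
        if storage < 0:
            return False                   # the reservoir went dry this year
        storage = min(storage, capacity)   # anything beyond capacity spills downstream
    return True

def firm_yield(inflows, capacity, step=0.1):
    demand = 0.0
    while survives(inflows, capacity, demand + step):
        demand += step
    return demand

# Hypothetical annual inflows (arbitrary volume units) with a couple of droughts:
inflows = [120, 95, 140, 30, 20, 25, 110, 130, 60, 15, 2, 125]

for capacity in (0, 50, 150, 400, 1000):
    print(f"capacity {capacity:5d} -> firm yield ~ {firm_yield(inflows, capacity):5.1f} per year")
```

With no storage, the yield is pinned to the driest year (next to nothing here); add storage and the yield climbs toward the long-term average inflow, then flattens out, which is exactly the plateau described above.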

Of course, dams do more than merely store water for cities that need a firm supply for their citizens. They also store water for agriculture and hydropower that have more flexibility in their demand. Reservoirs serve as a destination for recreation, driving massive tourism economies. Some reservoirs are built simply to provide cooling water for power plants. And, many dams are constructed larger than needed for just water conservation so they can also absorb a large flood event (even when the reservoir is full). Every reservoir has operating guidelines that clarify when and where water can be withdrawn or released and under what conditions and no two are the same. But, I’m explaining all this to clarify one salient point: an empty reservoir isn’t necessarily a bad thing.

Dams are expensive to build. They tie up huge amounts of public resources. They are risky structures that must be vigilantly monitored, maintained, and rehabilitated. And in many cases, they have significant impacts on the natural environment. Put simply, we don’t build dams bigger than what’s needed. Empty reservoirs might create a negative public perception. Dried up lake beds are ugly, and the “bathtub ring” around Lake Mead is a stark reminder of water scarcity in the American Southwest. But, not using the entire storage volume available can be considered a lack of good stewardship of the dam, and that means reservoirs should be empty sometimes. Why build it so big if you’re not going to use the stored water during periods of drought? Storage is the whole point of the thing… except there’s one more thing to discuss:

Engineers and planners don’t actually know what the worst case scenario drought will be over the lifetime of a reservoir. In an ideal world, we could look at thousands of years of historical streamflow records to get a sense of how long droughts can last for a particular waterbody. And in fact, some rivers do have stream gages that have been diligently collecting data for more than a century, but most don’t. So, when assessing the yield of a new water supply reservoir, planners have to make a lot of assumptions and use indirect sources of information. But even if we could look at a long-term historical record as the basis of design, there’s another problem. There’s no rule that says the future climate on earth will look anything like the past one, and indeed we have reason to believe that the long-term average streamflows in many areas of the world - along with many other direct measures of climate - are changing. In that case, it makes sense to worry that reservoirs are going dry. Like I said, reservoirs don’t create water, so if the total amount delivered to the watershed through precipitation is decreasing over time, so will a reservoir’s firm yield.

That brings me to the question of the whole video: what happens when a reservoir runs out of water? It’s a pretty complicated question, not only because water suppliers and distributors are relatively independent of each other and decentralized (capable of making very different decisions in the face of scarcity), but also because the effects happen over a long period of time. Most utilities maintain long-term plans that look far into the future for both supply and demand, allowing them to develop new supplies or implement conservation measures well before the situation becomes an emergency for their customers. Barring major failures in government or public administration, you’re unlikely to turn on your tap someday and not have flowing water. In reality, water availability is mostly an economic issue. We don’t so much run out as we just use more expensive ways to get it. Utilities spend more money on infrastructure like pipelines that bring in water from places with greater abundance, wells that can take advantage of groundwater resources, or even desalination plants that can convert brackish sources or even seawater into a freshwater source. Alternatively, utilities might invest in advertising and various conservation efforts to convince their customers to use less. Either way, those costs get passed down to the ratepayers and beyond.

For some, like those in cities, the higher water prices might be worth the cost to live in a climate that would otherwise be inhospitable. For others, especially farmers, the increased cost of water might eat into their margins, forcing them to let fields lie fallow temporarily or for good. So, while drying reservoirs might not constitute an emergency for most individuals, the impacts trickle down to everyone through increased rates, increased costs of food, and a whole host of other implications. That’s why many consider what’s happening in the American Southwest to be a quote-unquote “slow moving trainwreck.”

In 2019, all the states that use water from the Colorado River signed a drought contingency plan that involves curtailing use, starting in Arizona and Nevada. Those curtailments will force farmers to tap into groundwater supplies which are both expensive and limited. Eventually, irrigated farming in Arizona and Nevada may become a thing of the past. There’s no question that the climate is changing in the American Southwest, as years continue to be hotter and drier than any time in recorded history. It can be hard to connect cause and effect for such widespread and dramatic shifts in long-term weather patterns, but I have one example of an empty reservoir where there’s no question about why it’s dry.

In 1978, the US Army Corps of Engineers completed Optima Lake Dam across the Beaver River in Oklahoma. The dam is an earth embankment 120 feet (or 37 meters) high and over 3 miles (or 5 kilometers) long. The Beaver River had historically averaged around 30 cubic feet (or nearly a cubic meter) per second of flow, and the river even had some major floods, sending huge volumes of water downstream. However, during construction of the dam, it became clear that things were rapidly changing. It turns out that most of the flows in the Beaver River were from springs, areas where groundwater seeps up to the surface. Over the 1960s and 70s, pumping of groundwater for cities and agriculture reduced the level of the aquifer in this area, slashing streamflow in the Beaver River as it did. The result was that when construction was finished on this massive earthen dam, the reservoir never filled up. Now Optima Lake Dam sits mostly high and dry in the Oklahoma Panhandle, never having reached more than 5 percent full, as a monument to bad assumptions about the climate and a lesson to engineers, water planners, and everyone about the challenges we face in a drier future.

July 19, 2022 /Wesley Crump

How Do You Steer a Drill Below The Earth?

July 05, 2022 by Wesley Crump

[Note that this article is a transcript of the video embedded above.]

In December 2019, the City of Fort Lauderdale, Florida experienced a series of catastrophic ruptures in a critical wastewater transmission line, releasing raw sewage into local waterways and neighborhoods. Recognizing the need for improvements to their aging infrastructure, the City embarked on a plan to install a new pipeline to carry sewage from the Coral Ridge Country Club pumping station across 7 miles (or 12 kilometers) to the Lohmeyer Wastewater Treatment Plant. But just drawing a line on the map hides the enormous complexity of a project like this. Installing an underground pipeline through the heart of a major urban area while crossing three rivers is not a simple task.

Underground utilities are usually installed by a technique known as trenching. In other words, we excavate a trench down from the surface, place the line, backfill the soil, and repair whatever damage to the streets and sidewalks remains. That type of construction is profoundly disruptive, requiring road closures, detours, and pavement repairs that never quite seem as nice as the original. Trenches are also dangerous for the workers inside, so they have to be supported to prevent collapse. Beyond the human risk, in sensitive environmental areas like rivers and streams, trenching is not only technically challenging but practically unachievable because of the permits required. In fact, trenching in urban areas to install pipelines these days is for the birds. When the commotion of construction must be minimized, there are many trenchless technologies for installing pipes below the ground. One of those methods helped Fort Lauderdale get a 7-mile-long sewer built in less than a year and a half, and is used across the world to get utility lines across busy roadways and sensitive watercourses. I’m Grady and this is Practical Engineering. On today’s episode, we’re talking about horizontal directional drilling.

If you’ve ever seen one of these machines on the side of the road, you’ve seen a trenchless technology in action. Although there are quite a few ways to install subsurface pipelines, telecommunication cables, power lines, and sewers without excavating a trench, only one launches lines from the surface. That means you’re much more likely to catch a glimpse. Like laparoscopic surgery for the earth, horizontal directional drilling (or HDD) doesn’t require digging open a large area like a shaft or a bore pit to get started. Instead, the drill can plunge directly into the earth’s surface. From there, horizontal directional drilling is pretty straightforward, but it’s not necessarily straight. In fact, HDD necessarily uses a curved alignment to enter the earth, travel below a roadway or river, and exit at the surface on the other side. Let me show you how it works and at the end, we’ll talk about a few of the things that can go wrong.

The first step in an HDD installation is to drill a pilot hole, a small diameter borehole that will guide the rest of the project. A drill rig at the surface has all the tools and controls that are needed. These rigs can be tiny machines used to get a small fiber-optic line under a roadway or colossal contraptions capable of drilling large-diameter boreholes for thousands of feet at a time. As such, many of the details of HDD vary across projects, but the basic steps and equipment are all the same.

As the drill bit advances through the earth, the rig adds more and more segments of pipe to lengthen the drill string. Through this pipe, drilling fluid is pumped to the end of the string. Drilling fluid, also known as mud or slurry, serves several purposes in an HDD project. First, it helps keep the drill bit lubricated and cool, reducing wear and tear on equipment and minimizing the chances of a tool breaking and getting stuck in the hole. Next, drilling fluid helps carry away the excavated soil or rock, called the cuttings, and clear them from the hole. Finally, drilling fluid stabilizes and seals the borehole, reducing the chance of a collapse. 

I have here an acrylic box partly full of sand, a setup you’re probably quite familiar with if you follow my channel. Turns out a box of sand can show a lot of different phenomena in construction and civil engineering. Compared to soils that hold together like clay, sand is the worst case scenario when it comes to trying to keep a borehole from collapsing. If I pull away this support, the simulated borehole face collapses in no time. If I add groundwater to the mix, the problem is even worse. Pulling away the support, the wall of my borehole doesn’t stand a chance. Let me show you how drilling fluid solves this problem.

I’m mixing up a handcrafted artisanal batch of drilling mud, a slurry of water and bentonite powder. This is a type of clay formed from weathered volcanic ash that swells and stays suspended when mixed with water. It’s pretty gloopy stuff, so it gets used in cosmetics and even winemaking, but it’s also the most common constituent in drilling fluids. If I add the slurry to one side of the demo, you can see how the denser fluid displaces the groundwater. It’s not the most appetizing thing I’ve ever put on camera, but watch what happens when I remove the rigid wall. The drilling fluid is able to support the face of the sand, preventing it from collapsing. In addition to supporting the sand, the drilling fluid seals the surface of the borehole to reduce migration of water into or out of the surrounding soil. In most HDD operations, the drilling fluid flows in through the drill string and back out of the borehole, carrying the cuttings along toward the entry location where it is stored in a tank or containment pit for later disposal or reuse.

So far HDD follows essentially the same steps as any other drilling into the earth, but that first ‘D’ is important. Horizontal directional drilling means we have to steer the bit. The drill string has to enter the subsurface from above, travel far enough below a river or road to avoid impacts, evade other subsurface utilities or obstacles below the ground, and exit the subsurface on the other side in the correct location. I don’t know if you’ve ever tried to drill something, but so far, I’ve never been able to curve the bit around objects. So how is it possible in horizontal directional drilling?

There are really two parts to steering a drill string. Before you can correct the course, you need to know where you are in the first place, and there are a few ways to do it. One option is a walkover locating device that can read the position and depth of a drill bit from the surface. A transmitter behind the bit in the drill string sends a radio signal that can be picked up by a handheld receiver. Other options include wire-line or gyro systems that use magnetic fields or gyroscopes to keep track of the bit's location as it travels below the surface. Once you know where the bit is, you can steer it to where you want it to go.

I’ve made up a batch of agar, which is a translucent gel made from the cell walls of algae. I tried this first in the same acrylic box, but the piping hot jelly busted a seam and came pouring out into my bathtub, creating a huge mess. So, you’ll have to excuse the smaller glassware demo. My simulated drill string is just a length of wire. There are two things to keep in mind about directional drilling: (1) Although they seem quite rigid, drill pipes are somewhat flexible over a long enough length. If I take a short length of this wire and try to bend it, it’s pretty difficult, but a longer segment deflects with no problem. And, (2) you don’t have to continuously rotate the drill string in order to advance the borehole. You can just push on it, forcing the bit through the soil.

My wire pushes through the agar without much force at all, and a drill string can be advanced through the soil in a similar way, especially when lubricated with water or drilling fluid. The real trick for steering a drill string is the asymmetric bit. Watch what happens when I put a bend on the end of my wire and advance it through the agar. It takes a curved path, following the direction of the bend. If I rotate the wire and continue advancing, I can change the direction of the curve. The model drill string is biased in one direction because of the asymmetry, and I can take advantage of that bias to steer the bit. I can steer the string left, then rotate and advance again to steer the bit to the right. I’m a little bit clumsy at this, but with enough control and practice, I could steer this wire to any location within the agar, avoid obstacles, and even have it exit at the surface wherever I wanted.

This is exactly how many horizontal directional drills work. The controls on the rig show the operator which way the bit is facing. The drill string can be rotated to any angle (called clocking), then advanced to change the direction of the borehole. Sometimes a jet nozzle at the tip of the bit sprays drilling fluid to help with drilling progress. If the nozzle is offset from the center, it can help create a steering bias like the asymmetric bit. Just like the Hulk’s secret is that he’s always angry, a directional drill string’s secret is that it’s always curving. The rig operator’s only steering control is the direction the drill string curves.

And hey, if that sounds like something you’d like to try for yourself, my friend Dan Shiffman over at the Coding Train YouTube channel built a 2D horizontal directional drilling simulator. Here’s me trying to navigate the drill string around an obstacle and exit on the other side. It’s an open-source project, so you can contribute features yourself, but it’s also really fun if you just want to play a few rounds. If you’re into coding or you want to get started, there’s no better way than working through the incredible and artistic examples Dan comes up with for his coding challenges. He’s got a video all about the programming of this simulator, so go check it out after this one.

Once the drill string is headed in the right direction, it can just be continuously rotated to keep the bit moving in a relatively straight line. The pilot hole for an HDD project is just an exercise in checking the location and adjusting the clock position of the drill string over and over until the drill string exits on the other side, hopefully in exactly the location you intended. But, not all soil conditions allow for a drill string to simply be pushed through the subsurface.
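That check-and-adjust loop is simple enough to caricature in a few lines of code. The sketch below is only a toy two-dimensional model with made-up numbers, perfect position information, and clocking reduced to "curve up or curve down"; it is nothing like a real guidance system, but it captures the idea that the only control input is which way the always-curving string is pointed.

```python
# Toy 2D caricature of steering a pilot hole: the bit always curves at a fixed
# rate, and the "operator" only chooses whether it curves up or down (a 2D
# stand-in for clocking), based on where the bit is relative to a target depth.
import math

CURVE_PER_STEP = math.radians(1.5)  # made-up build rate per advance
STEP_LENGTH = 1.0                   # advance per step, arbitrary units
TARGET_DEPTH = -20.0                # depth of the planned straight run

x, y = 0.0, 0.0                # entry point at the surface
heading = math.radians(-20.0)  # entry angle, pointing down into the ground

for step in range(400):
    # In this toy, the "locator" knows the bit position exactly.
    if y > TARGET_DEPTH:
        heading -= CURVE_PER_STEP  # clock the string to curve downward
    else:
        heading += CURVE_PER_STEP  # clock the string to curve upward
    # Advance the bit along its current heading.
    x += STEP_LENGTH * math.cos(heading)
    y += STEP_LENGTH * math.sin(heading)

print(f"after 400 advances the bit is at x = {x:.1f}, depth = {-y:.1f}")
```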

Rocky conditions, in particular, make steering a drill rig challenging. An alternative to simply ramming the bit through the soil is to use a downhole hydraulic motor. Also known as mud motors, these devices use the hydraulic energy of the drilling fluid being pumped through the string to rotate a drill bit that chews through soil and rock. This allows for faster, more efficient drilling without having to rotate the whole drill string. The housing of the mud motor is bent to provide steering bias, and the drill string can be clocked to change the direction of the borehole.

Once the pilot hole exits on the other side, it has to be enlarged to accommodate the pipe or duct. That process is called reaming. A reamer is attached to the drill string from the exit hole and pulled through the pilot hole toward the drill rig to widen it. Depending on the size of the pipe to be installed, contractors may ream a hole in multiple steps. The final reaming is combined with the installation of the pipeline. This step is called the pull back. The pipe to be installed in the borehole is lined up on rollers behind the exit pit. The end of the pipe is attached to the reaming assembly, and the whole mess is pulled with tremendous force through the borehole toward the rig. Finally, it can be connected at both ends and placed into service.

That’s how things work when everything goes right, but there are plenty of things that can go wrong with horizontal directional drilling too. Parts of the drill string can break and get stuck in the pilot hole. Drilling can inadvertently impact other subsurface utilities or underground structures. The pipeline can get stuck or damaged on pullback. Or, the borehole can collapse. 

The controversial Mariner East II pipeline in Pennsylvania experienced a litany of environmental problems during its construction between 2017 and 2022. Most of those problems happened on HDD segments of the line and involved inadvertent returns of drilling fluid. That’s the technical term for the situation when drilling fluid exits a borehole at the surface instead of circulating back to the entrance pit. The inadvertent returns on the Mariner East II line created water quality issues in nearby wells, led to sinkholes in some areas, and spilled drilling fluid into sensitive environmental areas. The pipeline owner was fined more than $20 million over the course of construction due to violations of their permits, and they remain mired in legal battles and extreme public opposition to the project.

In the case of Mariner East II, most of the drilling fluid spills were at least partly related to the difficult geology in Pennsylvania. Clearly HDD isn’t appropriate for every project. But in most cases, trenchless technologies are the environmentally superior way to install subsurface utilities because they minimize surface disruptions to people in urban areas and to sensitive habitats around rivers and wetlands.

July 05, 2022 /Wesley Crump

4 Myths About Construction Debunked

June 21, 2022 by Wesley Crump

[Note that this article is a transcript of the video embedded above.]

Construction is something you probably either love or hate, depending on your commute or profession. Obviously, as a civil engineer, it’s something I think a lot about, and over the past 6 years of reading emails and comments from people who watch Practical Engineering, I know that parts of heavy construction are consistently misunderstood. Also, I talk a lot about failures and engineering mistakes in my videos because I think those stories are worth discussing, but if that’s all you ever hear about the civil engineering and construction industries, you can be forgiven for having an incomplete perspective of how things really work. So I combed through YouTube comments and emails over the past few years and pulled together a few of the most common misconceptions. I’m Grady and this is Practical Engineering. In today’s episode we’re debunking some myths about construction.

Myth: Construction Workers Just Stand Around

If you’re one of those people who hate construction, this is probably a frustrating image: one guy running the excavator and everyone else just standing around watching. It seems like a familiar scene, especially when the project’s schedule is dragging along. But, looks can be deceiving. Think of it this way: contractors are running an extremely risky business on relatively thin margins. In most cases, they already know how much money they’ll be paid for a project, so the only way to make a profit is to carefully manage expenses. And what’s their biggest expense? Labor! A worker standing around with nothing to do is the first thing to get cut from a project. Individual laborers might be paid hourly, but their employers are paid by the contract and have every incentive to get the job done as quickly and efficiently as possible. So why do we see workers standing around? There are a few reasons.

Firstly, construction is complicated. Honestly, it’s a logistical nightmare to get materials, subcontractors, tools, equipment, and workers in the right place at the right time. Almost every task is sequential, which means anything that doesn’t line up perfectly affects the schedule of everything else. Construction is a hurry-up-and-wait operation, and the waiting is often easier to spot than the hurrying. Most of the folks you see on a construction site, whether they’re standing around or not, have been or will be hustling for most of the day, which leads me to my second point: construction is hard work.

Anyone working in the trades will tell you that it’s a physically demanding job. You can’t just show up at 6AM, run a shovel for 10 hours, go home, and do it again the next day. You need breaks if you’re working hard. Standing around is often as simple as that: workers resting from a difficult task. Plus a person with a shovel isn’t that useful when you have a tracked excavator on site that can do the work of 20. So, the laborers you see outside of machines are often doing jobs that are adjacent to the actual work like running tools, directing traffic, or documenting. That leads me to my third point: not everyone on a construction site is a tradesperson.

Keeping an eye on things is an actual job, and in some cases it is more than one. Inspectors are often on site to make sure a contractor doesn’t misinterpret a design or build something incorrectly. An engineer may be on site to answer questions or check on progress. And trust me, you don’t want us anywhere near the cab of a crane or excavator. Safety spotters are sometimes required to keep workers out of dangerous situations. Plus, foremen and supervisors are on site to direct their crews. These folks are doing necessary jobs that might look just like standing around if you’re not familiar with the roles.

Lastly, construction is often out in the open unlike many other jobs. Confirmation bias makes it easy to pass by a construction site in a car and notice the people who aren’t actively performing a task while ignoring the ones who are. If those construction workers stepped into any office building, they might see you hanging around the water cooler talking about your favorite YouTube channels and start a rumor that office workers are so lazy.

Myth: Ancient Roman Roads and Concrete Were Better

I made an entire video comparing “Roman concrete” to its modern equivalent, but I still get emails and comments all the time about the arcane secrets possessed by the ancient Romans that have since been lost to the sands of time. It’s not true, really. I mean, the ancient Roman concrete used in some structures did have some interesting and innovative properties, and the Romans did invest significantly in the durability of their streets and roads. But, I think a Roman engineer would be astounded to learn that most modern highways handle hundreds of thousands of trucks that can weigh upwards of 80,000 pounds before being replaced. And, I think a Roman engineer might wet their toga if they were to see a modern concrete-framed skyscraper. There are a few reasons why it seems that the Romans had us outclassed when it comes to structural longevity.

First, there’s survivorship bias. We only see the structures that lasted these past 2,000 years and not the vast majority of buildings and facilities, which were destroyed in one way or another. Second, there’s the climate. I haven’t personally been to the parts of the world surrounding the Mediterranean Sea, but I hear most of them are quite nice. Cycles of freezing and thawing are absolutely devastating to almost every part of the constructed environment. The ancient Romans were in an area particularly well-suited to making long-lasting concrete structures, especially compared to the frozen wastelands that some of Earth’s other denizens call home. Finally, there’s just a difference in society and government. Ancient Rome was wildly different from modern countries in a lot of ways, but particularly in how much they were willing to spend on infrastructure and how they were willing to treat laborers. Modern concrete mixes and roadway designs are far superior to those of ancient Rome, but our collective willingness to spend money on infrastructure is different too.

I think a lot of the feedback I get on Roman construction is based on the extremely pervasive sentiment that “we just don’t build stuff like we used to.” It’s an easy shortcut to equate quality with longevity, especially for infrastructure projects where we aren’t directly involved in managing the costs. I regularly have people tell me that we shouldn’t use reinforcing steel in concrete, because when it corrodes, it decreases the lifespan (which is completely true). But also, unreinforced concrete is essentially just brick. And not to disparage masonry, but there’s a lot it can’t do in structural engineering.

A lot of people even go so far as to accuse engineers of using planned obsolescence - the idea that we design things with an intentionally limited useful life. And I don’t know anything about consumer goods or devices, but at least in civil engineering, those people are exactly right. We always think about and make conscious decisions regarding lifespan during the design of a project. But it’s not to be nefarious or create artificial job security. It’s because, in simplistic terms, the capital cost of a construction project and its lifespan exist on either side of a spectrum, and engineers (by necessity) have to choose where a project sits between the two. Will you build a bridge that’s inexpensive, but will have to be replaced in 25 years, or will you spend twice the money for more concrete and more steel to make it last for 50? We make this decision constantly when we pay for things in our own lives, choosing between alternatives that have various costs and benefits. But it’s much more complicated to draw that line as the steward of tax dollars for an entire population.

That’s why engineering exists in the first place. With an unlimited budget, my 2-year-old could design a bridge that carries monster trucks over the English Channel for a million years. Engineers compare the relative costs and benefits of design decisions from start to finish to meet project requirements, protect public safety, and do so with the limited available resources. Part of that is evaluating alternatives like the cheap bridge versus the expensive bridge, plus their long-term maintenance and replacement costs to see which one can best meet the project goals. In that case, planned obsolescence means being a good steward of public money (which is always limited) by not gold-plating where it’s not necessary so that funds can be used where they’re needed most.
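To put rough numbers on that trade-off, here’s the kind of back-of-the-envelope present-value comparison that might sit behind a "cheap bridge versus durable bridge" decision. The costs, lifespans, and discount rate below are invented, and real life-cycle analyses also account for maintenance, inflation, traffic growth, and risk.

```python
# Back-of-the-envelope life-cycle comparison: a cheaper bridge replaced every
# 25 years versus a pricier one that lasts 50, over a 100-year planning
# horizon, with future replacements discounted to present value.
def present_value_of_program(first_cost, life_years, horizon_years, discount_rate):
    total = 0.0
    year = 0
    while year < horizon_years:
        total += first_cost / (1 + discount_rate) ** year  # build or replace in this year
        year += life_years
    return total

RATE = 0.03  # assumed real discount rate

cheap = present_value_of_program(first_cost=10e6, life_years=25, horizon_years=100, discount_rate=RATE)
durable = present_value_of_program(first_cost=18e6, life_years=50, horizon_years=100, discount_rate=RATE)

print(f"25-year bridge program: ${cheap / 1e6:.1f}M present value")
print(f"50-year bridge program: ${durable / 1e6:.1f}M present value")
# Neither answer is automatically "right" - it depends on the discount rate,
# the cost ratio, and how much you trust a 100-year forecast.
```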

Myth: Lowest Bidder = Lowest Quality

There’s a story about legendary astronaut John Glenn being asked by a reporter about what it felt like to be inside the Mercury capsule on top of an Atlas LV-3B rocket before takeoff. He reportedly said he felt exactly how one would feel sitting on top of two million parts - all built by the lowest bidder on a government contract. And indeed, most construction projects are contracted using bids, and regulations require that public entities award the contract to the lowest bidder. Those rules are in place to make sure that the taxpayer is getting the most value for their money. But, it doesn’t necessarily mean that our infrastructure projects suffer in quality as a result.

Most construction projects are bid using a set of drawings and a book of specifications that include all the detail necessary to build them. An engineer, architect, or both has gone to great lengths to draw and specify exactly what a contractor should build, often to the tiniest details about products, testing, and procedures. You can see for yourself; just google your city or state, plus “standard specifications,” and scroll through what you find to get a sense of how detailed contract documents can be. We go to that level of detail in defining the project before construction so that it can be let for bidding with the confidence that an owner will end up with essentially the same product at the end of construction, no matter which contractor wins the job.

Bidding on contracts is a tough way to win work, by the way. Imagine if on January 1st, your employer gave you a list of all the tasks that needed to be complete by the end of the year, and you had to guess how many hours it would take. And, if you guessed a higher number than your coworker, you got fired. And if you guessed lower than the actual number of hours it took, too bad, you only got paid for the hours you guessed. It might incentivize you to look for innovative ways to get your job done more efficiently, but (admittedly) it might also encourage you to cut corners and ignore opportunities to add value where it’s not explicitly required.

Many public entities are moving away from the lowest bidder model toward types of procurement that allow them to recognize and reward measures of value beyond just cost, like innovation, schedule, and past experience. These alternative delivery methods can help foster a more collaborative relationship between the owner, contractor, and designer, making the construction process smoother and more efficient. But, the lowest bidder model is still used around the world because it generally rewards efficient use of public funds. After all, John Glenn did make it safely to space, became the first American to orbit the earth, and came back with no issues on those two million parts provided by the lowest bidders.

Myth: Foundations Must Go To Bedrock

If you’ve ever played Minecraft, you know that at a certain depth below the ground, you reach an impenetrable layer of voxels called bedrock. And indeed, in most parts of the world, geologic layers do get firmer and more stable the farther down you go. Engineers often take advantage of this fact to secure tall buildings and major structures using deep foundation systems like driven or drilled piles. “Bedrock” is such a familiar concept that it’s easy to look at the world through Minecraft-colored glasses and assume there’s (always and everywhere) some firm layer below - but not too far from - the surface of the earth, and all tall buildings and structures must sit atop it. But, the real world is a little more complicated than that. Different geologic layers may be considered bedrock, depending on whether you’re a well driller, foundation designer, pile driver, or geology textbook author. There’s no strict definition of bedrock, and there are vast spectrums of soil and rock properties that might make stable foundations depending on the loading and environmental conditions.

In engineering especially, there isn’t always a firm geologic layer at a reasonable depth below the surface of the earth that our buildings and structures can be attached to. And even if there is, that may not be the most cost-effective way to meet design requirements. There may be shallow foundation concepts that are appropriate (and much cheaper) depending on the situation. There’s a famous parable about a wise man who built his house on the rock, but not every wise man can afford a piece of property on the rocky side of town, especially in today’s real estate market. Civil engineers don’t always have the luxury of founding structures on the most stable of subgrades, so we’ve come up with foundations that keep structures secure on sand, silt, clay, and even floating on water. When the rain comes down, and the streams rise, and the winds blow and beat against our structures, they almost always remain standing no matter what the geology is below.

June 21, 2022 /Wesley Crump

The Bizarre Paths of Groundwater Around Structures

June 07, 2022 by Wesley Crump

[Note that this article is a transcript of the video embedded above.]

In 2015, an unusual incident happened on the construction site for a sewage lift station in British Columbia, Canada. WorkSafeBC, the provincial health and safety agency, posted a summary of the event on YouTube. A steel caisson had been installed to hold back soil so the lift station could be constructed. One worker on the site was suddenly pulled into a sinkhole when the bottom of the caisson blew out. The cause of the incident was related to groundwater within the soils below the site. We don’t all have to live in fear of the ground opening up below our feet, but engineers who design subsurface structures do have to consider the impact that groundwater can have. The solutions to subsurface problems are almost always hidden from public view, so you might never even know they’re there. This video is intended to shed some light on those invisible solutions (including what could have been done to prevent that incident in BC). I’m Grady and this is Practical Engineering. In today’s episode, we’re talking about how groundwater affects structures.

Groundwater has always been a little mysterious to humanity since it can’t easily be observed. It also behaves much differently than surface waters like rivers and oceans, sometimes defying expectations, as I’ve shown in a few of my previous videos. One of the most important places where groundwater shows up in civil engineering is at a dam. That’s because groundwater flows from high pressure to low pressure, and a dam, at its simplest, is just a structure that divides those two conditions. And what do you know, I’ve got an acrylic box in my garage full of sand to show these concepts in real life.

You can imagine this soil sits below the base of a dam, and I can adjust the water levels on either side of the structure to simulate how groundwater will flow. Blue dye placed in the sand helps show the direction and speed of water movement below the surface. A higher level on the upstream side creates pressure, driving water in the subsurface below the dam to the opposite end of the model. I’ll be the first to say it: this is not the most mind-blowing revelation. You probably could have predicted it without the fancy model. But to a civil engineer, this is not an inconsequential phenomenon, and for a couple of reasons. 

First, water seeping below a dam can erode soil particles away, a phenomenon called piping. Obviously, you don’t want part of your structure’s foundation to be stolen from underneath it, and piping can create a positive feedback loop where failure progresses rapidly. I have a whole video on piping that you can check out after this one. The second negative effect of groundwater is less obvious. In fact, until around the 1920s, dam engineers didn’t even take it into account (leading to the demise of many early structures in history).

The engineering of a dam is largely an exercise in resisting hydrostatic pressure. Water in the reservoir applies an enormous force to the upstream face of a dam, and if not designed properly, that force can cause the dam to slide downstream or overturn. The hydrostatic force is actually pretty simple to approximate. Pressure in a fluid increases with depth, so you get a triangular distributed load. Once you know that load, you can design a structure to resist it, and there are a lot of ways to do that. One of the most common types of dam just uses its own weight for stability. Gravity dams are designed to be heavy enough that hydrostatic forces can’t slide them backwards or turn them over. But, to the dismay of those early engineers, pressure from the reservoir is not the only destabilizing force on a dam.

Take a look at this pipe I’ve included in the model that shows the water level between the two boundaries. If the base of a structure was below the water level shown here, the groundwater would be applying pressure to the bottom, counteracting its weight. We call this uplift pressure. Remember that the only reason gravity dams stay put is because of their weight, so you can see how having an unanticipated force effectively subtracting some of that weight would be a bad thing. Many concrete gravity dams have failed because this uplift force was neglected by engineers, including the St. Francis Dam in California that killed more than 400 people when it collapsed in 1928. Many consider this to be the worst American civil engineering disaster of the 20th century.
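To see why neglecting uplift was such a costly mistake, here is a simplified sliding check for a hypothetical gravity dam cross section, with invented dimensions and a deliberately crude uplift assumption (full reservoir head at the heel tapering to zero at the toe). Real dam stability analyses are far more involved, but even this rough arithmetic shows how much of a dam’s effective weight uplift can erase.

```python
# Simplified sliding check for a hypothetical concrete gravity dam section,
# per unit length of dam, with and without uplift. All numbers are invented.
GAMMA_W = 9.81      # unit weight of water, kN/m^3
GAMMA_C = 23.5      # unit weight of concrete, kN/m^3
H = 30.0            # reservoir depth against the upstream face, m
BASE = 25.0         # base width of the dam, m
WEIGHT = 0.5 * BASE * 32.0 * GAMMA_C  # rough triangular section, 32 m tall
FRICTION = 0.7      # assumed friction coefficient on the foundation

# Horizontal hydrostatic thrust from the triangular pressure distribution.
thrust = 0.5 * GAMMA_W * H**2

# Crude uplift assumption: pressure varies linearly from full reservoir head
# at the heel to zero at the toe, acting over the whole base.
uplift = 0.5 * GAMMA_W * H * BASE

fs_no_uplift = FRICTION * WEIGHT / thrust
fs_with_uplift = FRICTION * (WEIGHT - uplift) / thrust

print(f"hydrostatic thrust: {thrust:.0f} kN/m, uplift: {uplift:.0f} kN/m")
print(f"sliding factor of safety, ignoring uplift: {fs_no_uplift:.2f}")
print(f"sliding factor of safety, with uplift:     {fs_with_uplift:.2f}")
```

With these made-up numbers, a section that looks comfortably stable when uplift is ignored ends up with a factor of safety below one once it is included, which is exactly the trap those early designers fell into.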

Unlike the hydrostatic force of a reservoir, uplift pressure from groundwater is a much more complicated force to characterize. It exists in the interface between the structure and its foundation, in the cracks and pores of the underlying soil, and even within the joints of the concrete structure itself. The flow of groundwater is affected by soil properties, the geometry of the dam, the water levels upstream and downstream, and even the subsurface features. How these factors affect the uplift pressure can be pretty challenging to predict. But engineers do have to predict it. After all, we can’t build a dam, measure the actual uplift force, and add weight if necessary. It’s gotta work the first time.

One way to characterize groundwater flow around structures is the flow net. This is a graphical tool used by engineers to estimate the volume and pressure of seepage in the subsurface. In simple terms, you divide the flow area into a curvilinear grid, where one axis represents pressure and the other represents flow. If this looks familiar, you might notice that a flow net is essentially a 2D solution to the Laplace equation, which also applies to other areas of physics including heat flow and magnetic fields. Developing flow nets is as much an art as a science, so it’s probably a good thing that groundwater problems are mostly solved using software these days. But, we can still use flow nets to demonstrate a few of the ways engineers combat this nefarious uplift force on gravity dams. And one common idea is a cutoff wall.

If water flowing below a dam causes so many problems, why not just create a vertical wall to cut it off? We do it all the time. But, how deep does it need to be? Some dams might have a convenient geological layer into which a cutoff can be terminated, creating an impenetrable envelope to keep seepage out. But, many don’t. Cutoff walls can still reduce the volume of flow and the pressure, even if seepage can still make its way underneath. Let’s take a look at the model to see why. I’ve added a vertical wall of acrylic below the upstream face of my dam, and we’ll see how it affects the flow. The groundwater flow lines adjust to go under the wall and back up to the other side of the model. If you look closely you’ll see a slight decrease in the uplift measurement pipe below the dam. The only thing I changed between this model and the last one was adding the cutoff wall. So why would the pressure decrease on the downstream side?

The flow of groundwater is described with a fairly simple formula known as Darcy’s law. Besides the permeability of the soil, the only other factor controlling the speed water flows is the hydraulic gradient, which consists of the difference in pressure over the length of a flow path. By adding a cutoff wall, I didn’t change the difference in pressure between one side of the model and the other, but I did increase the length of the flow path water had to take below the dam, reducing the hydraulic gradient. I can sketch a flow net over the model to make this clearer. The black lines are equipotentials; they connect areas of equal pressure. The blue lines show the directions of flow. Without a cutoff, the flow paths are shorter, and thus the equipotential lines are closer together. With the cutoff wall, the equipotential lines are spread out. That means both the volume of seepage and the uplift pressure at the base of the structure have been reduced.
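Here is the same idea reduced to Darcy’s law with some invented numbers. In this crude picture, the only thing the cutoff wall changes is the length of the shortest seepage path, and that alone lowers both the gradient and the flow.

```python
# Darcy's law, q = k * i, where the hydraulic gradient i is the head
# difference divided by the length of the seepage path. Numbers are made up.
k = 1e-5                 # hydraulic conductivity of the foundation, m/s
head_difference = 12.0   # reservoir level minus tailwater level, m

path_without_cutoff = 40.0  # shortest seepage path under the dam, m
path_with_cutoff = 70.0     # same path lengthened by going down and around a 15 m deep cutoff

for label, L in [("no cutoff", path_without_cutoff), ("with cutoff", path_with_cutoff)]:
    i = head_difference / L   # hydraulic gradient
    q = k * i                 # specific discharge (seepage per unit area), m/s
    print(f"{label}: gradient = {i:.2f}, specific discharge = {q:.2e} m/s")
```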

Cutoff walls on dams have a long history of use, and nearly all large gravity dams have at least some kind of cutoff. It can be as simple as excavating a wide area of the dam’s foundation before starting on construction, and that’s a popular choice because it gives engineers a chance to observe the subsurface conditions and make sure there are no faults or problems before the dam gets built. Another option is to excavate a deep trench and fill it with grout, concrete, or a slurry of impermeable clay. For smaller or temporary structures, sheet piles can be driven into the subsurface to create a cutoff. One final option is to inject high pressure grout to create an impenetrable curtain below the dam.

The other way to deal with seepage and uplift pressure is to use drains. Drains installed below a dam do two important jobs. First, they filter seepage using sand and gravel so that soil particles can’t be piped out from the foundation. Second, they relieve uplift pressure by removing the water. Let’s see how this works in my model. Upstream of my uplift monitor, I’ve added a hole through the back of the model with a tube to drain seepage out. Instead of flowing all the way downstream, now some of the seepage flows up to and through the drain, and you can see this in the streamlines of dye flowing in the subsurface. Again, the effect is subtle, but the uplift pressure monitor is showing a slight decrease in pressure compared to the original configuration. There is less pressure on the base of the dam than there would be without the drain. Plotting a flow net over the model, you can see why it behaves this way. The drain relieves the uplift on the base by creating an area of low pressure below the dam. You can also note that the drain actually increases the hydraulic gradient by shortening the flow paths, so there’s more seepage happening than there would be without the drain. However, because the drains are installed with filters to reduce the chance of piping, that additional seepage is often worth the decrease in uplift pressure.

Many concrete dams include a row of vertical drains into the foundation, and some even use pumps to depress the groundwater level further, minimizing the uplift. I can simulate this by lowering the downstream level as if a pump was removing the water. Watch how the flow lines adjust when I make this change in the model. Like drains, these relief wells create more seepage below a dam because of the greater difference in pressure between the two sides, but they can significantly reduce the uplift pressure and thus increase a structure’s stability.

I’ve been using dams as the main example of managing groundwater flow, but lots of other structures have similar issues. Retaining walls and temporary shoring have to contend with groundwater too, and so do caissons, watertight chambers sunk into the earth to hold back soil during construction. Remember the worker I mentioned in the intro? He was on a site near a caisson. It’s typical to dewater a structure like this, meaning the water is pumped out, creating a dry area for construction crews to work. Let’s take a look at how this works in the model. I’m simulating the act of pumping water out of the caisson by draining out of the model at the bottom of the structure. When a caisson is dewatered, it is essentially working like a dam, separating an area of high pressure from an area of low pressure over only a short distance. And, as you know, distance matters when it comes to groundwater, because the shorter the flow paths, the greater the hydraulic gradient, and thus the higher the volume and velocity of seepage.

If you look closely, you can see the sand boiling up as the seepage exits the soil into the bottom of the caisson. This elevated pressure in the subsurface and high velocity of flow mean that the soil particles themselves aren’t being strongly held together. All it takes is a little agitation for the soil to liquefy and flow into the bottom of the caisson, creating a sinkhole that can easily swallow anything at the surface. One way of mitigating this hazard is dewatering the soil outside the caisson. Construction crews use well points, small, evenly spaced wells and pumps, to draw water out of the soil so it can’t seep to areas of lower pressure. Caissons can also be driven deeper into the subsurface, creating a condition similar to a cutoff wall on a dam. They can even go deep enough to reach an impermeable layer, creating a better seal that prevents water from flowing in through the bottom.
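There’s a standard check behind that boiling behavior: compare the upward exit gradient of the seepage to the critical gradient at which the upward seepage force balances the buoyant weight of the soil. The numbers and the one-line estimate of the exit gradient below are made up; in practice the gradient would come from a flow net or seepage software.

```python
# Rough check of "boiling" (heave) at the base of a dewatered excavation:
# compare the exit hydraulic gradient against the critical gradient at which
# upward seepage pressure balances the buoyant weight of the soil.
GAMMA_W = 9.81     # unit weight of water, kN/m^3
GAMMA_SAT = 19.5   # assumed saturated unit weight of the sand, kN/m^3

head_difference = 6.0    # water level outside minus inside the caisson, m
flow_path_length = 8.0   # crude shortest seepage path around the caisson wall, m

exit_gradient = head_difference / flow_path_length    # very rough estimate
critical_gradient = (GAMMA_SAT - GAMMA_W) / GAMMA_W   # about 1 for most soils

factor_of_safety = critical_gradient / exit_gradient
print(f"exit gradient ~ {exit_gradient:.2f}, critical gradient ~ {critical_gradient:.2f}")
print(f"factor of safety against boiling ~ {factor_of_safety:.2f}")
# Driving the caisson deeper lengthens the flow path, lowers the exit
# gradient, and raises this factor of safety - the same trick as a cutoff wall.
```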

Thankfully for the worker in BC, his colleagues were able to rescue him before he was consumed by the earth. Next time you see a dam, retaining wall, caisson, or any other subsurface construction, there’s a good chance that engineers have had to consider how groundwater will affect the stability. Even though you’d never know they’re there, some combination of drains and cutoffs were probably installed to keep the structure (and the people around it) safe and sound.

June 07, 2022 /Wesley Crump

How We Track COVID-19 (And Other Weird Stuff) In Sewage

May 17, 2022 by Wesley Crump

[Note that this article is a transcript of the video embedded above.]

When the COVID-19 pandemic was just getting started in early 2020, every major city, state health department, and federal agency involved built out data dashboards you could access online to check case counts and trends. Public health officials could constantly be heard asking everyone to “flatten the curve,” that curve being a graph of infection rates over time. But how do you get such a graph? By and large, our measure of the pandemic came through individual case counts confirmed with laboratory testing and reported to a data clearinghouse like the local public health department or the CDC. There was a lot of confusion about testing, positivity rates, how that information applied to the greater population, and how it could be used to implement measures to slow the spread of disease. The limitations of individual testing data - including test shortages, reporting delays, and unequal access to healthcare - made public health decisions extremely challenging. Much of the controversy surrounding mask mandates and stay-at-home orders was provoked by the disconnect between what we could reliably measure and the reality of the pandemic on the ground. Public health officials were constantly on the lookout for more indicators that could help inform decisions and manage the spread of disease.

One of these measures didn’t really show up in the online data dashboards, but it was, and continues to be, used as a broad measure of infection rates in cities. It’s a topic that combines public health, epidemiology, and infrastructure, and it didn’t get much coverage in the news. And there are both some interesting privacy implications and some really fascinating applications on the horizon. I’m Grady and this is Practical Engineering. In today’s episode, we’re talking about wastewater surveillance for public health.

If you are unfamiliar with the inner workings of a modern municipal wastewater collection system, boy do I have the playlist for you. But, if you don’t want to watch 5 of my other videos before you watch this one, I can give you a one-sentence rundown: Wastewater flows in sewers, primarily via gravity, combining and concentrating as it continues to a treatment plant where a number of processes are used to rid it of concomitant contaminants so it can be reused or discharged back into the environment. Just like a watershed is the area of land that drains to a specific part of a river or stream, a “sewershed” isn’t an outhouse but an area of a city that drains to a specific wastewater treatment plant. The largest sewersheds can include hundreds of thousands, or even millions of people, all of whose waste flows to a single facility designed to clean it up.

Wastewater treatment plants regularly collect samples of incoming sewage to characterize  various constituents and their strengths. After all, you have to know what’s in the wastewater to track whether or not it’s been sufficiently removed at the other end of the plant. In the early days of sewage treatment, sampling consisted only of measuring the basic contaminants such as nutrients and suspended solids. But, as our testing capabilities increased, it slowly became easier and less expensive to measure other impurities, sometimes known as contaminants of emerging concern. These included pharmaceuticals, pesticides, personal care products, and even illicit drugs. It didn’t take too long to realize that tracking these contaminants was not only a tool for wastewater treatment but also a source of information about the community within the sewershed, the gathering of which is a notoriously difficult challenge in the field of public health.

Rather than coordinating expensive and arduous survey campaigns where many people aren’t always truthful anyway, or going through the hoops of privacy laws to gather information from healthcare providers, we can just take a sample of sludge from the bottom of a clarifier, send it off to a lab, and roughly characterize, in hours or days, the dietary habits, pharmaceutical use, and even cocaine consumption of a specific population of people. If you’re a public health researcher or public official, that is a remarkable capability. To quote one of the research papers I read, “Wastewater is a treasure trove of biological and chemical information.”

Think about all the stuff that gets washed down the drain and all the things you consume that might create unique metabolites that find their way out the other side of your excretory system. Although wastewater surveillance is a relatively new field of study, we’re already able to measure licit and illicit drugs, cleaning and personal care products, and even markers of stress, mental health, and diet. That’s a lot of useful information that can be used to monitor public health, but one particular wastewater constituent took center stage starting in early 2020. Of course, for decades, we’ve tracked pathogens in wastewater to make sure they aren’t released into the environment in treatment plant effluent, but the COVID-19 pandemic created a vacuum of information on virus concentrations unlike anything experienced before.

We realized early in the pandemic that the SARS-CoV-2 virus is shed in the feces of most infected people. Even before widespread tests for the virus were available, many public health agencies were sampling the wastewater in their communities as a way to track the changes in infection rates over time. Realizing the importance of coordinating all these separate efforts, many countries created national tracking systems to standardize the collection and reporting of virus concentrations in sewage. In the US, the CDC launched the National Wastewater Surveillance System in September of 2020, complete with its own logo and trademarked title. Let me know if I should try to license this design for my merch store.

Individual communities can collect and test wastewater for SARS-CoV-2, and then submit the data to the CDC for a process called normalization. Virus concentrations go up and down with infections, but they also go up and down with dilution from non-sewage flows and changes in population (for example, in sewersheds with large event venues or seasonal tourism). Normalization helps correct for these factors so that comparisons of virus loads between and among communities are more meaningful.
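The CDC’s actual procedure has more to it, but the basic arithmetic of normalization is easy to sketch. The hypothetical version below converts a measured concentration into a daily viral load using the plant’s flow, then divides by the sewershed population so that a big city and a small town land on a comparable scale. The function, the field names, and the numbers are illustrative only, not the agency’s method.

```python
# Simplified, hypothetical normalization of wastewater virus measurements:
# concentration -> daily load -> load per person, so different sewersheds can
# be compared. This is an illustration, not the CDC's actual procedure.
def normalized_load(concentration_copies_per_L, plant_flow_m3_per_day, population):
    liters_per_day = plant_flow_m3_per_day * 1000.0
    daily_load = concentration_copies_per_L * liters_per_day  # gene copies per day
    return daily_load / population                            # copies per person per day

# Two made-up sewersheds with very different raw concentrations...
big_city = normalized_load(concentration_copies_per_L=1e5,
                           plant_flow_m3_per_day=400_000, population=900_000)
small_town = normalized_load(concentration_copies_per_L=2.5e5,
                             plant_flow_m3_per_day=3_000, population=15_000)

print(f"big city:   {big_city:.2e} copies per person per day")
print(f"small town: {small_town:.2e} copies per person per day")
# The small town's sewage is 2.5 times more concentrated (less dilution from
# industry and stormwater), but per person the two communities turn out to be
# carrying a similar viral load.
```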

There are some serious benefits from tracking COVID-19 infections using wastewater surveillance. It’s a non-intrusive way to monitor health that’s relatively impartial to differences in access to healthcare or even whether infections are symptomatic or not. Next, it is orders of magnitude less expensive than testing individuals. Nearly 80% of US households are served by a municipal wastewater collection system, so you can get a much more comprehensive picture of a population for just the cost of a laboratory test. It can also provide an earlier indicator of changes in community-wide infection rates. Individual tests can have delays and miss asymptomatic infections, and hospitalization counts come well after the onset of infection, so wastewater surveillance can provide the first clue of a COVID-19 spike, sometimes by several days. Finally, now that vaccination programs are widespread and there is significantly less testing being carried out, wastewater surveillance is a great tool to keep an eye out for a resurgence in COVID-19 infections, and it can even be used to monitor for new variants.

Of course, wastewater surveillance has some limitations too, the biggest one being accuracy. The science is still relatively new, and there are lots of confounding variables to keep in mind. In addition to changes in dilution from other wastewater flows and sewershed population mentioned before, the quantity of viruses shed varies significantly between individuals and even  over the course of any one infection. Right now, wastewater surveillance just isn’t accurate enough to provide a count of infected individuals within a population, so it’s mostly useful in tracking whether infections are increasing or decreasing and by what magnitude.

There are also some ethical considerations to keep in mind. That term “surveillance” should at least prick up your ears a little bit. Monitoring the constituents in wastewater at the treatment plant averages the conditions for a large population, but what if samples were taken from a lift station that serves a single apartment complex, school, or office building? What if a sample was taken from a manhole on the street right outside your house? Could the police department use the data to deploy more officers to neighborhoods where illicit drugs are found in the sewage? Could a city or utility provider sell wastewater data to private companies for use in research or advertising? That’s a lot of hypotheticals, but I wouldn’t be surprised to see a Black Mirror episode where some tech company provides free trash and sewer service just to collect and sell the data from each household. If you wanted to open a new coffee shop, how much would you pay to learn which parts of town have the highest concentrations of caffeine in the sewage? Maybe it would be called Brown Mirror.


The truth is that public health professionals have put a tremendous amount of thought into the ethics and privacy concerns of wastewater surveillance, but, as with any new field of science, there are still a lot of questions to be answered. One of those questions is what comes next in this burgeoning field, where public health researchers have access to a literal stream of data. There are many measures of public health that can be valuable to policy makers and health officials, including stress levels, changes in mental health, and the prevalence of antimicrobial-resistant bacteria (one of the greatest human health challenges of our time). Of course, all the work that went into standardizing and building out capabilities of tracking infections will certainly give us a leg up on resurgences of COVID-19 or any future new virus, heaven forbid. My weather report already has a lot more information than it did 20 years ago, including pollen counts of various allergy-inducing tree species, air pollution levels, and UV-ray strength. We might soon see infection rates of the various diseases that spread through community populations to help individuals, planners, and public officials make better-informed decisions about our health. Sewers were one of the earliest and most impactful advents of public health in urban areas, and it’s exciting that we’re still finding new ways to use them to that end.

May 17, 2022 /Wesley Crump

How Wells & Aquifers Actually Work

May 03, 2022 by Wesley Crump

[Note that this article is a transcript of the video embedded above.]

It is undoubtedly unintuitive that water flows in the soil and rock below our feet. A 1904 Texas Supreme Court case famously noted that the movement of groundwater was so “secret, occult and concealed” that it couldn’t be regulated by law. Even now, the rules that govern groundwater in many places are still well behind our collective knowledge of hydrogeology. So it’s no surprise that misconceptions abound around water below the ground. And yet, roughly half of all drinking water and irrigation water used for crops comes from underneath the surface of the earth. You can’t really look at an aquifer, but you can look at a model of one I built in my garage. And at the end of the video, I’ll test out one of the latest technologies in aquifer architecture to see if it works. I’m Grady and this is Practical Engineering. In today’s episode, we’re talking about groundwater and wells.

Not all water that falls as precipitation runs off into lakes and rivers. Some of it seeps down into the ground through the spaces between soil and rock particles. Over time, this infiltrating water can accumulate into vast underground reservoirs. A common misconception about groundwater is that it builds up in subterranean caverns or rivers. Although they do exist in some locations, caves are relatively rare. Nearly all groundwater exists within geologic formations called aquifers that consist of sand, gravel, or rock saturated with water just like a sponge. It just so happens you’re watching the number one channel on the internet about dirt, and there are a lot of interesting things I can show you about how aquifers behave.

I built this acrylic tank in my garage to illustrate some of the more intriguing aspects of groundwater engineering. I can fill it up with sand and add blue dye to create two-dimensional scenarios of various groundwater conditions. It also has ports in the back that I can open or close to drain various sections of the model. And, on both sides, there’s a separation that simulates a boundary condition on the aquifer. Water can flow through these dividers along their height. Most of the shots you’ll see of this have been sped up because, compared to surface water, groundwater flows quite slowly. Depending on the size of soil or rock particles, it can take a very long time for water to make its way through the sinuous paths between the sediments. The property used to characterize this speed is called hydraulic conductivity, and you can look up average values for different types of soil online, if you’re curious to learn more. In fact, different geologic layers affect the presence and movement of groundwater more than any other factor, which is why there is so much variability in groundwater resources across the world.

Like all fluids, groundwater flows from areas of high pressure toward areas of low pressure. To demonstrate this, I can set the left boundary level a little higher than the one on the right. This creates a pressure differential across the model so water flows from left to right through the sand. I added dye tablets at a few spots so you can see the flow. This is a simple example because the pressure changes linearly through a consistent material, but any change in these conditions can add a lot of complexity. In purely mathematical terms, you can consider this model a 2D vector field because the groundwater can have a different velocity - that is direction and speed - at any point in space. Because of this, there are a lot of really neat analogies between groundwater and other physical phenomena. My friend Grant of the 3Blue1Brown YouTube channel has an excellent video on vector field mathematics if you want to explore them further after this.
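
If you want to put rough numbers on that behavior, the workhorse relationship is Darcy’s law: the flow per unit area equals the hydraulic conductivity times the gradient in head. Here’s a minimal sketch of that calculation in Python; every value is an illustrative assumption, not a measurement from my model.

```python
# Rough sketch of the flow described above using Darcy's law: q = K * dh/dl.
# All numbers are illustrative assumptions, not measurements from the model.

hydraulic_conductivity = 1e-4   # K, m/s -- typical order of magnitude for clean sand
head_left = 0.50                # water level at the left boundary, m
head_right = 0.45               # water level at the right boundary, m
flow_length = 1.0               # distance between the boundaries, m
porosity = 0.3                  # fraction of the sand volume that is open pore space

# Hydraulic gradient (drop in head per unit length of flow path)
gradient = (head_left - head_right) / flow_length

# Darcy flux: volumetric flow per unit cross-sectional area of the aquifer
darcy_flux = hydraulic_conductivity * gradient          # m/s

# Average speed of the water itself through the pores (seepage velocity)
seepage_velocity = darcy_flux / porosity                # m/s

print(f"Darcy flux: {darcy_flux:.2e} m/s")
print(f"Seepage velocity: {seepage_velocity:.2e} m/s "
      f"(~{seepage_velocity * 86400:.2f} m per day)")
```

Even with a clean sand and a healthy gradient, the water only creeps along at a meter or so per day, which is why the shots of the model have to be sped up so much.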

We often draw a bright line between groundwater and surface water resources like rivers and lakes because they behave so differently. But water is water. It’s all part of the hydrologic cycle, and many surface waters have a nexus with groundwater resources, meaning that changes in groundwater may impact the volume and quality of surface water resources and vice versa. Let me show you an example. In the center of my model, I’ve made a cross section of a river. The drain at the bottom of the channel simulates water flowing along the channel, in this case leaving my model. If I turn on the pumps to simulate a high water table in the aquifer, the groundwater seeps into the river channel and out of the model. The dye traces show you how the groundwater moves over time. If you encounter a situation like this in real life, you might see small springs, wet areas of the ground, and (during the winter) even icicles along slopes where the groundwater is becoming surface water before your eyes.

Likewise, surface water in a river can flow into the earth to recharge a local aquifer. I’ve reconfigured my model so the pump is putting water into the river and the outer edges of the reservoir are drained, simulating a low water table. Some of the water in the river flows back out of the model through the overflow drain, showing that while not all the water in a river seeps into the ground, some does. You can see the dye traces moving from the river channel into the aquifer formation, transforming from surface water into groundwater as it does. As you can see, surface water resources are often key locations where underground aquifers are recharged.

This is all fun and interesting, but much of groundwater engineering has more to do with how we extract this groundwater for use by humans. That’s the job of a well, which, at its simplest, is just a hole into which groundwater can seep from the surrounding soil. Modern wells utilize sophisticated engineering to provide a reliable and long-lasting source of fresh water. The basic components are pretty consistent around the world. First, a vertical hole is bored into the subsurface using a drill rig. Steel or plastic pipe, called casing, is placed into the hole to provide support so that loose soil and rock can’t fall into the well. A screen is attached at the depth where water will be withdrawn, creating a path into the casing. Once both the casing and screen are installed, the annular space between them and the borehole must be filled. Where the well is screened, this space is usually filled with gravel or coarse sand called the gravel pack. This material acts as a filter to keep fine particles of the aquifer formation from entering the well through the screen. The space along the unscreened casing is usually filled with clay, which swells to create an impermeable seal so that shallow groundwater (which may be lower quality) can’t travel along the annular space into the screen.

Wells use pumps to deliver water that flows into the casing up to the surface. Shallow wells can use jet pumps that draw water up using suction like a straw. But, this method doesn’t work for deeper wells. When you drink through a straw, you create a vacuum, allowing the pressure of the surrounding atmosphere to push your beverage upward. However, there’s only so much atmosphere available to balance the weight of a fluid in a suction pipe. If you could create a complete vacuum in a straw, the highest you could draw a drink of water is around 10 meters or 33 feet. So, deeper wells can’t use suction to bring water to the surface. Instead, the pump must be installed at the bottom of the well so that it can push water to the top. Some wells use submersible pumps where the motor and pump are lowered to the bottom. Others use vertical turbine pumps where only the impellers sit at the bottom driven by a shaft connected to a motor at the surface.
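
That 10-meter figure falls straight out of the balance between atmospheric pressure and the weight of a column of water, so here’s a quick back-of-the-envelope check using standard sea-level values.

```python
# Back-of-the-envelope check of the suction lift limit mentioned above:
# a perfect vacuum lets atmospheric pressure push water up until the
# weight of the column balances it, i.e. h = P_atm / (rho * g).

atmospheric_pressure = 101_325  # Pa, standard sea-level atmosphere
water_density = 1000            # kg/m^3
gravity = 9.81                  # m/s^2

max_lift_m = atmospheric_pressure / (water_density * gravity)
max_lift_ft = max_lift_m * 3.281

print(f"Theoretical suction lift limit: {max_lift_m:.1f} m ({max_lift_ft:.0f} ft)")
# Prints roughly 10.3 m (34 ft). Real jet pumps manage noticeably less
# because of friction losses and the impossibility of a perfect vacuum.
```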

All that pumping does a funny thing to an aquifer. I can show you what I mean in the model. As water is withdrawn from the aquifer, it lowers the level near the well. The further away from the well you go, the less influence it has on the level in the aquifer. Over time, pumping creates a cone of depression around the well. This is important because one well’s cone of depression can affect the capacity of other wells and even impact nearby springs and rivers if connected to the aquifer. Engineers use equations and even computer models to estimate the changes in groundwater level over time, based on pumping rate, recharge, and local geology.
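
To give you a flavor of what those equations look like, here’s a minimal sketch using the steady-state Thiem equation for a confined aquifer. It’s one of the simpler textbook formulas, not necessarily what any particular engineer would reach for, and real analyses often use transient equations or numerical models instead. Every input below is an illustrative assumption.

```python
import math

# Steady-state Thiem equation for drawdown around a pumping well in a
# confined aquifer: s(r) = Q / (2 * pi * T) * ln(R / r).
# All values are illustrative assumptions.

pumping_rate = 0.01        # Q, m^3/s (about 160 gallons per minute)
transmissivity = 0.005     # T, m^2/s (aquifer thickness times hydraulic conductivity)
radius_of_influence = 300  # R, m -- distance at which drawdown is negligible

def drawdown(r_m: float) -> float:
    """Drawdown (m) at a distance r_m from the well."""
    return pumping_rate / (2 * math.pi * transmissivity) * math.log(radius_of_influence / r_m)

for r in (1, 10, 50, 100, 300):
    print(f"{r:>4} m from the well: drawdown = {drawdown(r):.2f} m")
# Drawdown shrinks with the logarithm of distance -- that's the cone of
# depression taking shape around the well.
```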

One fascinating aspect of deeper aquifers is that they can be confined. My model isn’t quite sophisticated enough to show this well, but I can draw it for you. A common situation is that an aquifer exists at an angle to the ground surface. It can recharge in one location, but becomes confined by a less permeable geologic layer called an aquitard. Water flowing into a confined aquifer can even build up pressure, so that when you tap into the layer with a well, it flows readily to the surface (called an artesian well). It can happen in oil reservoirs as well, which is why you occasionally see oil wells blow out.

A part of the construction of wells that I didn’t mention yet is the top. A well creates a direct path for water to come out of an aquifer, and if not designed, constructed, and maintained properly, it can also be a direct path into the aquifer for contaminants on the surface. In my model, I can simulate this by dropping some dye into the well to represent an unwanted chemical spilled at the surface. Say some rainwater enters too, washing the contaminant through the well into the aquifer. Now, as groundwater naturally moves in the subsurface, it carries a plume of contamination along as well. You can see how this small spill could spread out in an aquifer, contaminating other wells and ruining the resource for everyone. So, wells are designed to minimize the chances of leaks. The uppermost section of the annular space is permanently sealed, usually with cement grout. In addition, the casing is often extended above the surface with a concrete pad extending in all directions to prevent damage or infiltration to the well.

We’ve been talking so much about how to get water out of an aquifer, but there are some times where we want to do the reverse. Injection wells are nothing new; deep belowground can be a convenient and out-of-the-way place to dispose of unwanted fluids including sewage, mining waste, saltwater, and CO2. But until recently, it hasn’t been a place to store a fluid with the intent of taking it back out at a later date. Aquifer Storage and Recovery or ASR is a relatively new technology that can help smooth out variability in water resources where the geology makes it possible. Large-scale storage of water is mostly restricted to surface water reservoirs formed by dams that are expensive and environmentally unfriendly to construct. With enough pressure, water can be injected through a well into an aquifer. You can see on my model that introducing water to the well causes the level in the aquifer to rise over time. Eventually, this water will flow away, but (as I mentioned) groundwater movement is relatively slow. In the right aquifer, you won’t lose too much water before the need to withdraw it comes again.

Taking advantage of the underutilized underground seems obvious, but there are some disadvantages too. You need a Goldilocks formation where water won’t flow away too fast but is also not so tight that it takes super-high pressure for injection. You also need a geologic formation that is chemically compatible with the injected water to avoid unwanted reactions and bad tastes. Of course, you always have costs, and ASR systems can be expensive to operate because the water has to be pumped twice - once on the way in and again on the way out.

Finally, you can have issues with speed. In many places, the surplus water that needs to be stored comes during a flood - massive inflows that arrive over the course of a few hours or days. A dam is a great tool to capture floodwaters in a reservoir for later use. Injection wells, on the other hand, move water into aquifers too slowly for that. They’re more appropriate where surplus water is available for long durations. For example, one of the few operating ASR projects is right here in my hometown of San Antonio. When water demands fall below the permitted withdrawals from our main water source, the Edwards Aquifer, we take the surplus and pump it into a different aquifer. If demands rise above the permitted withdrawals, we can make up the difference from the ASR.

You can add more injection wells to increase the speed of recharge, but above a certain pressure, some funny things start to happen: underground formations break apart and erode in a phenomenon called hydraulic fracturing or just fracking. Breaking apart underground formations of rock and soil has been a boon for the oil and gas industry. But, just like that Texas groundwater in 1904, the regulation of fracking is mired in confusion and controversy, in no small part because it happens below the surface of the earth, hidden from public view. I’ll save those details for a future video.

May 03, 2022 /Wesley Crump

The Engineering Behind Russia's Deadlocked Pipeline: Nord Stream 2

April 19, 2022 by Wesley Crump

[Note that this article is a transcript of the video embedded above.]

Since 2011, the Russian energy corporation Gazprom and a group of large investors have been working on one of the longest and highest capacity offshore gas pipelines in the world. The Nord Stream 2 is a pair of large-diameter natural gas pipelines running along the bottom of the Baltic Sea from the Russian coast near St Petersburg to the northern coast of Germany near Greifswald. Planning, design, and construction of Nord Stream 2 was mired in political controversy not only because of climate-related apprehensions over new fossil fuel infrastructure but also over concerns that the pipeline could be used as a geopolitical weapon by Russia against other European countries. Still, construction began in 2016 and finished 5 years later at the end of 2021.

As the German government worked toward certifying the pipeline to begin operation, Russia launched a military invasion of Ukraine. This unjustified and unconscionable attack on a sovereign nation has received widespread international condemnation followed up with a litany of sanctions on Russia and its most senior leaders. Part of the response included Germany halting the certification of this divisive, ten-billion-dollar megaproject. As of this video’s production, the invasion of Ukraine is ongoing and future international relations between Russia and most of the developed world are unlikely to improve any time soon.

The U.S. put sanctions on the company in charge of the pipeline and its senior officers. The project’s website has been taken offline, and most of the employees have been fired or quit. These circumstances raise plenty of questions: How do you install a pipeline at the bottom of the Baltic Sea? Why is this line so important to geopolitics? And what does the future hold for what may be the world’s most controversial infrastructure project? I’m Grady, and this is Practical Engineering. In today's episode, we’re talking about the Nord Stream 2 pipeline.

Like its predecessor, Nord Stream, the goal of the Nord Stream 2 pipeline is to provide a direct connection between the vast reserves of natural gas in Russia and the energy-hungry markets of Europe. With a length of 1,230 kilometers or 764 miles each, the twin pipes pass through the territorial waters or exclusive economic zones of five countries: the two landfall nations of Russia and Germany as well as Finland, Sweden, and Denmark. Also like its predecessor, the Nord Stream 2 is owned by a subsidiary of Gazprom (a Russian-state-owned enterprise and one of the largest companies in the world) and financed by a coterie of other international oil and gas firms. The project has a long, complex, and controversial history. This video is meant to highlight the engineering details of the project, but in this case, the politics can’t be ignored. I’ll do my best to hit the high points, but check out some of the more comprehensive journalism on the subject before you form any strong opinions.

Even before construction began, Nord Stream 2 had some massive obstacles to overcome. The Baltic is one of the world's most polluted seas, and all the countries around it have a vested interest in making sure those conditions don’t worsen. Pipeline construction can create harmful levels of underwater noise, affect fisheries, disrupt water quality, and even impact the cultural heritage of shipwrecks along the seafloor. Each country along the route imposed strict environmental requirements before construction permits would be issued. The planning phase for the pipeline involved detailed underwater surveys of the seabed to help choose the most feasible route along the way. These surveys also helped identify unexploded ordnance from World Wars 1 and 2. Where possible, the pipeline was routed around these munitions, but in some cases they had to be detonated in place. When this was done, the contractors used bubble curtains around each explosion to mitigate the noise impacts on marine life.

The logistics of producing so much pipe was also a huge challenge. The pipe sections used for the Nord Stream 2 were about 1150 mm or 45 inches in diameter and 12 meters or 40 feet long. They started out as steel plates that were rolled into pipe sections, welded, stretched, beveled, and inspected for quality. An interior epoxy anti-friction coating was applied to minimize the pressure losses in the extremely long line. Then an exterior coating was applied to protect against corrosion in the harsh saltwater environment. And the entire project required manufacture of more than 200,000 of these pipe sections. That’s an average production rate of nearly 100 pipe sections per day spread between three suppliers.
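
If you want to sanity-check that rate, the arithmetic is straightforward. The only inputs below are the figures stated above; the implied schedule is just what falls out of them, not an independent claim about the actual production timeline.

```python
# Quick sanity check of the production-rate figure above, using only the
# numbers stated in the text.

total_sections = 200_000
sections_per_day = 100          # "nearly 100 pipe sections per day"
suppliers = 3

production_days = total_sections / sections_per_day
print(f"Implied production time: ~{production_days:.0f} days "
      f"(~{production_days / 365:.1f} years)")
print(f"Per supplier: ~{sections_per_day / suppliers:.0f} sections per day")
```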

Each pipe section was transported by rail to a port in Finland or Germany to receive another exterior coating, this time of concrete. This concrete weight coating was applied to increase the pipeline’s stability on the seabed. Doubling the weight of each pipe from 12 to 24 metric tons, the concrete would help resist the buoyancy and underwater currents that could move the line over time. It also provided mechanical protection during handling, transport, pipelay, and for long-term exposure along the seabed. After weight coating, the pipes were shipped to storage yards along the coast where they would eventually be transported by ship to large pipelay vessels working in the Baltic Sea.

These pipelay vessels were floating factories employing hundreds of workers each, and the Nord Stream 2 project had up to 5 working simultaneously. On the largest vessels, the basic process for pipelaying was first to weld two pipe sections together to create what’s called a double-joint. These welds got a detailed inspection, and if they passed, the double-joint moved to a central assembly line to be connected to the main pipe string. There, it got more welding and inspection. If everything checked out, a heat-shrink sleeve was placed around each weld, and then polyurethane foam was poured into a mold between the concrete coatings to further protect against corrosion while allowing the pipe string to flex during placement. Once complete, the vessel could advance a little further along the route while lowering the pipeline into its final position. This was a 24/7 operation, and some of these pipelay vessels could complete 3 kilometers in a day.

In many locations, they could just lay pipe directly on the seabed. It was smooth enough to keep the line from deflecting too much and soft enough to avoid damage to the pipes. However, that wasn’t the case along the entire route. In some shallow waters where the pipelines were exposed to hydrodynamic forces like waves and currents, the lines were placed in excavated trenches and backfilled. There were also many areas along the route that were rugged enough to create free spans of unsupported pipeline. Fallpipe vessels were deployed ahead of the pipe installation to fill depressions with rock and gravel to provide a smoother path along the seabed for the line. Finally, at locations where the Nord Stream 2 lines would cross other subsea utilities, like power and telecommunications cables or other pipelines, rock mattresses were installed to protect each utility at the intersection.

Each end of the pipeline came with a tremendous amount of infrastructure as well. At the German landfall, the pipe was tunneled onshore to the receiving station. This facility includes shutdown valves, filters, preheaters, and pressure reduction equipment to allow gas to be delivered into the European natural gas grid. The facilities at both landfalls also included equipment for Pipeline Inspection Gauges (also known as PIGs). These devices are launched from Russia into each pipeline, pushed along by the gas pressure for the entire 1,200-kilometer journey. The PIGs scan for problems like corrosion or mechanical damage and collect data that can be downloaded when they reach the end of the line in Germany.

Installing multiple sections of pipe simultaneously sped up construction of the line, but it created a serious challenge as well. How do you connect segments of pipe that have already been installed along the seabed? That’s the job of maybe the most impressive operation of the entire project: the above-water tie-in, or AWTI. The separate sections of pipeline were carefully installed on the seabed so their ends overlapped. When it came time to tie them together, divers first attached buoyancy tanks to each end to make them easier to lift. Then davit cranes along the side of the tie-in vessel attached to each pipe and lifted their ends above the waterline. These ends were left without a concrete weight coating so they would be lighter and could be cut to the exact length needed. The pipes were cut and beveled, welded, tested, and coated for corrosion protection. Finally, the tie-in vessel could lay the complete pipe back down on the seafloor, forming a small horizontal arc off the main alignment, where divers removed the buoyancy tanks and detached the cranes. The Nord Stream 2 required several above-water tie-ins during construction. It seems simple enough, but each one took about three weeks to complete. The final AWTI was completed in September 2021, marking the end of construction of the Nord Stream 2.

Although Europe is in the midst of a major transition away from fossil fuels to renewable sources, the demand for natural gas is still high and expected to remain that way for the foreseeable future. In addition, Germany is planning to shutter the last 3 of its nuclear plants by the end of 2022, using natural gas as a bridge toward the expansion of wind and solar. With gas demands remaining consistently high, many fear that the Nord Stream and Nord Stream 2 pipelines put Russia in a position to exert political influence over its European neighbors. Nord Stream 2 would also allow more Russian gas to bypass Ukraine, depriving it of the transit fees it gets from gas lines through its borders.

As early as 2016, politicians in various countries around the world were coming out in opposition to the project. The U.S. played a large role in trying to delay or stop Nord Stream 2 altogether with sanctions on the ships involved in construction plus a host of Russian companies while carefully avoiding serious impacts to the contractors of its German ally. U.S. President Biden waived those sanctions in mid-2021 in a bid to improve US-German relations, but the Russian invasion of Ukraine changed everything. The U.S. immediately reimposed the sanctions and Germany froze certification of the project. The Nord Stream 2 company has been mostly silent so far, but there aren’t many good outcomes of spending $10B on design and construction of a pipeline that can’t be used. Most news sources appear to agree that they are completely insolvent and have fired all their employees. In addition, most of the non-Russian companies involved in the project have already written off their investments and walked away.

This simple pipeline highlights the tremendous complexity of infrastructure and geopolitics. It can be extremely difficult for a normal citizen to know what they stand to gain or lose from a project like this. We want cheap energy. We want warm homes during the winter. But we don’t want the global climate to change. And we definitely don’t want an unpredictable and misguided authoritarian leader to hold a major portion of Europe’s gas supplies hostage for political gains. In some ways, Putin’s invasion of Ukraine simplified these complex issues because it gave Germany and the US no choice but to kill the project. There’s a lot of uncertainty right now with how the conflict will end and what the world will look like when the dust settles. But it seems doubtful now that the Nord Stream 2 - this incredible achievement of engineering, logistics, and maritime construction - will ever be anything more than an empty tube of steel and concrete at the bottom of the Baltic Sea (and maybe that’s for the best). Thank you for watching and let me know what you think.


April 19, 2022 /Wesley Crump

What Sewage Treatment and Brewing Have in Common

April 05, 2022 by Wesley Crump

[Note that this article is a transcript of the video embedded above.]

I’m on a mission to show the world how engrossing human management of sewage can be, and in fact, we’ve followed the flow of domestic wastewater through sewers, lift stations, and primary treatment in previous videos on this channel. If you’ve watched those videos or others I’ve made, you know I like to build scale demonstrations of engineering principles. I did some testing for the next step of wastewater treatment to see if I could make it work, and the results were just… bad. Even with the blue dye disguising the disgustingness of this demo, operating a small-scale wastewater treatment plant in my garage is probably the most misguided thing I’ve ever done for a video. So I got to thinking about other ways humans co-opt microorganisms to convert a less desirable liquid into a better one, and there is one obvious equivalent: making alcoholic drinks. So I’ve got a couple of gallons of apple cider, a packet of yeast, and a big glass vessel called a carboy. Even if you don’t imbibe, whether by law or by choice, I promise you’ll enjoy seeing the similarities and differences between cleaning up domestic wastewater and the ancient art form of fermenting beverages. I’m Grady, and this is Practical Engineering. In today’s episode, we’re talking about secondary wastewater treatment… and a little bit about homebrewing too.

You probably don’t think about cellular biology when you consider civil engineers, even though we’re made of cells just like everyone else. We’re associated more with steel, concrete, and earthwork. But, the engineers who design wastewater treatment plants, and the operators who run them, have to know a lot about microbes. Here’s why: The worst part about sewage isn’t the solids. (They can be pretty easily removed in settling basins, as I’ve shown in a previous video). It’s not even the pathogens - dangerous organisms that can make us sick. (Those can be eliminated using disinfection processes like UV light or chlorine). The worst part about sewage is the nutrients it contains. I’m talking about organic material, nitrogen, phosphorus, and other compounds. You can’t just release this stuff into a creek, river, or ocean because the microbes already in the destination water, like bacteria and algae, will respond by increasing their populations beyond what the ecosystem would ever see under natural conditions. As they do, they use up all the oxygen dissolved in the water, ruining the habitat and killing fish and other wildlife. Nutrient pollution is one of the most severe and challenging environmental issues worldwide, so one of the most critical jobs wastewater plants do is clean nutrients out of the water before it can be discharged. But, because they are dissolved into solution at the molecular scale, nutrients are much harder to separate from sewage than other contaminants.

Like domestic wastewater, making a fermented beverage starts with a liquid full of dissolved nutrients that we want to convert into something better. In this case, the nutrients are sugars that we’re trying to convert into alcohol. I should point out that making cider is technically not brewing since there’s no heat used to extract the sugars. But, the fermentation process we’re talking about in this video is the same, no matter whether you’re making beer, wine, or even distilled spirits. It all starts out with some kind of sugary liquid. The way we measure the nutrient concentration in brewing is pretty simple: dissolved sugars increase the density, or specific gravity, of the liquid. This glass tool is called a hydrometer, and it floats upright when suspended in a liquid. Just like a ship sits a little higher in seawater than it does in freshwater, a hydrometer floats to a different height depending on the density of the fluid. The more sugar, the higher the hydrometer rises.

On the other hand, characterizing the strength of sewage is equally important but a little more complicated. For one, not all nutrients change the density of the fluid equally. But more importantly, there are a lot more of them than just sugar, and they can all exist at different strengths. So rather than try to untangle all that complexity, we usually measure what matters most: how much dissolved oxygen organisms would steal from the water to break down the nutrients within a sewage sample. The technical term for this is Biochemical Oxygen Demand, or BOD. In general terms, treatment plant operators measure the amount of oxygen dissolved in a sewage sample before setting it aside for a 5-day period. During that time, critters in the sample will eat up some of the nutrients, robbing the dissolved oxygen as they do. The difference in oxygen before and after the five days is the BOD. Once you know your initial concentration of nutrients, whether sugars or… other stuff… you can work on a way to get them out of there.
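
In other words, the arithmetic behind a BOD result is just a subtraction, although in practice a strong sample gets diluted first and the result is scaled back up. Here’s a minimal sketch with made-up numbers.

```python
# A minimal sketch of the 5-day BOD test described above, with made-up numbers.
# Raw sewage is strong enough that the microbes would use up all the oxygen
# long before day 5, so the sample is typically diluted first and the result
# is scaled back up by the dilution factor.

dilution_factor = 50    # 1 part sewage to 49 parts clean dilution water
do_initial = 8.5        # mg/L dissolved oxygen in the diluted sample at day 0
do_final = 2.1          # mg/L after 5 days incubating in the dark at 20 C

depletion = do_initial - do_final       # oxygen the microbes consumed, mg/L
bod5 = depletion * dilution_factor      # scaled back to the undiluted sewage

print(f"Oxygen depletion in the bottle: {depletion:.1f} mg/L")
print(f"BOD5 of the sewage: {bod5:.0f} mg/L")  # ~320 mg/L, in the range of typical raw sewage
```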

In both sewage and brewage, we expropriate tiny biological buddies for this purpose. In other words, we use them to our advantage. Wastewater treatment plants rely primarily on bacteria with some protozoa as well. There are a myriad of secondary treatment processes used around the world, but one is more common than all the rest, and it has the best name too: activated sludge. After the primary treatment of removing solids from the flow, wastewater passes into large basins where enormous colonies of microorganisms are allowed to thrive. At the bottom of the basins are diffusers that bubble prodigious quantities of air up through the sewage, dissolving as much oxygen as possible into the liquid and maximizing the microorganisms’ capacity to consume organic material. This combination of wastewater and biological mass is known as mixed liquor, but that’s just a coincidence in this case. Either way, you definitely don’t want to be drinking too much of it.

Fermentation of an alcoholic beverage - the process where sugars are converted to ethanol - works a little bit differently. First, the microorganisms doing the work in fermentation are yeast. These are single-cell organisms from the fungus kingdom, in some ways similar but in many ways quite unlike the bacteria and protozoa in a wastewater treatment plant. In fact, brewers work pretty hard to keep equipment clean and sanitized so that bacteria can’t colonize the brew. The foam you see in the carboy before I filled it with apple juice is a no-rinse sanitizer meant to kill unwanted microorganisms before pitching the wanted ones in. The yeast themselves will even take advantage of the antimicrobial effects of the very ethanol they produce.

Another difference between the processes is air. Except at the very beginning, when the yeast are first expanding their population, fermentation is an anaerobic process. That means it happens in the absence of oxygen. A wastewater treatment plant adds air to speed up the process. However, yeast exposed to oxygen stop producing alcohol, so the vessel is usually sealed to minimize the chances of that. The bubbles you see are carbon dioxide that the yeast create in addition to the ethanol. An airlock device lets the carbon dioxide vent so it can’t build up pressure without letting airborne contaminants inside. As the sugars are converted and CO2 gas leaves the vessel, the density of the liquid drops, and that change can be measured using a hydrometer. My cider started at a specific gravity of 1.06 and fermented down to 1.00, meaning it has an alcohol content of around 8% by volume. However, just like the outflow from an activated sludge basin, it’s not quite ready to drink.
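
That 8 percent figure comes from a standard homebrewer’s rule of thumb that converts the drop in specific gravity into alcohol by volume. It’s an approximation, not an exact law, but it’s plenty accurate for a hydrometer reading.

```python
# Rough check of the alcohol content mentioned above, using the common
# homebrewer's approximation: ABV = (OG - FG) * 131.25.

original_gravity = 1.060   # measured before pitching the yeast
final_gravity = 1.000      # measured after fermentation finished

abv = (original_gravity - final_gravity) * 131.25
print(f"Estimated alcohol by volume: {abv:.1f}%")   # about 7.9%, i.e. around 8%
```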

Once the microorganisms have done their job and the liquid is nearly free of nutrients or sugars, you need to get them out. In both brewing and wastewater treatment, that usually happens through settling. I have a separate video that goes into more detail about this process, but the basics are pretty simple. Most solid particles, including microorganisms, are denser than water and thus will sink. But, they sink slowly, so you have to keep the liquid still for this type of separation to work well. Wastewater treatment plants use settling tanks called clarifiers that send the mixed liquor slowly from the center outward so that it drops the sludge of microorganisms to the bottom as it does, leaving clear effluent to pass over a weir around the perimeter to leave the tank. Similarly, you can see a nice layer of mostly dead yeast on the bottom of my fermentation vessel, typically called the lees or trub. Homebrewers use a process called racking, which is just siphoning the liquid from the fermentation vessel while leaving the solids behind.
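
To get a feel for just how slowly these particles sink, you can lean on Stokes’ law for small particles settling in still water. Everything in the sketch below is an illustrative assumption, since real biological floc varies a lot in size and density.

```python
# Why do microbes and fine solids settle so slowly? For small particles in
# still water, Stokes' law gives the terminal settling velocity:
#   v = (rho_p - rho_w) * g * d^2 / (18 * mu)
# Every value below is an illustrative assumption, not a measurement.

gravity = 9.81              # m/s^2
water_density = 1000        # kg/m^3
water_viscosity = 1.0e-3    # Pa*s, water at about 20 C
particle_density = 1050     # kg/m^3 -- biological floc is barely denser than water
particle_diameter = 100e-6  # m (100 micrometers)

settling_velocity = ((particle_density - water_density) * gravity
                     * particle_diameter**2) / (18 * water_viscosity)

print(f"Settling velocity: {settling_velocity*1000:.2f} mm/s "
      f"(~{settling_velocity * 3600:.1f} m per hour)")
# A fraction of a millimeter per second -- which is why clarifiers have to
# keep the flow slow and calm for settling to work.
```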

In both cases, these microorganisms are not all dead. That’s where the “activated” in activated sludge comes from. A rotating arm in the clarifier pushes the sludge to a center hopper. From there, it is collected and returned to the aeration chamber to seed the next colony that will treat new wastewater entering the tanks. Of course, not all of that sludge is needed, so the rest must be discarded, creating a whole separate waste disposal challenge (but that’s a topic for another video). Similarly, the yeast at the bottom of my fermenter are not all dead and can be reused in another batch. Commercial breweries and homebrewers alike often use yeast over and over again. However, they mutate pretty quickly because of their short lifetimes, so the flavor can drift over time.

At this point, both the wastewater and my hard cider are quote-unquote nutrient-free. They are generally ready to be safely released into a nearby watercourse and my tummy, respectively. However, there are some final tasks that may be wanted or needed in both cases. As you can see, my hard cider doesn’t look quite like what you would buy in a can or bottle at the grocery store. I’m not going to carbonate it in this video, but that is an extra step that many cidermakers and most beer brewers take. I will add an enzyme that helps clear up the haze from the unfiltered apple juice. It doesn’t make it taste any different, but it does look a lot nicer.

Like the finishing steps of homebrewing, many wastewater plants use tertiary treatment processes to target other pollutants the bugs couldn’t get. Depending on where the effluent is going, standards might require more purification than primary and secondary treatment can achieve on their own. In fact, wastewater treatment plants have been experiencing a relatively dramatic shift over the past few decades as they treat sewage less like a waste product and more like an asset. After all, raw sewage is 99.9 percent water, and water is a valuable resource to cities. In places with water scarcity, it can be cost-effective to treat municipal wastewater beyond what would typically be required so that it can be reused instead of discarded.


A few places across the world have potable reuse (also known as toilet-to-tap) where sewage is cleaned to drinking water quality standards and reintroduced to the distribution system. Wichita Falls, Texas and the International Space Station are notable examples. However, most recycled water isn’t meant for human consumption. Plenty of uses don’t require potable water, including industrial processes and the irrigation of golf courses, athletic fields, and parks. Many wastewater treatment plants are now considered water reclamation plants because, instead of discharging effluent to a stream or river, they pump it to customers that can use it, hopefully reducing demands on the potable water supply as a result. In many countries, purple pipes are used to distinguish non-potable water distribution systems, helping to prevent cross-connections. And sometimes you’ll see signs like this one to prevent people from getting sick. [Drink] On the other hand, Practical Engineering’s “Effervescent Effluent,” when enjoyed responsibly, is perfectly safe to drink. Cheers!

April 05, 2022 /Wesley Crump