Practical Engineering

The Most Mindblowing Infrastructure in My City

March 25, 2022 by Wesley Crump

[Note that this article is a transcript of the video embedded above.]

I’m standing in front of a pair of water towers near my house in San Antonio. If you’ve seen my video on water towers, or you just know about how they work, they might look a little odd to you. It’s not only unusual that there are two tanks right next to each other, but that they’re completely different heights. This difference is a little hint that there’s something more interesting below the surface here. Some engineering achievements are visibly remarkable. It’s easy to celebrate the massive projects across the world: the Hoover Dams and the Golden Gate Bridges. But it’s just as easy to overlook the less notable infrastructure all around us that makes modern life possible. If you’ve seen any of my videos, you know that I think structures hidden in plain sight are just as worthy of celebration. In fact, I think infrastructure is so remarkable, I wrote a book about it that you can preorder starting today. I can’t tell you how excited I am to announce this project, but first let me tell you a little bit about these water towers and a few other things in San Antonio too. I’m Grady and this is Practical Engineering. In today’s episode, I misguidedly chose  the coldest day of the year to film my first on location video here in my home city to talk about a few of my favorite parts of the constructed environment.

Luckily the drone footage was taken on a sunnier day. You may have guessed already that these two towers aren’t connected to the same water distribution system. If they were, water would just drain out of the upper tank and overflow the lower one. San Antonio actually has a second system that takes recycled water from sewage treatment plants and delivers it to golf courses, parks, and commercial and industrial customers throughout the city. Treated wastewater isn’t clean enough to drink, but it’s more than clean enough to water the grass or use in a wide variety of industrial processes. So, instead of discarding it, we treat it as an asset, delivering it to customers that can use it. That reduces the demand on the potable water supply (which is scarce in this part of Texas). Some people call this the purple pipe system, because recycled water pipes have a nice lavender shade to differentiate them and prevent cross connections. San Antonio actually has one of the largest recycled water delivery systems in the country, and this water tower is one of the many tanks they use to buffer the supply and demand of recycled water around town.

Not too far from the two towers is this unofficial historic landmark of San Antonio. It may just look like a simple concrete wall, but Olmos Dam is one of the most important flood control structures in the city. This structure was originally built in 1927 after a massive flood demolished much of downtown. A roadway along the top of the dam had electric lights and was a popular driving destination with nice views. The roadway has since been replaced by a more hydraulically-efficient curved crest. I have a special connection to this dam because I worked as an intern on a rehabilitation project at the engineering firm hired to design the repairs. The project involved the installation of about 70 post-tensioned anchors to stabilize the dam against extreme loads from flooding. Each anchor was drilled through the structure and grouted into the rock below. Then a massive hydraulic jack was used to tension the strands and lock each anchor off at the top to stitch the dam to its foundation like gigantic steel rubber bands. The contractor even had to use a special drill rig to fit under this highway bridge. San Antonio is in the heart of flash flood alley in Texas, named because of the steep, impermeable terrain and intense storms we get. Olmos Dam helped protect downtown from many serious floods in its hundred year lifetime. But, it’s not the only interesting flood control structure in town.

I’m here at the Flood Control Tunnel Inlet Park, one of the best-named parks in the City if you ask me. And below my feet is one of the most interesting infrastructure projects in all of San Antonio. These gates might not look too interesting at first glance, but during a flood, water in the San Antonio River flows into ports of this inlet structure instead of continuing downstream toward downtown. From this inlet, the floodwaters pass down a vertical shaft more than a hundred feet (or 35 meters) below the ground. The tunnel at the bottom of the shaft runs for about 3 miles (or 5 kilometers) below downtown to the south, allowing floodwaters to bypass the most vulnerable developed areas and saving hundreds of millions of dollars in property damages from flooding.

When in use, the floodwaters from the tunnel flow back up a vertical shaft and come out here at the Flood Control Tunnel Outlet on the Mission Reach of the San Antonio River. Under normal conditions, there are pumps that can recirculate river water through the tunnel, keeping things from getting stagnant and providing a fresh supply of water to flow through the downtown riverwalk. This part of the San Antonio River south of downtown is one of my favorite places because it’s a perfect example of how urban and natural areas can coexist.

When you consider infrastructure and construction, you might think about concrete, steel, and hard surfaces. But this part of the San Antonio River was included in one of the largest ecosystem restoration projects in the US. Before the project, this was your typical ugly, channelized, urban river, but now it’s been converted back to a much more natural state with native vegetation and its original meandering path. But, the project didn’t only improve the habitat along the river. It also included recreational improvements to make this stretch a destination for residents and tourists. For example, these grade control structures help keep the river from eroding downward, but they also feature canoe chutes so you can paddle the river without interruptions. There are several new parks along the river, including Confluence Park, home to this beautiful pavilion made of concrete petals. Most importantly, there is a continuous dedicated hike-and-bike trail along the entire stretch.

Everyone knows about the Alamo because of the famous battle, but there are actually 5 Spanish missions established in the early 1700s along the San Antonio River. The sites together are now a historic National Park and a UNESCO World Heritage site. You can tour the missions to learn about the history of Spanish colonialism and interwoven cultures of Spain and the Indigenous people of Texas and Mexico. The Mission Reach trail provides a connection to all the missions and a bunch of other interesting destinations along the river, including parks, public art, and my favorite spots: the historic and modern water control infrastructure projects.

So far all the structures I’ve shown you have been water-related. That’s my professional background, but we could do similar deep dives just here in San Antonio about the power grid, highways, bridges, telecommunications, and even construction projects. And, preferably on warmer days, we could do similar field guides in every urban area around the world. In fact, that’s the premise of my new book, Engineering in Plain Sight: An Illustrated Field Guide to the Constructed Environment. I’ve been working so hard on this project for the past two years, and I’m thrilled to finally tell you about it.

Just like there are written guides to birds, rocks, and plants, Engineering in Plain Sight is a field guide to infrastructure that provides colorful illustrations and accessible explanations of nearly every part of the constructed world around us. It’s essentially 50 new Practical Engineering episodes crammed between two covers. Imagine if you could look at the world through the eyes of the engineers who designed the infrastructure you might not even be noticing in your everyday life. I wrote this book with the goal of transforming your perspective of the built environment, and I think once you read it, you’ll never look at your city the same again. You can explore it like an encyclopedia - picking pages in no order. Or treat the sights of your city’s infrastructure like a treasure hunt and try to collect them all.

The book comes out in August, but I would love it if you preordered your copy right now because, in the world of books, presales are the best way to get the attention of bookstores and libraries. If you preorder directly from the publisher, you’ll get a discount off the regular price, and you can preorder signed copies directly from my website that come with an exclusive enamel pin as a gift. Preordering is the only way to get your hands on this custom pin that was designed by the book’s illustrator.

Use the link in the description to find all the preorder locations. And one more thing: between now and when the book publishes, I’m going to be posting some short explainers about interesting infrastructure on all my social media channels, and I want to encourage you to do the same. I’ll be sending 5 signed copies of my new book to my favorite social media posts about infrastructure that use the hashtag #EngineeringInPlainSight. Check out the link below for more information. And from the bottom of my heart, thank you for watching and let me know what you think!

March 25, 2022 /Wesley Crump

How to Clean Sewage with Gravity

March 01, 2022 by Wesley Crump

[Note that this article is a transcript of the video embedded above.]

This is the Stickney Water Reclamation Plant in Chicago, the largest wastewater treatment plant in the world. It serves more than two million people in the heart of the Windy City, converting all their showers, flushes, and dirty dishwater, plus the waste from countless commercial and industrial processes into water safe enough to discharge into the adjacent canal, which flows eventually into the Mississippi River. It all adds up to around 700 million gallons or two-and-a-half billion liters of sewage each day, and the plant can handle nearly double that volume on peak days. That’s a lot of Olympic-sized swimming pools, and in fact, the aeration tanks used to biologically treat all that sewage almost look like something you might do a lap or two in (even though there are quite a few reasons you shouldn’t). However, flanking those big rectangular basins are rows of circular ponds and smaller rectangular basins that have a simple but crucial responsibility in the process of treating wastewater. We often use chemicals, filters, and even gigantic colonies of bacteria to clean sewage on such a massive scale, but the first line of defense in the fight against dirty water is usually just gravity. I’m Grady, and this is Practical Engineering. In today’s episode, we’re talking about settlement for water and wastewater treatment.

This video is part of a series on municipal wastewater handling and treatment. Rather than put out a single video overview of treatment plants (which many other channels have already masterfully done), we’re taking a deep dive into a few of the most interesting parts of converting sewage into clean water. Check out the wastewater playlist linked in the card above if you want to learn more.

The job of cleaning water contaminated by grit, grime, and other pollutants is really a job of separation. Water gets along with nearly every substance on earth. That’s why it’s so useful for cleaning and a major part of why it does such a good job carrying our wastes away from homes and businesses in sewers. But once it reaches a wastewater treatment plant, we need to find a way to separate the water from the wastes it carries so it can be reused or discharged back into the environment. Some contaminants chemically dissolve into the water and are difficult to remove at municipal scales. Others are merely suspended in the swift and turbulent flow and will readily settle out if given a moment of tranquility. That’s the trick that wastewater treatment engineers use as the first step in cleaning wastewater.

Once sewage entering a wastewater treatment plant passes through a screen to filter out sticks and rags, the first step, or primary treatment, is the simple process of slowing the wastewater down to allow time for suspended solids to settle out. How do you create such placid conditions from a constant stream of wastewater? You can’t tell people to stop flushing or showering to slow down the flow. Velocity and volumetric flow are related by a single parameter: the cross-sectional area. If you increase this area without changing the flow, the velocity goes down as a result. Basins used for sedimentation are essentially just enormous expansion fittings on the end of the pipe, dramatically increasing the area of flow so the velocity falls to nearly zero. But just because the sewage stream is now still and serene doesn’t mean impurities and contaminants instantly fall to the bottom. You’ve got to give them time.
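
That continuity relationship (flow rate equals velocity times cross-sectional area) is easy to sketch in a few lines of code. Here’s a rough example; the pipe and basin dimensions are made up purely for illustration.

```python
# Continuity: Q = V * A. For a fixed flow rate, velocity scales inversely
# with the cross-sectional flow area. Dimensions below are illustrative only.

Q = 0.5  # flow rate entering the plant, m^3/s

pipe_area = 0.5    # m^2, roughly a 0.8 m diameter pipe flowing full
basin_area = 50.0  # m^2, flow cross-section of a primary settling basin

v_pipe = Q / pipe_area    # about 1 m/s: fast and turbulent
v_basin = Q / basin_area  # about 0.01 m/s: nearly still

print(f"Velocity in the pipe:  {v_pipe:.2f} m/s")
print(f"Velocity in the basin: {v_basin:.3f} m/s")
```

A hundredfold increase in flow area buys a hundredfold drop in velocity, which is exactly the still water the solids need to settle.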

How much time is a pretty important question if you’re an engineer because it affects the overall size of the basin, and thus it affects the cost. Particles falling out of suspension quickly reach a terminal velocity, just like a skydiver falling from a plane. That maximum speed is largely a function of each particle’s size, and I have a demonstration here in my garage to show you how that works. I think it’s intuitive that larger particles fall through a liquid faster than smaller ones. Compare me dropping a pebble to a handful of sand. The pebble reaches the bottom in an instant, while the smaller particles of sand settle out more slowly. Wastewater contains a distribution of particles from very small to quite large, and ideally we want to get rid of them all. 

As an example, I have two colors of sand here. I sifted the white sand through a fine mesh, discarding the smaller particles and keeping the large ones. I sifted the black sand through the same mesh, this time keeping the fine particles and discarding the ones retained by the sieve. After that, I combined both sands to create a gray mixture, and we’ll see what happens when we put it into a column of water. This length of pipe is full of clean water, and I’m turning it over so the mixture is at the top. Watch what happens as the sand settles to the bottom of the pipe. You can see that, on the whole, the white sand reaches the bottom faster, while the black sand takes longer to settle. The two fractions that were previously blended together separate themselves again just from falling through a column of water.

Of course, physicists have used sophisticated fluid dynamics with partial differential equations to work out the ideal settling velocity of any size of spherical particle in a perfectly still column of water based on streamlines, viscosity, gravity, and drag forces. But, we civil engineers usually just drop them in the water and time how quickly they fall. After all, there’s hardly anything ideal about a wastewater treatment plant. As water moves through a sedimentation basin and individual particles fall downward out of suspension, they take paths like the ones shown here. Based on this diagram, you would assume that the depth of the basin would be a key factor in whether a particle reaches the bottom or passes through to the other side. Let me show you why settling basins defy your intuitions with just a tiny bit of algebra.
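
For very small spheres settling slowly enough that the flow around them stays laminar, that ideal-case result is the classic Stokes’ law formula. Here’s a rough sketch of it; the particle properties are illustrative values, not measurements from any particular plant.

```python
# Stokes' law: terminal settling velocity of a small sphere in still water,
# valid only for slow, laminar settling (low Reynolds number).
#   v = g * d^2 * (rho_particle - rho_fluid) / (18 * mu)
# Particle properties below are illustrative.

g = 9.81               # gravity, m/s^2
mu = 1.0e-3            # dynamic viscosity of water near 20 C, Pa*s
rho_fluid = 1000.0     # density of water, kg/m^3
rho_particle = 2650.0  # quartz-like grit, kg/m^3

for d_um in (10, 50, 100):   # particle diameters in micrometers
    d = d_um * 1e-6          # convert to meters
    v = g * d**2 * (rho_particle - rho_fluid) / (18 * mu)
    print(f"{d_um:>3} um particle settles at roughly {v * 1000:.2f} mm/s")
```

A tenfold increase in diameter means roughly a hundredfold increase in settling speed, which is why the coarse white sand in the demonstration wins the race so decisively.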

You’ve got a particle coming in on the left side of the basin. It has a vertical velocity - that’s how fast it settles - and a horizontal velocity - that’s how fast the water’s moving through the basin. If the time it takes to fall the distance D to the bottom is shorter than the time it takes to travel the length L of the basin, the particle will be removed from the flow. Otherwise it will stay in suspension past the settling basin. That’s what we don’t want. As I mentioned, the speed of the water is the flow rate divided by the cross sectional flow area - that’s the basin’s width times its depth. Since both the time it takes for a particle to travel the length of the basin and the time it takes to settle to its bottom are a function of the basin’s depth, that term cancels out, and you’re left with only the basin's length times width (in other words, its surface area). That’s how we measure the efficiency of a sedimentation basin. Divide the flow rate coming in by the surface area, and you get a speed that we call the overflow or surface loading rate. All the particles that settle faster than the overflow rate will be retained by the sedimentation basin, regardless of its depth.
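
Here’s that little bit of algebra as a numerical check, with made-up basin dimensions: whether a particle is captured depends only on the flow rate divided by the basin’s surface area, not on how deep the basin is.

```python
# A particle is captured if it reaches the floor before the flow carries it out
# the far end: (depth / settling_velocity) <= (length / horizontal_velocity).
# With horizontal_velocity = Q / (width * depth), the depth cancels, and the
# criterion reduces to settling_velocity >= Q / (length * width), the overflow
# rate. Basin dimensions below are invented for illustration.

def captured(settling_velocity, Q, length, width, depth):
    horizontal_velocity = Q / (width * depth)
    time_to_exit = length / horizontal_velocity
    time_to_floor = depth / settling_velocity
    return time_to_floor <= time_to_exit

Q = 0.5                      # m^3/s
length, width = 30.0, 6.0    # m
print(f"Overflow rate: {Q / (length * width):.4f} m/s")

for depth in (2.0, 4.0):
    for v_settle in (0.002, 0.003):   # one slower, one faster than the overflow rate
        result = captured(v_settle, Q, length, width, depth)
        print(f"depth {depth} m, settling velocity {v_settle} m/s -> captured? {result}")
```

Doubling the depth changes nothing: the particle settling at 0.003 m/s is caught either way, and the one settling at 0.002 m/s escapes either way, because only the overflow rate (about 0.0028 m/s here) matters.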

Settlement is a cheap and efficient way to remove a large percentage of contaminants from wastewater, but it can’t remove them all. There are a lot more steps that follow in a typical wastewater treatment plant, but in addition to being the first step of the process, settlement is usually the last one as well. Those circular ponds at the Stickney plant in Chicago are clarifiers used to settle and collect the colonies of bacteria used in the secondary treatment process. Clarifiers are just settlement basins with mechanisms to automatically collect the solids as they fall to the bottom. The water from the secondary treatment process, called mixed liquor, flows up through the center of the clarifier and slowly makes its way to the outer perimeter, dropping particles that form a layer of sludge at the bottom. The clarified water passes over a weir so that only a thin layer farthest from the sludge can exit the basin. A scraper pushes the sludge down the sloped bottom of the clarifier into a hopper where it can be collected for disposal.

Settlement isn’t only used for wastewater treatment. Many cities use rivers and lakes as sources of fresh drinking water, and these surface sources are more vulnerable to contamination than groundwater. So, they go through a water purification plant before being distributed to customers. Raw surface water contains suspended particles of various materials that give water a murky appearance (called turbidity) and can harbor dangerous microorganisms. The first step in most drinking water treatment plants is to remove these suspended particles from the water. But unlike the larger solids in wastewater, suspended particles creating turbidity in surface water don’t readily settle out. Because of this, most treatment plants use chemistry to speed up the process, and I have a little demo of that set up here in the studio.

I have two bottles full of water that I’ve vigorously mixed with dirt from my backyard. One will serve as the control, and the other as a demonstration. The reason these tiny soil particles remain suspended without settling is that they carry an electrical charge. Therefore, each particle repels its neighbors, fighting the force of gravity, and preventing them from getting too close to one another. Chemical coagulants neutralize the electric charges so fine particles no longer repel one another. Additional chemicals called flocculants bond the particles together into clumps called flocs. As the flocs of suspended particles grow, they eventually become heavy enough to settle out, leaving clarified water at the top of the bottle. Treatment plants usually do this in two steps, but the pool cleaner I’m using in the demo does both at once. It’s a pretty dramatic difference if you ask me. In a clarifier, this sludge at the bottom would be pumped to a digester or some other solids handling process, and the clear water would move on to filtration and disinfection before being pumped into the distribution system of the city.


Our ability to clean both drinking water and wastewater at the scale of an entire city is one of the most important developments in public health. Sedimentation is used not only in water treatment plants but also ahead of pumping stations to protect the pumps and pipes against damage, in canals to keep them from silting up, in fish hatcheries, mining, farming, and a whole host of other processes that create or rely on dirty water. The science of settlement and sedimentation is something that impacts our lives in a significant way, and hopefully learning a little bit about it helps you recognize the brilliant engineering keeping our water safe.

March 01, 2022 /Wesley Crump

What Really Happened During the 2003 Blackout?

February 15, 2022 by Wesley Crump

[Note that this article is a transcript of the video embedded above.]

On August 14, 2003, a cascading failure of the power grid plunged more than 50 million people into darkness in the northeast US and Canada. It was the most significant power outage ever in North America, with an economic impact north of ten billion dollars. Calamities like this don’t happen in a bubble, and there were many human factors, political aspects, and organizational issues that contributed to the blackout. But, this is an engineering channel, and a bilateral task force of energy experts from the US and Canada produced this in-depth 240-page report on all of the technical causes of the event that I’ll try to summarize here. Even though this is kind of an older story, and many of the tough lessons have already been learned, it’s still a nice case study to explore a few of the more complicated and nuanced aspects of operating the electric grid, essentially one of the world’s largest machines. I’m Grady, and this is Practical Engineering. Today, we’re talking about the Northeast Blackout of 2003.

Nearly every aspect of modern society depends on a reliable supply of electricity, and maintaining this reliability is an enormous technical challenge. I have a whole series of videos on the basics of the power grid if you want to keep learning after this, but I’ll summarize a few things here. And just a note before we get too much further, when I say “the grid” in this video, I’m really talking about the Eastern Interconnection that serves the eastern two-thirds of the continental US plus most of eastern Canada. 

There are two big considerations to keep in mind concerning the management of the power grid. One: supply and demand must be kept in balance in real-time. Storage of bulk electricity is nearly non-existent, so generation has to be ramped up or down to follow the changes in electricity demands. Two: In general, you can’t control the flow of electric current on the grid. It flows freely along all available paths, depending on relatively simple physical laws. When a power provider agrees to send electricity to a power buyer, it simply increases the amount of generation while the buyer decreases their own production or increases their usage. This changes the flow of power along all the transmission lines that connect the two. Each change in generation and demand has effects on the entire system, some of which can be unanticipated. 

Finally, we should summarize how the grid is managed. Each individual grid is an interconnected network of power generators, transmission operators, retail energy providers, and consumers. All these separate entities need guidance and control to keep things running smoothly. Things have changed somewhat since 2003, but at the time, the North American Electric Reliability Council (or NERC) oversaw ten regional reliability councils who operated the grid to keep generation and demands in balance, monitored flows over transmission lines to keep them from overloading, prepared for emergencies, and made long-term plans to ensure that bulk power infrastructure would keep up with growth and changes across North America. In addition to the regional councils, there were smaller reliability coordinators who performed the day-to-day grid management and oversaw each control area within their boundaries.

August 14th was a warm summer day that started out fairly ordinarily in the northeastern US. However, even before any major outages began, conditions on the electric grid, especially in northern Ohio and eastern Michigan, were slowly degrading. Temperatures weren’t unusual, but they were high, leading to an increase in electrical demands from air conditioning. In addition, several generators in the area weren’t available due to forced outages. Again, not unusual. The Midwest Independent System Operator (or MISO), the area’s reliability coordinator, took all this into account in their forecasts and determined that the system was in the green and could be operated safely. But, three relatively innocuous events set the stage for what would follow that afternoon.

The first was a series of transmission line outages outside of MISO’s area. Reliability coordinators receive lots of real-time data about the voltages, frequencies, and phase angles at key locations on the grid. There’s a lot that raw data can tell you, but there’s also a lot of things it can’t. Measurements have errors, uncertainties, and aren’t always perfectly synchronized with each other. So, grid managers often use a tool called a state estimator to process all the real-time measurements from instruments across the grid and convert them into the likely state of the electrical network at a single point in time, with all the voltages, current flows, and phase angles at each connection point. That state estimation is then used to feed displays and make important decisions about the grid.
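
Here’s a toy version of that idea (not MISO’s actual software): a weighted least-squares fit that turns redundant, noisy flow measurements on a tiny simplified “DC” network model into one consistent set of bus angles. The three-bus network and all of its numbers are invented for illustration.

```python
import numpy as np

# Toy "DC" state estimation on an invented 3-bus network.
# State x = phase angles at buses 2 and 3 (bus 1 is the reference, angle 0).
# Flow from bus i to bus j is (theta_i - theta_j) / x_line.
# Lines: 1-2 (x = 0.10), 1-3 (x = 0.20), 2-3 (x = 0.25), per-unit reactances.

true_state = np.array([-0.05, -0.08])   # true angles at buses 2 and 3, radians

# Measurement model z = H @ x + noise for flows P12, P13, P23
# and the net injection at bus 2 (P21 + P23).
H = np.array([
    [-10.0,  0.0],   # P12 = (0 - th2) / 0.10
    [  0.0, -5.0],   # P13 = (0 - th3) / 0.20
    [  4.0, -4.0],   # P23 = (th2 - th3) / 0.25
    [ 14.0, -4.0],   # injection at bus 2 = P21 + P23
])
rng = np.random.default_rng(0)
z = H @ true_state + rng.normal(scale=0.01, size=4)   # noisy telemetry

# Weighted least squares: x_hat = (H' W H)^-1 H' W z
W = np.diag([1.0, 1.0, 1.0, 0.5])   # trust the injection measurement a bit less
x_hat = np.linalg.solve(H.T @ W @ H, H.T @ W @ z)

print("true angles:     ", true_state)
print("estimated angles:", np.round(x_hat, 4))
```

The estimator only converges on a sensible answer if the model matches reality; feed it measurements from a network where a line is actually out of service but still marked in service, and the fit degrades or fails, which is essentially what tripped up MISO that morning.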

But, on August 14th, MISO’s state estimator was having some problems. More specifically, it couldn’t converge on a solution. The state estimator was saying, “Sorry. All the data that you’re feeding me just isn't making sense. I can’t find a state that matches all the inputs.” And the reason it was saying this is that twice that day, a transmission line outside MISO’s area had tripped offline, and the state estimator didn’t have an automatic link to that information. Instead it had to be entered manually, and it took a bunch of phone calls and troubleshooting to realize this in both cases. So, starting around noon, MISO’s state estimator was effectively offline.

Here’s why that matters: The state estimator feeds into another tool called a Real-Time Contingency Analysis or RTCA that takes the estimated state and does a variety of “what ifs.” What would happen if this generator tripped? What would happen if this transmission line went offline? What would happen if the load increased over here? Contingency analysis is critical because you have to stay ahead of the game when operating the grid. NERC guidelines require that each control area manage its network to avoid cascading outages. That means you have to be okay, even during the most severe single contingency, for example, the loss of a single transmission line or generator unit. Things on the grid are always changing, and you don’t always know what the most severe contingency would be. So, the main way to ensure that you’re operating within the guidelines at any point in time is to run simulations of those contingencies to make sure the grid would survive. And MISO’s RTCA tool, which was usually run after every major change in grid conditions (sometimes several times per day), was offline on August 14th up until around 2 minutes before the start of the cascade. That means they couldn’t see their vulnerability to outages, and they couldn’t issue warnings to their control area operators, including FirstEnergy, the operator of a control area in northern Ohio including Toledo, Akron, and Cleveland.
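
The “what if” loop itself can be sketched in a few lines. This is a toy N-1 screen on a simplified “DC” power-flow model, not any utility’s real tool; the three-bus network, its loads, and the line ratings are all invented for illustration.

```python
import numpy as np

# Toy N-1 contingency screen: for the base case and for each single-line
# outage, re-solve a simplified "DC" power flow and flag overloaded lines.
# Lines are (from_bus, to_bus, reactance, rating); all numbers are invented.
lines = [(1, 2, 0.10, 1.0), (1, 3, 0.20, 1.0), (2, 3, 0.25, 0.6)]
injections = {2: -0.9, 3: -0.5}   # net load at buses 2 and 3; bus 1 is the slack

def dc_flows(active_lines):
    """Solve the DC power flow and return the flow on each active line."""
    buses = [2, 3]                        # unknown angles; bus 1 is fixed at 0
    idx = {b: i for i, b in enumerate(buses)}
    B = np.zeros((2, 2))
    for f, t, x, _ in active_lines:       # build the susceptance matrix
        for a, b in ((f, t), (t, f)):
            if a in idx:
                B[idx[a], idx[a]] += 1.0 / x
                if b in idx:
                    B[idx[a], idx[b]] -= 1.0 / x
    theta = {1: 0.0}
    angles = np.linalg.solve(B, np.array([injections[b] for b in buses]))
    theta.update(zip(buses, angles))
    return {(f, t): (theta[f] - theta[t]) / x for f, t, x, _ in active_lines}

for outage in [None] + lines:             # base case plus each single outage
    active = [ln for ln in lines if ln is not outage]
    flows = dc_flows(active)
    overloads = [(f, t) for f, t, _, rating in active
                 if abs(flows[(f, t)]) > rating]
    label = "base case" if outage is None else f"outage {outage[0]}-{outage[1]}"
    print(f"{label:12s} overloaded lines: {overloads or 'none'}")
```

In this toy system the base case looks fine, but losing the strongest line overloads both of the others, which is exactly the kind of hidden vulnerability a contingency screen is there to catch before it happens.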

That afternoon, FirstEnergy was struggling to maintain adequate voltage within their area. All those air conditioners use induction motors that spin a magnetic field using coils of wire inside. Inductive loads do a funny thing to the power on the grid. Some of the electricity used to create the magnetic field isn’t actually consumed, but just stored momentarily and then returned to the grid each time the current switches direction (that’s 120 times per second in the US). This causes the current to lag behind the voltage, reducing its ability to perform work. It also reduces the efficiency of all the conductors and equipment powering the grid because more electricity has to be supplied than is actually being used. This concept is kind of deep in the weeds of electrical engineering, but we normally simplify things by dividing bulk power into two parts: real power (measured in Watts) and reactive power (measured in var). On hot summer days, grid operators need more reactive power to balance the increased inductive loads on the system caused by millions of air conditioners running simultaneously.
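
As a rough sense of scale, here’s the standard power-triangle arithmetic for a lagging (inductive) load; the load size and power factors are invented.

```python
import math

# Power triangle for an inductive (lagging) load:
#   real power P = S * pf,  reactive power Q = S * sin(acos(pf))
# where S is the apparent power and pf the power factor. Numbers are illustrative.

def real_and_reactive(apparent_mva, power_factor):
    p = apparent_mva * power_factor
    q = apparent_mva * math.sin(math.acos(power_factor))
    return p, q

for pf in (0.95, 0.85):   # a mild day vs. a hot one with motors working harder
    p, q = real_and_reactive(apparent_mva=100.0, power_factor=pf)
    print(f"100 MVA of load at power factor {pf}: {p:.0f} MW real, {q:.0f} Mvar reactive")
```

Dropping the power factor from 0.95 to 0.85 pushes the reactive demand from about 31 Mvar to about 53 Mvar for the same 100 MVA of load, and that extra reactive power has to come from somewhere nearby.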

Real power can travel long distances on transmission lines, but it’s not economical to import reactive power from far away because transmission lines have their own inductance that consumes the reactive power as it travels along them. With only a few running generators within the Cleveland area, FirstEnergy was importing a lot of real power from other areas to the south, but voltages were still getting low on their part of the grid because there wasn’t enough reactive power to go around. Capacitor banks are often used to help bring current and voltage back into sync, providing reactive power. However, at least four of FirstEnergy’s capacitor banks were out of service on the 14th. Another option is to over-excite the generators at nearby power plants so that they create more reactive power, and that’s just what FirstEnergy did.

At the Eastlake coal-fired plant on Lake Erie, operators pushed the number 5 unit to its limit, trying to get as much reactive power as they could. Unfortunately, they pushed it a little too hard. At around 1:30 in the afternoon, its internal protection circuit tripped and the unit was kicked offline - the second key event preceding the blackout. Without this critical generator, the Cleveland area would have to import even more power from the rest of the grid, putting strain on transmission lines and giving operators less flexibility to keep voltage within reasonable levels. 

Finally, at around 2:15, FirstEnergy’s control room started experiencing a series of computer failures. The first thing to go was the alarm system designed to notify operators when equipment had problems. This probably doesn’t need to be said, but alarms are important in grid operations. People in the control room don’t just sit and watch the voltage and current levels as they move up and down over the course of a day. Their entire workflow is based on alarms that show up as on-screen or printed notifications so they can respond. All the data was coming in, but the system designed to get an operator’s attention was stuck in an infinite loop. The FirstEnergy operators were essentially driving on a long country highway with their fuel gauge stuck on “full,” not realizing they were nearly out of gas. With MISO’s state estimator out of service, Eastlake 5 offline, and FirstEnergy’s control room computers failing, the grid in northern Ohio was operating on the bleeding edge of the reliability standards, leaving it vulnerable to further contingencies. And the afternoon was just getting started.

Transmission lines heat up as they carry more current due to resistive losses, and that is exacerbated on still, hot days when there’s no wind to cool them off. As they heat up, they expand in length and sag lower to the ground between each tower. At around 3:00, as the temperatures rose and the power demands of Cleveland did too, the Harding-Chamberlin transmission line (a key asset for importing power to the area) sagged into a tree limb, creating a short-circuit. The relays monitoring current on the line recognized the fault immediately and tripped it offline. Operators in the FirstEnergy control room had no idea it happened. They started getting phone calls from customers and power plants saying voltages were low, but they discounted the information because it couldn’t be corroborated on their end. By this time their IT staff knew about the computer issues, but they hadn’t communicated them to the operators, who had no clue their alarm system was down.
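
To see why a little extra heating matters so much, here’s a back-of-the-envelope sketch using linear thermal expansion and the common parabolic sag approximation. It ignores tension changes and conductor creep, and the span, lengths, and temperature rise are made-up examples.

```python
import math

# Back-of-the-envelope conductor sag: heating stretches the conductor slightly,
# and the parabolic approximation (arc length ~ span + 8 * sag^2 / (3 * span))
# turns that small length change into a much bigger change in sag.
# Ignores tension changes and creep; all numbers are illustrative.

span = 300.0          # m between towers
length_cool = 300.5   # m of conductor at the cooler temperature
alpha = 19e-6         # 1/degC, typical thermal expansion for aluminum conductor
delta_T = 40.0        # degC of conductor heating from sun and heavy current

def sag(conductor_length, span):
    return math.sqrt(3 * span * (conductor_length - span) / 8)

length_hot = length_cool * (1 + alpha * delta_T)
print(f"sag when cool: {sag(length_cool, span):.1f} m")
print(f"sag when hot:  {sag(length_hot, span):.1f} m")
```

In this example the conductor only gets about 23 centimeters longer, but the low point of the line drops by roughly a meter and a half, which is more than enough to find a tree limb that should have been trimmed.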

With the loss of Harding-Chamberlin, the remaining transmission lines into the Cleveland area took up the slack. The current on one line, the Hanna-Juniper, jumped from around 70% up to 88% of its rated capacity, and it was heating up. About half an hour after the first fault, the Hanna-Juniper line sagged into a tree, short circuited, and tripped offline as well. The FirstEnergy IT staff were troubleshooting the computer issues, but still hadn’t notified the control room operators. The staff at MISO, the reliability coordinator, with their state estimator issues, were also behind on realizing the occurrence and consequences of these outages. 

FirstEnergy operators were now getting phone call after phone call, asking about the situation while being figuratively in the dark. Call transcripts from that day tell a scary story.

“[The meter on the main transformer] is bouncing around pretty good. I’ve got it relay tripped up here…so I know something ain't right,” said one operator at a nearby nuclear power plant.

A little later he called back: “I’m still getting a lot of voltage spikes and swings on the generator… I don’t know how much longer we’re going to survive.”

A minute later he calls again: “It’s not looking good… We ain’t going to be here much longer and you’re going to have a bigger problem.”

An operator in the FirstEnergy control room replied: “Nothing seems to be updating on the computers. I think we’ve got something seriously sick.”

With two key transmission lines out of service, a major portion of the electricity powering the Cleveland area had to find a new path into the city. Some of it was pushed onto the less efficient 138 kV system, but much of it was being carried by the Star-South Canton line which was now carrying more than its rated capacity. At 3:40, a short ten minutes after losing Hanna-Juniper, the Star-South Canton line tripped offline when it too sagged into a tree and short-circuited. It was actually the third time that day the line had tripped, but it was equipped with circuit breakers called reclosers that would energize the line automatically if the fault had cleared. But, the third time was the charm, and Star-South Canton tripped and locked out. Of course, FirstEnergy didn’t know about the first two trips because they didn’t see an alarm, and they didn’t know about this one either. They had started sending crews out to substations to get boots on the ground and try to get a handle on the situation, but at that point, it was too late.

With Star-South Canton offline, flows in the lower capacity 138 kV lines into Cleveland increased significantly. It didn’t take long before they too started tripping offline one after another. Over the next half hour, sixteen 138 kV transmission lines faulted, all from sagging low enough to contact something below the line. At this point, voltages had dropped low enough that some of the load in northern Ohio had been disconnected, but not all of it. The last remaining 345 kV line into Cleveland from the south came from the Sammis Power Plant. The sudden changes in current flow through the system now had this line operating at 120% of its rated capacity. Seeing such an abnormal and sudden rise in current, the relays on the Star-Sammis line assumed that a fault had occurred and tripped the last remaining major link to the Cleveland area offline at 4:05 PM, only an hour after the first incident. After that, the rest of the system unraveled.

With no remaining connections to the Cleveland area from the south, bulk power coursing through the grid tried to find a new path into this urban center. 

First, overloads progressed northward into Michigan, tripping lines and further separating areas of the grid. Then the area was cut off to the east. With no way to reach Cleveland, Toledo, or Detroit from the south, west, or north, a massive power surge flowed east into Pennsylvania, New York, and then Ontario in a counter-clockwise path around Lake Erie, creating a major reversal of power flow in the grid. All along the way, relays meant to protect equipment from damage saw these unusual changes in power flows as faults and tripped transmission lines and generators offline.

Relays are sophisticated instruments that monitor the grid for faults and trigger circuit breakers when one is detected. Most relaying systems are built with levels of redundancy so that lines will still be isolated during a fault, even if one or more relays malfunction. One type of redundancy is remote backup, where separate relays have overlapping zones of protection. If the closest relay to the fault (called Zone 1) doesn’t trip, the next closest relay will see the fault in its Zone 2 and activate the breakers. Many relays have a Zone 3 that monitors even farther along the line.
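
Here’s a simplified sketch of that stepped-zone logic. A real distance relay compares the measured complex impedance against shaped characteristics on the impedance plane; this toy version just uses the impedance magnitude, and the reach and delay settings are invented for illustration.

```python
# Simplified stepped distance (zone) protection. A relay estimates the apparent
# impedance to a fault from measured voltage and current; the smaller the
# impedance, the closer the fault looks. Settings below are illustrative.

LINE_IMPEDANCE = 10.0   # ohms, the total impedance of the protected line

ZONES = [
    # (name, reach as a fraction of the line, trip delay in seconds)
    ("Zone 1", 0.8, 0.0),   # most of this line, trips instantly
    ("Zone 2", 1.2, 0.3),   # rest of the line plus margin, short delay
    ("Zone 3", 2.5, 1.0),   # remote backup, reaches well past the line
]

def zone_response(apparent_impedance_ohms):
    """Return the first (shortest-reaching) zone that sees the apparent fault."""
    for name, reach, delay in ZONES:
        if apparent_impedance_ohms <= reach * LINE_IMPEDANCE:
            return f"{name} picks up, trips after {delay:.1f} s"
    return "no zone picks up"

for z_ohms in (5.0, 11.0, 22.0, 40.0):
    print(f"apparent impedance {z_ohms:>5.1f} ohms -> {zone_response(z_ohms)}")
```

The wider a zone reaches, the harder it is to tell a genuine remote fault from an unusual but healthy operating condition.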

When you have a limited set of information, it can be pretty hard to know whether a piece of equipment is experiencing a fault and should be disconnected from the grid to avoid further damage or just experiencing an unusual set of circumstances that protection engineers may not have anticipated. That’s especially true when the fault is far away from where you’re taking measurements. The vast majority of lines that went offline in the cascade were tripped by Zone 3 relays. That means the Zone 1 and 2 relays, for the most part, saw the changes in current and voltage on the lines and didn’t trip because they didn’t fall outside of what was considered normal. However, the Zone 3 relays - being less able to discriminate between faults and unusual but non-damaging conditions - shut them down. Once the dominos started falling in the Ohio area, it took only about 3 minutes for a massive swath of transmission lines, generators, and transformers to trip offline. Everything happened so fast that operators had no opportunity to implement interventions that could have mitigated the cascade.

Eventually enough lines tripped that the outage area became an electrical island separated from the rest of the Eastern Interconnection. But, since generation wasn’t balanced with demands, the frequency of power within the island was completely unstable, and the whole area quickly collapsed. In addition to all of the transmission lines, at least 265 power plants with more than 508 generating units shut down. When it was all over, much of the northeastern United States and the Canadian province of Ontario were completely in the dark. Since there were very few actual faults during the cascade, reenergizing happened relatively quickly in most places. Large portions of the affected area had power back on before the end of the day. Only a few places in New York and Toronto took more than a day to have power restored, but still the impacts were tremendous. More than 50 million people were affected. Water systems lost pressure forcing boil-water notices. Cell service was interrupted. All the traffic lights were down. It’s estimated that the blackout contributed to nearly 100 deaths.

Three trees and a computer bug caused a major part of North America to completely grind to a halt. If that’s not a good example of the complexity of the power grid, I don’t know what is. If you had asked anyone working in the power industry on August 13 whether the entire northeast US and Canada would suffer a catastrophic loss of service the next day, they would have said no way. People understood the fragility of the grid, and there were even experts sounding alarms about the impacts of deregulation and the vulnerability of transmission networks, but this was not some big storm. It wasn’t even a peak summer day. It was just a series of minor contingencies that all lined up just right to create a catastrophe.


Today’s power grid is quite different than it was in 2003. The bilateral report made 46 recommendations about how to improve operations and infrastructure to prevent a similar tragedy in the future, many of which have been implemented over the past nearly 20 years. But, it doesn’t mean there aren’t challenges and fragilities in our power infrastructure today. Current trends include more extreme weather, changes in the energy portfolio as we move toward more variable sources of generation like wind and solar, growing electrical demands, and increasing communications between loads, generators, and grid controllers. Just a year ago, Texas saw a major outage related to extreme weather and the strong nexus between natural gas and electricity. I have a post on that event if you want to take a look after this. I think the 2003 blackout highlights the intricacy and interconnectedness of this critical resource we depend on, and I hope it helps you appreciate the engineering behind it. Thank you for reading and let me know what you think.

February 15, 2022 /Wesley Crump

Can You Pump Sewage?

February 01, 2022 by Wesley Crump

[Note that this article is a transcript of the video embedded above.]

The Crossness Pumping Station in London has ornate architecture and elaborate ironwork which belie its original, somewhat disgusting purpose: to lift raw sewage from London’s southern outfall, the lowest point in one of London’s biggest sewers, up to the ground surface where it could be discharged directly into the Thames River. Of course, we don’t typically release raw sewage into waterways anymore, and Crossness has long been decommissioned after newer treatment works were built in the 1950s. It’s now in the process of being restored as a museum you can visit to learn more about the fascinating combined history of human waste and Victorian engineering. But even though we have more sophisticated ways to treat wastewater before discharging it into streams and rivers, there’s one thing that hasn’t changed. We still use gravity as the primary way of getting waste to flow away from homes and businesses within the sewers belowground. And eventually, we need a way to bring that sewage back up to the surface of the earth. But that’s not as easy as it sounds. I’m Grady, and this is Practical Engineering. Today, we’re talking about sewage lift stations.

I have a post all about the engineering of sewers, and today we’re following that wastewater one more step on its smelly journey through a typical city. You can go check that out after this if you want to learn more, but I’ll summarize it quickly here. Most sewers flow by gravity from each home or business toward a wastewater treatment plant. They’re installed as pipes, but sewers usually flow only partly full like smelly water slides or underground creeks and rivers. This is convenient because we don’t have to pay a monthly gravity bill, and it almost never gets knocked out during a thunderstorm. It’s a free and consistent force that compels sewage downward. But, because Earth’s gravity only pulls in one direction, sewers must always slope, meaning they often end up well below the ground surface, especially toward their downstream ends. And that can be problematic. Here’s why.

Sewers are almost always installed in open excavations also known as trenches. This might seem obvious, but the deeper a trench must be dug, the more difficult, dangerous, disruptive, and ultimately expensive construction becomes. In some cases, it just stops being feasible to chase the slope of a sewer farther and farther below the ground surface. A good alternative is to install a pumping station that can lift raw sewage from its depths back closer to the surface. Lift stations can be small installations designed to handle a few apartment complexes or massive capital projects that pump significant portions of a city's total wastewater flow. A typical lift station consists of a concrete chamber called a wet well. Sewage flows into the wet well by gravity, filling it over time. Once the sewage reaches a prescribed depth, a pump turns on, pushing the wastewater into a specialized sewer pipe called a force main. You always want to keep the liquid moving swiftly in pipes to avoid the solids settling out, so this intermittent operation makes sure that there are no slow trickles during off-peak hours. The sewage travels under pressure within the force main to an uphill manhole where it can continue its journey downward via gravity once again.
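
The wet well’s on/off control is simple enough to sketch in a few lines; the levels, areas, and flow rates below are invented for illustration.

```python
# On/off (hysteresis) control of a lift station wet well: the pump starts when
# sewage rises to a high level and runs until the level drops to a low stop
# level, moving the wastewater in strong, pipe-scouring bursts instead of a
# slow trickle. All numbers are invented for illustration.

PUMP_START_M = 2.0      # wet well level that starts the pump, m
PUMP_STOP_M = 0.5       # level that stops it, m
WELL_AREA_M2 = 6.0      # plan area of the wet well
PUMP_RATE_M3S = 0.08    # pump discharge into the force main, m^3/s

level = 1.0             # starting level, m
pump_on = False

for minute in range(180):
    inflow = 0.02 if 60 <= minute < 120 else 0.005   # morning rush vs. off-peak, m^3/s
    if level >= PUMP_START_M:
        pump_on = True
    elif level <= PUMP_STOP_M:
        pump_on = False
    outflow = PUMP_RATE_M3S if pump_on else 0.0
    level += (inflow - outflow) * 60 / WELL_AREA_M2   # one-minute time step
    if minute % 30 == 0:
        print(f"t = {minute:3d} min, level = {level:4.2f} m, pump {'ON' if pump_on else 'off'}")
```

The gap between the start and stop levels is what gives the pump those long, decisive runs; set them too close together and the pump short-cycles itself to an early death.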

Another important location for lift stations is at the end of the line. Once wastewater reaches its final destination, there are no magical underground sewage outlets. Septic systems get rid of wastewater through leach fields that infiltrate the subsurface, but they’re designed for individual buildings and aren’t feasible on a city scale. That would require enormous areas of land to get so much liquid to soak into the soil, not to mention the potential for contamination of the groundwater. Ignoring, for now, the fact that we need to clean it up first, we still need somewhere for our sewage to go. In most cases, that’s a creek, river, or the ocean, meaning we need to lift that sewage up to the surface of the earth one last time. Rather than build wastewater treatment plants in underground lairs like stinky superheroes so we only pump clean water, it’s much easier just to lift the raw sewage up to the surface to be treated and eventually discharged. That means we have to send some pretty gross stuff (sewage) through some pretty expensive and sophisticated pieces of machinery (the pumps), and that comes with some challenges.

We often think of sewage as its grossest constituents: human excrement, you know, poop. But, sewage is a slurry of liquids and solids from a wide variety of sources. Lots of stuff ends up in our wastewater stream, including soil, soap, hair, food, wipes, grease, and trash. These things may make it down the toilet or sink drain and through the plumbing in your house, but in the sewer system, they can conglomerate into large balls of grease, rags, and other debris (sometimes called “pig tails” or “fatbergs” by wastewater professionals). In addition, with many cities putting efforts into conserving water, the concentration of solids in wastewater is trending upward. Conventional pumps handle liquids just fine but adding solids in the stream increases the challenge of lifting raw sewage.

Appropriately sized centrifugal pumps can handle certain types and sizes of suspended solids just fine. Sewage pumps are designed for the extra wear and tear. The impellers have fewer vanes to avoid snags and the openings are larger so that solids can freely move through them. Different manufacturers have proprietary designs to minimize obstructions to the extent possible, but no sewage pump is clog-proof. Especially with today’s concentrated wastewater full of wipes that have been marketed as flushable, clogs in lift stations can be a daily occurrence. Removing a pump, clearing it of debris, and replacing it is a dirty and expensive job (especially if you have to do it frequently). Most lift stations have an alarm when the level gets too high, but if a clog doesn’t get cleared fast enough, raw sewage can back up into houses and businesses or overflow the wet well, potentially exposing humans and wildlife to dangerous biohazards.

A seemingly obvious solution to the problem of clogging is to use a screen in the lift station wet well to prevent trash from reaching the pumps. But, screens have a limitation: they can clog up too. By adding a screen, you’ve traded pump maintenance for another kind of maintenance: removing and hauling away debris. Smaller lift stations with bar or basket screens can get away with maybe a once-a-week visit from a crew to clean them. Larger pump stations often feature automatic systems that can remove solids from the screen into a dumpster that can be hauled to a landfill every so often.

Sometimes using a screen is an effective way to protect against clogging, but it’s not always convenient, especially because it creates a separate waste stream to manage. For example, if a lift station is remote where it’s inconvenient to send crews for service and maintenance, you might prefer that all the solids remain in the wastewater stream. After all, treatment plants are specifically designed to clean wastewater. They have better equipment and consistent staffing, so it often just makes sense to focus the investments of time and effort at the plant rather than individual lift stations along the way. In these cases, there’s another option for minimizing pump clogs: grinding the solids into smaller pieces.

There’s a nice equivalent to a lift station grinder that can be found under the sinks of many North American homes: the garbage disposal. This common household appliance saves you the trouble and smell of putting food scraps into the wastebasket. It works like a centrifugal pump with a spinning impeller, but it also features a grinding ring consisting of sharp blades and small openings. As the impeller spins the solids, they scrape against the grinding ring, shearing into smaller pieces that can travel through the waste plumbing.

Some lift stations feature grinding pumps that are remarkably similar to under-sink garbage disposals. Others use standalone grinders that simply chew up the solids before they reach the pumps. Grinders are often required at medical facilities and prisons where fibrous solids are more likely to find their way into the wastewater stream. Large grinders are also used where storm drains and sewers are combined because those systems see heavier debris loads from rainwater runoff. A grinder is another expensive piece of equipment to purchase and maintain at a lift station, but it can offer better reliability, fewer clogs, and thus decreased maintenance costs.


Of course, clogging is not the only practical challenge of operating a sewage lift station. When you depend on electromechanical equipment to provide an essential service, you always have to plan for things to go wrong. Lift stations usually feature multiple pumps so that they can continue operating if one fails. They often have backup generators so that sewage can continue to flow even if grid power is lost. Another issue with lift stations is air bubbles getting into force mains and constricting the flow. Automatic air release valves can purge force mains of these bubbles, but venting sewer gas into populated areas isn’t usually a popular prospect. Although our urban lives depend on sewers to carry waste away before it can endanger public health, reminders that they exist are usually unwelcome. Hopefully this breaks that convention to help you understand a little about the challenges and solutions of managing wastewater to keep your city clean and safe.

February 01, 2022 /Wesley Crump

Why Buildings Need Foundations

January 04, 2022 by Wesley Crump

[Note that this article is a transcript of the video embedded above.]

When we bought our house several years ago, we fell in love with every part of it except one: the foundation. We knew that, at 75 years old, these piers were just about finished holding this old house up. This year we finally bit the bullet to have them replaced. Any homeowner who’s had foundation work done can commiserate with us on the cost and disruption of a project like this. But homes aren’t the only structures with foundations. It is both a gravitational necessity and a source of job stability for structural and geotechnical engineers that all construction - great and small - sits upon the ground. And the ways in which we accomplish such a seemingly unexceptional feat are full of fascinating and unexpected details. I’m Grady and this is Practical Engineering. Today, we’re talking about foundations.

There’s really just one rule for structural and geotechnical engineers designing foundations: when you put something on the ground, it should not move. That seems like a pretty straightforward directive. You can put a lot of stuff on the ground and have it stay there. For example, several years ago I optimistically stacked these pavers behind my shed with the false hope that I would use them in a landscaping project someday, but their most likely future is to sit here in this shady purgatory for all of eternity. Unfortunately, buildings and other structures are a little different. Mainly, they are large enough that one part could move relative to the other parts, a phenomenon we call differential movement. When you move one piece of anything relative to the rest of it, you introduce stress. And if that stress is greater than the inherent strength of the thing, that thing will pull itself apart. It happens all the time, all around the world, including right here in my own house. When one of these piers settles or heaves more than the others, all the stuff it supports tries to move too. But doorframes, drywall, and ceramic tile work much better and last much longer when the surrounding structure stays put.

There are many kinds of foundations used for the various structures in our built environment, but before we dive into how they work, I think it will be helpful to first talk about what they’re up against, or actually down against. Of course, buildings are heavy, and one of the most important jobs of a foundation is to evenly distribute that weight into the subsurface as downward pressure. Soil isn’t infinitely strong against vertical loads. It can fail just like any other component of a structural system. When the forces are high enough to shear through soil particles, we call it a bearing failure. The soil directly below the load is forced downward, pushing the rest of the soil to either side, eventually bulging up around the edges.

Even if the subsurface doesn’t full-on shear, it can still settle. This happens when the particles are compressed more closely together, and it usually takes place over a longer period of time. (I have a post all about settlement that you can check out after this.) So, job number 1 of a foundation is to distribute the downward force of a structure over a large enough area to reduce the bearing pressure and avoid shear failures or excessive settlement.
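
Job number 1 is mostly pressure arithmetic: spread the load over enough area that the soil stays comfortably below its allowable bearing pressure. Here’s a quick sketch with invented numbers.

```python
import math

# Sizing a square spread footing so the bearing pressure on the soil stays
# below an allowable value (the ultimate bearing capacity divided by a factor
# of safety). All numbers are invented for illustration.

column_load_kn = 400.0          # load delivered by one column, kN
ultimate_bearing_kpa = 300.0    # soil's ultimate bearing capacity, kPa
factor_of_safety = 3.0

allowable_kpa = ultimate_bearing_kpa / factor_of_safety   # 100 kPa
required_area_m2 = column_load_kn / allowable_kpa         # 4.0 m^2
side_m = math.sqrt(required_area_m2)

print(f"allowable bearing pressure: {allowable_kpa:.0f} kPa")
print(f"required footing area: {required_area_m2:.1f} m^2, "
      f"about a {side_m:.1f} m square footing")
```

Real designs also check settlement, eccentric loads, and groundwater, but the core idea is just that: force divided by area, kept below what the soil can take.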

Structural loads don’t just come from gravity. Wind can exert tremendous and rapidly-fluctuating pressure on a large structure pushing it horizontally and even creating uplift like the wing of an airplane. Earthquakes also create loads on structures, shifting and shaking them with very little warning. Just like the normal weight of a structure, these loads must also be resisted by a foundation to prevent it from lifting or sliding along the ground. That’s job number 2.

Speaking of the ground, it’s not the most hospitable place for many building materials. It has bugs, like termites, that can eat away at wooden members over time, reducing their strength. It also has moisture that can lead to mold and rot. My house was built in the 1940s on top of cedar piers. This is a wood species that is naturally resistant to bugs and fungi, but not completely immune to them. So, job number 3 of a foundation is to resist the effects of long-term degradation and decay that come from our tiny biological neighbors.

Another problem with the ground is that soil isn’t really as static as we think. Freezing isn’t usually a problem for me in central Texas, but many places in the world see temperatures that rise and fall below the freezing point of water tens or hundreds of times per year. We all know water expands when it freezes, and it can do so with prodigious force. When this happens to subsurface water below a structure, it can behave like a jack to lift it up. Over time, these cycles of freeze and thaw can slowly shift or raise parts of a structure more than others, creating issues. Similarly, some kinds of soil expand when exposed to moisture. I also have a post on this phenomenon, so you have two to read after this one. Expansive clay soil can create the same type of damage as cycles of freeze and thaw by subtly moving a structure in small amounts with each cycle of wet and dry. So job number 4 of a foundation is to reach a deep enough layer that can’t freeze or that doesn’t experience major fluctuations in moisture content to avoid these problems that come with water in the subgrade below a structure.

Job number 5 isn’t necessarily applicable to most buildings, but there are many types of structures (like bridges and retaining walls) that are regularly subject to flowing water. Over time (or sometimes over the course of a single flood), that water can create erosion, undermining the structure. Many foundations are specifically designed to combat erosion, either with hard armoring or by simply being installed so deep into the earth that they can’t be undermined by quickly flowing water.

Job number 6 really applies to all of engineering: foundations have to be cost effective. Could the contractor who built my house in the 1940s have driven twice as many piers, each one to three times the depth? Of course it could have been done, but (with some minor maintenance and repairs), this one lasted 75 years before needing to be replaced. With the median length of homeownership somewhere between 5 and 15 years, few people would be willing to pay more for a house with 500 years of remaining life in the foundation than they would for one with 30. I could have paid this contractor to build me a foundation that will last hundreds of years... but I didn’t. Engineering is a job of balancing constraints, and many of the decisions in foundation engineering come down to the question of “How can we achieve all of the first 5 jobs I mentioned without overdoing it and wasting a bunch of money in the process?” Let’s look at a few ways.

Foundations are generally divided into two classes: deep and shallow. Most buildings with only a few stories, including nearly all homes, are built on shallow foundations. That means they transfer the structure’s weight to the surface of the earth (or just below it). Maybe the most basic of these is how my house was originally built. They cut down cedar trees, hammered those logs into the ground as piles, laid wooden beams across the top of those piers, and then built the rest of the house atop the beams. Pier and beam foundations are pretty common, at least in my neck of the woods, and they have an added benefit of creating a crawlspace below the structure in which utilities like plumbing, drains, and electric lines can be installed and maintained. However, all these individual, unconnected points of contact with the earth leave quite a bit of room for differential movement.

Another basic type of shallow foundation is the strip footing, which generally consists of a ribbon or strip of concrete upon which walls can sit. In some cases the floor is isolated from the walls and sits directly on a concrete slab atop the subgrade, but strip footings can also support floor joists, making room for a crawlspace below. For sites with strong soils, this is a great option because it’s simple and cheap, but if the subgrade soils are poor, strip footings can still allow differential movement because all the walls aren’t rigidly connected together. In that case, it makes sense to use a raft foundation - a completely solid concrete slab that extends across the entire structure. Raft foundations are typically concrete slabs placed directly on the ground (usually with some thickened areas to provide extra rigidity). They distribute the loads across a larger area, reducing the pressure on the subgrade, and they can accommodate some movement of the ground without transferring the movement into a structure, essentially riding the waves of the earth like a raft on the ocean (hence the name). However, they don’t have a crawlspace, which makes plumbing repairs much more challenging.

One issue with all shallow foundations is that you still need to install them below the frost line - that is the maximum depth to which water in the soil might freeze during the harshest part of the winter - in order to avoid frost heaving. In some parts of the contiguous United States, the frost line can be upwards of 8 feet or nearly two-and-a-half meters. If you’re going to dig that deep to install a foundation anyway, you might as well just add an extra floor to your structure below the ground. That’s usually called a basement, and it can be considered a building’s foundation (although the walls are usually constructed on a raft or strip footings as described above).

As a structure’s size increases, so do the loads it imposes on the ground, and eventually it becomes infeasible to rely only on soils near the surface of the earth. Tall buildings, elevated roadways, bridges, and coastal structures often rely on deep foundations for support. This is especially true when the soils at the surface are not as firm as the layers farther below the ground. Deep foundations almost always rely on piles, which are vertical structural elements that are driven or drilled into the earth, often down to a stronger layer of soil or bedrock, and there are way more types than I could ever cover in a single video. Piles not only transfer loads at the bottom (called end bearing), but they can also be supported along their length through a phenomenon called skin friction. This makes it possible for a foundation to resist much more significant loads - whether downward, upward or horizontal - within a given footprint of a structure.
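Here’s a simplified sketch of how those two contributions add up, with invented soil values standing in for the numbers a geotechnical report would actually provide.

```python
import math

# Simplified pile capacity sketch: end bearing plus skin friction.
# All soil parameters below are illustrative assumptions, not design values.

def pile_capacity(diameter_m, length_m, unit_end_bearing_kpa, unit_skin_friction_kpa):
    tip_area = math.pi * diameter_m ** 2 / 4              # m^2, pile tip
    shaft_area = math.pi * diameter_m * length_m          # m^2, pile shaft surface
    end_bearing = unit_end_bearing_kpa * tip_area         # kN at the tip
    skin_friction = unit_skin_friction_kpa * shaft_area   # kN along the shaft
    return end_bearing, skin_friction

end, skin = pile_capacity(diameter_m=0.6, length_m=20.0,
                          unit_end_bearing_kpa=3000.0,    # assumed tip resistance
                          unit_skin_friction_kpa=50.0)    # assumed average shaft friction
print(f"End bearing:   {end:7.0f} kN")
print(f"Skin friction: {skin:7.0f} kN")
print(f"Total:         {end + skin:7.0f} kN")
```

Notice that for a long pile like this one, the friction along the shaft can contribute even more than the tip does.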

One of the benefits of driven piles is that you install them in somewhat the same way that they’ll be loaded in their final configuration. There’s some efficiency there because you can just stop pushing the pile into the ground once it’s able to resist the design loads. There’s a problem with this though. Let me show you what I mean. This hydraulic press has more than enough power to push this steel rod into the ground. And at first, it does just that. But eventually, it reaches a point where the weight of the press is less than the bearing capacity of the pile, and it just lifts itself up. Easy… (you might think). Just add more weight. But consider that these piles might be designed to support the weight of an entire structure. It’s not feasible to bring in or build some massive weight just to react against while driving a pile into the ground. Instead, we usually use hammers, which can deliver significantly more force to drive a pile with only a relatively small weight.

The problem with hammered piles is that the dynamic loading they undergo during installation is different from the static loading they see once in service. In other words, buildings don’t usually hammer on their foundations. For example, if a pile can withstand the force of a 5-ton weight dropped from 16 feet or 5 meters without moving, what’s the equivalent static load it can withstand? That turns out to be a pretty complicated question, and even though there are published equivalencies between static and dynamic loads, their accuracy can vary widely depending on soil conditions. That’s especially true for long piles where the pressure wave generated by a hammer might not even travel fast enough to load the entire member at the same moment in time. Static tests are more reliable, but also much more expensive because you either have to bring in a ton (or thousands of tons) of weight to put on top, or you have to build additional piles with a beam across them to give the test rig something to react against.
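To see why that equivalence is so slippery, here’s a crude energy-balance sketch of the reasoning behind classic pile driving formulas. Real formulas add terms for energy losses and come with big factors of safety; the hammer weight, drop height, set, and efficiencies below are just assumed round numbers.

```python
# Crude energy-balance sketch behind dynamic pile driving formulas.
# Idea: the hammer's energy goes into pushing the pile a small "set" s per blow,
# so the implied resistance is roughly R = efficiency * W * h / s.
# All values are illustrative assumptions.

weight_kn = 50.0        # hammer weight, kN (roughly a 5-metric-ton ram)
drop_m = 5.0            # drop height, m
set_per_blow_m = 0.005  # assumed permanent movement per blow, 5 mm

for efficiency in (0.4, 0.6, 0.8):  # assumed fraction of energy doing useful work
    resistance_kn = efficiency * weight_kn * drop_m / set_per_blow_m
    print(f"efficiency {efficiency:.0%}: implied resistance ~ {resistance_kn:,.0f} kN")
```

Notice how sensitive the answer is to the assumed efficiency, and that if the pile barely moves at all, the implied resistance shoots toward infinity. That’s a big part of why dynamic formulas get treated with so much skepticism.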

One interesting solution to this problem is called statnamic testing of piles. In this method, a mass is accelerated upward using explosives, creating an equal and opposite force on the pile to be tested. It’s kind of like a reverse hammer, except that where a hammer blow loads the pile for only a few milliseconds, the loading in a statnamic test often lasts upwards of 100 or 200 milliseconds. That makes it much more similar to a static force on the pile without having to bring in tons and tons of weight or build expensive reaction piers just to conduct a test.

I’m only scratching the surface (or subsurface) of a topic that fills hundreds of engineering textbooks and the careers of thousands of contractors and engineers. If all the earth was solid rock, life would be a lot simpler, but maybe a lot less interesting too. If there are topics in foundations that you’d like to learn more about, add a comment or send me an email, and I’ll try to address them in a future post, but I hope this one gives you some appreciation of those innocuous bits of structural and geotechnical engineering below our feet.


January 04, 2022 /Wesley Crump

Rebuilding the Oroville Dam Spillways

December 21, 2021 by Wesley Crump

[Note that this article is a transcript of the video embedded above.]

In February 2017, the world watched as the main spillway on one of the largest dams in the world suffered a catastrophic failure, prompting a series of events that led to the evacuation of nearly 200,000 people downstream and hundreds of millions of dollars of damage to critical water infrastructure. I talked about the failure of the Oroville Dam spillway in California after the independent forensic team released their conclusions about why the structure failed, summarizing their 600-page report. Then, I got flooded with requests to cover the repairs, and I love a good construction project as much as anyone else. So how do you rebuild one of the biggest spillways in the world after a catastrophic failure knowing that the next winter flood season is right around the corner? The answer might surprise you. I’m Grady, and this is Practical Engineering. Today, we’re talking about rebuilding the Oroville Dam spillways.

Oroville Dam in northern California is the tallest dam in the United States. It was built in the 1960s, creating one of California’s keystone reservoirs to smooth out the tremendous variability in rain and snowfall from their climate of hot, dry summers and flood-prone winters. The dam itself is a massive earthen embankment. To the northwest is the main spillway, also known as the Flood Control Outlet or FCO spillway. At the top are radial gates to control the flow. They release water into the enormous concrete chute before it passes through gigantic dentates that disperse the flow as it crashes into the Feather River below. It’s nearly impossible to convey the scale of this structure, which could fit eight American football fields with room to spare or more than 150 tennis courts. Beyond is the emergency spillway, a concrete weir set a foot above the maximum operating level to provide a backup path for water to leave the reservoir during extreme flood events.

If you want more detail about the failure, I encourage you to go back and read my previous post after this. I do want to summarize the damage here because you can’t really grasp the magnitude of the reconstruction project without an appreciation for how profoundly this event ruined the spillways of Oroville Dam. All but the upper section of the main spillway chute was wholly destroyed. The flows that broke free from the chute scoured the hillside around and below the structure, washing away concrete and eroding an enormous chasm as deep as 100 feet or 30 meters in some places. At the emergency spillway, overflows had similarly scoured the hillside, creating erosional head cuts that traveled upstream, threatening the safety and stability of the structure and ultimately leading to the downstream evacuation. In total, more than a million cubic meters of soil and rock were stripped away, much of which was deposited into the Feather River below the dam. Both spillways were rendered totally incapable of safely discharging future flood flows from Lake Oroville.

Even before the event was over, the California Department of Water Resources, or DWR, was planning for the next flood season, which was right around the corner. Having the tallest dam in the United States sitting crippled and unable to pass flood flows safely with the rainy season only six months away just wasn’t an option. As soon as the extent of the situation was revealed, DWR began assembling a team and plotting the course for recovery. Rather than try to handle all the work internally, DWR contracted with a wide range of consultants from engineering firms across the country and partnered with federal agencies, namely the Corps of Engineers and Bureau of Reclamation, who both have significant knowledge and experience with major water resources projects. 

In March (less than a month after the incident started and well before it was close to over), DWR held an all-day workshop with the design and management teams to collaborate on alternatives for restoring the dam’s spillways, focusing on the main spillway. They were facing some significant challenges. With the next flood season quickly approaching, they had limited time for design, regulatory reviews, and construction. Steps that would typically take months or years needed to be compressed into weeks. On top of that, they were still in the midst of the spillway failure without a complete understanding of what had gone wrong, making it difficult to propose solutions that would avoid a similar catastrophe in the future. Although they had a laundry list of ideas, most fell into three categories nicknamed by the design team as “Use the Hole,” “Bridge the Hole,” or “Fill the Hole.”

“Use the hole” alternatives involved taking advantage of the scour hole and channels carved by the uncontrolled flows from the spillway. If they could protect the soil and rock from further erosion, these new landscape features could serve as the new path for water exiting the reservoir, eliminating the need for a replacement to the massive and expensive concrete chute. The engineering team built a scale model of the spillway at Utah State University as a design tool for providing hydraulic information. They constructed an alternative with a modified scour hole to see how it would perform when subjected to significant releases from the spillway. Sadly the model showed enormous standing waves under peak flows, so this alternative was discarded as infeasible.

“Bridge the hole” alternatives involved constructing the spillway chute above grade. In other words, instead of placing the structure on the damaged soil and rock foundation, they could span the eroded valleys using aqueduct-style bridges. However, given the complexity of engineering such a unique spillway, the design team also ruled this option out. The time it would take for structural design just wouldn’t leave enough time for construction.

“Fill the hole” alternatives centered around replacing the eroded foundation material and returning the main spillway to its original configuration. There were a lot of advantages to this approach. It had the least amount of risk and the fewest unknowns about hydraulic performance, which had been proven through more than 50 years of service. This option also provided a place to reuse the scoured rock that had washed into the Feather River. Next, it had the lowest environmental impacts because no new areas of the site would be permanently disturbed. And finally, it was straightforward construction - not anything too complicated - giving the design team confidence that contractors could accomplish the work within the available time frame.

Once a solution had been selected, the design team started developing the plans and specifications for construction. Over a hundred engineers, geologists, and other professionals were involved in designing repairs to the two spillways, many working 12-plus hour days, 6 to 7 days a week, on-site in portable trailers near the emergency spillway. Because many of the problems with the original spillways resulted from the poor conditions of underlying soil and rock, the design phase included an extensive geotechnical investigation of the site. At its peak, there were ten drill rigs taking borings of the foundation materials. The samples were tested in laboratories to support the engineering of the spillway replacements.

The design team elected to fill the scoured holes with roller-compacted concrete, a unique blend of the same essential ingredients of conventional concrete but with a lot less water. Instead of flowing into forms, roller compacted concrete, or RCC, is placed using paving equipment and compacted into place with vibratory rollers. The benefit of RCC was that it could be made on-site using materials mined near the dam and those recovered from the Feather River. It also cures quickly, reaching its full strength faster and with less heat buildup, allowing crews to place massive amounts of it on an aggressive schedule without worrying about it cracking apart from thermal effects. RCC is really the hero of this entire project. The design engineers worked hard to develop a mix that was as inexpensive as possible, using the rock and aggregates available on the site, while still being strong enough to carry the weight of the new spillway.

In the interest of time, California DWR brought on a contractor early to start building access roads and staging areas for the main construction project. They also began stabilizing the steep slopes created by the erosion to make the site safer for the construction crews that would follow. The main construction project was bid at the end of March with plans only 30% complete. This allowed the contractors to get started early to mobilize the enormous quantity of equipment, materials, and workers required for this massive undertaking. Having a contractor on the project early also allowed the design team to collaborate with the construction team, making it easier to assess the impact of design changes on the project’s costs and schedule.

Because the original spillway failed catastrophically, DWR knew that the entire main spillway would need to be rebuilt to modern standards. However, they didn’t have the time to do the whole thing before the upcoming flood season. DWR had developed an operations plan for Lake Oroville to keep the reservoir low and minimize the chance of spillway flows while the facilities were out-of-service for construction, but they couldn’t just empty the lake entirely. They still had to balance the purposes of the reservoir, including flood protection, hydropower generation, environmental flows, and the rights of water users downstream. The winter flood season was approaching rapidly, and there was still a possibility of a flood filling the reservoir and requiring releases. DWR needed a spillway that could function before November 2017 (a little more than six months from when the contractor was hired), even if it couldn’t function at its total original capacity.

In collaboration with the contractor, the design team decided to break up the repair project into two phases. Phase 1 would rush to get an operational spillway in place before the 2017-2018 winter flood season. The remaining work to complete the spillway would be finished ahead of the following flood season at the end of 2018. In addition to the repairs at the main spillway, engineers also designed remediations to the emergency spillway, including a buttress to the existing concrete weir, an RCC apron to protect the vulnerable hillside soils, and a cutoff wall to keep erosion from progressing upstream. To speed up regulatory approval, which can often take months under normal conditions, the California Division of Safety of Dams and the Federal Energy Regulatory Commission both dedicated full-time staff to review designs as they were produced, working in the same trailers as the engineers. The project also required an independent board of consultants to review designs and provide feedback to the teams. This group of experts met regularly throughout design and construction, and their memos are available online for anyone to peruse.

Phase 1 of construction began as the damaged spillway continued to pass water to lower the reservoir throughout the month of May. The contractor started blasting and excavating the slopes around the site to stabilize them and provide access to more crews and equipment. At the same time, an army of excavators began to remove the soil and rock that was scoured from the hillside and deposited into the Feather River. The spillway gates were finally closed for the season at the end of May, allowing equipment to mobilize to all areas of the site. They quickly began demolition of the remaining concrete spillway. Blasting also continued to stabilize the slopes by reducing their steepness in preparation for RCC placement and to break up the existing concrete to be hauled away or reused as aggregate.

By June, all the old concrete had been removed, and crews were cleaning the foundation of loose rock and soil. The contractor worked to ensure that the foundation was perfectly clean, because any leftover loose soil and dust could reduce the strength of the bond between the new concrete and the underlying rock.

In July and August, crews made progress on the upper and lower sections of the spillway that hadn’t been significantly undermined. Because they didn’t have to fill in a gigantic scour hole in this area, crews could use conventional concrete to level and smooth the foundation, ensuring that the new structural spillway slab would be a consistent thickness across its entire width and length. Of course, I have to point out that the chute was not simply being replaced in kind. Deficiencies in the original design were a significant part of why the spillway failed in the first place. The new design of the structural concrete included an increase in the thickness of the slab, more steel reinforcement with an epoxy coating to protect against corrosion, flexible waterstops at the joints in the concrete to prevent water from flowing through the gaps, steel anchors drilled deep into the bedrock to hold the slabs tightly against their foundation, and an extensive drainage system. These drains are intended to relieve water pressure from underneath the structure and filter any water seeping below the slab so it can’t wash away soil and undermine the structure.

As the new reinforced concrete slabs and training walls were going up on the lower section of the chute, RCC was being placed in lifts into the scour hole at the center of the chute. This central scour hole was the most time-sensitive part of the project because there was just so much volume to replace. Instead of filling the scour hole AND building the new spillway slabs and walls on top during Phase 1, the designers elected to use the RCC as a temporary stand-in for the central portion of the chute during the upcoming flood season. The designs called for RCC to be placed up to the level of the spillway chute with formed walls, not quite tall enough for the total original capacity, but enough to manage a major flood if one were to occur.

By September, crews had truly hit their stride, producing and placing colossal amounts of concrete each day, slowly reconnecting the upper and lower sections of the chute across the chasm of eroded rock. Reinforced concrete slabs and walls continued to go up on both the upper and lower sections of the chute. With only a month before the critical deadline of November 1, the contractor worked around the clock to produce and place both conventional and roller-compacted concrete across the project site. By the end of the day on November 1st, Phase 1 of the massive reconstruction was completed on schedule and without a single injury. The spillway was ready to handle releases for the winter flood season if needed. Luckily, it wasn’t, and the work didn’t stop at Oroville dam.

Phase 2 began immediately, with the contractor starting to work on the parts of the project that wouldn’t compromise the dam’s ability to release flows during the flood season. That mainly involved a focus on the emergency spillway. Crews first rebuilt a part of the original concrete weir, making it stronger and more capable of withstanding hydraulic forces. They also installed a secant pile cutoff wall in the hillside well below the spillway. A secant pile wall involves drilling overlapping concrete piers deep into the bedrock. The purpose of the cutoff wall was to prevent erosion from traveling upstream and threatening the spillway structure. A concrete cap was added to the secant piles to tie them all together at the surface. Finally, roller compacted concrete was placed between the secant wall and the spillway to serve as a splash pad, protecting the vulnerable hillside from erosion if the emergency spillway were ever to be used in the future.

Once the flood season was over in May, DWR gave the contractor the go-ahead to start work back on the main spillway. There were two main parts of the project remaining. First, they needed to completely remove and replace the uppermost section of the chute and training walls. Except for the dentates at the downstream end, this was the only section of the original chute remaining after Phase 1. 

At the RCC section of the spillway, crews first removed the temporary training walls that were installed to allow the spillway to function at a reduced capacity during the prior flood season. They never even got to see a single drop of water, but at least the material was reused in batches of concrete for the final structure. Next, the contractor milled the top layer of RCC to make room for the structural concrete slab. They trenched drains across the RCC to match the rest of the spillway, and finally, they built the structural concrete slabs and walls to complete the structure. All this work continued through the summer and fall of 2018. On November 1st, construction hit a key milestone of having all the main spillway concrete placed ahead of the winter flood season. Although cleanup and backfill work would continue for the next several months, the spillway was substantially complete and ready to handle releases if it was needed. It’s a good thing too because a few months later, it was.

Crews continued cleaning up the site, working on the emergency spillway, and demobilizing equipment throughout the 2018-2019 flood season. In April 2019, heavy rain and snowfall filled Lake Oroville into the flood control zone, necessitating the opening of the spillway gates. For the first time since reconstruction, barely two years after this whole mess got started, the new spillway was tested. And it performed beautifully. I’m sure it was a tremendous relief and true joy for all of the engineers, project managers, construction workers, and the public to see that one of the most important reservoirs in the state was back in service. As of this writing, Oroville is just coming up from historically low levels resulting from a multi-year drought in California. It just goes to show the importance of engineering major water reservoirs like Oroville to smooth out the tremendous variability in rain and snowfall.

It’s easy to celebrate the incredible engineering achievement of designing and constructing one of the largest spillway repair projects in the world without remembering what necessitated the project in the first place. The systemic failure of the dam owner and its regulators to recognize and address the structure’s inherent flaws came at a tremendous cost, both to those whose lives were put at risk and who were evacuated from their homes and to the taxpayers and ratepayers who will ultimately foot the more-than-a-billion dollars spent on these repairs. Dam owners and regulators across the world have hopefully learned a hard lesson from Oroville, thanks in large part to those who shared their knowledge and experience of the event. I’d like to give them a shout out here, because this wouldn’t have been possible without them.

California DWR’s commitment to transparency means we have tons of footage from the event and reconstruction. Engineers and project managers involved in the emergency and reconstruction shared their experiences in professional journals. Finally, my fellow YouTuber Juan Brown provided detailed and award-winning coverage of the project as a citizen journalist on his channel, Blancolirio, including regular overflights of Oroville Dam in his Mighty Luscombe. Go check out his playlist if you want to learn more. As I always say, this is only a summary, and it doesn’t include nearly the level of detail that Juan put into his reporting.

December 21, 2021 /Wesley Crump

Why Retaining Walls Collapse

December 07, 2021 by Wesley Crump

[Note that this article is a transcript of the video embedded above.]

In March of 2021, a long-running construction project on a New Jersey highway interchange ground to a halt when one of the retaining walls along the roadway collapsed. This project in Camden County, called the Direct Connection, was already 4 years behind schedule, and this failure set it back even further. As of this writing, the cause of the collapse is still under investigation, but the event brought into the spotlight a seemingly innocuous part of the constructed environment. I love innocuous parts of the constructed environment, and I promise by the end of this you’ll pay attention to infrastructure that you’ve never even noticed before. Why do we build walls to hold back soil, what are the different ways to do it, and why do they sometimes fall down? I’m Grady and this is Practical Engineering. Today, we’re talking about retaining walls.

The natural landscape is never ideally suited to construction as it stands. The earth is just too uneven. Before things get built, we almost always have to raise or lower areas of the ground first. We flatten building sites, we smooth paths for roads and railways, and we build ramps up to bridges and grade-separated interchanges. You might notice that these cuts and fills usually connect to the existing ground on a slope. Loose soil won’t stand on its own vertically. That’s just the nature of granular materials. The stability of a slope can vary significantly depending on the type of soil and the loading it needs to withstand. You can get many types of earth to hold a vertical slope temporarily, and it’s done all the time during construction, but over time the internal stresses will cause them to slump and settle into a more stable configuration. For long-term stability, engineers rarely trust anything steeper than 25 degrees. That means any time you want to raise or lower the earth, you need a slope that is twice as wide as it is tall, which can be a problem.
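To put a number on that, here’s a quick back-of-the-envelope in Python using that rough 25-degree rule of thumb (the height is just an example, and real slope limits depend entirely on the soil):

```python
import math

# How much horizontal room a slope needs for a given change in grade.
# Uses the rough 25-degree rule of thumb from above; real limits depend on the soil.

height_m = 6.0   # assumed change in grade
slope_deg = 25.0
run_m = height_m / math.tan(math.radians(slope_deg))
print(f"A {height_m:.0f} m change in grade at {slope_deg:.0f} degrees "
      f"needs about {run_m:.1f} m of horizontal space.")
```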

Don’t tell them I said this, but slopes are kind of a waste of space. Depending on the steepness, it’s either inconvenient, or entirely impossible to use sloped areas for building things, walking, driving, or even as open spaces like parks. In dense urban areas, real estate comes at a premium, so it doesn’t make sense to waste valuable land on slopes. Where space is limited, it often makes sense to avoid this disadvantage by using a retaining wall to support soil vertically.

When you see a retaining wall in the wild, the job of holding back soil looks effortless. But that’s usually only true because much of the wall’s structure is hidden from view. A retaining wall is essentially a dam, except instead of water, it holds back earth. Soil doesn’t flow as easily as water, but it is roughly twice as heavy. The force exerted on a retaining wall from that soil, called the lateral earth pressure, can be enormous. But that’s just from the weight of the soil itself. Add to that the extra forces we often apply from buildings, vehicles, or other structures sitting on top of the backfill behind the wall. We call these surcharge loads, and they can increase the forces on a retaining wall even further. Finally, water can flow through or even freeze in the soil behind a retaining wall, applying even more pressure to its face.
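To give you a sense of scale, here’s a small Python sketch using the classic Rankine estimate for a simple vertical wall with level, drained, sandy backfill. The soil properties are assumed values for a generic backfill, not from any particular project.

```python
import math

# Rankine active earth pressure on a vertical wall with level, drained,
# cohesionless backfill. Soil properties below are assumed for illustration.

def active_thrust(height_m, unit_weight_kn_m3, friction_angle_deg):
    ka = math.tan(math.radians(45 - friction_angle_deg / 2)) ** 2  # active pressure coefficient
    pressure_at_base = ka * unit_weight_kn_m3 * height_m           # kPa at the bottom of the wall
    thrust = 0.5 * ka * unit_weight_kn_m3 * height_m ** 2          # kN per meter of wall length
    return ka, pressure_at_base, thrust

ka, p_base, thrust = active_thrust(height_m=4.0,
                                   unit_weight_kn_m3=19.0,   # assumed moist sandy backfill
                                   friction_angle_deg=32.0)  # assumed friction angle
print(f"Ka = {ka:.2f}")
print(f"Pressure at the base of the wall: {p_base:.0f} kPa")
print(f"Total horizontal thrust: {thrust:.0f} kN per meter of wall")
```

That works out to nearly five metric tons of sideways push on every meter of a modest 4-meter wall, before any surcharge or water shows up.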

Estimating all these loads and designing a wall to withstand them can be a real challenge for a civil engineer. Unlike most structures where loads are vertical from gravity, most of the forces on a retaining wall are horizontal. There are a lot of different types of walls that have been developed to withstand these staggering sideways forces. Let’s walk through a few different designs.

The most basic retaining walls rely on gravity for their stability, often employing a footing along the base. The footing is a horizontal member that serves as a base to distribute the forces of the wall into the ground. Your first inclination might be to extend the footing on the outside of the wall to lengthen the lever arm like an outrigger on a crane. However, it’s actually more beneficial for the footing to extend inward into the retained soil. That’s because the earth behind the wall sits atop the footing, which acts as a lever to keep the wall upright against lateral forces. Retaining walls that rely only on their own weight and the weight of the soil above them to remain stable are called gravity walls (for obvious reasons), and the ones that use a footing like this are called cantilever walls.
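Here’s a bare-bones sketch of the kind of overturning check an engineer might run on a cantilever wall, taking moments about the toe. The geometry and soil values are invented, and a real design would also check sliding, bearing pressure, and global stability.

```python
import math

# Bare-bones overturning check for a cantilever wall, moments taken about the toe.
# All dimensions and soil properties are illustrative assumptions.

height = 4.0          # wall stem height above the footing, m
stem_thick = 0.4      # m
toe_len = 0.6         # footing sticking out in front of the wall, m
footing_len = 3.0     # total footing length, m (heel extends under the backfill)
footing_thick = 0.5   # m
gamma_conc = 24.0     # unit weight of concrete, kN/m^3
gamma_soil = 19.0     # unit weight of backfill, kN/m^3
phi = 32.0            # backfill friction angle, degrees

# Active thrust on the back of the wall (Rankine), per meter of wall length.
ka = math.tan(math.radians(45 - phi / 2)) ** 2
total_height = height + footing_thick
thrust = 0.5 * ka * gamma_soil * total_height ** 2
overturning = thrust * total_height / 3   # thrust acts a third of the way up

# Resisting moments: weight of the stem, the footing, and the soil sitting on the heel.
heel_len = footing_len - stem_thick - toe_len
w_stem = gamma_conc * stem_thick * height
w_footing = gamma_conc * footing_len * footing_thick
w_heel_soil = gamma_soil * heel_len * height
resisting = (w_stem * (toe_len + stem_thick / 2)
             + w_footing * footing_len / 2
             + w_heel_soil * (footing_len - heel_len / 2))

print(f"Overturning moment: {overturning:6.1f} kN*m per m of wall")
print(f"Resisting moment:   {resisting:6.1f} kN*m per m of wall")
print(f"Factor of safety:   {resisting / overturning:.2f}")
```

Notice how much of the resisting moment comes from the soil sitting on the heel. That’s exactly why extending the footing inward works so well.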

One common type of retaining wall involves tying a mass of soil together to act as its own wall, retaining the unreinforced soil beyond. This was actually the subject of one of the very first engineering posts. It’s accomplished during the fill operation by including reinforcement elements between layers of soil, a technique called mechanically stabilized earth. The reinforcing elements can be steel strips or fabric made from plastic fibers called geotextile or geogrid. It is remarkable how well this kind of reinforcement can hold soil together.

Gravity walls and mechanically stabilized earth are effective retaining walls when you’re building up or out. In other words, they’re constructed from the ground up. But, excavated slopes often need to be retained as well. Maybe you’re cutting out a path for a roadway through a hillside or constructing a building in a dense urban area starting at the basement level. In these cases, you need to install a retaining wall before or during excavation from the top down, and there are several ways to go about it. Just like reinforcements hold a soil mass together in mechanically stabilized earth, you can also stitch together earth from the outside using a technique called soil nailing. First, an angled hole is drilled in the face of the unstable slope. Then a steel bar is inserted into the hole, usually with plastic devices called spiders to keep it centered. Cement grout is added to the hole to bond the soil nail to the surrounding earth.

Both mechanically stabilized earth and soil nails are commonly used on roadway projects, so it’s easy to spot them if you’re a regular driver. But don’t examine too closely until you are safely stopped. These walls are often faced with concrete, but the facings are rarely supporting much of the load. Instead, their job is to protect the exposed soil from erosion due to wind or water. In temporary situations, the facing sometimes consists of shotcrete, a type of concrete that can be sprayed from a hose using compressed air. For permanent installations, engineers often use interlocking concrete panels with a decorative pattern. These panels not only look pretty, but they also allow for some movement over time and for water to drain through the joints.

One disadvantage of soil nails is that the soil has to deform a little bit before the strength of each one kicks in. The nails also have to be spaced closely together, requiring a lot of drilling. In some cases it makes more sense to use an active solution, usually called anchors or tiebacks. Just like soil nails, anchors are installed in drilled holes at regular spacing, but you usually need a lot fewer of them. And unlike soil nails, they aren’t grouted along their entire length. Instead, part of the anchor is installed inside a sleeve filled with grease, so you end up with a bonded length and an unbonded length. That’s because, once the grout cures, a hydraulic jack is used to tension each one. The unbonded length of the anchor acts like a rubber band to store that tension force. Once the anchor is locked off, usually using a nut combined with a wedge-shaped washer, the tension in the unbonded length applies a force to the face of the wall, holding the soil back. Anchored walls often have plates, bearing blocks, or beams called walers to distribute the tension force across the length of the wall.

One final type of retaining wall uses piles. These are vertical members driven or drilled into the ground. Concrete shafts are installed with gigantic drill rigs like massive fence posts. When they are placed in a row touching each other, they’re called tangent piles. Sometimes they are overlapped, called secant piles, to make them more watertight. In this case, the primary piles are installed without steel reinforcement, and before they cure too hard, secondary piles are drilled partially through the primary ones. The secondary piles have reinforcing steel to provide most of the resistance to earth pressure. Alternatively, you can use interlocking steel shapes called sheet piling. These are driven into the earth using humongous hammers or vibratory rigs. Pile walls depend on the resistance from the soil below to cantilever up vertically and resist the lateral earth pressure. The deeper you go, the more resistance you can achieve. Pile walls are often used for temporary excavations during construction projects because the wall can be installed first before digging begins, ensuring that the excavated faces have support for the entirety of construction.

All these types of retaining walls perform perfectly if designed correctly, but retaining walls do fail, and there are a few reasons why. One reason is simply underdesigning for lateral earth pressure. It’s not intuitive how much force soil can apply to a wall, especially because the slope is often holding itself up during construction. Earth pressure behind a wall can build gradually such that failure doesn’t even start until many years later. Lots of retaining walls are built without any involvement from an engineer, and it’s easy to underestimate the loads if you’re not familiar with soil mechanics. Most cities require that anything taller than around 4 feet or 1.2 meters be designed by a professional engineer.

As I mentioned, soil loads aren’t the only forces acting on walls. Some fail when unanticipated surcharge loads are introduced, like larger buildings or heavy vehicles driving too close to the edge. If you’re ever putting something heavy near a retaining wall, whether it’s building a new swimming pool or operating a crane, it’s usually best to have an engineer review it beforehand.

Water is another challenge with retaining walls. Not only does water pressure add to the earth pressure, in some climates it can freeze. When water freezes, it expands with a force that is nearly impossible to restrain, and you don’t want that happening to the face of a wall. Most large walls are built with drainage systems to prevent water from building up. Keep an eye out for holes through the face of the wall that can let water out, called weepholes, or pipes that collect and carry the water away.

Finally, soil can shear behind the wall, even completely bypassing the wall altogether. For tall retaining walls with poor soils, multiple tiers, or lots of groundwater, engineers perform a global stability analysis as a part of design. This involves using computer software that can compare the loads and strengths along a huge number of potential shearing planes to make sure that a wall won’t collapse. 
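As a toy version of that idea, here’s a factor-of-safety check on a single planar slip surface. Real analyses search thousands of curved surfaces and account for groundwater, soil layering, and reinforcement; every number below is an illustrative assumption.

```python
import math

# Toy global-stability check: factor of safety of a soil block sliding on one
# planar slip surface. All values below are illustrative assumptions.

def planar_fs(weight_kn, slip_angle_deg, slip_length_m, cohesion_kpa, friction_deg):
    theta = math.radians(slip_angle_deg)
    driving = weight_kn * math.sin(theta)    # shear force pulling the block downslope
    normal = weight_kn * math.cos(theta)     # force pressing the block onto the plane
    resisting = cohesion_kpa * slip_length_m + normal * math.tan(math.radians(friction_deg))
    return resisting / driving

fs = planar_fs(weight_kn=900.0,        # weight of the sliding wedge, per meter of wall
               slip_angle_deg=28.0,    # inclination of the trial slip surface
               slip_length_m=10.0,
               cohesion_kpa=10.0,      # assumed soil cohesion
               friction_deg=32.0)      # assumed friction angle
print(f"Factor of safety on this surface: {fs:.2f}")
```

The software does exactly this kind of comparison, just over a huge number of trial surfaces, and flags the lowest factor of safety it finds.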

Look around and you’ll see retaining walls everywhere holding back slopes so we all have a little more space in our constructed environments. They might just look like a pretty concrete face on the outside, but now you know the important job they do and some of the engineering that makes it possible.


December 07, 2021 /Wesley Crump

What Really Happened at the Millennium Tower?

November 16, 2021 by Wesley Crump

[Note that this article is a transcript of the video embedded above.]

The Millennium Tower is the tallest residential building in San Francisco, with 58 stories above the ground and 419 luxury condominium units. The tower opened to residents in 2009, but even before construction was finished, engineers could tell that the building was slowly sinking into the ground and tilting to one side. How do engineers predict how soils will behave under extreme loading conditions, and what do you do when a skyscraper’s foundation doesn’t perform the way it was designed? Let’s find out. I’m Grady, and this is Practical Engineering. Today, we’re talking about the Millennium Tower in San Francisco.

Skyscrapers are heavy. That might seem self-evident, but it can’t be overstated in a story like this. An average single-story residential home is designed to apply a pressure to the subsurface of maybe 100 pounds per square foot of building footprint. That’s about 5 kilopascals, the pressure at the bottom of a knee-deep pool of water. With its concrete skeleton, the Millennium Tower was designed to impose a load of 11,000 pounds per square foot or 530 kilopascals on its foundation (about 100 times more than an average house). It would be impossible for just the ground surface to bear that much weight, especially in this case where the ground surface is a weak layer of mud and rubble placed during the City’s infancy to reclaim land from the bay.
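Here’s a quick unit check on those figures (the conversion factor is exact; the loads themselves are just the round numbers above):

```python
# Quick unit check on the bearing pressures mentioned above.
PSF_TO_KPA = 0.04788  # 1 pound per square foot in kilopascals

house_psf = 100.0
tower_psf = 11_000.0

print(f"House: {house_psf * PSF_TO_KPA:5.1f} kPa")
print(f"Tower: {tower_psf * PSF_TO_KPA:5.0f} kPa")
print(f"Ratio: about {tower_psf / house_psf:.0f} times the pressure")
```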

That tremendous pressure is why most tall buildings use deep foundation systems. The Millennium Tower’s foundation consists of a 10-foot or 3-meter-thick concrete slab supported by 950 concrete friction piles driven into the subsurface to a depth of about 80 feet or 24 meters. Friction piles spread out the load of the building vertically, allowing much more of the underlying soils to be used to support the structure without becoming overwhelmed. The piles also allow the foundation to bear on stronger soils than those at the surface.

Driving the piles so deep allowed the building to bear not on the surface layer of artificial fill, or even the soft underlying layer of mud, but rather on the dense sandy soil of the Colma Formation below. This is a fairly common design in San Francisco, with more than a dozen tall buildings in the downtown area utilizing a similar foundation system, including some nearly as large as this one. However, it’s not the dense sands causing problems for the Millennium Tower, but what’s underneath. Below the Colma Formation is a thick layer of Ice Age mud locally known as the Old Bay Clay. Thanks to the geologists for that name. When the building was designed, the project geotechnical engineers predicted that it would settle 4 to 6 inches (10 to 15 centimeters) over the structure’s entire lifetime, mainly from this layer of Old Bay Clay below the bottom of the piles. But even before construction was complete, the building had already settled more than that.

The ground below your feet may seem firm and stable, but when subjected to increased loading - and especially when the load is extreme like that of a concrete skyscraper - soil can compress in a process called consolidation. Essentially, the soil is like a sponge filled with water. An increased load will slowly squeeze the water out, allowing the grains to compress into the empty space. Settlement is usually a gradual process because it takes time for the water to find a path out from the soil matrix. But some things can accelerate the process, even if they’re not intentional.
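If you’re curious how engineers put a number on that squeezing, here’s a minimal sketch of the classic one-dimensional consolidation settlement calculation. The clay properties and stress increase below are invented for illustration; they’re not the values from the Millennium Tower’s design.

```python
import math

# Minimal one-dimensional consolidation settlement sketch for a normally
# consolidated clay layer:
#   settlement = Cc / (1 + e0) * H * log10((sigma0 + delta_sigma) / sigma0)
# All soil values below are illustrative assumptions.

def consolidation_settlement(layer_thickness_m, cc, e0, sigma0_kpa, delta_sigma_kpa):
    return cc / (1 + e0) * layer_thickness_m * math.log10(
        (sigma0_kpa + delta_sigma_kpa) / sigma0_kpa)

settle_m = consolidation_settlement(layer_thickness_m=20.0,  # assumed clay thickness
                                    cc=0.3,                   # assumed compression index
                                    e0=0.9,                   # assumed initial void ratio
                                    sigma0_kpa=400.0,         # assumed existing stress in the clay
                                    delta_sigma_kpa=60.0)     # assumed added stress from the building
print(f"Estimated consolidation settlement: {settle_m * 100:.0f} cm")
```

The logarithm means each extra increment of stress matters a bit less, but a thick layer of soft clay can still rack up a lot of movement.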

The Millennium Tower was already designed to put more stress on the underlying Old Bay Clay than any other building in the area. However, construction of the tower’s basement also required the contractor to pump water out of the subsurface to keep the site dry. This is often done using vertical wells similar to the ones used for drinking water but usually not as deep. This deliberate and continuous dewatering of foundation soils accelerated the settlement. Then other construction projects nearby began, including the adjacent Transbay Transit Center, which required their own deep excavations and groundwater drawdowns. All these factors added up to a lot more settlement than was initially anticipated by the project’s geotechnical engineers. The result was that, by 2016 (when the public first learned about the issue), the building had already sunk more than 16 inches or 41 centimeters, triple the movement that was anticipated for its entire lifetime. Unfortunately, that settlement wasn’t happening evenly. Instead, the northwest corner had sunk a little lower than the rest of the foundation, causing the tower to tilt several inches in that direction.

The media had a field day reporting on the leaning tower of San Francisco, and accusations started flying about who was to blame and whether the City had covered up details about the building’s movement. The developer continued insisting that the building was safe, reiterating that all buildings settle over time, and the Millennium Tower was no different. But it definitely was different, at least in magnitude. With so much attention to the building, the City commissioned a panel of experts in 2017 to assess its safety both for everyday use and in the event of a strong earthquake. By that time, the building had settled another inch and was out-of-plumb by more than a foot or 30 centimeters. That’s not something you could notice by eye and was probably only discernible to the most perceptive residents, but it’s well beyond the 6 inches allowed by the building code. Even so, the panel found that the building was completely safe, and the settlement had not compromised its ability to withstand strong earthquakes. However, they cautioned that the movement hadn’t stopped, and further tilting may affect the building’s safety.

At the same time, and despite engineering assessments confirming the building’s safety, the condominium prices were plummeting. No one wanted to live in a building that was sinking into the ground with no sign of slowing down. It didn’t take long for lawsuits to be filed. By the end of it, just about every person and organization related in any way to the Millennium Tower was involved in at least one lawsuit, including individual residents, the homeowners association, the building developer, the Transbay Joint Powers Authority, and many others. In total, there were nine separate lawsuits involving around 400 individual parties. After many years of complex litigation, a comprehensive settlement (of the legal kind) was eventually reached through private mediation. The result was that no one took the blame for the building’s excessive movement, condo owners would be compensated for the loss of property values, and, most importantly, the building would be fixed.

During mediation, the retrofits to the building’s foundation to slow the sinking and “de-tilt” the tower were a big point of contention. One early plan was to install hundreds of micropiles (small diameter drilled piles) through the existing foundation down to bedrock. But the estimated cost for the repair was as much as 500 million dollars, more than the original cost of the entire building. It turns out it’s a lot easier to drill foundation piles before the building is built than afterward. The challenges associated with working below the building, like access, vibrations, noise, and lack of space, drove up the price, and the parties couldn’t agree to pay such a substantial cost. An unconventional alternative proposed by the developer’s engineer ended up resolving the dispute, and as of this writing, is currently under construction.

The proposed fix to the Millennium Tower is to install piles along two sides of the building’s perimeter. That may seem kind of simple, but there is a lot of clever engineering involved to make it work. Fifty-two piles will be drilled along the north and west sides of the tower all the way down to bedrock. Unlike the original plan, these piles will be installed outside the building below the adjacent sidewalks, saving a significant amount on the construction cost. An extension to the building’s existing concrete slab will be installed around the piles but not rigidly attached to them. Instead, each pile will be sleeved through and extended above the concrete slab so that the building can move independently. The slab will be equipped with steel beams centered above each pile and anchored deep within the concrete. Finally, hydraulic jacks will be installed between each of the fifty-two piles and beams.

Once everything is installed, the contractor will use the hydraulic jacks to lift the building’s foundation, transferring about 20 percent of the load onto the new perimeter piles. That means each one will be carrying around 800,000 pounds or 360,000 kilograms. The goal of the upgrade is to remove weight from the clay soils below the building, transferring it to the stronger bedrock further below and thus slowing down the settlement. The design requires that the holes be overdrilled so that no part of the new piles can come into contact with the Old Bay Clay and put any weight on this weak subsurface layer. The annular space between each pile and the clay will be filled with low-strength material only after the hydraulic jacking operation is complete. Once the building is safely supported, each pile will be enclosed in a concrete vault below the ground, everything will be backfilled, and the sidewalks will be replaced. If all goes according to plan, the settlement on the north and west sides of the building will be completely arrested. With less load on the original foundation, the sinking of the other two sides will gradually slow to a stop, straightening the building back to its original plumbness, but just a couple of feet lower than where it started.

Of course, expensive and innovative construction projects rarely do go according to plan, and this one is no different. The City of San Francisco and the design engineers were carefully monitoring the building’s movement as construction of the retrofit got started in May 2021. It didn’t take long to notice an issue. The vibrations and disturbance of drilling through the Old Bay Clay were making the settlement accelerate. The speed at which the building was tilting and sinking started to increase as the drilling continued. In August 2021, construction was halted to reassess the plan and find a solution to install the foundation retrofit safely. As of this writing, crews are testing some revised drilling procedures that they hope will reduce the disturbance to the clay layer so they can get those piers installed and the building supported as quickly as possible.

The story of the Millennium Tower is a fascinating case study in geotechnical engineering. Our ability to predict how soils will behave under new and extreme conditions isn’t perfect, especially when those soils are far below the surface, where we can only guess their properties and extents based on a few borehole samples. In addition, buildings don’t get built in a vacuum, and the tallest ones are usually at the center of dense urban areas. Soils don’t care about property lines, and you can end up with big problems by underestimating the impacts that adjacent projects can create. Most people will wonder why the building’s foundation didn’t just go to bedrock in the original design. The answer is the same reason my house doesn’t have piles to bedrock. No one likes to pay for things they don’t think are necessary. If those geotechnical and structural engineers could go back in time, I think they probably would go with a different foundation, but whether they could have reasonably predicted the performance of the original design with all the extra dewatering and adjacent construction is a more complicated question.

The Millennium Tower is also an interesting case study in the relationship between engineers and the media. The developer’s engineers and the City have shown that the building is perfectly safe through detailed modeling and investigation. And yet, the prices of those luxury condominiums plummeted with the frenzy of reporting about the settlement and tilting. Those prices depend not only on buyers’ confidence in the building’s safety but also their willingness to be associated with a building that is regularly in the news. The value in the multimillion-dollar repair project will be not just in slowing the settlement but also in slowing the articles, news segments, memes, and tourists, so that this building isn’t remembered only as the leaning tower of San Francisco.

I know I say this at the end of all my blog posts, but this is not the whole story. I did my best to summarize the high points, but there are many more details to this saga. I definitely encourage you to seek out those details before drawing any hard conclusions. It’s an excellent example of the challenges and complexity involved in large-scale engineering projects, the limitations and uncertainty in engineering practice, and the interconnectedness of regulations, engineering, and the media. I’ll be keeping an eye on the progress of the foundation retrofit. Thank you, and let me know what you think!


November 16, 2021 /Wesley Crump

Why SpaceX Cares About Dirt

November 02, 2021 by Wesley Crump

[Note that this article is a transcript of the video embedded above.]

Before the SpaceX South Texas launch facility at Boca Chica near South Padre Island supported crazy test launches of the Starship spaceflight program, it was just a pile of dirt. Contractors brought in truck after truck of soil, creating a massive mesa of more than 300,000 cubic yards or 230,000 cubic meters of earth. That’s a lot of Olympic-sized swimming pools, not that you’d want to go swimming in it. After nearly two years, they hauled most of that soil back off the site for disposal. It might seem like a curious way to start a construction project, but foundations are critically important. That’s true for roads, bridges, pipelines, dams, skyscrapers, and even futuristic rocket launch facilities. The Texas coastline is not known for its excellent soil properties, so engineers had to specify some extra work before the buildings, tanks, and launchpads could be constructed. Building that giant dirt pile was a clever way to prevent these facilities from sinking into the ground over time. Why do some structures sink, and what can we do to keep it from happening? I’m Grady and this is Practical Engineering. Today, we’re talking about soil settlement.

The Earth’s gravity accelerates us, and everything else on our planet, downward. To keep us from falling toward the center of the planet, we need an equal and opposite reaction to keep us in place. If you’re at the top of a skyscraper, your weight is supported by floor joists that transfer it to beams that transfer it to columns that transfer it downward into massive concrete piers, but eventually the force of you must be resisted by the earth. It’s ground all the way down. You might not think about the ground, and its critical role in holding stuff up, but the job of a geotechnical engineer is to make sure that when we build stuff, the earth below is capable and ready to support that stuff for its entire lifespan.

Every step you take when walking along the ground induces stress into the subsurface. And every rocket launch facility you build on the Texas coastline does the same thing. This isn’t always a big deal. When constructing on bedrock, there’s a lot less to worry about, but much of the earth’s landscape consists of soil: granular compositions of minerals. Stress does a funny thing to soils. I mean, it does some funny things to all of us, but to soils too. At first consideration, you might not think there’s really much difference between rock and soil. After all, soil particles are just tiny rocks, and many sedimentary rocks are made from accumulated soil particles anyway. But, soil isn’t just particles. In between all those tiny grains are empty spaces we call pores, and those pores are often filled with water. Just like squeezing a sponge forces water out, introducing stress to a soil layer can do the same thing.

Over time, water is forced to exit the pore space of the soil and flow up and out. As the water departs, the soil compresses to take up the void left behind. This process is called consolidation. It’s not the only mechanism for settlement, but it is the main one, especially for soils that are made up of fine particles. Large-grained soils like sand and gravel interlock together and don’t really act like a sponge so much as a solid, porous object. To the extent they do consolidate, it happens almost immediately. You can squeeze and squeeze, but nothing happens. Fine-grained soils like clay and silt are different. Like sand or gravel, the particles themselves aren’t very compressible. However, unlike in coarse-grained soils, fine particles aren’t so much touching their neighbors as they are surrounded by a thin film of water. When you squish the soil, the tiny particles rearrange themselves to interlock, pressurizing the pore water and ultimately forcing it out. The more weight you add, the more stress goes into the subsurface, the more water is forced out of the pores, and thus the further the soil settles. Geotechnical laboratories perform these tests with much scientific rigor.

This may seem obvious, but when we build stuff, we don’t want it to move. We want the number on that dial to stay the same for all of eternity, or at least until the structure is at the end of its lifespan. That idea - that when you build something, it stays put - is essentially all of geotechnical engineering in a nutshell. It encompasses the entirety of foundation design, from the simplest slabs of concrete for residential houses, to the highly sophisticated substructures of modern bridges and skyscrapers. The way movement occurs also matters. It’s actually not such a big deal if settlement happens uniformly. After all, in many cases the movement is nearly imperceptible. I’m using a special instrument just so you can see it on camera. Many buildings can take a little movement without much trouble. But often, settlement doesn’t happen uniformly.

For one, structures don’t usually impose uniform loads. If everything we built was uniform in size and density, we might be okay, but that’s never the case. No matter what you’re constructing, you almost always have some heavy parts and other light parts that stress the soil differently. On top of that, the underlying geology isn’t uniform either. Take a look at any road cut to see this. The designers of the bell tower at the Pisa Cathedral in Italy famously learned this lesson the hard way. Small differences in the soils on either side of the tower caused uneven settlement. Geotechnical engineering didn’t exist as a profession in the 1100s, and the architects would have had no way of knowing that the sand layer below the tower was a little bit thinner on the south side than the north. It didn’t take long after construction started for the tower to begin its iconic lean. I should point out that there’s another soil effect that can cause the opposite problem. Certain types of soils expand when exposed to increased moisture, introducing further complications to a geotechnical engineer. I have a separate post on that topic, so check it out after this if you want to learn more.

Settlement made the tower of Pisa famous, but in most cases it just causes problems and costs a lot of money to fix. One of the most famous modern examples is the Millennium Tower in San Francisco, California. The 58-story building was constructed atop the soft, compressible fill and mud underlying much of the Bay Area. Engineers used a foundation of piles driven deep below the building to a layer of firmer sand, but it wasn’t enough. Only 10 years after construction, the northwest corner of the building had sunk more than 18 inches or 46 centimeters into the earth, causing the building to tilt. Over time, some of the building’s elements were damaged or broken, including the basement and pavement surrounding the structure. As you would expect, there were enough lawsuits to fill an Olympic-sized swimming pool. The repairs to the building are in progress at an estimated cost of 100 million dollars, not to mention the who-knows-how-much in legal fees.

One of the most reliable ways to deal with settlement is just to make sure it happens during construction instead of afterwards. As you build, you can account for minor deviations as they occur. Unfortunately, consolidation isn’t always a speedy process. The voids in clay soils are extremely small, so the path that water has to take in order to exit the soil matrix is long and winding. We call this winding quality tortuosity. Depending on the soils and loads applied, the consolidation process can take years to complete.

It’s not a good idea to build a structure that will settle unevenly over the next several years. Hopefully it’s obvious that that’s bad design. So, we have a few options. One is to use a concrete slab that is stiff enough to distribute all the forces of the structure evenly and provide support no matter how nonuniformly the settlement occurs. These slabs are sometimes called raft foundations because they ride the soil like a raft in the ocean. Another option is to sink deep piles down to a firmer geologic layer or bedrock so that loads get transferred to material more capable of handling them. But both of those options can be quite expensive. A third option is simply to accelerate the consolidation process so that it’s complete by the end of construction.

One way to speed up consolidation in clay soils is to introduce a drainage system. Settlement is mainly a function of how quickly water can exit the soil. In a clay layer, particularly a very thick layer or one underlain by rock, the only way for water to leave is at the surface. That means water well below the ground has to travel a long distance to get out. We can shorten the distance required to exit the soil by introducing drains. This is often done using prefabricated vertical drains, called PVDs or wick drains. These plastic strips have grooves in which water can travel, and they can be installed by forcing them directly into the subsurface using heavy machinery. An anchor plate is attached, the drain is pressed into the soil to the required depth inside a steel mandrel, the mandrel is pulled out, and the material is cut. It all happens in quick succession, allowing close spacing of drains across a large area. The tighter the spacing, the shorter the distance water has to travel to exit. One of the other benefits here is that water often travels through soils horizontally faster than it does vertically, since geologic layers are usually horizontal. That speeds up consolidation even more. Plotting the displacement over time, the benefit of vertical drains is unmistakable.
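
Here’s a small Python sketch of why drainage distance matters so much. It uses standard one-dimensional consolidation theory, where the time to reach a given degree of consolidation grows with the square of the longest drainage path. Real wick-drain designs use a radial-flow version of this math, but the square-law scaling is the point. The coefficient of consolidation below is an assumed, plausible value for a soft clay.

```python
def time_for_90_percent_consolidation(drainage_path_m, cv_m2_per_year):
    """One-dimensional consolidation time (Terzaghi theory).
    Time is proportional to the SQUARE of the longest drainage path,
    which is why shortening that path pays off so dramatically.
    T90 = 0.848 is the dimensionless time factor for 90% consolidation."""
    T90 = 0.848
    return T90 * drainage_path_m**2 / cv_m2_per_year

cv = 1.0  # m^2/year -- assumed, plausible value for a soft clay
print(time_for_90_percent_consolidation(5.0, cv))   # ~21 years: a 10 m layer draining top and bottom
print(time_for_90_percent_consolidation(0.75, cv))  # ~0.5 years: drains spaced roughly 1.5 m apart
```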


The second way we speed up consolidation is surcharge loading. This is applying stress to the foundation soils before construction to force the water out quickly. Like I described in the intro at SpaceX South Texas, it’s usually as simple as hauling in a huge volume of earth to be temporarily placed on site. The way this works is as straightforward as squeezing a sponge harder. It’s the equivalent of adding more weight to my acrylic oedometer, but it’s simpler just to show a graph. Let’s say you’re going to build a structure that will impose a stress on the subsurface. That stress corresponds to a consolidation at this red line. If you load the foundation soils with something heavier than your structure, that weight will be associated with a greater consolidation. It’s going to take about the same time to reach a certain percentage of consolidation in both cases, but you’re going to hit the target consolidation (the red line) much faster. In many cases, engineers will specify both wick drains and surcharging to consolidate the soil as quickly as possible so that construction can begin. Once you get rid of all the extra soil you brought in, you can start building on your foundation knowing that it’s not going to settle further over time.
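
To see why surcharging pays off, here’s a rough Python sketch using the standard curve-fit equations for the consolidation time factor. The settlement numbers are invented for illustration: the idea is that the target settlement corresponds to only a partial degree of consolidation of the larger, surcharged total, and partial consolidation arrives much sooner.

```python
import math

def time_factor(U):
    """Approximate dimensionless time factor Tv for an average degree
    of consolidation U (0-1), from the standard curve-fit equations."""
    if U <= 0.60:
        return math.pi / 4 * U**2
    return 1.781 - 0.933 * math.log10(100 * (1 - U))

# Suppose the building alone would eventually settle 0.23 m, but the
# surcharged foundation would eventually settle 0.35 m (made-up values).
# You can remove the surcharge once 0.23 m has occurred, which happens at:
U_needed = 0.23 / 0.35                            # ~66% of the surcharged total
print(time_factor(U_needed) / time_factor(0.95))  # ~0.3: the wait is cut to roughly a third
```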

November 02, 2021 /Wesley Crump

What Really Happened At Edenville and Sanford Dams?

October 19, 2021 by Wesley Crump

[Note that this article is a transcript of the video embedded above.]

On May 18th, 2020, heavy rainfall in Michigan raised the level of Wixom Lake - a man-made reservoir impounded by Edenville Dam - higher than it had ever gone before. As the reservoir continued to rise the following day, the dam suddenly broke, sending a wall of water downstream. As it traveled along the Tittabawassee River, the flood wave reached and quickly overpowered the Sanford Dam downstream. The catastrophic failure of the two dams impacted more than 2,500 structures and caused more than 200 million dollars in damage. The independent forensic team charged with investigating the event released an interim report on the failures in September 2021. The conclusions of the report include a discussion of a relatively rare phenomenon in earthen dams. Let’s walk through the investigation to try and understand what happened. I’m Grady, and this is Practical Engineering. Today, we’re talking about the failures of Edenville and Sanford Dams.

Edenville and Sanford Dams were two of four dams owned by Boyce Hydro Power along the Tittabawassee River in Michigan. The dams were built in the 1920s to generate hydroelectricity. Edenville Dam was constructed just upstream of the confluence with the Tobacco River. It was an earthfill embankment dam with two spillways and a powerhouse. The water impounded by the dam formed a reservoir called Wixom Lake, nearly the entire perimeter of which was surrounded by waterfront homes. State highway 30 bisected the dam along a causeway, splitting the lake between the two rivers with a small bridge to allow water to flow between the two sections of the reservoir. Sanford Dam downstream was a similar structure to Edenville, but not nearly as long. It consisted of an earthen embankment, a gated spillway, an emergency spillway, and a powerhouse for the turbines, generators, and other hydroelectric equipment.

Edenville Dam, in particular, had a long history of acrimony and disputes between the dam owner and regulatory agencies. Most dams that generate hydroelectricity in the US are subject to oversight by the Federal Energy Regulatory Commission (or FERC). But, Edenville Dam had its license to generate hydropower revoked in 2018 when the owner failed to comply with FERC’s safety regulations. Their report listed seven concerns, the most significant of which was that the dam didn’t have enough spillway capacity. As a result, if a severe storm were to come, the dam wouldn’t be able to release enough water to prevent the reservoir level from climbing above the top of the structure, overtopping it and likely causing it to fail. After losing the license to generate hydropower, jurisdiction over the dam fell to the State of Michigan, where disagreements about its structural condition, spillway capacity, and water levels in Wixom Lake continued.

The days before the failure had already been somewhat rainy, with small storms moving through the area. But heavy rain was in the forecast for May 18th. The deluge arrived early that morning, and it didn’t take long for the water levels in Wixom Lake to begin to rise. By 7 AM, operators at the dam had started opening gates on both spillways to release some of the floodwaters downstream. Gate operations continued throughout the day as the reservoir continued rising. At 3:30 PM, all six gates (three at each spillway) were fully opened. From then on, there was nothing more operators could do to get the floodwater out faster, and the level in Wixom Lake continued to creep upwards. That night, the lake reached the highest level in its history, only about 4 feet or 1.3 meters below the top of the earthen dam.

At daybreak on May 19th, it was already clear that Edenville Dam was struggling from the enormous forces of the flood. Operators noticed severe erosion from the quickly flowing water in the reservoir near the east spillway along the embankment. Regulators and dam personnel met to review the damage, and a contractor was brought in to deploy erosion control measures. And still, the water kept rising.

By 5 PM, Wixom Lake had risen to within around a foot (or 30 centimeters) from the top of the dam. As crews worked to mitigate the erosion problems in other places, eyewitnesses noticed a new area of depression on the far eastern end of the dam. This part of the embankment hadn’t been a significant point of focus during the flood because it wasn’t experiencing visible erosion, but it was apparent something serious had happened. Photos from a few hours earlier didn’t show anything unusual, but now the top of the embankment had sunk down nearly to the reservoir level. Eyewitnesses moved to the nearby electrical substation to get a better look at this part of the dam. Within only a few moments, the embankment failed. Lynn Coleman, a Michigander and one of the bystanders, caught the whole thing on camera.

Over the next two hours, all of Wixom Lake drained through the breach in the dam. Water rushing through the narrow gap in the causeway washed out the highway bridge, and all of the waterfront homes and docks around the entire perimeter of the lake were left high and dry. As the floodwaters rushed through the breach into the river, the level downstream in Sanford Lake rose rapidly. By 7:45, the reservoir was above the dam’s crest, quickly eroding and breaching the structure. With the combined volumes of Wixom and Sanford Lakes surging uncontrolled down the Tittabawassee River, downstream communities including Sanford, Midland, and Saginaw were quickly inundated. Google Earth shows aerial imagery before, during, and after the flood, so you can really grasp the magnitude of the event. More than 10,000 people were evacuated, and flooding damaged more than 2,500 structures. Amazingly, no major injuries or fatalities were reported.

In their interim report on the event, the independent forensic team considered a broad range of potential explanations for what happened at Edenville Dam. Although the spillway for the dam was undersized per state regulations, this storm event didn’t completely overwhelm the structure. The level in Wixom Lake never actually went higher than the top of the embankment, so overtopping (one of the most common causes of dam failure, including the cascading loss of the downstream Sanford Dam) was eliminated as a possible cause of failure for Edenville Dam.

The team also looked at internal erosion, a phenomenon I’ve covered before that has resulted in many significant dam failures. Internal erosion involves water seeping through the soil and washing it away from the inside. However, this type of erosion usually happens over a longer time period than what was witnessed at Edenville Dam. No water seepage exiting the downstream face of the embankment or eroding soil was evident in the time leading up to the breach, ruling this mechanism out as the main cause of failure.

The forensic team determined that the actual cause of the failure was static liquefaction, a relatively unusual mechanism for an earthen dam. Soils are kind of weird but don’t tell that to geotechnical engineers. Because they are composed of many tiny particles, they can behave like solids in some cases and liquids in others. Of course, most of our constructed environment depends on the fact that soils mainly behave like solids, providing support to the things we build on top of them.

Liquefaction happens when soil experiences an applied stress, like the shaking of an earthquake, that causes it to behave like a liquid, and it mostly happens in cohesionless soils - those where the grains don’t stick together, such as sand. When a body of cohesionless soil is saturated, water fills the pore spaces between each particle. When a load is applied, the water pressure within the soil increases, and if it can’t flow out fast enough, it forces the particles of soil away from each other. A soil’s strength is derived entirely from the friction between the interlocking particles. So, when those grains no longer interlock, the ground loses its strength. Some of the most severe damage from earthquakes comes from the near-instant transformation of underlying soils from solid to liquid. Buildings sink and tilt as the ground beneath their foundations gives way, sewer lines float to the surface, and roads crumble without underlying support.
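
Here’s a tiny Python sketch of the effective stress principle behind all of this. The friction angle is an assumed, typical value for sand, not a measured property of the Edenville embankment.

```python
import math

def sand_shear_strength(total_stress_kpa, pore_pressure_kpa, friction_angle_deg=33):
    """Drained shear strength of a cohesionless soil from the effective
    stress principle: strength comes only from friction, and friction
    comes only from the effective stress squeezing the grains together.
    (A friction angle of 33 degrees is an assumed, typical value for sand.)"""
    effective_stress = total_stress_kpa - pore_pressure_kpa
    return max(effective_stress, 0) * math.tan(math.radians(friction_angle_deg))

print(sand_shear_strength(100, 40))   # ~39 kPa with modest pore pressure
print(sand_shear_strength(100, 100))  # 0 kPa: pore pressure equals total stress, i.e. liquefaction
```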

Liquefaction typically requires cyclic loading (like the shaking of an earthquake) or extreme, sudden displacements to trigger the flow. Gradual increases in loading will usually just cause the water within the soil to flow out, equalizing the pore water pressure. But, some soils can reach a point of instability and liquefy under sustained or gradually increasing loading in certain circumstances. This phenomenon is known as static liquefaction. A good analogy is the difference between glass and steel. Both materials have a linear stress-strain curve at first. In simple terms, the harder you push, the harder they push back. But both reach a point of peak strength, beyond which the material fails or deforms. Well-compacted sand is like steel. It fails with ductile behavior. If you stress it beyond its strength, it deforms, but the strength is still there. In other words, if you want to keep deforming it, you have to keep applying a force at its peak strength. On the other hand, loose sand is like glass. If you push it beyond its peak strength, it fails in a brittle way, suddenly losing most of its strength.

The independent forensic team took samples of the soils within the Edenville Dam embankment and subjected them to testing to see if they were liquefiable. The tests showed the brittle collapse behavior necessary for static liquefaction. They also reviewed construction records and photographs, in which no compaction equipment was seen. The team concluded that as the level of Wixom Lake rose that fateful May evening, it increased the hydraulic load on the embankment, putting more stress on the earthen structure than it had ever been asked to withstand. In addition, the higher levels may have introduced water from the reservoir to permeable layers of the upper embankment (as evidenced by the depression that formed before the failure), increasing seepage and thus increasing the pore water pressure of saturated, uncompacted, sandy soils within the structure. Eventually, the peak strength of the embankment soil was surpassed, and a brittle collapse resulted, liquefying enough soil to breach a downstream section of the dam. A few seconds later, lacking support from the rest of the structure, the dam’s upstream face collapsed, and all of Wixom Lake began rushing through.

Edenville Dam was built in the 1920s before most of our current understanding of geotechnical engineering and modern dam safety standards existed. Most dams are earthen embankment dams, but modern ones are built much differently than this one was. Embankments are constructed slowly from the bottom up in individual layers called lifts. This lets you compact and densify every layer before moving upward, rather than just piling up heaps of loose material. We use gentle slopes on embankments to increase long-term stability since soils are naturally unstable on steep slopes. We have strict control over the type of soil used to construct the embankment, constantly testing to ensure the properties match or exceed the assumptions used during design. We often build an embankment in multiple zones. The core is made of clay soils that are highly impermeable to seepage, while the outer shells have less stringent specifications. We include rock riprap or other armoring on the upstream face so that waves and swift water in the reservoir can’t erode the vulnerable embankment. And, we include drains that both relieve pressure so it can’t build up within the soil and filter the seepage to prevent it from washing away soil particles from inside or below the structure. Edenville Dam actually did have a primitive internal drainage system made from clay tiles, but many of the drains in the area of the failure appeared to be missing in a recent inspection.

Although it seems like an outlier, the story of Edenville and Sanford Dams is not an unusual one. There are a lot of small, old dams across the United States built to generate hydropower in a time before everyone was interconnected with power grids. Over time, the revenue that comes from hydropower generation gradually declines as the maintenance costs for the facility and the danger the dam poses to the public both increase. However, the reservoir created by the dam is now a fixture of the landscape, elevating property values, creating communities and tourism, and serving as habitat for wildlife. You end up with a mismatch of value where most of the dam’s benefits accrue to those who bear no responsibility for its upkeep and no liability for the threat it poses to downstream communities. Even owners with the best intentions find themselves utterly incapable of good stewardship. Combine all that with the fact that the regulatory authorities are often underfunded and lack the resources to keep a good eye on every dam under their purview, and you get a recipe for disaster. After all, there’s only so much you can do to compel an owner to embark on a multimillion-dollar rehabilitation project for an aging dam when they don’t have the money to do it and won’t derive any of the benefits as a result.

Since the failure, the dam owner Boyce Hydro filed for bankruptcy protection, and the counties took control of the dams with a nonprofit coalition of community members and experts to manage repair and restoration efforts. Of course, there’s a lot more to this story than just the technical cause of the failure, and the final Independent Forensic Team report will have a deeper dive into all the human factors that contributed to the failure. They expect that report to be released later in 2021. Dams are inherently risky structures, and it’s unfortunate that we have to keep learning that lesson the hard way. Thank you for reading, and let me know what you think!

October 19, 2021 /Wesley Crump

Why SpaceX Cares About Concrete

October 05, 2021 by Wesley Crump

[Note that this article is a transcript of the video embedded above.]

In November of 2020, the rocket company SpaceX was just starting to make some progress in the testing program for their new vehicle, Starship, one of the most ambitious rocket projects in history. One of the prototypes, serial number 8, was on the pad to test-fire the engines for the very first time as a fully stacked vehicle. Almost as soon as the engine lit up, it was clear that something was wrong. A shower of sparks exploded into the dusky sky, and the engine abruptly stopped. The sparks looked innocuous at a distance without a reference for scale, but in reality, they consisted of massive, glowing chunks of the launchpad below the rocket. One of these chunks was blasted into the engine bay, severing an essential cable and severely damaging the rocket. The event brought into the spotlight what is probably the most humble piece of engineering of the entire rocket industry: the pad. How do we build structures that can withstand such insane conditions, what happens when they don’t work, and how might we solve these challenges on other planets? I’m Grady and this is Practical Engineering. Today, we’re talking about launch pads and refractory concrete.

Rocket launch pads are subject to conditions quite unlike those faced by typical infrastructure. There are a lot of creative ways to manage the extremely high-temperature exhaust gases barrelling out of a rocket engine at incredible speeds during a launch. With the Space Shuttle and the in-progress SLS, the launch facilities incorporate a flame trench. This is a structure used to deflect the exhaust gases of a rocket away from the vehicle itself and all the delicate support structures, fuel and power lines, et cetera. But, a launch isn’t the only time that rockets and their fiery engines get close to the ground. SpaceX and other launch providers are now landing rockets propulsively (in other words, with engines). And in most cases, the coming down has a lot less precision than the going up. It isn’t feasible to pinpoint a rocket landing atop a fancy flame diversion structure, at least not yet. Instead, they usually just land on a slab of concrete. But, it’s not just regular concrete. The relationship between heat and that omnipresent gray durable substance is pretty complex, and I have a few demonstrations set up here in my garage so we can learn more.

Concrete is a relatively fire-resistant material. That’s one of the reasons we use so much of it in our buildings and infrastructure: it doesn’t burn. It can provide fire protection in places like the stairwells of buildings. It can also withstand exposure to risky conditions that we wouldn’t allow for other materials, like in warehouses and factories where there’s potential for sparks. Because it is so durable and incombustible, there is a lot of science around the topic of concrete and fire. Engineers have to consider how to design structures that can withstand it. And, if a fire has occurred, we need engineers to inspect structures to figure out whether they’ve been damaged beyond repair or are still safe to use. That can be pretty obvious in some cases, but concrete can be damaged in ways that aren’t immediately clear to the naked eye.

When the damage is obvious, it’s probably because of moisture. Concrete is a porous material, and it can absorb water from the air. But, it’s not super porous. After all, we build dams out of concrete. Moisture can take years to get in after it’s cured. If that water gets too hot, it can turn to steam, expanding in volume within the interstitial spaces of the concrete. And if that steam can’t get out fast enough, it will build up pressure to the point where the concrete breaks. This is known as moisture clog spalling because the water in the pores of the concrete blocks the steam from getting out. Actually, I did try to simulate this effect, but my heat wasn’t enough or my sample was too small and gave the steam too many easy paths to exit. What I really want to show you is how concrete heat damage can be more subtle and insidious.

I’m making a bunch of cylinders of concrete and we’re going to test their strength after exposure to extreme heat. These samples are just made with regular old portland cement concrete from a ready-mix bag purchased from a home center. Just for fun, I’m also making equivalent samples from a specialty concrete that uses materials resistant to deterioration from high heat (also known as refractory concrete). I’m testing three different scenarios: controls left at room temperature with no heat, samples warmed in my oven to 500 degrees Fahrenheit or 260 Celsius, and samples blasted using a gas torch. Two types of concrete times three different temperatures times two samples means I have 12 cylinders in all (but I made a few more just in case something went wrong - they come in handy sometimes). Once they’ve all been heated except the controls, I let them sit in my garage for a week. Now it’s time to break them.

Using a hydraulic press to crush a concrete cylinder isn’t just a lot of fun. It’s the time-tested and industry-approved way of figuring out how strong the concrete is. On almost all construction projects that use concrete, samples of the mix are taken to a laboratory, cured in cylindrical molds, and crushed on a press to verify the concrete was as strong as required. We’re doing the same thing here to see if the heat affected the strength of these samples.

The regular concrete control cylinders broke at 3000 psi or 20 MPa. Unfortunately, the refractory concrete control cylinders maxed out my little press here at 10 tons without breaking. That’s 6,400 psi or 44 MPa. This stuff has small fibers in it to provide some insulation against heat and reduce cracking, and they also help make it much stronger. A fair comparison isn’t going to be possible, but I still think this demo is illuminating - if you’ll pardon the pun. Now I’ll break the heated samples. The ones that went into the oven spent about an hour there to make sure they were fully heated. The portland cement cylinders broke at an average of 2200 psi or 15 MPa. That means they lost about 25% of their compressive strength compared to the unheated samples. We’ll talk about why in a minute. The refractory concrete samples out of the oven still wouldn’t break. They may have lost some strength, but it wasn’t enough to break in my 10-ton press.
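
For reference, converting a press load into a strength number is just force divided by the cylinder’s cross-sectional area. Here’s the arithmetic in Python; the 2-inch diameter is my assumption, chosen only because it reproduces the roughly 6,400 psi figure above.

```python
import math

def compressive_strength_psi(load_lbf, diameter_in):
    """Compressive strength is the crushing load divided by the
    cylinder's cross-sectional area."""
    area = math.pi * (diameter_in / 2) ** 2
    return load_lbf / area

# 10 tons (20,000 lbf) on a 2-inch-diameter cylinder -- the diameter is an
# assumption that reproduces the ~6,400 psi quoted above.
print(compressive_strength_psi(20_000, 2.0))  # ~6,366 psi, about 44 MPa
```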

The samples that got the blow torch were next, and the effect was dramatic on the portland cement concrete. Both samples broke at around 1300 psi or 9 MPa, losing more than half their original strength. The refractory cylinders did break this time, although it was still at nearly the maximum pressure I could deliver. The lesson here is pretty simple: concrete exposed to high temperatures might look fine even when it has lost a significant amount of strength. But why?

The biggest culprit is microcracking caused by thermal expansion. Concrete is a composite material, after all. It’s made from a mixture of large and small aggregates and cement paste. Most materials change volume according to temperature, expanding when hot and shrinking when cooled. But the materials that make up concrete have slight differences in the way they behave when subjected to changes in temperature. Those differences aren’t so critical when the temperature swings are small. But, when subjected to extremes - like under the heat of a massive rocket engine - microfractures occur at the interfaces between the different components as they expand and shrink at different rates. I used these waxes that melt at different temperatures to try and estimate the temperature of the blow torch samples. They probably didn’t get much hotter than the oven samples in most places, but directly in line with the flame was scorching, probably over 1,000 degrees Fahrenheit or 500 Celsius. That type of uneven heating from a small, incredibly hot source exacerbates this type of damage. The tiny cracks grow over time, weakening the concrete as they do, and they aren’t usually visible to the naked eye.
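
Here’s a back-of-the-envelope Python sketch of that mismatch. The expansion coefficients and cracking strain are assumed, literature-range values, not measurements from my samples, but they show why a few hundred degrees of uneven heating is more than the cement paste can tolerate.

```python
# Rough, order-of-magnitude sketch of why uneven heating cracks concrete.
# All values below are assumed, literature-range numbers for illustration.
alpha_paste     = 18e-6   # strain per degree C, hardened cement paste (assumed)
alpha_aggregate = 6e-6    # strain per degree C, limestone aggregate (assumed)
delta_T         = 480     # heating from ~20 C to ~500 C

mismatch_strain = (alpha_paste - alpha_aggregate) * delta_T
cracking_strain = 150e-6  # very roughly where plain concrete cracks in tension (assumed)

print(mismatch_strain)                    # ~0.006, i.e. 0.6% differential strain
print(mismatch_strain / cracking_strain)  # dozens of times more than the paste can take
```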

Interestingly, once the concrete is broken, it sometimes does carry a sign that it got too hot. Many of the aggregates used in concrete will turn pink after exposure to extreme heat.

Refractory concrete isn’t a single material, but really a general name for concretes designed to withstand high temperatures. Every manufacturer has their special blend of herbs and spices. Usually, they use cement that includes oxides which absorb heat less readily and have reduced thermal expansion, so they’re less prone to deterioration when subjected to extreme temperatures. They also often have embedded fibers that provide insulation and tensile reinforcement, similar to the way rebar keeps macroscopic cracks from growing. These extremely useful properties are taken advantage of in a variety of industrial processes like furnaces, kilns, incinerators, and even nuclear reactors.

Even refractory concrete is subject to damage due to heating. We don’t know what the original strength was, but we do know it dropped below the capacity of the press after being blasted by the blow torch. That potential for damage is especially present in the case of launch pads, where concrete is not just exposed to heat but also corrosive gases moving at incredible speeds and sometimes carrying solid airborne particulates capable of eroding even extremely durable materials. Many launch facilities use a ceramic epoxy material to repair damaged areas of refractory concrete pads or just to provide an extra layer of thermal insulation. It was actually a chunk of this epoxy (called Martyte) that damaged the Starship engine during the static test fire.

This demonstration highlights the difficulties that launch providers face. Landing pads are extremely important. Without them, rocket engines cause extensive erosion, blasting the loose soil atop the planet (called regolith) away at incredible speeds. This is one of the reasons the two recent Mars rovers used a complicated sky crane system for landing. The rovers themselves were lowered onto the planet via cables while the rocket thruster nozzles stayed high above the surface. Once the wheels were safely on the ground, the cables were cut and the crane flew off to crash well away from the rover. It was all to reduce the potential for damage from those rocket engine plumes.

In fact, when you land a rocket on the moon, the exhaust gases are moving faster than the moon’s escape velocity. That means, not only can the flying dust threaten the vehicle itself, the engines also send a plume of ejecta flying out like a swarm of microscopic bullets, with no atmosphere and not enough gravity to slow them down. If an orbiting spacecraft were to fly through this plume, it would almost certainly be damaged. So, moon landings have to be timed to prevent collisions between orbiting spacecraft and these sheets of ejected regolith.
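
If you want to check that claim, the moon’s escape velocity falls out of a one-line calculation. The exhaust velocity range in the comment is an assumed, representative figure for chemical rockets, not a number for any particular engine.

```python
import math

G = 6.674e-11      # gravitational constant, m^3 / (kg s^2)
M_moon = 7.35e22   # mass of the moon, kg
R_moon = 1.737e6   # radius of the moon, m

escape_velocity = math.sqrt(2 * G * M_moon / R_moon)
print(escape_velocity)  # ~2,380 m/s

# Typical chemical rocket exhaust leaves the nozzle at very roughly
# 2,500-4,500 m/s (an assumed, representative range), so dust accelerated
# to even a fraction of that speed can leave the moon entirely.
```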

That’s a lot of complexity that could be solved with a simple square of concrete. But, what seems simple on earth has some interplanetary complications, one more important than the others: Concrete is heavy. That’s one of its main features. Concrete structures mostly stay put because their weight pins them to the ground. But that weight is a huge disadvantage if you have to carry the raw materials to another planet. Reducing mass is everything when it comes to launch payloads, and the weight of an entire rocket is often less than that of the pad it takes off from. In other words, we won’t be bringing concrete launch or landing pad assembly kits to the moon, Mars, or elsewhere anytime soon.


There are some creative ideas for building launchpads on other planets that take advantage of local materials, and we’ve even made some lunar concrete using samples brought back to earth. But like almost every task that happens beyond the comfort of earth, it’s never as easy as it seems at first glance. The stakes are high, as we saw during the static test of SpaceX’s SN8. When a launch or landing pad fails, it can be worse than if it wasn’t there at all, creating high-speed projectiles that jeopardize the safety of the vehicle and its support equipment, not to mention its crew. It’s a nice reminder that even the humblest provision here on earth - a solid, flat, and durable surface - is an absolute luxury on another world, and that infrastructure will matter just as much in our interplanetary quests as it does here.

October 05, 2021 /Wesley Crump

Repairing Underground Power Cables Is Nearly Impossible

September 21, 2021 by Wesley Crump

[Note that this article is a transcript of the video embedded above.]

On an autumn evening in 1989, Tom McMahon noticed some unusual construction getting started in his Los Angeles neighborhood. As more and more trucks began showing up with bizarre power tools, test equipment, and tanks of liquid nitrogen, his curiosity got the better of him and he had to take a look. He learned that a high voltage underground transmission line had experienced a fault, costing the City tens of thousands of dollars per hour in lost capacity and downtime. Over the next few months, he got more acquainted with the project manager for the repair, and he shared all the fascinating details of what he learned in a series of messages on his company’s mailing list. Those messages spread like wildfire across various bulletin boards, lists, and forums of the early internet.

I don’t remember exactly how old I was when I came across this story, but I do know that it was one of the very first times that I realized how awesome infrastructure and engineering could be. I figure if it had such a big impact on me, that it’s a story worth retelling, especially because there’s a recent update at the end. Maybe it will inspire others to be more interested and engaged in their constructed environments like it did for me (which is basically my entire goal with these videos). I’m Grady, and this is Practical Engineering. Today, we’re discussing the Scattergood-Olympic Underground Transmission Line.

How do you get electricity from where it’s generated to where it’s used? That’s the job of high voltage transmission lines. Electrical power is the product of the voltage and current in a transmission line. If you increase the voltage of the electricity, you need less current to deliver the same amount of power, so that's exactly what we do. Transformers at power plants boost the voltage before sending electricity on its way (usually in three separate lines, called phases), reducing the current, and thus minimizing energy wasted from the resistance of conductors. High voltages make electrical transmission more efficient, but they create a new set of challenges. High voltage electricity is not only extremely dangerous, but it also tends to arc through the air (which stops insulating once the electric field gets strong enough) to the other phases or grounded objects. The conventional solution is to string these lines overhead on towers. This keeps them high enough to avoid contact with trees and human activities, but the towers serve a second purpose. They keep enough distance between each line so that electrical arcs can’t form between them.
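
Here’s a simplified Python sketch of that trade-off, ignoring three-phase details and power factor. The power, voltage, and resistance values are made up for illustration; the point is the square-law relationship between current and resistive losses.

```python
def line_loss_watts(power_delivered_w, line_voltage_v, line_resistance_ohm):
    """For the same delivered power, the current falls as the voltage
    rises (P = V * I), and resistive heating falls with the square of
    the current (loss = I^2 * R). Simplified single-conductor arithmetic."""
    current = power_delivered_w / line_voltage_v
    return current**2 * line_resistance_ohm

# Illustrative numbers only: move 500 MW over a line with 5 ohms of resistance.
print(line_loss_watts(500e6, 23_000, 5))   # ~2.4 GW of loss -- hopeless at 23 kV
print(line_loss_watts(500e6, 230_000, 5))  # ~24 MW of loss at 230 kV, 100 times less
```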

Unfortunately, stringing high voltage lines overhead isn’t always feasible or popular with the local residents, especially in dense urban areas. That was true in the 1970s when engineers in LA were deciding how to expand their transmission system and deliver power from the Scattergood power plant to the Olympic substation near Santa Monica. So, they tried something that was relatively new and innovative for the time: they ran the line underground. Three 230 kilovolt lines, one for each phase, would deliver enormous amounts of electricity over the approximately 10 mile or 16 kilometer distance in West Los Angeles, powering hundreds of thousands of homes and businesses. However, putting high voltage lines below the ground created a whole new set of challenges.

When strung across towers, the conductors at this voltage each require around 10 feet, or 3 meters of clearance to avoid arcs. The air is the insulator doing the job of keeping electricity constrained within the conductors. So how do you take those three high voltage phases and cram them into a single, small pipe running underground? Well, you need a better insulator than just air. One of the more popular options of the time was to use high pressure, fluid filled cables. This design starts with installation of a steel pipe below the ground with access vaults spaced along the way. Copper conductors are surrounded with many layers of paper insulation. Next, a protective layer of wire called skid wire is spiralled around each one to protect the paper from damage and allow for easy sliding along the pipe during installation. The conductors are pulled through the steel pipe using massive winches and then spliced together at each vault. Once the steel pipe is fully welded closed, it’s slowly filled with a non-conductive oil known as liquid dielectric.

This oil impregnates the paper insulation around each conductor to create a highly insulative layer that prevents arcs from forming, even with the conductors sitting mere inches apart from one another and the surrounding steel pipe. At the same time, the oil works as a heat sink to carry away heat generated from losses in the conductors. It is critical that the oil completely saturates the paper insulation and fills every nook and cranny within the pipe. Just like a hole in the plastic insulation around an extension cord, even a tiny bubble in the oil can create a place for arcs to form because of the extreme voltages. So, the oil inside the pipe is pressurized (usually around 14 times normal atmospheric pressure or over 200 PSI) to ensure that no bubbles can form.

The rating of a transmission line (in other words, how much power it can deliver) is almost entirely based on temperature. All conductors (with rare exceptions) have some resistance to the flow of electric current, and that creates heat which will eventually damage the conductors and insulation if it builds up. The more heat you can remove, the more power you can push through the line. That’s a major benefit of pipe-type oil-filled cables: they’re surrounded by a gigantic liquid heat sink that can be circulated to keep the temperature down and prevent hot spots from forming in the lines. At each end of the transmission line is a plant filled with pumps and tanks to pressurize - and often to circulate - the dielectric oil in the pipe.

This particular transmission line in LA circulated the oil in six-hour cycles. At the end of each cycle, the pumps reverse to move the fluid in the opposite direction through the pipe. Some systems are different, but for the Scattergood line, this pumping is a slow process. You’re not trying to pump all the fluid from one end of the line to the other, but rather simply get it to move a short distance along the line to average out the temperatures and minimize the possibility of any single section overheating. However, even at that slow speed, you can’t just switch the flow direction in an instant.

I have a post all about a phenomenon called fluid hammer, and you can check that out if you want to learn more after this, but I’ll summarize here. Moving fluid has momentum, and rapidly changing its velocity can create dangerous spikes in pressure. Water hammer can be a problem in residential homes when taps or valves within washing machines close too quickly. You might hear a pipe knocking against the wall, or in worse cases, you might completely rupture a line. However, in large pipelines that can contain enormous volumes of fluid, reversing a pump can be the equivalent of slamming a freight train into a brick wall. To avoid spikes in pressure which could damage equipment or rupture the pipe, the pumps at either end of the Scattergood-Olympic line would spend the last hour in the six-hour cycle slowing the oil down, providing a smooth transition to flow in the opposite direction for the next cycle.
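
A rough way to estimate the surge from stopping a column of fluid too quickly is the Joukowsky relation. The numbers in this Python sketch are assumptions for illustration, not properties of the Scattergood-Olympic line.

```python
# Joukowsky's relation for the pressure surge from an abrupt change in
# flow velocity: delta_P = rho * a * delta_v. All values below are
# assumed for illustration.
rho = 900       # kg/m^3, density of a dielectric oil (assumed)
a = 1000        # m/s, pressure-wave speed in an oil-filled steel pipe (assumed)
delta_v = 0.5   # m/s, flow velocity stopped "instantly" (assumed)

surge_pa = rho * a * delta_v
print(surge_pa / 6895)  # ~65 psi of surge on top of a ~200 psi operating pressure
```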

Circulating the dielectric oil helps to keep the temperature within the pipe consistent along the line, but can’t control how that average temperature changes over time. Transmission lines don’t deliver a constant current. Rather, the current depends on the instantaneous electricity demand which changes on a minute by minute basis depending on the devices and equipment being turned on or off. When demands fluctuate, the current in a transmission line changes, and so the amount of heat in the line increases or decreases accordingly. As you might know, many materials expand or contract with changes in temperature, and that’s true for the copper conductors used in underground transmission lines. When these lines expand within the outer pipe, they often move and flex in a process called thermal mechanical bending or TMB. If not carefully designed, these bends can become tighter than the minimum bending radius of the cable, exceeding the allowable stresses within the material. Over hundreds or thousands of cycles of TMB, the paper insulation around each conductor can begin to soften or tear, eventually leading to a dielectric breakdown (in other words, arcs and short circuits). TMB can also pull larger-diameter splices into narrower sections of the pipe, causing them to rub and abrade.
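
To get a feel for the scale of that movement, here’s the free thermal expansion of a copper conductor over the length of the route. The temperature swing is an assumed, illustrative number.

```python
# Free thermal expansion of a copper conductor. The point is that even
# modest heating adds meters of extra cable length that has to go
# somewhere inside the pipe.
alpha_copper = 17e-6   # strain per degree C for copper
length_m = 16_000      # roughly the 10-mile route
delta_T = 20           # assumed swing between light and heavy loading, degrees C

print(alpha_copper * length_m * delta_T)  # ~5.4 m of extra length
```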

That’s what happened in 1989 to the Scattergood-Olympic line. But before the LA Department of Water and Power could repair the fault, first they had to find it. Locating a fault in an underground line is half-art/half-science, and there are many interesting types of equipment that can be used. They tried to use ground-penetrating radar along the line, but they couldn’t identify the fault. They also tried time-domain reflectometry - a method of transmitting a waveform through the cable and measuring the reflections - but the results weren’t conclusive. They also used a device called a thumper which introduces impulses of high voltage into the cable. When this impulse reaches the fault, it causes an electrical arc which can be heard as a thump above the ground, usually aided by a handheld detector with a microphone and digital filters. Going from one extreme in technology to the opposite, the crews used car batteries and voltmeters to take measurements of the conductor’s resistance between tap points to precisely identify the location of the fault within Mr. McMahon’s neighborhood.
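
The car-battery-and-voltmeter trick works because the resistance of a uniform conductor is proportional to its length. Here’s a simplified Python sketch of the idea with made-up numbers; it’s not LADWP’s exact procedure.

```python
def fault_distance_m(resistance_to_fault_ohm, resistance_whole_segment_ohm,
                     segment_length_m):
    """A uniform conductor's resistance is proportional to its length, so
    the ratio of the resistance measured from a tap point to the fault,
    over the segment's total resistance, gives the fault's position.
    A simplified sketch of the idea only."""
    return segment_length_m * resistance_to_fault_ohm / resistance_whole_segment_ohm

# Made-up numbers: a 1,500 m segment with 0.060 ohms end to end, and
# 0.022 ohms measured from one end down to the grounded fault.
print(fault_distance_m(0.022, 0.060, 1500))  # fault roughly 550 m from that end
```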

Once found, the challenge of repairing the faulted cable could begin. How do you fix an insulated conductor inside a steel pipe bathed in high-pressure oil? With liquid nitrogen, of course. Pumping all the oil out of the pipe before the repair wasn’t feasible. It couldn’t be stored and reused after the project because that process would introduce contaminants that would reduce the oil’s insulative properties. They also couldn’t dispose of it and replace it with new oil, because the stuff’s expensive and it would take a long time to get in such an incredible quantity, potentially extending the very expensive downtime. Even more importantly, relieving the oil pressure from the rest of the pipe could allow gas bubbles to form inside the layers of paper insulation, potentially damaging them and creating new places for faults to form. The clever solution they used was to freeze the oil using liquid nitrogen, which is usually around -200C or -320F, creating solid plugs on either end of the section to be repaired. This allowed the rest of the pipe to remain under pressure.

Losing these plugs would be a catastrophe, creating an eruption of high-pressure oil and spilling huge quantities of it into the environment, so the repair crew had liquid nitrogen companies on call across California as a contingency to ensure that the oil could be kept frozen for the duration of the fix.

Unfortunately, after taking x-rays along the entire length of the line, they realized that many of the cables’ splices were in danger of experiencing a similar fault due to thermomechanical bending. After coming to the conclusion that this wasn’t going to be a quick fix, the Department of Water and Power decided to drain the entire line of oil and implement preventative measures while it was already down for repairs. Aluminum collars were installed at key locations along the pipe to constrain the thermal movement of the cable. This was done in a semi-clean environment with air handling and cleanliness requirements to prevent contaminants from finding their way into the pipe. After many months, and tens of millions of dollars worth of downtime, the trucks and crews finally pulled out of Tom’s neighborhood, and the underground transmission line was finally brought back online.

There’s an update to Tom’s story to bring us to modern times. The Scattergood-Olympic line’s troubles didn’t end with the work in 1989. LA’s routine testing showed that the insulation was continuing to degrade, and outages on the line were significantly disrupting the reliability of their transmission network across the city. In 2008, the Department of Water and Power began developing a replacement project, this time using newer cable insulated with polyethylene instead of high pressure oil. After 10 years of planning, environmental permits, public meetings, design, and construction, the project was completed in 2018. The original transmission line is still in place and can be used as a backup if it's ever needed. 

As a part of my research for this story, I spoke to Tom on the phone. He told me that shortly after his writeup spread across the early internet, it was sent to a teletype machine within one of the offices of the LA Department of Water and Power, providing some higher-up within the organization a neatly printed version that may or may not still be hanging on a wall somewhere downtown. Huge thanks to Tom for taking the time all those years ago to share his enthusiasm for large-scale infrastructure, thanks to Jamie Zawinski for preserving the story on his blog, and thank you for reading. Let me know what you think.

September 21, 2021 /Wesley Crump

Why Things Fall Off Cranes

September 07, 2021 by Wesley Crump

We talked about crane failures in a previous video, but you might be surprised to learn that things can and still go wrong with heavy lifts even when the crane is perfectly safe and sound. All cranes use a hook as a connection to the load, and yet, few things we need to lift have an attachment that fits nicely over a gigantic steel hook. “Rigging” is the term used to describe all the steps we go through to attach a load to a crane so it can be suspended and moved. And, like all human endeavors, rigging is prone to error. Some of the most serious crane failures in history had nothing to do with the crane itself but were actually a result of poor rigging. One of the worst construction accidents in U.S. history happened in New York in 2008 when a large metal component of a crane was improperly rigged. The overloaded slings failed, dropping the collar directly onto the crane’s attachment points to the building under construction, causing the crane to detach and collapse. Six workers and one civilian were killed in the incident, and many more were seriously injured. There’s a lot that can go wrong below the hook, so today, we’re going to take a look at a few of the fundamentals in attaching and securing a load and some of the hidden hazards that can pop up if not done properly and carefully. I’m Grady, and this is Practical Engineering. Today, we’re talking about rigging.


You’ve probably heard the phrase that if your only tool is a hammer, every problem starts to look like a nail. It’s a lighthearted way to warn about over-reliance on a familiar tool. But, if you’re a rigger whose job is to secure loads to cranes for lifting, you really do have just one tool. You don’t use a piece of old rope out of your pickup bed. You don’t use a ratchet strap from the big box store down the road. And you definitely never use the crane’s hoist line to wrap around a load. You have one option: a sling. Of course, slings come in a wide variety of sizes and types and materials, and you also have hardware like hooks and eyes and shackles and pulleys but, yeah, one main tool. And here’s why. Slings have a rated capacity. That rating is a guarantee from the manufacturer. More importantly, it’s a big responsibility taken off the shoulders of a rigger to know and trust that each connection to a crane or hoist can carry the right amount of load. So what do the tags mean?


It’s actually pretty straightforward. The tag shows how much weight you can put on the sling using the three basic hitches. If your load has a hook or a shackle, you can use the vertical hitch: one eye over the load and one eye over the hook of the crane or hoist. It’s the most straightforward configuration for a sling and takes full advantage of its load capacity. If there is no attachment point on your load, you might instead use a basket hitch. In this configuration, the load is cradled by the sling, and both eyes are on the hook. One benefit of the basket hitch is that it doubles the sling’s load capacity since you have two legs holding instead of just one. But, it only works if the load is balanced and easy to control since it’s only cradled from the bottom. If you need a snug grasp on the load, you might use the third basic option: a choker hitch, where the sling passes through one eye and attaches to the crane hook on the other. The choke point has extra stress when used in this configuration, so the load rating for a choker hitch is less than that of the vertical or basket hitch.


If you’re using a sling to lift something heavy and don’t see a load rating tag, just stop. Every sling rated for rigging has to have a tag, whether it’s a synthetic sling like this, a wire rope, or a chain. Even so, the vast majority of rigging failures happen because a sling was overloaded. You might be wondering why that’s the case when the load rating is spelled out right there on the tag. But those three numbers hide quite a bit of complexity involved in rigging, and I have a few examples set up here in my garage to give you a glimpse into those intricacies. Even if you aren’t planning to connect a 20-ton beam to a crawler crane any time soon, this information applies to lifting just about anything.


The first rigging pitfall is center-of-gravity. Not all loads are evenly distributed or equally balanced, and that can cause some serious issues if the rigging doesn’t take it into account. For example, if you’re using multiple slings to lift something, your first inclination might be to simply divide the total weight by the number of slings to estimate the load each one will carry. But if the slings aren’t all attached at the same horizontal distance from the load’s center of gravity, the total weight won’t be distributed evenly between them. That may seem obvious, but many, many loads have been dropped because of misunderstandings with center-of-gravity.


For just one example, loads often show up to a site in a crate where it’s not quite so easy to see how the weight is distributed. In a worst-case scenario, incorrectly estimating the force on each sling may cause one or more to overload and fail. But even if the sling doesn’t give out completely, it might stretch just enough to cause a load to shift. And if it shifts such that the center of gravity moves to the wrong side of the attachment points, there’s a chance it will tip and fall. You can’t push a rope, after all, so slings only provide resistance in one direction. So when lifting a load that isn’t equally balanced between attachment locations (and especially for the big lifts that use more than one crane), you have to calculate the load share between the slings and make sure each one can handle its portion. The formula is super simple as long as you know the center of gravity. And, if there’s a chance a load could slide if one sling stretches more than the others, it’s got to be secured before the lift.
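
For two slings on a balanced, rigid load, that load-share formula is just a moment balance about each attachment point. Here’s a Python sketch with made-up numbers.

```python
def sling_loads(total_weight, d1, d2):
    """Vertical load in each of two slings when the attachment points sit
    d1 and d2 horizontally from the load's center of gravity.
    The sling CLOSER to the center of gravity carries the larger share."""
    f1 = total_weight * d2 / (d1 + d2)
    f2 = total_weight * d1 / (d1 + d2)
    return f1, f2

# Illustrative numbers: a 10,000 lb crate picked 3 ft and 7 ft from its CG.
print(sling_loads(10_000, 3, 7))  # (7000, 3000) -- nowhere near an even split
```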


The next potential rigging pitfall is the sling angle. Let me give you an example: Say you have a balanced load, and you need two slings to attach it to a crane, but your slings are kind of short. So, when you get everything hooked up, the connections make a 30-degree angle from the horizontal. Is each sling carrying half the weight of the load? Would I even be asking if the answer was yes? In fact, at a 30-degree angle, each sling is subject to double the force that it would otherwise feel if it were perfectly vertical from the load. Why is this?


Slings can only pull in one direction. For simplicity, we sometimes divide the force in the sling into its vertical and horizontal components. If the sling is perfectly vertical, it has no horizontal part. But as the angle of the sling changes, the horizontal component becomes a greater and greater proportion of its total load. This may not need to be said, but we don’t need a horizontal force to lift something. We need a vertical one. In fact, the horizontal force isn’t just unnecessary, but it also has to be canceled out by an equal and opposite force in the other sling. So, the shallower the angle of the sling, the harder you’ll have to pull on it to get enough vertical force to lift the load.


When slings are vertical, each one holds half the weight. But if you bring the tops of the slings toward the center until they touch, the tension in each one increases by about 50 percent. I’m sure you can imagine what would happen if you incorrectly divided the weight of the load by two and assumed that to be the force in the slings. You’d be underestimating by quite a lot. So, when using slings that aren’t vertical, you have to apply a reduction in capacity based on the angle. Again, the formula is simple, but you have to know how and when to use it.
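
Here’s the sling-angle math in a short Python sketch, measuring the angle from the horizontal like I did above. The weights are made up for illustration.

```python
import math

def sling_tension(total_weight, n_slings, angle_from_horizontal_deg):
    """Tension in each of n identical slings sharing a balanced load,
    where each sling makes the given angle with the horizontal.
    At 90 degrees (vertical) each sling carries its even share; at
    30 degrees that tension doubles."""
    share = total_weight / n_slings
    return share / math.sin(math.radians(angle_from_horizontal_deg))

print(sling_tension(10_000, 2, 90))  # 5,000 lb each, slings vertical
print(sling_tension(10_000, 2, 60))  # ~5,774 lb each
print(sling_tension(10_000, 2, 30))  # 10,000 lb each -- double the vertical case
```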


Shallow horizontal angles aren’t just an issue with sling tension, though. Those horizontal components of force that I mentioned have another disadvantage related to the third and final rigging pitfall I want to discuss: abrasion. As I said, slings can be made from a few materials, including chain and wire rope, but one of the most common materials is woven synthetic fibers like nylon and polyester. These synthetic slings have a lot of advantages. They’re lightweight and easy to move around. They don’t create sparks that can be dangerous in industrial environments. And, they’re soft, so they won’t scrape or damage whatever they’re connected to. But, they have disadvantages too - mainly that synthetic slings are more susceptible to abrasion.


Those horizontal forces I mentioned earlier don’t just increase the sling tension beyond the weight of the load. They can also cause a sling to slide. Obviously, that’s an issue if they slide so far that the load falls. But even if they don’t, the friction with the load can lead to abrasion and even failure of the sling. Synthetic materials are much easier to cut than wire rope or chain, so they have to be protected from sharp edges, corners, and burrs. Synthetic fibers can also melt. You might not think that a little sliding would generate much heat, but consider that friction is a function of the contact pressure between the two surfaces. These slings are pretty small compared to the weight they carry, meaning the pressure they exert can be enormous. Even a small amount of sliding under so much pressure can create enough heat to melt the fibers. One way to avoid the possibility of sliding is to use a spreader bar - a device that helps distribute the singular lifting force of the hook among attachment points that can be further apart. This kind of device lets you keep your slings closer to vertical, giving them more capacity and reducing the possibility of them sliding and abrading.


I’ve been referring to “you” a lot, putting you in the shoes of a rigger learning the ropes. But I just want to clarify that this post is not for training. If this is your first exposure to the topic, I hope you’ll agree that you’re not ready. Rigging is a vital but dangerous job, so if you’re going to be involved in any heavy lifts, there is a lot more to learn than my examples here. Finally, if you enjoyed this, check out the companion post about crane failures and what can go wrong above the hook.


September 07, 2021 /Wesley Crump

Surfside Condo Collapse: What We Know So Far

August 17, 2021 by Wesley Crump

On June 24, 2021, a portion of Champlain Towers South, a 12-story condominium in Surfside, Florida, near Miami Beach, collapsed around 1:30 am. It was one of the most deadly structural collapses in U.S. history, with nearly 100 fatalities. Forensic investigations of the event will likely take years to complete, but there’s a lot we already know about the collapse. I want to summarize the events of this unthinkable tragedy, talk about a few of the structural engineering issues that may have played a part, and finally explain the process of forensic structural investigations and what might result from learning the technical cause of this catastrophe. I’m Grady, and this is Practical Engineering. Today’s blog is on the collapse of the Champlain Towers South building.


Champlain Towers South was built in 1981 along with a nearly identical North structure just up the street. The 12-story oceanfront tower was constructed of steel-reinforced concrete in the small Miami suburb of Surfside. The building sat atop an underground parking garage that extended below the adjacent common area that residents called the pool deck. The building had 136 condominium units. Unlike an apartment property with a single owner, Champlain Towers South was collectively maintained by those 136 condo owners through an association with board members.


In the early hours of June 24th, we know that there was a failure of the pool deck adjacent to the building. Tourists at a nearby hotel were swimming when they heard a crash. They walked over to the Champlain Towers to see that part of the pool deck had collapsed into the parking garage below. About 7 minutes later, the building began to fall. A nearby security camera captured the entire event. The building collapsed in three sections: first a south portion of the building, immediately followed by a north portion, and finally, the east section. The entire western half of the building remained standing.


The search and rescue operation started immediately to get residents out of the damaged building and the rubble of the demolished area. Crews worked 24/7 to sift through debris for survivors. The western part of the structure, which didn’t collapse, posed a hazard to the rescue and recovery crews, especially with the threat of Tropical Storm Elsa potentially bringing high winds to the area. Town officials made the difficult decision to demolish the remaining part of the building on July 4th to safeguard the crews and avoid the possibility of it falling onto the existing search and rescue zone. As of the recording of this video, 97 people have been confirmed deceased. There were 126 people who were in the building during the partial collapse and survived.


Of course, the most critical question of the collapse is why it happened. Unfortunately it’s a difficult one to answer. The Town of Surfside put all of their records and correspondence about the building online in the interest of public transparency. It was in the process of being recertified, a requirement in Miami-Dade County for all buildings when they reach 40 years in age, and every 10 years afterwards. That process starts with a detailed inspection by a structural engineer. For Champlain Towers South, the inspection was performed in 2018. The findings of that structural inspection have been the focus of most of the early conjecture about the cause of the building’s collapse.


Among the items of concern identified during the inspection, one of the most important was the pool deck. The large concrete slab adjacent to the building also served as a ceiling to the underground parking garage. The inspection report noted major structural damage to this concrete slab, mainly as a result of poor drainage and failed waterproofing. The issue was that rainwater on the pool deck could filter through the pavers above the concrete slab, and then it had nowhere to go. So, instead of flowing along a properly sloped slab to drains, it simply pooled above the slab like a bathtub. Unlike a bathtub, this system wasn’t watertight. Runoff was leaking into the concrete below through joints, cracks, and into the pores of the slab itself. This wasn’t a surprise. It was a well-known problem at the condominium, and there were even plastic gutters installed in various locations along the ceiling of the parking garage to divert these leaks away from cars and walkways. But the water wasn’t just a nuisance to residents. It was also slowly deteriorating the reinforced concrete structure itself.


Reinforced concrete is an extraordinarily versatile building material because it is strong, durable, relatively inexpensive, and can be cast into just about any shape. For better or worse, it is one of the most ubiquitous building materials of the modern world. But like all building materials, it has its weaknesses and is subject to deterioration over time. One of the most prevalent of those weaknesses is the corrosion of steel reinforcement. Embedded steel is usually safeguarded against corrosion by the impermeable covering of concrete and its alkalinity, which creates a protective oxide layer around the steel. However, over time, water flowing through concrete can leach certain constituents out, making the concrete more porous and less alkaline. That makes the steel more subject to corrosion. This is especially true in coastal areas where salt laden air from the sea can carry chloride ions toward inland structures. When these chloride ions saturate the concrete, they accelerate the degradation of the protective oxide coating around the steel.


Corrosion doesn’t just weaken steel, it also causes it to expand in volume, creating pressure within a reinforced concrete structure. Eventually the corrosion can reach a point where the pressure is too much. The surrounding concrete breaks away, leading to cracks, spalls (which are small areas of flaked off concrete), or delamination, where parts of the concrete along mats of reinforcing steel are completely separated. Once the steel is no longer surrounded and protected by concrete, the corrosion can progress much more quickly and may eventually lead to a structural failure.


The engineering and construction industries have made huge improvements in design and construction of concrete structures in the past 30 years thanks in large part to the Federal Highway Administration and the International Concrete Repair Institute. However, Champlain Towers South was designed and built before modern building codes included best practices for concrete structures in harsh coastal environments. The engineer who inspected the tower pointed out the problem with the pool deck in strong language, stating that “failure to replace the waterproofing in the near future will cause the extent of the concrete deterioration to expand exponentially.”


Keeping water, especially salty water, away from reinforced concrete is vital. If inadequate waterproofing turns out to be the cause of the failure, it won’t have been the first time. In 2012, the roof of the Algo Centre Mall in Elliot Lake, Ontario collapsed, killing two people. The cause of the collapse was corrosion of the building's steel framework instigated by leaks through the improperly waterproofed rooftop parking deck. The prevailing theories about the Champlain Towers collapse from most of the current investigative journalism center on the pool deck as a trigger or at least a major factor in the building’s demise, especially because the pool deck failure preceded the collapse. However, there is less certainty about what role the failure of the deck slab played in the collapse, since the slab does not provide any support to the building itself. What we see in the surveillance video would have required failure of one or more columns below the structure. One possibility is that the deck slab punched through intermediate columns such that it was hanging like a sheet from the columns below the building, sometimes called catenary action. The forces from the hanging slab could have loaded the columns below the building in a way they weren’t designed to withstand, causing them to buckle. However, the exact mechanism by which those columns failed is still unknown.


Tragedies like this are usually the result of many separate factors coinciding, and there are several circumstances that may have contributed to the collapse. A research team studying changes in land and sea levels in the area in 2020 measured some unusual settlement of a few millimeters per year in the area of this building. Although many areas experience significant long-term and large-scale settlement, also called subsidence, it’s possible that if different parts of the site were settling at different rates, the tower’s foundation could experience additional structural stresses. Also, some photos of the rubble appeared to show less reinforcing steel than was called for in the original design drawings, particularly at the column-to-slab connections. There were also regular intrusions of groundwater into the parking garage, the recent construction of an adjacent high-rise building, and ongoing construction to the building’s roof to consider.


All of these factors and many more will be reviewed by the forensics teams who are already investigating the cause of the failure. Many of these investigators have been on site during the recovery and cleanup operation to make sure rubble and debris that may offer clues into the cause of the collapse are documented and preserved. Major parts of the building are being preserved as evidence, so rubble was sorted on site and taken to a nearby warehouse for cataloguing.


It’s important to keep in mind that each one of these forensic teams is trying to answer a slightly different question. The Town of Surfside hired its own investigator, Allyn Kilsheimer, to begin looking into the collapse. Surfside has a number of high-rise condos under its purview, so presumably the town felt the need to conduct its own investigation for the safety of its citizens. Another critical service that Mr. Kilsheimer is providing is to satisfy the public’s need for information, which is why you see him doing interviews and talking on news shows on behalf of the Town of Surfside.


At the federal level, the National Institute of Standards and Technology announced that they would launch a full investigation into the collapse. Formerly the National Bureau of Standards, NIST does a lot of research and science around measurements, materials, manufacturing, and engineering. Since 2002, they also have a federal mandate to investigate the cause of failure when a building collapse results in substantial loss of life. Their investigation will likely be the most thorough, including laboratory testing of steel, concrete, and soil specimens, as well as structural modeling. During the recovery operation, they were on site with sophisticated equipment, taking detailed records as rubble was hauled away and performing non-destructive testing to locate reinforcing steel and determine properties of the concrete members. They will also review all the reports and photographs from professionals, survivors, and witnesses of the event. Their final report will probably take a year or two to complete. The primary purpose of that investigation will not be to find fault, but rather to make recommendations for improvements to the building code and industry practices in the fields of structural engineering and construction.


Insurance companies, victims, owners, and designers will also be involved in lawsuits to try and establish who is at fault in this tragedy and potentially award damages as a result. Those legal teams will hire their own experts who will be investigating the details of the collapse. However, their focus will be toward establishing professional and organizational culpability more than the technical causes of the failure.


Finally, the county called a grand jury to examine the building’s collapse. A grand jury is essentially a group of citizens used to administer justice in various forms. Most commonly, grand juries are used as a step between accusing a person of a crime and trying them in court. However, they can also conduct their own investigations as representatives of their community. If the grand jury finds serious negligence or wrongdoing, there may even be criminal investigations that result from the collapse.

 

Was it a poor design, a mistake made during construction, lack of proper maintenance, or a combination of all three? That’s the question the forensic teams will be trying to answer. And just to temper expectations a little, they may not find a final and clearcut cause of the collapse. The difficulty of forensic engineering is that you’re trying to piece together a sequence of events from small and disparate puzzle pieces. Unfortunately, in this case where the failure likely began at the bottom of the structure, most of those puzzle pieces were buried in a pile of rubble.


I want to emphasize that this type of event is extremely rare. The damage to the Champlain Towers South pool deck shown in the 2018 inspection report is severe, but it was not an indication of an imminent collapse of the adjacent building by itself. Although we don’t know exactly how much things worsened between then and the collapse, I think you’d be hard-pressed to find a structural engineer who would evacuate a building based only on the level of deterioration shown in that report. Nearly all buildings, even with moderate maintenance, will last much longer than 40 years without fear of something catastrophic. Modern building codes are designed to ensure that our structures are engineered with redundancies and factors of safety for exactly this reason. My heart goes out to the victims of this unspeakable tragedy. I hope that investigators can get to the bottom of the collapse and its cause so that something like this never happens again.


August 17, 2021 /Wesley Crump

Why Cranes Collapse

August 03, 2021 by Wesley Crump

Cranes are dangerous. Any time something goes up, there’s a chance it might fall down. Keep that in mind next time you climb a ladder. But lifting stuff up and getting it back down safely is pretty much a crane’s only job. So why do so many of them fall down? Let’s walk through some of the biggest crane disasters in modern history to try and understand. I’m Grady, and this is Practical Engineering. Today, we’re talking about crane failures.


Cranes are the backbone of just about every construction project. All construction can be boiled down to material handling: taking delivery, storing, moving, and placing all the pieces and parts of a project. Of course, you can do a lot of that with your own sweat and muscles, and there are even tools to provide a mechanical advantage, allowing one person to lift a lot. But, anyone working in the trades will tell you that there are plenty of jobs only a crane can accomplish. Heavy equipment amplifies the amount of work that can be done. That’s so true with cranes that the question at most construction sites is not if a crane will be mobilized, but when and what type.


Because they are so pervasive and they do such a dangerous job of lifting massive objects high into the air, occasionally cranes fail. I want to walk through some of the reasons these failures occur, using historical accidents as case studies. I’ve got some demos set up in my garage with toy cranes, but these accidents are no joke. Crane collapses lead to property damage, injuries, and fatalities every year. But, they’re almost always preventable, so I’ll do my best to explain what could have been done differently for the cases described. By the way, this is part one of a series on cranes. A future post will cover rigging and the things that can go wrong between the hook and the load.


The first reason cranes fail is improper assembly or disassembly. Most cranes on construction sites are temporary. They’re not staying when the job is done. That means they either arrived under their own power (called mobile cranes), or they were shipped on trucks and assembled on-site, a process that can take days or weeks depending on the size of the machine. A crane can be extremely vulnerable during assembly or disassembly since all the components aren’t fully bolted together. A recent collapse in Seattle happened when a team disassembling a tower crane prematurely removed the pins holding sections of the tower together. Presumably, this was done to speed up disassembly. However, when winds picked up that day, major components of the crane were completely disconnected, being held together just by gravity and loose sliding connections between members. It didn’t take long for the crane to collapse, killing four people and injuring many more. 


In most cases, these cranes are expertly engineered for worst-case conditions. But the manufacturers are rarely the ones installing them on site. So, they provide detailed manuals for the crews assembling and disassembling them. Unfortunately, those manuals aren’t always followed to the letter. One of the worst crane disasters in modern history happened in 2008 in New York City. Crews were working to attach a tower crane to a building when a major component of the attachment hardware, a heavy steel collar, suddenly fell. As it fell, it crashed into the attachment points below, breaking them from the building and allowing the whole crane to overturn. The cause of the accident was simple: the crew assembling the crane didn’t follow the manual. Specifically, they didn’t attach that collar to the crane according to the manufacturer’s guidelines while it was bolted together and attached to the building. Seven people died as a result. Another collapse in Battersea, England, in 2006 also happened because the operator was using the wrong manual to assemble a crane. Instead of an eight-ton counterweight, they used 12 tons, putting the crane way out of balance. Eventually, the bolts holding the slew ring failed, and the entire boom broke free, killing the operator and another person who was just fixing his car on the street nearby.


Once a crane is set up, the challenges in keeping it that way aren’t over. Many crane failures happen during everyday operations, and one of the biggest causes is overloading. Every crane has limitations in how much weight it can handle, but it’s not as simple as something like a bridge that has a single load limit. Not only can most cranes have a wide variety of different configurations - like different counterweights, jib lengths, and boom sizes - they also, by the very nature of their job, move. They slew, luff, telescope, traverse, boom up or down, etc. And more importantly, their load limitations depend on these movements.


Every crane has a tipping line - that’s the line at which the machine will tip if overloaded. Any increase in weight outside the tipping line destabilizes the crane if not balanced on the other side. The further away the load is from the tipping line, the greater the moment or torque on the crane. This is easy to demonstrate with my model. Using a spring scale, I can estimate the load required to tip the crane at different distances from the tipping line. Plotting distance against force, the product of the two stays roughly constant. That’s because torque is the product of length and force. As the distance from the tipping line goes up, the force required to tip the crane over goes down in inverse proportion.
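
Here’s a quick sketch of that inverse relationship. The resisting moment below stands in for the model crane’s weight and geometry; it’s an assumed number, not a measurement from the demo.

```python
# The moment needed to tip the model is fixed by its weight and geometry, so the
# force measured on the spring scale falls off in inverse proportion to its
# distance past the tipping line. The resisting moment here is assumed.
tipping_moment = 0.30  # N*m of resisting moment about the tipping line

for distance_m in (0.10, 0.20, 0.30, 0.40):
    force_to_tip = tipping_moment / distance_m  # torque = force x lever arm
    print(f"hook {distance_m:.2f} m past the tipping line -> "
          f"{force_to_tip:.2f} N tips the crane")
```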


Cranes have a few tools to keep the load in balance. One is counterweights. These weights oppose the torque by balancing it on the opposite side of the crane. Another tool is outriggers. When there’s enough room to use them, these arms extend the tipping line, bringing it closer to the load and thus reducing the length of the lever. The crane can hold several hundred grams with the outriggers extended, but it can barely even hold up the spring scale without them.


All those factors add up to a lot more than any operator can be expected to keep track of, which is why cranes have load charts. Reading these charts is pretty simple, as long as you’re looking at the one that matches the configuration of your crane. Look at the furthest radius your hook will be from the centerline of the crane during the lift, and you’ll see the maximum allowable load the crane can handle. Of course, most modern cranes have sensors and electronics that can help an operator keep track of this on the fly. Load moment indicators tell the crane operator when they are getting close to the maximum. Many cranes will even lock out specific movements to prevent the crane from tipping or sustaining damage from overloading. That doesn’t mean it doesn’t happen, though.
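
To make that concrete, here’s a sketch of a load chart lookup in Python. The chart values are hypothetical, and a real chart only applies to one specific crane configuration, but the conservative habit is the same: use the capacity listed at or beyond the furthest radius the hook will reach.

```python
# A hypothetical load chart for one crane configuration: operating radius (m)
# mapped to maximum allowable load (tonnes). Real charts come from the
# manufacturer and depend on boom length, counterweight, and outrigger setup.
LOAD_CHART = [
    (3.0, 60.0),
    (5.0, 38.0),
    (8.0, 22.0),
    (12.0, 13.5),
    (16.0, 9.0),
    (20.0, 6.5),
]

def max_allowable_load(radius_m):
    """Look up capacity conservatively: use the value listed at the next
    chart radius at or beyond the requested one."""
    for chart_radius, capacity in LOAD_CHART:
        if radius_m <= chart_radius:
            return capacity
    raise ValueError("radius is beyond the chart - the lift is not allowed")

def lift_is_allowed(load_tonnes, radius_m):
    return load_tonnes <= max_allowable_load(radius_m)

print(lift_is_allowed(12.0, 10.0))  # True: 13.5 t allowed at 12 m
print(lift_is_allowed(12.0, 14.0))  # False: only 9 t allowed at 16 m
```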


In 2016 in Manhattan, a crawler crane fell over as it was being laid down due to high winds. The boom and jib of this particular crane could be set down on the ground if things got too windy. But, even with nothing on the hook, the load chart doesn’t allow the boom to be lowered below 75 degrees. Unfortunately, the operator dropped the boom below this level, and the crane fell, killing one person and injuring several others.


Even if the crane can handle the load and do it stably, that weight doesn’t just stop at the base. It has to be transferred to the ground, and surprisingly, sometimes the ground can fail. Geotechnical engineers call this vertical deformation, but you can just say the ground moved when it wasn’t supposed to. And this can happen in a couple of ways. The first is settlement. That’s when soil particles compress together under load. It’s usually a slow process, but it can create issues at a construction site over time. Settlement can be addressed in one of two ways. Sometimes compacting the subgrade before using it as a foundation is enough to make sure that it won’t compress further over time. For clay soils that are difficult to compact, it’s usually best to replace the top layer with something more stable like crushed rock.


The other type of vertical deformation is called a bearing capacity failure. In this case, the soil particles actually slide against each other in a shearing motion. The particles below the base get forced downward while the adjacent particles bulge up on the sides. Here’s an example using a tower crane. With the hook at the end of the boom, the crane can hold 150 grams without tipping on a stable surface. But, when set on top of loose sand, things aren’t quite so static. The soil isn’t able to support the two feet in compression. Instead, it gives way, allowing the crane to topple.


In 2012, a crane fell over while lifting a part of a ship in Vietnam. The cause is obvious: the ground wasn’t strong enough to withstand the load. This accident killed five people. Geotechnical engineers can estimate the bearing capacity using simple tests, so there’s no good reason this should ever happen. If the ground can’t handle what’s required for the crane, the solution is simple: distribute the load over a larger area. This is often done using steel plates or wooden constructions called crane mats.
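
Here’s the rough arithmetic behind crane mats: the same outrigger reaction spread over a larger area means a lower pressure on the soil. The reaction, allowable pressure, and mat sizes below are all assumptions for illustration, not values from any real crane or site.

```python
# Spreading the same outrigger load over a larger area lowers the bearing
# pressure on the soil. All values here are assumed for illustration; real ones
# come from the crane's outrigger reactions and a geotechnical assessment.
outrigger_load_kn = 450.0        # reaction under the most heavily loaded outrigger
allowable_pressure_kpa = 200.0   # what the soil can safely carry (site specific)

float_area_m2 = 0.6 * 0.6        # bare outrigger float, 0.6 m square
mat_area_m2 = 2.4 * 1.2          # a timber crane mat placed under the float

for name, area in (("bare float", float_area_m2), ("crane mat", mat_area_m2)):
    pressure_kpa = outrigger_load_kn / area      # kN / m^2 = kPa
    verdict = "OK" if pressure_kpa <= allowable_pressure_kpa else "overstressed"
    print(f"{name}: {pressure_kpa:5.0f} kPa -> {verdict}")

print(f"minimum mat area: {outrigger_load_kn / allowable_pressure_kpa:.2f} m^2")
```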


Water also affects bearing capacity. The strength of a soil is primarily a function of friction between soil particles. If water gets into the space between those particles, it pushes them away from each other, reducing this friction and weakening the soil. If you’ve ever stepped in the mud, you have some intuition about this. In 2013, a massive crane helping with the construction of the Brazil World Cup stadium collapsed while lifting a roof section into place. The cause of the collapse was a bearing capacity failure of the soil beneath the crane exacerbated by several days of heavy rainfall. So, keeping the site well-drained is essential, and again, replacing crummy soil with a stable, free-draining material is usually best practice.


The final cause of crane collapses that I want to discuss is wind - or rather, neglecting the wind’s power to overload a crane. In 2019, a severe thunderstorm led to the collapse of a tower crane in Dallas, killing one person and injuring several others. The fault for the accident is still being litigated, but it brings up one thing many don’t realize. Most tower cranes are designed to withstand very high winds, but only under certain conditions. When winds are high, operators have to disengage the clutch so the crane can freely point into the wind. This is called “weathervaning.” If the boom is locked, it can end up broadside to the wind, significantly increasing the forces on the crane. In the video of the 2019 collapse, you can see two cranes pointing in different directions, and only one of them failed. That suggests that an operator may have forgotten to secure the crane properly or that the crane malfunctioned and couldn’t weathervane into the wind.


In September 2017, three cranes at separate construction sites in Florida all collapsed on the same day due to winds from Hurricane Irma. All three cranes were the same make and model, and they were all secured to weathervane as required. Luckily, no one was injured at any of the sites since they were shut down due to weather. The winds during the storm were more than 125 miles per hour or 200 kilometers per hour in some places. That’s far more than the maximum design wind speed for the cranes, which was 95 miles per hour or about 150 kilometers per hour. As the saying goes, there’s always a bigger storm, and in this case, Hurricane Irma just delivered winds that were well above the requirements and codes that cranes are required to meet. However, since only this single type of crane failed during the storm, investigators recommended some changes to the design that may make them safer in the future.


Situations like what happened in Florida are the exception, not the rule, though. As I mentioned, nearly every crane accident is easily preventable. In 1999, one of the largest cranes in the world collapsed during the construction of Miller Park in Milwaukee, now called American Family Field. The crane was lifting a 510-ton roof section of the stadium at 97% of its rated capacity, but it wasn’t just the weight of the structural members bearing on the hook.


Steel assemblies can appear pretty slender, but that doesn’t mean they can’t catch the wind. A little bit of wind goes a long way. By my estimation, the wind is adding about 15% to the load on the hook. It’s more than the margin of error when you’re operating at maximum capacity, and that doesn’t even consider the huge horizontal loads affecting the crane. These machines are rarely capable of withstanding much force in any direction other than straight down, so their load charts generally disallow operation when winds are high.
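
For a sense of the magnitude, here’s a back-of-the-envelope wind force estimate using the standard drag equation. Every number is an assumption chosen for illustration; this is not the calculation behind the 15 percent figure from the demo.

```python
# Wind force from the drag equation, F = 1/2 * rho * v^2 * Cd * A. The wind
# speed, drag coefficient, and exposed area are assumptions for illustration.
rho_air = 1.225         # kg/m^3, air density near sea level
wind_speed = 15.0       # m/s, about 34 mph
drag_coefficient = 2.0  # flat-plate-like value for a broad panel assembly
exposed_area = 400.0    # m^2 of projected area for a large roof section

wind_force_kn = 0.5 * rho_air * wind_speed**2 * drag_coefficient * exposed_area / 1000
print(f"Wind force on the load: about {wind_force_kn:.0f} kN "
      f"(~{wind_force_kn / 9.81:.0f} tonnes of sideways push)")
```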


On that fateful day in 1999, the crane operator neglected to include the additional force from wind loading when assessing the crane’s capacity, and so it ended up overloaded. A safety inspector happened to catch the entire collapse on camera. Three construction workers were killed during this incident, and a sculpture of them still stands at the stadium in their memory.


As you may have noticed, there are technical reasons that cranes fall over, but there are underlying human factors as well. That’s why it’s best practice to create a lift plan any time a crane is used. That means taking the opportunity, before the load is in the air, to consider every aspect of the lift and what could go wrong: the weight, dimensions, center of gravity, and lifting points of the load; the path it will travel during the lift; the capabilities of the crane; outside factors like wind; and communications during the whole process. For complicated lifts, these plans can take days or weeks to prepare with detailed engineering reviews to ensure that nothing goes wrong once the load is on the hook.


August 03, 2021 /Wesley Crump

What Really Happened at the Arecibo Telescope?

July 20, 2021 by Wesley Crump

In December of 2020, the Arecibo telescope - one of the largest and most iconic astronomical instruments in the world - collapsed. This 57-year-old megastructure not only made many incredible scientific discoveries over its lifetime, it was also an emblem of humanity’s interest and curiosity about our place in the universe. Its loss was felt across the world. The National Science Foundation, which owns the observatory, recently released its report to Congress on the cause of the failure and the events leading up to it. Why was this telescope so important, how did it work, and why did it fail? I’m Grady, and this is Practical Engineering. Today, we’re discussing the Arecibo telescope collapse.


The same way we observe visible light from celestial objects using our eyes and optical telescopes, we can also take advantage of the other parts of the electromagnetic spectrum in astronomy. Most of the gamma rays, x-rays, ultraviolet, and infrared portions of the spectrum are blocked out by the atmosphere. But long-wavelength radio waves are not. A radio telescope is basically an antenna that can tune in to some frequencies of electromagnetic radiation that emanate from celestial objects. These radio waves can be quite faint, complicating the task of separating them from the background noise. You essentially have two options to get high-quality radio astronomy data: more time or more space. The longer you focus on an object, the clearer the signal you can pull out of the noise. But, there’s only so much time. To speed up observations, you can also gather radio waves from a larger area and focus them into a clearer signal. Arecibo took that strategy to the extreme with its 305-meter (or 1,000-foot) diameter dish - the largest in the world until China’s half-kilometer FAST scope took the title in 2016.


Located on the Caribbean island of Puerto Rico, the Arecibo Observatory was designed and constructed in the 1950s and 60s as a department of defense project to detect nuclear warheads in the upper atmosphere. The National Science Foundation took over the facility in 1969 to use it for more peaceful endeavors, with help from a few managing partners over the years. A big part of Arecibo’s mission is education and outreach programs to engage the public’s interest in astronomy and atmospheric sciences. If you grew up in Puerto Rico, you almost certainly visited this incredible facility on a field trip or two or three. The most iconic part of the observatory was the massive radio telescope. Not only could it receive the faintest of radio signals, it could transmit them as well, allowing Arecibo to work as a celestial radar. It could send out radio signals and measure the returning echoes from nearby objects in space, including planets and asteroids. Arecibo facilitated some of the most exciting astronomical discoveries of our age, including the Nobel-prize winning observation of binary pulsars providing the first evidence of gravitational waves.


The telescope’s dish was constructed inside an enormous circular sinkhole. Although it looks solid from a distance, the reflector was a series of aluminum panels carefully suspended on steel cables. Because the dish was fixed to the earth, it was constrained to point at whatever part of the sky happened to be overhead. Radio telescopes can be used during the day and night - so there’s more sky to look at over the course of a day or year - but a telescope that can’t steer is still pretty useless. The designers of Arecibo had a pretty clever solution to the problem. Rather than using a parabolic shape for the dish that would focus everything to a single point, they chose a spherical curve. Spherical reflectors don’t perfectly focus all the incoming rays. That might sound like a bad thing since you want to gather and focus as much signal as possible across the entire dish. The beauty of a spherical reflector is that, by changing the position at which you measure the reflected waves above the dish, you’re measuring those waves from different parts of the sky. You can essentially steer the telescope by choosing where to receive radio waves above the dish, allowing you to focus on various objects and track them as the earth rotates.


Focusing those waves to a narrow area above the dish doesn’t do much good unless you have a receiver up there to collect and measure them. The Arecibo telescope was designed with a triangular platform suspended by steel cables above the dish to support the various instruments used to gather radio signals. To keep the platform aloft, three reinforced concrete towers - named Towers 4, 8, and 12 for their positions on a clock face - each supported a group of cables. There were originally 4 cables for each corner of the platform, 3 inches (or 8 cm) in diameter. Big cables. Additional cables, called backstays, were connected to anchorages behind each tower to balance the horizontal forces, similar to the way suspension bridges work with their towers and abutment anchorages.


Initially the telescope used line feeds, elongated receivers that could gather signals within the focal line of the spherical dish. But, they could only measure signals within a narrow bandwidth, so line feeds would have to be swapped to change the frequency of the telescope. Upgrades in 1997 included the addition of the Gregorian dome that uses two additional reflectors to focus radio waves. This dome allowed telescope operators to observe a much wider range of radio frequencies. But this Gregorian dome didn’t just add capabilities. It also added weight - lots of weight - about 50 percent of the original platform’s weight. All of this extra load required some more support. So, two auxiliary cables from the platform to each tower were added, plus more backstays to balance the load. In addition to that, the dome was far more sensitive to tiny movements. You can imagine the stiffness and rigidity of a gigantic wind-catching dome suspended in the air by narrow steel cables - not an ideal structural arrangement for a sensitive instrument. To compensate, three tie-down cables were added, one for each corner of the platform, increasing the forces even further. A laser ranging system could communicate with hydraulic jacks to carefully adjust the tension in these tie-downs and keep the platform stable to within about a millimeter.


The telescope's last few years were pretty rough. The 2017 Atlantic hurricane season sent two massive storms - Irma and Maria - across Puerto Rico. Maria was one of the strongest storms ever to hit the island and caused nearly 3,000 fatalities and close to 100 billion dollars in damage. Arecibo wasn’t spared from that devastation. It suffered a broken line feed that fell from the instrument platform and crashed through the dish, among other damage. More consequential than hurricanes, though, Arecibo was slowly losing its funding. The National Science Foundation had been trying for years to divert Arecibo funds to newer projects. In 2018, the University of Central Florida stepped up to take over the management and funding of the observatory, not knowing what was soon to come.


Only a few years later, in August 2020, one of the newer auxiliary cables on Tower 4 broke free from its socket unexpectedly in the middle of the night. As the cable failed, it crashed through the reflector dish, tearing a gash through the aluminum panels. These spelter sockets, used to attach the cables to the towers, are a common way to terminate wire ropes and cables, but they have to be installed correctly. You have to broom the end of the cable, making sure that every strand is separated from the others, clean them meticulously, then carefully pour molten zinc into the socket to create a permanent wedge that only gets tighter with more tension. If done properly, the termination should be stronger than the cable itself. In other words, there’s no good reason a cable should ever pull out of a spelter socket. And yet, this was not the first incident of cables slipping in their sockets at Arecibo. Maintenance staff at the observatory had been concerned about the problem for years. This failure was the beginning of the end of the telescope, though we didn’t know it yet.


The damage from the failed cable was significant, but the engineers brought on to assess the structure believed it could be repaired. The suspended platform was designed with some redundancy, so losing a single cable wasn’t necessarily catastrophic. Managers put a temporary stop to the science at the facility while a remediation project could be installed. But first, it had to be designed. As a first step, engineers developed a structural computer model of the platform and towers to evaluate options for repair.


One nice aspect of cable-supported structures is that you can estimate the tension in each one just by looking at it. All cables sag under their own weight, following a curve called a catenary. The more tension in the cable, the tauter it becomes, and so the less it sags. If you know the weight of the cable, you can use the catenary equation with the measured sag distance to estimate cable force fairly accurately. A sag survey was conducted at Arecibo using lasers, and this is how the structural model was calibrated. To make sure the model could predict how changes in forces would affect the structure, the engineers performed some pretty clever validation as well. Since they had measurements of the instrument platform before and after the first cable failed, they could remove that cable in the model and compare the predicted behavior of the platform to what actually happened. When the first cable failed, that corner of the platform dropped by two-and-a-half feet, or about three-quarters of a meter, and the model was able to predict this within a couple of inches.
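
Here’s a minimal sketch of that estimate. When the sag is small compared to the span, the catenary is well approximated by a parabola, and the horizontal tension works out to roughly the cable’s weight per unit length times the span squared, divided by eight times the sag. The numbers below are invented for illustration - they are not Arecibo’s.

```python
# Estimate cable tension from a sag survey using the shallow-sag (parabolic)
# approximation to the catenary: H = w * L^2 / (8 * d). All inputs are
# illustrative, not values from the Arecibo structure.
weight_per_meter = 300.0  # N/m, self-weight of the cable
span = 200.0              # m, horizontal distance between supports
measured_sag = 1.5        # m, mid-span sag measured by laser

horizontal_tension = weight_per_meter * span**2 / (8 * measured_sag)
print(f"Estimated tension: about {horizontal_tension / 1000:.0f} kN")
# A tauter cable sags less, so a smaller measured sag implies a larger tension.
```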


While all this design was taking place, further trouble was just around the corner. In November, only 3 months after the first cable broke at Tower 4, a second one failed. This time it was one of the original cables installed in the 1960s. It didn’t pull from its socket but simply broke. And it broke at a force well below what it should have been able to handle (about 62% of its rated strength to be precise). The falling cable again damaged parts of the telescope, and again, the platform remained standing. However, optimism about the structure was quickly declining. The question went from, “how do we fix the telescope?” to “can we fix the telescope?” And there were differing opinions.


The engineers used the structural model to evaluate options that could relieve the tension in the remaining cables and reduce the possibility of a complete failure. They could cut the broken cables, since they’re pretty heavy and not doing anything useful anymore. They could move the Gregorian dome so that the other towers were holding more of its weight. They could loosen the backstays, causing the towers to lean inward by 18 inches or half a meter. And of course, they could add some temporary cables at Tower 4 to take up some of the weight. All of these options showed reductions in the forces carried by the remaining cables, but the problem was figuring out how to do the work safely. At this point, they had two failures, both well below the specified breaking strength of the cables. Not only had the telescope lost its structural redundancy, but the engineers also didn’t trust the strength of the remaining cables, and for good reason. Crews couldn’t access the site due to the risk of another cable failing, but nearly all the options to relieve the load on the cables would require having personnel on the platform and towers.


One of the engineering firms working on the problem suggested some last-ditch efforts to save the structure, including using helicopters to help relieve load on the damaged tower. After that, they could perform proof tests remotely using the tie-down jacks to check if the remaining cables had at least 10% extra strength. If the engineers could gain some confidence in the strength of the remaining cables, crews might be able to enter the site and implement further measures to save the structure. However, no one could get comfortable with the risks of the helicopter work or proof testing the already distressed cables.


It is a tough thing to say, when such an important and iconic structure is still standing, that there’s no path forward to repair. This quote from the engineer tells the whole story. “It is unlikely any of these methods will yield sufficient reductions without placing crews in jeopardy...Although it saddens us to make this recommendation, we believe the structure should be demolished in a controlled way as soon as pragmatically possible.” They wouldn’t get the chance, though.


On the morning of December 1st, a third cable at Tower 4 broke, starting a chain of events that would quickly collapse the structure. Amazingly, one of the observatory staff was flying a drone at the top of the tower when it happened, capturing incredible footage of the event. It happens almost instantly. Two of the main cables are already clearly in distress when the video starts. All the chipped paint is from individual strands of the cable failing. Observatory staff could hear these breaks and knew what was likely imminent, which is why the drone was up there in the first place. The third cable snaps, and the remaining cables, forced to bear the additional load, quickly follow. The drone turns around to reveal the platform crashing into the side of the dish.


The observatory also had a Gopro set up in the control room that captured the failure. You can see the cables let go from Tower 4, the platform swinging downward, the support cables crashing through the suspended catwalk, and the top section of Tower 4 breaking off from the unbalanced force of the backstays. All three towers suffered failures, major portions of the dish were destroyed, and the platform and instruments it supported were a complete loss. Several buildings, including the visitor center, were damaged by falling debris. Thankfully, even though there were people on site during the collapse, the engineers had established safe zones away from the structure, and no one was injured.


Several forensic investigations are still underway to examine the causes of the failed cables. Those results could be years away, so we don’t yet know for certain why the first two cables gave way when they should have had more than enough strength to carry the load. Engineers involved during the event suggested the first cable to fail likely was not fabricated correctly. Whether the spelter sockets were installed in the field or a shop, there are a lot of details required to do it properly, and it’s certainly possible that something was missed. And once those sockets are installed, they are difficult to inspect. As for the second cable, the engineers suggested a likely failure mode to be corrosion of the steel. The cables were painted regularly and reportedly had a dehumidification system that could blow dry air between the strands (although those systems usually require an airtight sleeve around each cable). Even so, Arecibo sat nearly 60 years only a short distance from the northern coast of Puerto Rico, and exposure to that salty sea air could have accelerated the demise of the main cables.


Another element worthy of scrutiny is the factor of safety used in the original design. This is the ratio of a structure's strength to the demands placed on it. The whole point of a factor of safety is to accommodate uncertainty. We predict the demands on a structure. We compare them to the strength. We recognize that there might be extra forces or less strength than expected for a variety of reasons outside our control, so we give our structures some margin. The Arecibo suspended platform cables were designed to have a factor of safety of two, meaning they were twice as strong as the expected static loading from the platform.


That might seem like a lot, but consider that elevator cables use a factor of safety of 11, and many bridges use safety factors above 3. In aerospace engineering, where weight is critical, they do tons of modeling and testing to build enough confidence in designs to get their factors of safety down to around 1.5. Arecibo was a unique facility, unlike any other structure in the world. It didn’t go through a rigorous structural testing program. And, it was designed before computer modeling could be used to accurately characterize all the static and dynamic forces it could experience. I think it’s worth considering whether the structure should have been designed with a little extra margin, especially considering the possible circumstances in which that margin would be required. A cable failure is a violent event. All the load the failed cable was carrying doesn’t just redistribute itself to the other cables evenly and gently. The dynamic loads that occur as the structure shakes and vibrates can be significantly higher than the static loads. It wouldn’t be surprising at all to find out that the peak stress in the remaining cables on Tower 4 actually did exceed their rated strength when that first and second cable broke, even if only for an instant.
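
Here’s an illustrative sketch of how quickly that margin can disappear. It assumes, hypothetically, four identical cables at one corner sharing the load equally, a static factor of safety of 2, and an assumed dynamic amplification when a cable snaps. None of these numbers come from the forensic work; they just show why a factor of 2 may not go as far as it sounds.

```python
# Hypothetical corner of the platform: four identical cables share the load
# equally, each sized with a static factor of safety of 2.0. When a cable
# lets go suddenly, the survivors see both a larger static share and a
# short-lived dynamic overshoot. The amplification factor is assumed.
cables = 4
rated_strength = 2.0 * (1.0 / cables)   # FS of 2 on each cable's static share
dynamic_amplification = 1.5             # assumed shock factor during a snap

for failed in (1, 2):
    static_demand = 1.0 / (cables - failed)       # share per surviving cable
    peak_demand = static_demand * dynamic_amplification
    print(f"{failed} cable(s) gone: static FS = {rated_strength / static_demand:.2f}, "
          f"FS during the snap = {rated_strength / peak_demand:.2f}")
```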


Despite the collapse, Arecibo Observatory is not closed and science continues at the other facilities on site. As of this writing, crews are still working to clean up the debris from the collapse, and the National Science Foundation is holding workshops to discuss the future of the site. I hope that eventually they can replace the telescope with an instrument as futuristic and forward-looking as the Arecibo telescope was when first conceived. It was an ambitious and inspiring structure, and we sure will miss it. Thank you, and let me know what you think!


July 20, 2021 /Wesley Crump

How Sewers Work

July 06, 2021 by Wesley Crump

A sewage collection system is not only a modern convenience but also one of the most critical pillars of public health in an urban area. Humans are kind of gross. We collectively create a constant stream of waste that threatens city-dwellers with plague and pestilence unless it is safely carried away. Sewers convert that figurative stream into a literal one that flows below ground away from public view (and hopefully public smell). There are a lot of technical challenges with getting so much poop from point A to point B, and the fact that we do it mostly out-of-mind, I think, is cause for celebration. So, this post is an ode to the grossest and probably most underappreciated pieces of public infrastructure. I’m Grady, and this is Practical Engineering. Today, we’re talking about sewers.


As easy as it sounds to slap a pipe in the ground and point it toward the nearest wastewater treatment plant, designing sanitary sewage lines - like a lot of things in engineering - is a more complex task than you would think. It is a disruptive and expensive ordeal to install subsurface pipes, especially because they are so intertwined with roadways and other underground utilities. If we’re going to go to the trouble and cost to install or replace them, we need to be sure that these lines will be there to stay, functioning effectively for many decades. And speaking of decades, sewers need to be designed not just for the present conditions, but also for the growth and changes to the city over time. More people usually means more wastewater, and sewers must be sized accordingly. Joseph Bazalgette, who designed London’s original sewer system, famously doubled the proposed sizes of the tunnels, saying, “We’re only going to do this once.” Although wantonly oversizing infrastructure isn’t usually the right economic decision, in that case, the upsizing was prescient. Finally, these lines carry some awful stuff that we do not want leaking into the ground or, heaven forbid, into the drinking water supply whose lines are almost always nearby. This all to say that the stakes are pretty high for the engineers, planners, and contractors who make our sewers work.


One of the first steps of designing a sewage collection system is understanding how much wastewater to expect. There are lots of published studies and guidelines for estimating average and peak wastewater flows based on population and land use. But, just counting the number of flushes doesn’t tell the whole story. Most sanitary systems are separated from storm drains which carry away rainfall and snowmelt. That doesn’t mean precipitation can’t make its way into the sewage system, though. Inflow and infiltration (referred to in the business as I&I) are the enemies of utility providers for one simple reason. Precipitation finding its way into sewers through loose manholes, cracks in pipes, and other means can overwhelm the capacity of the system during storms. The volume of the fabled “super flush” during the halftime of the Super Bowl is usually a drop in the bucket compared to a big rainstorm. I&I can lead to overflows which create exposure to raw sewage and environmental problems. So utilities try to limit this I&I to the extent possible through system maintenance, and engineers designing sewers try to take it into account when choosing the system capacity.
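
Here’s a minimal sketch of that kind of estimate, using an assumed population, per-capita flow, and allowance for I&I, along with the Harmon formula - one widely used rule of thumb for the peaking factor. Every input is illustrative.

```python
import math

# Average flow from population and per-capita use, a peaking factor from the
# Harmon formula, and an allowance for inflow and infiltration. All of these
# inputs are assumptions for illustration.
population = 25_000
per_capita_lpd = 300.0      # liters per person per day of wastewater
i_and_i_allowance = 0.20    # extra 20% for inflow and infiltration (assumed)

average_flow_lps = population * per_capita_lpd / 86_400         # liters/second
peaking_factor = 1 + 14 / (4 + math.sqrt(population / 1000))    # Harmon formula
design_peak_lps = average_flow_lps * peaking_factor * (1 + i_and_i_allowance)

print(f"average dry-weather flow: {average_flow_lps:.0f} L/s")
print(f"peaking factor: {peaking_factor:.2f}")
print(f"design peak flow: {design_peak_lps:.0f} L/s")
```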


Once you know how much sewage to expect, then you have to design pipes to handle it. It’s often said that a civil engineer’s only concerns are gravity and friction. I’ll let you take a guess at which one of those makes poop flow downhill. It’s true that almost all sewage collection systems rely mostly on gravity to do the work of collecting and transporting waste. This is convenient because we don’t have to pay a gravity bill - it comes entirely free. But, like most free things, it comes with an asterisk, mainly that gravity only works in one direction: down. This fact constrains the design and construction of modern sewer systems more than any other factor.


We need some control over the flow in a sewer pipe. It shouldn’t be so fast that it damages the joints or walls of the pipe. But it can’t flow too slowly, or you risk solids settling out of suspension and building up over time. We can’t adjust gravity up or down to reach this balance, and we also don’t have much control over the flow of wastewater. People flush when they flush. The only things engineers can control are the size of the sewer pipe and its slope. Take a look at what happens when the slope is too low. The water moves too slowly and allows solids to settle on the bottom. Over time, these solids build up and reduce the capacity of the pipe. They can even clog it completely. Pipes without enough slope require frequent and costly maintenance from work crews to keep the lines clear. If you increase the slope of the line without changing the flow rate, the velocity of the water increases. This not only allows solids to stay in suspension, but it also allows the water to scour away the solids that have already settled out. The minimum speed to make sure lines stay clear is known as the self-cleaning velocity. It can vary, but most cities require that flow in a sewer pipe be at least three feet or one meter per second.
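
Here’s the kind of check an engineer might run on slope, sketched with Manning’s equation (the standard open-channel flow formula) for a pipe flowing full, compared against a self-cleaning velocity of about one meter per second. The pipe size, roughness, and slopes below are assumptions for illustration.

```python
# Manning's equation in SI units for a circular pipe flowing full; the
# hydraulic radius of a full circular pipe is D/4. Inputs are assumed.
def full_pipe_velocity(diameter_m, slope, n=0.013):
    hydraulic_radius = diameter_m / 4
    return (1.0 / n) * hydraulic_radius ** (2 / 3) * slope ** 0.5

SELF_CLEANING = 1.0  # m/s, roughly the minimum mentioned above

for slope in (0.001, 0.004, 0.008):
    v = full_pipe_velocity(0.3, slope)   # a 300 mm sewer main
    verdict = "OK" if v >= SELF_CLEANING else "too flat - solids will settle"
    print(f"slope {slope:.3f}: {v:.2f} m/s -> {verdict}")
```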


So far, I’ve been using sand in these demos to simulate the typical “solids” that could be found in a wastewater stream. But, you might be interested to know that we’re, thankfully and by design, only scratching the surface of synthetic human waste. Laboratories doing research on urban sanitation, wastewater treatment, and even life support systems in space often need a safe and realistic stand-in for excrement, of which there are many interesting recipes published in the academic literature. Miso (or soybean) paste is one of the more popular constituents. Feel free to take your own journey down the rabbit hole of simulated sewage after this. I mean that figuratively, of course.


The slope of a sewer pipe is not only constrained by the necessary range of flow velocities. It also needs to consider the slope of the ground above. If the slope is too shallow compared to the ground, the sewer can get too close to the surface, losing the protection of the overlying soil. If the slope is too steep compared to the ground, the sewer can eventually become too deep below the surface. Digging deep holes to install sewer pipes isn’t impossible or anything, but it is expensive. Beyond a certain depth, you need to lay back the slopes of the trench to avoid having it collapse. In urban areas where that’s not possible, you instead have to install temporary shoring to hold the walls open during construction. You can also use trenchless excavation like tunneling, but that’s a topic for another post. This all to say that choosing a slope for a sewer is a balance. Too shallow or too steep, and you’re creating extra problems. Another topographic challenge faced by sewer engineers is getting across a creek or river.


It is usually not cost-effective to lower an entire sewer line or increase its slope to stay below a natural channel. In these cases, we can install a structure called an inverted siphon. This allows a portion of a line to dip below a depressed topographic feature like a river or creek and come back up on the other side. The hydraulic grade line, which is the imaginary line representing the surface of the fluid, comes up above the surface of the ground. But, the pipe contains the flow below the surface. The problem with inverted siphons is that, because they flow full, the velocity of the flow goes down. That means solids are more likely to settle out, something that is especially challenging in a structure with limited access for maintenance. This is similar to the p- or u-trap below your sink, that spot where everything seems to get stuck. Even though the pipe is the same size along the full length, settling only happens within the siphon. To combat this issue, inverted siphons often split the flow into multiple smaller pipes. This helps to keep the velocity up above the self-cleaning limit. A smaller pipe obviously means a lower capacity, which is partly why siphons often include two or three. Even though some settling happens, it’s not increasing over time. The velocity of the flow in the smaller siphons is high enough to keep most of the solids in suspension.
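
Here’s a quick sketch of why splitting the flow helps. A siphon flows full, so the velocity is just the flow rate divided by the pipe’s cross-sectional area, and a smaller barrel keeps that velocity above the self-cleaning threshold at low flows. The flow rate and diameters are assumptions for illustration.

```python
import math

# Velocity in a pipe flowing full is Q / A. A smaller barrel keeps the
# velocity up when only a small flow arrives at the siphon. Inputs assumed.
def full_flow_velocity(flow_m3s, diameter_m):
    area = math.pi * diameter_m ** 2 / 4
    return flow_m3s / area

low_flow = 0.05   # m^3/s arriving at the siphon overnight
for diameter in (0.60, 0.30, 0.20):
    v = full_flow_velocity(low_flow, diameter)
    verdict = "self-cleaning" if v >= 1.0 else "solids will settle"
    print(f"{diameter*100:.0f} cm barrel: {v:.2f} m/s -> {verdict}")
```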


The volume and hydraulics of wastewater flow aren’t the only challenges engineers face. Sewers are lawless places, by nature. There are no wastewater police monitoring what you flush down the toilet, thank goodness. However, that means sewers often end up conveying (or at least trying to convey) substances and objects for which they were not designed. For a long time, grease and oil were the most egregious of these interlopers since they congeal at room temperatures. However, the rising popularity of quote-unquote “flushable” wipes has only made things worse. Grease and fat combine with wet wipes in sewers to create unsettling but aptly named, “fatbergs,” disgusting conglomerates that, among other things, are not easily conveyed through sanitary sewer lines. Conveniently, most places in the world have services available to carry away your solid wastes so you don’t have to flush them. But they usually do it in trucks - not pipes.


Obviously, this issue is more complicated than my little experiment. The labeling of wipes has turned into a controversy that is too complex to get into here. My point though, and indeed the point of this whole post, is that your friendly neighborhood sewage collection system is not a magical place where gross stuff goes to disappear. It is a carefully-planned, thoroughly tested system designed to keep the stuff we don’t want to see - unseen. What happens to your flush once it reaches a wastewater treatment plant is a topic for another post, but I think the real treasure is the friends - sewers - it meets along the way.


July 06, 2021 /Wesley Crump

What Really Happened at the Hernando de Soto Bridge?

June 15, 2021 by Wesley Crump

In May of 2021, inspectors on the Hernando de Soto Bridge between West Memphis, Arkansas and Memphis, Tennessee discovered a crack in a major structural member. They immediately contacted emergency managers to shut down this key crossing over the Mississippi River to vehicle traffic above and maritime traffic below. How long had the crack been there, and how close did this iconic bridge come to failing? I’m Grady and this is Practical Engineering. Today, we’re discussing the Memphis I-40 bridge incident.


The Hernando de Soto Bridge carries US Interstate 40 across the Mississippi River between West Memphis, Arkansas and Memphis, Tennessee. Opened for traffic in 1973, the bridge’s distinctive double arch design gives it the appearance of a bird gliding low above the muddy river. I-40 through Tennessee and Arkansas is one of the busiest freight corridors in the United States, so the Mississippi River bridge is a vital east-west link, carrying an average of 50,000 vehicles per day. Although it was built in the 70s, the bridge has had some major recent improvements. It’s located in a particularly earthquake-prone region called the New Madrid Seismic Zone. Starting in 2000 and continuing all the way through 2015, seismic retrofits were added to the bridge to help it withstand a major earthquake and serve as a post-earthquake lifeline link for emergency vehicles and the public. ARDOT and TDOT share the maintenance responsibilities for the structure, with ARDOT in charge of inspections.


On May 11, 2021, a climbing team from an outside engineering firm was performing a detailed inspection of the bridge's superstructure. During the inspection, they noted a major defect in one of the steel members below the bridge deck. The crack went through nearly the entire box beam with a significant offset between the two sides. Recognizing the severity of the finding, several of the engineers called 911 to alert local law enforcement agencies and shut the bridge down to travel above and below the structure. This decision to close the bridge snarled traffic, forcing cars and trucks to detour over the older and smaller I-55 bridge nearby. It also created a backup of hundreds of barges and ships needing to pass north and south on the Mississippi River below the bridge. Knowing how significant an impact closing the bridge would be on such a vital corridor, how did engineers know to act so quickly and decisively? In other words, how important is this structure member? To explain that, we need to do a quick lesson on arch bridges. There are so many ways to span a gap, all singular in function but remarkably different in form. One type of bridge takes advantage of a structural feature that’s been around for millennia: the arch.


Most materials are stronger against forces along their axis than those applied at right angles (called bending forces). That’s partly because bending forces introduce tension in structural members. Instead of beams that are loaded perpendicularly, arch bridges use a curved element to transfer the weight of the bridge to the substructure using almost entirely compressive forces. Many of the oldest bridges used arches because it was the only way to span a gap with materials available at the time (stone and mortar). The Caravan Bridge in Turkey was built nearly 3,000 years ago but is still in use today. Even now, with the convenience of modern steel and concrete, arches are a popular choice for bridges. When the arch is below the roadway, we call it a deck arch bridge. Vertical supports transfer the load of the deck onto the arch. If part or all the arch extends above the roadway with the deck suspended below, it’s a through-arch bridge like the Hernando de Soto.


Arches can be formed from many different materials, including steel beams, reinforced concrete, or even stone or brick masonry. The I-40 Mississippi River bridge has two arches made from a lattice of steel trusses. One result of compressing an arch is that it creates horizontal forces called thrusts. So, arch bridges normally need strong abutments on either side to push against - abutments that can withstand the extra horizontal loads. So why do the arches of this bridge sit on top of spindly piers? Just from looking at it, you can tell that this support was not designed for horizontal loading. That’s okay, because the Hernando de Soto uses tied arches. Instead of transferring the arch thrusts into an abutment, you can tie the two ends together with a horizontal chord. This tie works exactly like a bowstring, balancing the arch’s thrust forces with its resistance to tension. Tied arch bridges don’t transfer thrust forces to their supports, meaning they can sit atop piers designed primarily for vertical loads.
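If you want a rough sense of how much tension that bowstring carries, the classic hand calculation for a parabolic arch under a uniform load puts the horizontal thrust at wL²/(8f), where w is the load per unit length of span, L is the span, and f is the rise of the arch. In a tied arch, that thrust is what the tie has to resist. Here's a minimal sketch of that arithmetic with made-up numbers - not the Hernando de Soto's actual dimensions or loads:

```python
# Rough tie-force estimate for a parabolic tied arch under a uniform load.
# Classic hand-calc result: horizontal thrust H = w * L**2 / (8 * f).
# In a tied arch, that thrust is carried as tension in the tie instead of
# being pushed into abutments. All inputs are illustrative placeholders,
# not the real bridge's numbers.

def tie_tension(w_kn_per_m: float, span_m: float, rise_m: float) -> float:
    """Horizontal thrust (roughly the tie tension) in kN."""
    return w_kn_per_m * span_m ** 2 / (8 * rise_m)

w = 150.0   # assumed uniform load, kN per meter of span (deck + traffic)
L = 270.0   # assumed arch span, meters
f = 50.0    # assumed arch rise, meters

H = tie_tension(w, L, f)
print(f"Approximate tie tension: {H:,.0f} kN ({H / 1000:,.1f} MN)")
# Doubling the load doubles the tie tension; halving the rise doubles it too.
```

The takeaway is the scaling: a shallower arch or a heavier deck means a proportionally larger tie force, which is part of why the tie ends up being such a heavily loaded, non-redundant member.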


This tension member is the subject of our concern. The crack in the Hernando de Soto bridge went right through one of the two arch ties on the eastern span. It’s hard to overstate the severity of the situation. These ties are considered fracture-critical members - those non-redundant structural elements subject to tension whose fracture would be expected to result in a collapse of the entire bridge. Obviously, this member did fracture without a collapse, so there may be a dispute about whether it truly qualifies as fracture-critical, but suffice it to say that losing the tie on a tied-arch bridge is not a minor issue. So why would a tension member like this crack?


Let me throw in a caveat here before continuing. Structural engineering is not an armchair activity. Forensic analysis of a failure requires a tremendous amount of information before arriving at a conclusion, including structural analysis, material testing, and review of historical information. Without such an investigation, the best we can do is speculate. A detailed forensic review will almost certainly be performed, and then we’ll know for sure. With all that said, there’s really only one reason that a steel member would crack like what’s shown in the photos of the I-40 bridge.


When steel fails, it is usually a ductile event. In other words, the material bends, deforms, and stretches. But, steel can experience brittle failures too, called fractures, where little deformation occurs. And the primary reason that a crack would initiate in a steel tension member of a bridge is fatigue. Fatigue in steel happens because of repeated cycles of loading. Over time, microscopic flaws in the material can grow into cracks that open a small amount with each loading cycle, even if those loading cycles are well below the metal’s yield strength. If not caught, a fatigue crack will eventually reach a critical size where it can propagate rapidly, leading to a fracture. Bridges are particularly susceptible to fatigue because traffic loads are so dynamic. This bridge sees an average of 50,000 vehicles per day. That is tens of millions of load cycles every year.
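That cycle count is easy to sanity-check. Here's a minimal back-of-the-envelope sketch that counts one load cycle per vehicle - a simplification, since a multi-axle truck puts several stress cycles on any one detail, so the real number is higher:

```python
# Back-of-the-envelope count of load cycles on the bridge.
# Counting one cycle per vehicle is a simplification - a multi-axle truck
# puts several stress cycles on any one detail - so this is a lower bound.

vehicles_per_day = 50_000   # average daily traffic cited above
days_per_year = 365

cycles_per_year = vehicles_per_day * days_per_year
print(f"{cycles_per_year:,} load cycles per year (lower bound)")

# Over the roughly 48 years between opening in 1973 and the 2021 closure:
lifetime_cycles = cycles_per_year * 48
print(f"~{lifetime_cycles / 1e9:.1f} billion cycles over the bridge's life so far")
```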


Fatigue is common on steel members that have been welded because welding has a tendency to introduce flaws in the material. When weld metal cools, it shrinks, generating residual stresses in the steel. Those locked-in stresses and any weld flaws create the stress concentrations where most fatigue cracks occur. And the box tie member at the I-40 bridge is a built-up section. That means it was fabricated by welding steel plates together. It’s a common way to get structural steel members in whatever shape the design requires. But, if not carefully performed, the welds have the potential to introduce flaws from which a fatigue crack can propagate.


Of course, these ties aren’t purely tension members holding the two sides of the arch together. If they were, the load cycles would probably be a lot less dynamic. The ties don’t support these lateral beams below the road deck - that’s done by the suspender cables hanging from the arch above - but they do have a rigid connection. That means when the deck moves, the tension ties move with it, potentially introducing stresses that could exacerbate the formation of a crack. Again, without a detailed structural model, it’s impossible to say how the dynamic cycles of traffic forces are distributed through each member. We can’t say whether the original design or the seismic retrofits had a flaw that could have been prevented. Fatigue and fractures are difficult to characterize, and in some cases inevitable given the construction materials and methods, even with a good design. That’s why inspections are so important. One of the biggest questions everyone is asking, and rightly so given the severity of the situation, is “how long has this structural member been cracked?”


National bridge standards require inspections for highway bridges every two years. Bridges with fracture-critical members, like this one, are usually inspected more frequently than that, and inspection of those members has to be hands-on. That means no drones or observations from a distance - a human person has to check every surface of the steel from, at minimum, an arm’s length away. Given those requirements, you would think that this crack, discovered in May of 2021, did not exist the year before. Unfortunately, ARDOT provided a drone inspection video from 2 years earlier, clearly showing the crack on the tie beam. Although it hadn’t yet grown to its eventual size, the crack is nearly impossible to miss. And it could have been there well before that video was shot. One amateur photographer who took a canoe trip below the bridge in 2016 shared a photo of the same spot, and it sure looks like there’s a crack.


Bridge inspections are not easy. Even on simple structures they often require special equipment - like snooper trucks - and closing down lanes of traffic. Complicated structures like the I-40 bridge require teams of structural engineers trained in rope access climbing to put eyes on every inch of steel. And even then, cracks are hard to identify visually and can be missed. Inspectors are humans, after all. But, none of that justifies this incident, especially given how large and obvious the fracture was. ARDOT announced that they fired an unnamed inspector who was presumably responsible for the annual inspections on this bridge. We don’t know many details of that situation, but I just want to clarify that it’s not a solution to the problem. If your ability to identify a major defect in a fracture-critical member of a bridge hinges on a single person, there’s something very wrong with your inspection process. Quality management is an absolutely essential part of all engineering activities. We know we’re human and capable of mistakes, so we build processes that reduce their probability and consequences.


That includes quality assurance: the administrative activities that verify work is being performed correctly, such as making sure that bridges are inspected by teams and that inspectors are properly trained. It also includes quality control, the checks and double-checks of work products like inspection reports. And, quality management should be commensurate with the level of risk. In other words, if an error would threaten public safety, you can’t just leave it up to a single person. Put simply and clearly, there is absolutely no excuse for this crack to have sat open on the bridge’s tie member for as long as it did.


This story is ongoing. As of this video’s writing, the bridge is closed to traffic indefinitely. But, that doesn’t mean the incident is over. There’s a chance that, as the forces in the bridge redistributed with the damage to this vital member, other structural elements became overloaded. The second tension tie may have taken up much of its partner's stress, and the pier supporting the arch may have been subject to a lot more horizontal force than it was designed to withstand. In addition, bridges are full of repetitive details. If this crack could happen in one place, there’s a good chance similar cracks may exist elsewhere. The Federal Highway Administration recommends that, when a fatigue crack is found, a special, in-depth inspection be performed to look for more. That will involve hands-on checking of practically every square inch of steel on the bridge, and probably non-destructive tests that can identify defects, like x-rays, magnetic particles, or dyes that make cracks more apparent.


The repair plan for the bridge is already in progress. Phase 1 was to temporarily reattach the tie using steel plates to make the bridge safe for contractors. The design for Phase 2 will depend entirely on the findings of detailed structural analysis and forensic investigation. In the meantime, it’s clear that ARDOT and TDOT have some work ahead of them. Most importantly, they need to do some reckoning with their bridge inspection procedures, and thank their lucky stars that this fracture didn’t end in catastrophe. There’s no clear end in sight for the inconvenienced motorists needing to cross the Mississippi River, but I’m thankful that they’re all still around to be inconvenienced. Thank you, and let me know what you think.

June 15, 2021 /Wesley Crump

The Fluid Effects That Kill Pumps

June 01, 2021 by Wesley Crump

The West Closure Complex is a billion-dollar piece of infrastructure that protects parts of New Orleans from flooding during tropical storms. Constructed partly as a result of Hurricane Katrina, it features one of the largest pumping stations in the world, capable of lifting the equivalent of a fully-loaded Boeing 747 every second. When storm surge threatens to raise the levels of the sea above developed areas on the west bank of the Mississippi River, this facility’s job is to hold it back. The gates close and the pumps move rainwater and drainage from the City’s canals back into the Mississippi River and out to the gulf. This pump station may be the largest of its kind, but its job is hardly unique. We collectively move incredible volumes of fresh water, drainage, and wastewater into, out of, and around our cities every day. And, we mostly do it using pumps. I love pumps. But, even though they are critical for the safety, health, and well-being of huge populations of people, there are a lot of things that can go wrong if not properly designed and operated. I’m Grady, and this is Practical Engineering. Today, we’re exploring some of the problems that can happen with pumps.


The first of the common pitfalls that pumps can face is priming (or rather, the lack of it). Although liquids and gases are both fluids, not all pumps can move them equally. Most types of pumps that move liquids cannot move air. It’s less dense and more compressible, so it’s often just unaffected by impellers designed for liquids. That has a big implication, though. It means if you’re starting a pump dry - that is, when the intake line and the housing are not already full of water, like I’m doing here - nothing happens. The pump can run and run, but because it can’t draw air out of the intake line, no water ever flows. This is why many pumps need to be primed before starting up. Priming just means filling the pump with liquid to displace the air out of the housing and sometimes the intake pipe. When you raise the discharge line to let water flow backwards into the pump, it happens quickly. As soon as the air is displaced from the housing, the pump is primed and water starts to flow. There are a lot of creative ways to accomplish this for large pumps. Some even have small priming pumps to do this very job. “But what primes the priming pumps?” Well, there are some kinds of pumps that are self-priming. One is submersible pumps that are always below the water where air can’t find its way in. Another is positive displacement pumps that can create a vacuum and draw air through. They may not be as efficient or convenient to use as the main pump, but they work just fine for the smaller application of priming.


However a pump is primed, it’s critical that it stays that way. If air finds its way into the suction line of a pump, it can lose its prime and stop working altogether. When you lift a pump out of water, the prime is lost. And if you put the pump back down into the water, it doesn’t start back up. This can be a big problem if it goes unnoticed, not just because the pump isn’t working, but also because running a pump dry often leads to damage. Many pumps depend on the fluid in the housing for cooling, so without it, they overheat. In addition, the seals around the shaft that keep water from intruding on the motor depend on the fluid to function properly. If the seals dry out, they get damaged and require replacement, which can be a big job.


The next problem with pumps is also related to the suction side. Pumps work by creating a difference in pressure between the inlet and outlet. In very simple terms, one side sucks and one side blows. A problem comes when the pressure gets too low on the suction side. You might know that the phase of many substances depends not just on their temperature, but also on the ambient pressure. That’s why the higher you are in elevation, the lower the temperature needed to boil water. If you continue that trend into lower and lower pressures, eventually some liquids (including water) will boil at normal temperatures without any added heat. It’s a pretty cool effect as a science demonstration, but it’s not something you want happening spontaneously inside your pump. Just like they don’t work with air, most pumps don’t work very well with steam either. But, the major problem comes when those bubbles of steam collapse back into a liquid. Liquids aren’t very compressible, so these collapsing bubbles send powerful shockwaves that can damage pump components. This phenomenon is called cavitation, and I have a separate post covering it in a lot more detail that you can check out after this one to learn more. It usually doesn’t lead to immediate failure, but cavitation will definitely shorten the life of a pump significantly if not addressed.
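To put a number on “boiling without any added heat”: the vapor pressure of room-temperature water is only a couple of kilopascals, so if the absolute pressure in a suction line drops that low, vapor bubbles form. Here's a minimal sketch using the commonly tabulated Antoine-equation fit for water (an empirical correlation, valid roughly between 1 and 100 degrees Celsius):

```python
# Vapor pressure of water from the Antoine equation - an empirical fit
# with constants commonly tabulated for roughly 1-100 degrees C
# (pressure in mmHg, temperature in degrees C).

A, B, C = 8.07131, 1730.63, 233.426   # Antoine constants for water

def vapor_pressure_kpa(temp_c: float) -> float:
    p_mmhg = 10 ** (A - B / (C + temp_c))
    return p_mmhg * 0.133322          # convert mmHg to kPa

print(f"Vapor pressure of water at 20 C: {vapor_pressure_kpa(20):.2f} kPa")
# About 2.3 kPa - so if the absolute pressure at a pump inlet falls to
# roughly 2 percent of atmospheric pressure (101.3 kPa), room-temperature
# water flashes to vapor and the pump starts to cavitate.
```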


The safeguard against this problem at pumps is known as Net Positive Suction Head, and with a name like that, you know it’s important. Manufacturers of large pumps will tell you the required Net Positive Suction Head (or NPSH), which is the minimum pressure needed at a pump inlet to avoid cavitation. The engineer’s job is to make sure that a pump system is designed to provide at least this minimum pressure. The NPSH available at the inlet depends on the vertical distance between the sump and the inlet, the frictional losses in the intake pipe, the temperature of the fluid, and the ambient air pressure. Here’s an example: With a valve wide open, the suction pressure at the inlet is about 20 kPa or 5 inches of mercury. Now, when you move the pump to the height of a ladder, but leave the bucket on the ground, the suction pressure just about doubles. A constriction in the line also decreases the available NPSH. If you close the valve on the intake side of a pump, you immediately see the pressure in the line becoming more negative (in other words, a stronger vacuum). This pump isn’t strong enough to cavitate, but it will make a bad sound when there isn’t enough Net Positive Suction Head at the inlet. I think it easily demonstrates how a poor intake design can dramatically affect the pressure in the intake line and quickly lead to failure of a pump.
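Here's a minimal sketch of how that NPSH bookkeeping usually goes for a pump drawing from an open sump: start with atmospheric pressure, subtract the vapor pressure, the static lift, and the friction losses in the intake line, and compare what's left to the manufacturer's required NPSH. Every number below is an illustrative placeholder, not a real pump curve:

```python
# NPSH-available check for a pump drawing from an open sump:
# NPSHa = (P_atm - P_vapor) / (rho * g) - static_lift - friction_losses
# Every input below is an illustrative placeholder.

RHO = 998.0   # water density, kg/m^3 (around 20 C)
G = 9.81      # gravitational acceleration, m/s^2

def npsh_available(p_atm_kpa, p_vapor_kpa, static_lift_m, friction_loss_m):
    """Available net positive suction head, in meters of water."""
    pressure_head = (p_atm_kpa - p_vapor_kpa) * 1000 / (RHO * G)
    return pressure_head - static_lift_m - friction_loss_m

npsha = npsh_available(
    p_atm_kpa=101.3,      # sea-level atmosphere
    p_vapor_kpa=2.3,      # water at about 20 C
    static_lift_m=3.0,    # pump sits 3 m above the water surface (assumed)
    friction_loss_m=1.5,  # losses in the intake pipe and fittings (assumed)
)
npshr = 4.0               # manufacturer's required NPSH, meters (assumed)

print(f"NPSH available: {npsha:.1f} m, required: {npshr:.1f} m")
print("OK" if npsha > npshr else "Cavitation risk - rework the intake design")
# Raising the pump (more static lift) or throttling the intake valve (more
# friction loss) subtracts directly from the available margin, which is
# exactly what the demonstrations above show.
```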


The last problem that can occur at pumps is also the most interesting: vortices. You’ve probably seen a vortex form when you drain a sink or bathtub. These vortices occur when the water accelerates in a circular pattern around an outlet. If the vortex is strong enough, the water is flung to the outside, allowing air to dip below the surface. This is a problem for pumps if that air is allowed to enter the suction line. We talked a little about what happens when a pump runs dry in the discussion about priming, but air is a problem even if it’s mixed with water. That’s because it takes up space. A bubble of air in the impeller reduces the pump’s efficiency since the full surface of the blades can’t act on the water. This causes the pump to run at reduced performance and may cause it to lose prime, creating further damage.


The easiest solution to vortexing is submergence - just getting the intake pipe as far as possible below the surface of the water. The deeper it is, the larger and longer a vortex would have to be before air could find its way into the line. This is achieved by making the sump - that is the structure that guides the water toward the intake - deeper. That solution seems simple enough, except that these sumps are often major structural elements of a pump station that are very costly to construct. You can’t just indiscriminately oversize them. But how deep is deep enough? 


It turns out that’s a pretty complicated question because a vortex is hard to predict. Even sophisticated computational fluid dynamics models have trouble accurately characterizing when and if a vortex will form. That’s an issue because you don’t want to design and construct a multi-million-dollar pumping facility just to find out it doesn’t work. And there aren’t really off-the-shelf designs. Just about every pumping station is a custom-designed facility meant for a specific application, whether it’s delivering raw water from a reservoir or river to a treatment plant, sending fresh water out to customers, lifting sewage to be treated at a wastewater plant, pumping rainwater out of a low area, or any number of other reasons to move large volumes of water. So if you’re a designer, you have some options.


First, you can just be conservative. We know through lots of testing that vortices occur mostly due to non-uniform flow in the sump. Any obstructions, sharp turns, and even vertical walls can lead to flow patterns that evolve into vortices. Organizations like the Hydraulic Institute have come up with detailed design standards that can guide engineers through the process of designing a pump station to make sure many of these pitfalls are avoided. Things like reducing the velocity of the flow and maintaining clearance between the walls and the suction line can reduce the probability of a vortex forming. There are also lots of geometric elements that can be added to a sump or intake pipe to suppress the formation of vortices.
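As one example of how prescriptive that guidance gets, the Hydraulic Institute's intake standard includes a commonly cited minimum-submergence rule of thumb: the required depth scales with the intake diameter and with how fast the water enters the pipe. Here's a minimal sketch of that rule; treat the coefficient as an approximation and the flow and diameter as made-up numbers:

```python
# Rule-of-thumb minimum submergence for a pump intake, following the
# Hydraulic Institute guidance: S = D * (1 + 2.3 * Fr), where Fr is the
# Froude number based on the intake diameter and inlet velocity.
# The flow and diameter below are illustrative placeholders.
import math

G = 9.81   # m/s^2

def min_submergence(flow_m3s: float, intake_diameter_m: float) -> float:
    area = math.pi * intake_diameter_m ** 2 / 4
    velocity = flow_m3s / area
    froude = velocity / math.sqrt(G * intake_diameter_m)
    return intake_diameter_m * (1 + 2.3 * froude)

D = 0.6   # intake bell diameter, meters (assumed)
Q = 0.5   # design flow, cubic meters per second (assumed)

print(f"Minimum submergence: {min_submergence(Q, D):.2f} m")
# Doubling the flow doubles the inlet velocity and the Froude number, so
# the required depth grows quickly - one reason sump depth is such a
# costly design decision.
```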


The second option for an engineer is to build a scale model. Civil engineering is a little different from other fields because there aren’t as many opportunities for testing and prototyping. Infrastructure is so large and costly, you usually only have one shot to get the design right. But, some things can be tested at scale, including hydraulic phenomena. In fact, there are many laboratories across the world that can assemble and test scale models of pump stations, pipelines, spillways, and other water-handling infrastructure to make sure they work correctly before spending those millions (or billions) of dollars on construction. They give engineers a chance to try out different configurations, gain confidence in the performance of a hydraulic structure, and avoid pitfalls like loss of prime, cavitation, and vortices at pump stations.

June 01, 2021 /Wesley Crump

What Really Happened at the Oroville Dam Spillway?

May 18, 2021 by Wesley Crump

In February 2017, concrete slabs in the spillway at Oroville Dam failed during releases from the floodgates, starting a chain of events that prompted the evacuation of nearly 200,000 people downstream. The dam didn’t fail, but it came too close for comfort, especially for the tallest structure of its kind in the United States. Oroville Dam falls under the purview of the Federal Energy Regulatory Commission, in a state with a progressive dam safety program and regular inspections and evaluations by the most competent engineers in the industry. So how could a failure mode like this slip through the cracks, both figuratively and literally? Luckily, an independent forensic team got deep in the weeds and prepared a 600 page report to try and find out. This is a summary of that. I’m Grady and this is Practical Engineering. Today, we’re talking about the Oroville Dam Crisis.


Oroville Dam, located in northern California, is the tallest dam in the United States at 770 feet or 235 meters high. Completed in 1968, and owned and operated by the California Department of Water Resources, every part of Oroville Dam is massive. The facility consists of an earthen embankment which forms the dam itself, a hydropower generation plant that can be reversed to create pumped storage, a service spillway with 8 radial floodgates, and an emergency overflow spillway. The reservoir created by the dam, Lake Oroville, is also immense - the second biggest in the state. It’s part of the California State Water Project, one of the largest water storage and delivery systems in the U.S. that supplies water to more than 20 million people and hundreds of thousands of acres of irrigated farmland. The reservoir is also used to generate electricity with over 800 megawatts of capacity. Finally, the dam also keeps a reserve volume empty during the wet season. In case of major flooding upstream, it can store floodwaters and release them gradually over time, reducing the potential damage downstream.


No dam is built to hold all the water that could ever flow into the reservoir at once. And yet, having water overtop an unprotected embankment will almost certainly cause a breach and failure. So, all dams need spillways to safely release excess inflows and maintain the level of the reservoir once it’s full. Spillways are often the most complex and expensive components of a dam, and that is definitely true at Oroville. The service spillway has a chute that is 180 feet or 55 meters wide and 3,000 feet long. That’s nearly a kilometer for the metric folks. Radial gates control how much water is released, and massive concrete blocks at the bottom of the chute, called dentates, disperse the flow to reduce erosion as it crashes into the Feather River. This spillway is capable of releasing nearly 300,000 cubic feet or 8,000 cubic meters of water per second. That’s roughly three Olympic-sized swimming pools every second, which I know is not that helpful in conceptualizing this incredible volume. If you somehow put that much flow through a standard garden hose, it would travel at 15% of the speed of light, reaching the moon in about 9 seconds. How’s that for a flow rate equivalency? But even that is not enough to protect the embankment.
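If you'd like to check those equivalencies yourself, here's a minimal sketch of the arithmetic. The pool volume and hose bore are assumptions (a 50-by-25-by-2-meter pool and roughly a 15-millimeter hose), so the exact figures shift a bit with different choices:

```python
# Sanity-checking the spillway flow-rate comparisons.
import math

Q = 8000.0              # spillway capacity, cubic meters per second
POOL_VOLUME = 2500.0    # Olympic pool: 50 m x 25 m x 2 m deep (assumed)
HOSE_DIAMETER = 0.015   # garden hose bore, meters (assumed)
SPEED_OF_LIGHT = 3.0e8  # m/s
EARTH_TO_MOON = 3.84e8  # average distance, meters

print(f"Pools per second: {Q / POOL_VOLUME:.1f}")

hose_area = math.pi * HOSE_DIAMETER ** 2 / 4
hose_velocity = Q / hose_area
print(f"Hose velocity: {hose_velocity:.2e} m/s "
      f"({100 * hose_velocity / SPEED_OF_LIGHT:.0f}% of light speed)")
print(f"Time to reach the moon: {EARTH_TO_MOON / hose_velocity:.1f} s")
```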


Large dams have to be able to withstand extraordinary flooding. In most cases, their design is based on a synthetic (or made up) storm called the Probable Maximum Flood, which is essentially an approximation of the most rain that could ever physically fall out of the sky. It usually doesn’t make sense to design the primary spillway to handle this event, since such a magnitude of flooding is unlikely to ever happen during the lifetime of the structure. Instead, many dams have a second spillway, much simpler in design - and thus less expensive to construct - to increase their ability to discharge huge volumes of water during rare but extreme events. At Oroville, the emergency spillway consists of a concrete weir set one foot above the maximum operating level. If the reservoir gets too high and the service spillway can’t release water fast enough, this structure overflows, preventing the reservoir from reaching and overtopping the crest of the dam.


Early 2017 was one of northern California’s wettest winters in history with several major flood events across the state. One of those storms happened in February upstream of Oroville Dam. As the reservoir filled, it became clear to operators that the spillway gates would need to be opened to release excess inflows. On February 7, early during the releases, they noticed an unusual flow pattern about halfway down the chute. The issue was worrying enough that they decided to close the gates and pause the flood releases in order to get a better look. What they saw when the water stopped was harrowing. Several large concrete slabs were completely missing and a gigantic hole had eroded below the chute.


There was a lot more inflow to the reservoir in the forecast, so the operators knew they didn’t have much time to keep the gates closed while they inspected the damage, and no chance to try and make repairs. They knew they would have to keep operating the crippled spillway. So, they started opening gates incrementally to test how quickly the erosion would progress. Meanwhile, more rain was falling upstream, contributing to inflows and raising the level of the reservoir faster and faster. It wasn’t long before the operators were faced with an extremely difficult decision: open more gates on the service spillway which would further damage the structure or let the reservoir rise above the untested emergency spillway and cascade down the adjacent hillside.


Several issues made this decision even more complicated. On one hand, the service spillway was in bad shape, and there was the possibility of the erosion progressing upstream toward the headworks which could result in an uncontrolled release of the reservoir. Also, debris from the damaged spillway was piling up in the Feather River, raising its level and threatening to flood out the power plant. Finally, electrical transmission lines connecting the power plant to the grid were being threatened by the erosion along the service spillway. Losing these lines or flooding the hydropower facility would hamstring the dam’s only backup for making releases from the reservoir. Operators knew that repairing the spillway would be nearly impossible until the power plant could be restored. These factors pointed towards closing the spillway gates and allowing the reservoir to rise.


On the other hand, the emergency spillway had never been tested, and operators weren’t confident that it could safely release so much water, especially after witnessing how quickly and aggressively the erosion happened on the service spillway nearby. Also, its use would almost certainly strip at least the top layer of soil and vegetation from the entire hillside, threatening adjacent electrical transmission towers. A huge contingent of engineers and operations personnel were all hands on deck, running analyses, forecasting weather, reviewing geologic records and original design reports trying to decide the best course of action. Of course, this is all happening over the course of only a couple of days with conditions constantly changing and no one having slept, further complicating the decision making process. Operators worked to find a sweet spot in managing these risks, limiting releases from the service spillway as much as possible while still trying to keep the reservoir from overtopping the emergency spillway. But, every new forecast just showed more rain and more inflows.


Eventually it became clear to operators that they would have to pick a lesser evil: Increase discharges and flood the powerhouse or let the reservoir rise above the emergency spillway. They decided to let the reservoir come up. The morning of February 11, about four days after the damage was initially noticed, Lake Oroville rose above the crest of the emergency spillway for the first time in the facility’s history. Almost immediately, it was clear that things were not going to go smoothly.


As it flowed across and down the natural hillside, water from the emergency spillway began to channelize and concentrate. This quickly accelerated erosion of the soil and rock, creating features called headcuts, which are a sign of unstable and incising waterways. Headcuts are vertical drops in the topography eroded by flowing water, and they always move upstream, oftentimes aggressively. In this case, upstream meant toward the emergency spillway structure, threatening its stability. This hillside was a zone many had assumed to be solid, competent bedrock. It only took a modest flow through the emergency spillway to reveal the true geologic conditions: the hillside was composed almost entirely of highly erodible soil and weathered rock. If the headcuts were to reach the concrete structure upstream, it would almost certainly fail, releasing a wall of water from Lake Oroville that would devastate downstream communities. Authorities knew they had to act quickly.


On February 12, only about a day and a half after flow over the emergency spillway began, an evacuation order was issued for downstream residents, displacing nearly 200,000 people to higher ground. At the same time, operators elected to open the service spillway gates to double the flow rate and accelerate the lowering of the reservoir. The level dropped below the emergency spillway crest that night, stopping the flow and easing fears about an imminent failure. Two days later, on Valentine’s Day, the evacuation order was changed to a warning, allowing people to return to their homes. But there was still more rain in the forecast, and the emergency spillway was in poor condition to handle additional flow if the reservoir were to rise again. California DWR continued discharging through the crippled service spillway to lower the reservoir by 50 feet or 15 meters in order to create enough storage that the spillway could be taken out of service for evaluation and repairs. The gates stayed open until February 27th, nearly three weeks after the whole mess started, revealing the havoc wrought on the dam’s right abutment. Water that started its journey as tiny drops of rain in a heavy storm - funneled and concentrated by the earth’s topography and turbulently released through massive human-made structures - had carved harrowing scars through the hillside. But, how did it happen?


Like all major catastrophes, there were a host of problems and issues that coincided to cause the failure of the concrete chute. One of the most fundamental issues was geologic. Although it was well-understood that some areas of the spillway’s foundation were not good stuff (in other words, weathered rock and soil), the spillway was designed and maintained as if the entire structure was sitting on hard bedrock.


That mischaracterization had profound consequences that I’ll discuss. As for how the spillway damage started, the issue was uplift forces. How do concrete structures stay put? Mostly by being heavy. Their weight pins them to the ground so they can resist other forces that may cause them to move. But, water complicates the issue. You might think that adding water to the top of a slab just adds to the weight, making things more stable. And that would be true without cracks and joints. The problem with the Oroville Dam service spillway chute was that it had lots of cracks and joints, for reasons I’ll discuss in a moment. These cracks allowed water to get underneath the slabs, essentially submerging the concrete on all sides. Here’s the issue with that: structures weigh less under water, or more accurately, their weight is counteracted by the buoyant force of the water they displace. So, being underwater already starts to destabilize them, because it adds an uplift force. But, concrete still sinks underwater, right? The net force is still down, holding the structure in place. That’s true in static conditions, but when the water is moving, things change.


We talk about Bernoulli’s principle a lot, and he’s got something to say about the flow of water in a spillway. In this case, the issue was what happens to a fast-moving fluid when it suddenly stops. Cracks and joints in a concrete spillway have an effect on the flow inside. Any protrusion into the stream redirects the flow. If a joint or crack is offset, that redirection can happen underneath the slab. When this happens, all the kinetic energy of the fluid is converted into potential energy, in other words, pressure. When it’s 100% of the kinetic energy being converted, we call it the stagnation pressure. When you direct the end of a tube into the flowing water, you see how the level rises? The equation for stagnation pressure is a function of velocity squared. So, if you double the speed of flow, you get four times the resulting pressure and thus four times the height the water rises in the tube. And the water in the Oroville spillway is moving a lot faster than this. When this stagnation pressure acts on the bottom of a concrete slab, it creates an additional uplift force. If all the uplift forces exceed the weight of the slab, it’s going to move. That’s exactly what happened at Oroville. And once one slab goes, it’s just a chain reaction. More of the foundation is exposed to the fast-moving water, and more of that water can inject itself below the slabs, causing a runaway failure.
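Here's a minimal sketch of that uplift bookkeeping for a single chute slab: compute the stagnation pressure from the flow velocity (one half times density times velocity squared), apply some fraction of it over the slab's underside, and compare the result to the slab's submerged weight. The slab size, velocity, and the 10 percent pressure fraction are all illustrative assumptions, not the actual Oroville values:

```python
# Does stagnation pressure under a spillway slab exceed its submerged weight?
# Stagnation pressure: p = 0.5 * rho * v**2 (all kinetic energy becomes pressure).
# Slab size, velocity, and the pressure fraction are illustrative assumptions.

RHO_WATER = 1000.0      # kg/m^3
RHO_CONCRETE = 2400.0   # kg/m^3
G = 9.81                # m/s^2

velocity = 25.0                          # chute flow velocity, m/s (assumed)
slab_l, slab_w, slab_t = 6.0, 6.0, 0.4   # slab dimensions, meters (assumed)

area = slab_l * slab_w
# Buoyant (submerged) weight of the slab:
submerged_weight = (RHO_CONCRETE - RHO_WATER) * G * area * slab_t   # newtons

stagnation_pressure = 0.5 * RHO_WATER * velocity ** 2               # pascals
# Assume only 10 percent of the stagnation pressure reaches the slab's
# underside, just to illustrate the scale of the forces involved.
uplift = 0.10 * stagnation_pressure * area                          # newtons

print(f"Submerged slab weight: {submerged_weight / 1000:.0f} kN")
print(f"Uplift at 10% of stagnation pressure: {uplift / 1000:.0f} kN")
print("Slab lifts off" if uplift > submerged_weight else "Slab stays put")
# Because pressure scales with velocity squared, halving the flow velocity
# cuts the uplift by a factor of four.
```

Even with only a sliver of the stagnation pressure reaching the underside, the uplift dwarfs what a typical slab weighs under water, which is why offset joints and cracks in high-velocity flow are so dangerous.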


Of course, we try to design around this problem. The service spillway had drains consisting of perforated pipes to relieve the pressure of water flowing beneath the slabs. Unfortunately, the design of these drains was a major reason for the cracking chute. Instead of trenching them into the foundation below the slabs, they reduced the thickness of the concrete to make room for the drains. The crack pattern on the chute matched the layout of the drains beneath almost perfectly. So, in this case the drains inadvertently let more water below the slab than they let out from underneath it. The chute also included anchors, steel rods tying the concrete to the foundation material below. Unfortunately, those anchors were designed for strong rock, and their design wasn’t modified when the actual foundation conditions were revealed during construction.


The root cause wasn’t just a bad design, though. There are plenty of human factors that played into the lack of recognition and failure to address the inherent weaknesses in the structure. Large dams are regularly inspected, and their designs periodically compared to the state of current practice in dam engineering. Put simply, we’ve built bigger structures on worse foundations than this. Modern spillway designs have lots of features that help to avoid what happened at Oroville. Multiple layers of reinforcement keep cracks from getting too wide. Flexible waterstops are embedded into joints to keep water from migrating below the concrete. Joints are also keyed so individual slabs can’t separate from one another easily. Lateral cutoffs help resist sliding and keep water from migrating beneath one slab to another. Anchors add uplift resistance by holding the slabs down against their foundation. Even the surface of the joints is offset to avoid the possibility of a protrusion into the high velocity flow. All these are things that the Oroville Spillway either didn’t have or weren’t done properly. Periodic reviews of the structure’s design, required by regulators, should have recognized the deterioration and inherent weaknesses and addressed them before they could turn into such a consequential chain of tribulations.


As for the emergency spillway, the fundamental cause of the problem was similar: a mischaracterization of the foundation material during and after design. Emergency spillways are just that: intended for use only during a rare event where it’s ok to sustain some damage. But, it’s never acceptable for the structure to fail, or even come close enough to failing that the residents downstream have to be evacuated. That means engineers have to be able to make conservative estimates of how much erosion will occur when an emergency spillway engages. Predicting the amount and extent of erosion caused by flowing water is a notoriously difficult problem in civil engineering. It takes sophisticated analysis in the best of times, and even then, the uncertainty is still significant. It is practically impossible to do under the severe pressure of an emergency. The operators of the dam chose to allow the reservoir to rise above the crest of the emergency spillway rather than increase discharges through the debilitated service spillway, trusting the original designer that it could withstand the flows. It’s a decision I think most people (in hindsight) would not have made.


The powerhouse was further from flooding and the transmission lines further from failing than initially thought, and they eventually ramped up discharges from the service spillway anyway, after realizing the magnitude of the erosion happening at the emergency spillway. But, it’s difficult to pass blame too strongly. The operators making decisions during the heat of the emergency did not have the benefit of hindsight. They were stuck with the many small but consequential decisions made over a very long period of time that eventually led to the initial failure, not to mention the limitations of professional engineering practice’s ability to shine a light down multiple paths and choose the perfect one.


The forensic team’s report outlines many lessons to be learned from the event by the owner of the dam and the engineering community at large, and it’s worth a read if you’re interested in more detail. But, I think the most important lesson is about professional responsibility. The people downstream of Oroville Dam, and indeed any large dam across the world, probably chose their home or workplace without considering too carefully the consequences of a failure and breach. We rarely have the luxury to make decisions with such esoteric priorities. That means, whether they realized it or not, they put their trust in the engineers, operators, and regulators in charge of that dam to keep them safe and sound against disaster. In this case, that trust was broken. It’s a good reminder to anyone whose work can affect public safety. The repairs and rebuilding of the spillways at Oroville Dam are a whole other fascinating story. Maybe I’ll cover that in a future post. Thank you, and let me know what you think!


May 18, 2021 /Wesley Crump