Was Starship’s Stage Zero a Bad Pad?

June 20, 2023 by Wesley Crump

[Note that this article is a transcript of the video embedded above.]

On April 20, 2023, SpaceX launched its first orbital test flight of its Starship spacecraft from Boca Chica on the Gulf Coast of Texas. You probably saw this, if not live, at least in the stunning videos that followed. Thanks to NASA Space Flight for giving me permission to use their footage in this video. Starship launched aboard the Super Heavy first stage booster, and it was the tallest and most powerful rocket ever launched at the time. There was no payload; this was a test flight with the goal of gathering data not just on the rocket, but on all the various systems involved. The rocket itself was exciting to watch: some of the engines failed to ignite, and a few more flamed out early during the launch. About 40 kilometers above the ground, the rocket lost steering control and the flight termination system was triggered, eventually blowing up the whole thing.

But a lot of the real excitement was on the ground. Those Raptor engines put out about twice the thrust of the Saturn V rocket used in the Apollo Program. And all that thrust, for several moments, was directed straight into the concrete base of the launch pad, or as SpaceX calls it, Stage Zero. And that concrete base wasn’t really up for the challenge. Huge chunks of earth and concrete can be seen flying hundreds of meters through the air during the launch, peppering the Gulf more than 500 meters away. A fine rain of debris fell over the surrounding area, and the damage seen after the road opened back up was surprising. Tanks were bent up. Debris was strewn across the facility. And the launch pad itself now featured an enormous crater below.

Although the FAA’s mishap report hasn’t been released yet, there’s plenty of information available to discuss. Rocket scientists and aeronautical engineers get a lot of well-deserved attention on YouTube and around the nerdy content-sphere, but when it comes to the design and construction of launchpads like Stage Zero, that’s when civil engineers get to shine! What happened with Stage Zero, and how do engineers design structures to withstand some of the most extreme conditions humans have ever created? I’m Grady, and this is Practical Engineering. Today we’re talking about launch pads.

Humans have been launching spacecraft for over 65 years now. And so far, pretty much the only way we have to propel a payload to the incredibly high speeds and altitudes that task requires is rockets. Rockets produce enormous amounts of thrust by burning fuel and oxidizer in what amounts to a carefully (or not so carefully) controlled explosion. By throwing all that mass out the back, they’re able to accelerate forward. But what happens to that mass once it’s expelled? When the rocket is flying through the sky, the gases from its engines eventually slow and dissipate into the atmosphere. But most rockets (especially the big ones) don’t get to start in the sky. Instead, they’re launched from the earth’s surface, and the small part of the earth’s surface directly below them can take a heck of a beating. Hot and corrosive gases move at incredible speeds and often carry abrasive particles along with them. To call a rocket launch “thunderous” is often an understatement, because the sound waves generated are louder than a lightning strike and they last longer too.

Dealing with these extreme loading conditions isn’t your typical engineering task. It’s niche work. You’re not going to find a college course or textbook covering the basics of launch pad design. Instead, engineers who design these structures work from multiple directions. They use first principles to try and bracket the physics of a launch. They look at what’s worked and what hasn’t worked in the past. They use computational fluid dynamics, in other words, simulations, that help characterize the velocities and temperatures and sound pressures so that they can design structures to withstand them. But eventually, you have to use tests to see if your intuitions and estimations hold up in the real world.

It’s no surprise that one of the world leaders in successful launchpad engineering is NASA. And their historic Launchpad 39A in Cape Canaveral, Florida, is a perfect case study. This pad, and its sister 39B, were originally built for the enormous Saturn V rocket, the cornerstone of the Apollo Space Program that first sent astronauts to the moon. Just like the SpaceX facility in Boca Chica, 39A is situated on a coast with the water out to the east. Most rockets launch in that direction to take advantage of earth’s rotation. The earth itself is already rotating to the east, so it makes sense to go ahead and take advantage of that built-in momentum. But some rockets blow up before they make it into space. So it’s best to choose a launch location with a huge stretch of unpopulated area to the east, like an ocean!
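
Just to put a rough number on that head start, here’s a quick back-of-the-envelope sketch in Python. The radius, rotation period, and latitude below are my own assumed round figures for illustration, not values from the video.

    import math

    # Eastward surface speed from Earth's rotation at a given latitude.
    # All three inputs are assumed round figures for illustration.
    EARTH_RADIUS_M = 6_378_000   # equatorial radius, meters
    SIDEREAL_DAY_S = 86_164      # time for one full rotation, seconds
    LATITUDE_DEG = 28.5          # roughly the latitude of Launch Complex 39

    equator_speed = 2 * math.pi * EARTH_RADIUS_M / SIDEREAL_DAY_S
    eastward_boost = equator_speed * math.cos(math.radians(LATITUDE_DEG))

    print(f"Surface speed at the equator: {equator_speed:.0f} m/s")
    print(f"Free eastward speed at {LATITUDE_DEG} degrees latitude: {eastward_boost:.0f} m/s")

That works out to a few hundred meters per second, a small but welcome discount on the roughly 7,800 meters per second needed to reach low orbit.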

Launch Complex 39 was constructed on Merritt Island, a barrier island east of Orlando. NASA decided early on that water was the best way to move the first and second stages of the Saturn V rocket, so several miles of canals were dredged out. Over three quarters of a million cubic yards of sand and shells were produced by this dredging and used as fill for construction. Some of that material was used to build a special road called a crawlerway connecting the Vehicle Assembly Building to the launchpad. But a lot of it was used to construct a flat-topped pyramid 80 feet or 24 meters tall. This structure would ultimately become the launchpad. If you’re a fan of the channel, you might already be thinking what I’m thinking. Huge piles of material like this settle over time, and I have a video all about that you can check out after this! NASA engineers let this structure sit before the rest of the launchpad was built. It’s a good thing too, because it settled about 4 feet, well over a meter!

Why did NASA bother building such a massive hill when they could have simply built the pad on the existing ground? It was all about the flame deflector: a curved steel structure that would redirect the tremendous plume of rocket exhaust exiting the Saturn V during launch into a monumental concrete trench. This would keep the plume from damaging the sensitive support structures around the pad or undermining its foundation.

But why not put the trench into the existing ground rather than building a massive artificial hill? The answer is groundwater. Siting a launch pad so near to the coast comes with the challenge of being basically at sea level. If you’ve ever dug a hole at the beach, you know the exact problem the launchpad engineers were facing. Imagine trying to install expensive and delicate technology inside that hole. Of course we build structures below the water table all the time, and I have a video about that topic too. But with the cost and complexity of dewatering the subsurface, especially considering the extreme environment in which pumps and pipes would be required to operate, it just made more sense to build up. On top of that gigantic hill, thousands of tons of concrete and steel were installed to bear the loads of the launch support structures, the weight of the rocket itself while filled with thousands of pounds of fuel and oxidizer, and of course the dynamic forces during a full scale launch.

But that’s not all. Along with the enormous flame trench, and the associated flame diverters, which have gone through various upgrades throughout the years, NASA employed a water deluge system. This is a test of the current system on pad 39B. During a launch, huge volumes of water are released through sprayers to absorb the heat and acoustic energy of the blast, further reducing the damage it causes on the surrounding facility. Check out this incredible historical slow-motion footage of a Space Shuttle Launch. You’ll notice a copious flow of water both under the main engines on the right, and under the enormous solid rocket boosters on the left. In fact, a lot of the billowing white clouds you see during launches are from the deluge system as water’s rapidly boiled off by the extreme temperatures.

39A has seen a lot of launches over the years, more than 150. The first launch was the unmanned Apollo 4 in 1967, the first ever launch of the Saturn V. The bulk of the moon missions and space shuttles launched from 39A, and more recently SpaceX themselves have launched dozens of their Falcon 9 and several Falcon Heavy rockets from the historic pad! But when you compare it to the Stage Zero structure in Boca Chica, at least its configuration during the first orbital test, the differences are obvious: No flame diverter; no water deluge system; just the world’s most powerful rocket pointed square at a concrete slab on the ground. And, I think the results came as a surprise to no one who pays attention to these things. Elon himself tweeted in 2020 that leaving out the flame diverter could turn out to be a mistake.

That concrete, by the way, isn’t just the ready-mix stuff you buy off the shelf at the hardware store. I have a whole video about refractory concrete that’s used to withstand the incredible heat of furnaces, kilns, and rocket launches. This concrete has to be strong, erosion resistant, insulating, resistant to thermal shock, and able to stand up to saltwater exposure, since launchpads are usually near the coast. NASA used a product called Fondu Fyre at 39A and SpaceX uses Fondag. But even that fancy concrete was no match for those Raptor engines. Even during the static test fire, there was some damage to the concrete pad, and that was only at about half power. The orbital test and the full force of the rocket completely disintegrated the protective pad and cratered the underlying soil, spraying debris particles for miles.

In a call after the launch, Elon said that, although things looked bad on the surface, the damage to the launch pad could be repaired pretty quickly, noting that the outcome of the test was about what he expected. And even though many might have expected the extensive damage to the pad and surrounding area, it sure wasn’t mentioned in the Environmental Assessment required before SpaceX could get a license to perform the test, a document whose sole purpose was to lay out all the environmental impacts that would be associated with building the facility and launching rockets there. Nowhere in the nearly 200-page report is a discussion of the enormous debris field that resulted from the test, and yet there are actually quite a few laws against stuff like this.

For just one example, there are federal rules about filling in wetlands, of which there are many surrounding the launch facility. If you can’t do it with a bulldozer, you probably can’t do it with a rocket, and spraying significant volumes of soil and concrete into the surrounding area likely has regulators’ attention for that reason alone, not to mention the public safety aspects of the showering debris. The launch also caused a fire in the nearby state park. The FAA has effectively grounded Starship pending their mishap investigation, and several environmental groups have already sued the FAA over the fallout of the launchpad’s destruction.

Even if the FAA comes back with no required changes moving forward, SpaceX themselves aren’t planning to do that again, and they’ve already shared their plans for the future. An enormous, water-cooled steel plate design is already well under construction as of this writing. This design is, again, very different from what we see at other launch pads, basically an upside-down shower head directly below the vehicle. That’s the nature of SpaceX and why many find them so exciting. Unlike NASA, which spends years in planning and engineering, SpaceX uses rapid development cycles and full-scale tests to work toward their eventual goals. They push their hardware to the limit to learn as much as possible, and we get to follow along. They’re betting it will pay off to develop fast instead of carefully. But this wasn’t just a test of the hardware. It was also a test of federal regulations and the good graces of the people who live, work, play, and care about the Boca Chica area. And SpaceX definitely pushed those limits as well with their first orbital test. It’s yet to be seen what they’ll learn from that.

June 20, 2023 /Wesley Crump

How Flood Tunnels Work

June 06, 2023 by Wesley Crump


[Note that this article is a transcript of the video embedded above.]

This is Waterloo Park in downtown Austin, Texas, just a couple of blocks away from the state capitol building. It’s got walking trails, an amphitheater, Waller Creek runs right through the center, and it has this strange semicircular structure right on the water. And this is Lady Bird Lake, formerly Town Lake, about a mile away. Right where Waller Creek flows into the lake, there’s another strange structure. You saw the title of this video, so you know what I’m getting at here. It turns out these two peculiar projects are linked, not just by the creek that runs through downtown Austin, but also by a tunnel, a big tunnel. The Waller Creek Tunnel is about 26 feet (or 8 meters) in diameter and runs about 70 feet or 21 meters below downtown Austin. It’s not meant for cars or trains or bikes or buses or even high-voltage oil-filled cables, and it’s not even meant to carry fresh water or sewage. Its singular goal is to quickly get water out of this narrow downtown area during a flood. It’s designed with a peak flow rate of 8,500 cubic feet per second. That’s 240 cubic meters per second, or enough to fill an Olympic-sized swimming pool in about 10 seconds. And the way it works is pretty fascinating.
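
If you want to check those figures, the arithmetic is simple enough to script. Here’s a quick sketch in Python; the Olympic pool volume is the usual nominal 2,500 cubic meters, which I’m assuming here since the video doesn’t spell it out.

    # Quick check of the peak-flow numbers above.
    CUBIC_FEET_TO_CUBIC_METERS = 0.0283168

    peak_flow_cfs = 8_500                                  # design peak flow
    peak_flow_cms = peak_flow_cfs * CUBIC_FEET_TO_CUBIC_METERS
    olympic_pool_m3 = 2_500                                # assumed 50 m x 25 m x 2 m pool

    print(f"Peak flow: {peak_flow_cms:.0f} cubic meters per second")
    print(f"Time to fill the pool: {olympic_pool_m3 / peak_flow_cms:.1f} seconds")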

Most major cities use underground pipes as drains to get rid of stormwater runoff so it doesn’t flood streets and inundate populated areas. But, a storm drain only has so much capacity, and a lot of places across the world have taken the idea a few steps further in scale. As I always say, the only thing cooler than a huge tunnel is a huge tunnel that carries lots of water and protects us from floods. And I built a model flood tunnel from acrylic, so you can see how these structures work and learn just a few of the engineering challenges that come with a project like this. I’m Grady, and this is Practical Engineering. In today’s episode, we’re talking about flood tunnels.

Floods are natural occurrences on earth, and in fact, in many places they are beneficial to the environment by creating habitat and carrying nutrient-rich sediments into the floodplain, the area surrounding a creek or river that is most vulnerable to inundation. But floods are not beneficial to cities. They are among the most disruptive and expensive natural disasters worldwide. If a flood swells a creek or river in a scattered residential neighborhood, it’s not ideal for the few homeowners who are impacted, but if a flood strikes the dense urban core of a major city, the consequences can be catastrophic, with millions of dollars of damage and entire systems shut down. What that means in practice is that we’re often willing to spend millions of dollars on flood infrastructure to protect densely populated areas, opening the door to more creative solutions. And heavily developed downtown areas demand resourceful thinking because they lack the space for traditional protection projects and they often predate modern urban drainage practices.

We can’t change the amount of water that falls during a flood, so we’re forced to develop ways to manage that water once it’s on the ground. The main way we mitigate flooding is just to avoid development within the floodplain. Don’t build in the areas of land most at risk of inundation during heavy storms. Seems simple, but it’s not an option for most downtown areas that have been developed since well before the advent of modern flood risk management. Another way we manage flooding is storing the water in large reservoirs behind dams, allowing it to be released slowly over time. Again, not an option in downtown areas where creating a reservoir could mean demolishing swaths of expensive property. A third flood management strategy is bypassing - sending the water around developed areas where it will cause fewer impacts. Once again, not an option in downtown areas where there is no alternative path for the water to go… unless you start thinking in the third dimension. Tunnels allow us to break free from the confines of the earth’s surface and utilize subterranean space to allow floodwaters past developed areas to be released further downstream. Let me show you how this works.

This is my model downtown business district. It’s got buildings, landscaping, and a beautiful river running right through the center. I have a flow meter and valve to control how much water is moving through that beautiful river, and here on the downstream side is a little dam to create some depth. Take a look at many major cities that have rivers running through them, and you’ll often see a weir or dam just like this to maintain some control over the upstream level, keeping water deep enough for boats or in some cases, just for beauty like the RiverWalk in downtown San Antonio. I put some blue dye and mica powder in the water to make it easier to visualize the flow.

I also have a big clear pipe with an inlet upstream of the developed area and an outlet just below the dam. Looking at this model, it might seem like a flood bypass tunnel is as simple as slapping a big pipe to where you want the flood waters to go, but here’s the thing about floods: most of the time, they’re not happening. In fact, almost all of the time, there isn’t a flood. And if you’re the owner of a flood bypass tunnel, that means almost all of the time you’re responsible for a gigantic pipe full of water below your city that has no real job except to wait. Watch what happens when I turn down the flow rate in my model to something you might see on a typical day. If we just leave the city like it is, all the flow goes into the tunnel, draining the channel like a bathtub and leaving the water along the downtown corridor to stagnate.

Standing water creates an environmental hazard. Without motion, the water doesn’t mix, and so it loses dissolved oxygen that is needed for fish and bacteria that eat organic material. Without dissolved oxygen, rivers become dead zones with little aquatic life and full of smelly, rotting organic material. Stagnant water also creates a breeding ground for mosquitoes, and is just unpleasant to be around. It’s not something you want in an urban core. The answer to this issue is gates, a topic I have a whole other video about. I can show how this works in my model. If you equip your gigantic flood bypass tunnel with gates on the inlet, you can control how much water goes into the tunnel versus what continues in the river. I just used this piece of foam to close off most of the tunnel entrance. I still have some water moving through there, but most continues in the river, keeping it from getting stagnant. This is why, on many flood bypass tunnels, you’ll see interesting structures at the inlets. Here’s the one in Austin again, and here’s the one just down the road in San Antonio. In addition to screening for trash and debris (and keeping people out) the main purpose of these structures is to regulate how much water goes into the tunnel.

But, some creeks and rivers don’t just have low flows during dry times, they have no flows. Intermittent streams only flow at certain times of the year and ephemeral streams only flow after it rains. Take a look at the stream gage for Waller Creek in Austin. Except for the days with rain, the flow in the creek is essentially zero. But, if you’re worried about stagnant water and lack of habitat on the surface, you want more water running in the river. You definitely don’t want to divert any of the scarce flows available into the tunnel. But you can’t just close the tunnel off completely, because then the water inside the tunnel will stagnate instead. You might think, “So what? It’s down there below the ground where we don’t have to worry about it.” Well, as soon as the next big flood comes and you open the gates to your tunnel, you’re going to push a massive slug of disgusting stagnant water out the other end, creating an environmental hazard downstream. So, in addition to gates on the upstream end, some flood tunnels, including the one in Austin, are equipped with pumps to recirculate water back upstream. I put a little pump in the model to show how this works. The pump pulls water from the river downstream and delivers it back upstream of the tunnel entrance. This allows you to double dip on benefits during low flows: you keep water moving in the tunnel so it doesn’t stagnate and you actually increase the flow in the river, improving its quality.
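
Here’s a toy mass balance, in Python, of that low-flow operation. The flow values and the gate setting are made up purely to illustrate the plumbing described above; they aren’t data from the Waller Creek project.

    # Toy mass balance for low-flow operation of a bypass tunnel with an inlet
    # gate and a recirculation pump. All values are illustrative only.
    def low_flow_split(creek_inflow, gate_fraction, pumped_flow):
        """Return (flow staying in the surface channel, flow through the tunnel).

        The pump pulls water from downstream and returns it upstream of the
        tunnel inlet, so it adds to the flow arriving at the gate without
        changing the net discharge leaving the system.
        """
        arriving = creek_inflow + pumped_flow
        tunnel_flow = arriving * gate_fraction       # share the gate admits
        channel_flow = arriving - tunnel_flow        # the rest stays in the creek
        return channel_flow, tunnel_flow

    channel, tunnel = low_flow_split(creek_inflow=0.5, gate_fraction=0.2, pumped_flow=1.0)
    print(f"Surface channel: {channel:.2f} units, tunnel: {tunnel:.2f} units")

Even with a mostly closed gate, both paths keep moving, and the creek on the surface actually carries more water than it would on its own, which is the double dip described above.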

That’s 99 percent of managing a flood bypass tunnel: maintaining the infrastructure during normal flows. But of course, all that trouble is worth it the moment a big flood comes. Let’s turn the model all the way up and see how it performs. You can see the tunnel collecting flows, moving them downstream, and delivering them below the dam away from the developed area. The tunnel is adding capacity to the river, allowing a good proportion of the flood flows to completely bypass the downtown area. Of course, the river still rose during the flood, but it hasn’t overtopped the banks, so the city was protected. Let’s plug the tunnel and see what would happen without it. Turning up the model to full blast causes the stream to go over the bank and flood downtown. In this case, it’s not a huge difference, but even a few inches of floodwater backing up into buildings is enough to cause enormous damage and huge repair costs. Without any margin for increased flows, a big peak in rainfall can even wash buildings and cars away.

So, comparing flood levels between the two alternatives flowing at the same rate, it’s easy to see the benefits of a flood bypass tunnel. It resculpts the floodplain, lowering peak levels and pulling property and buildings out of the most vulnerable areas, making it possible to develop more densely in urban areas, not to mention creating habitat, improving water quality, and maintaining a constant flow in the river during dry times. Of course, a tunnel is an enormous project itself, and flood bypass tunnels are truly one of the most complicated and expensive ways to mitigate flood risks, but they’re also one of the only ways to manage flood risks in heavily populated areas.

I’ve been referencing projects in central Texas because that’s where I live, but despite their immense cost and complexity, flood bypass tunnels have been built across the world. One of the most famous is the Tokyo Metropolitan Area Outer Underground Discharge Channel that features this enormous cathedral of a subsurface tank. Unlike my model that works by gravity alone, the Tokyo tunnel needs huge pumps to get the water back out and into the Edogawa River. And some tunnels aren’t just for stormwater. Many older cities don’t have separated sewers for stormwater and wastewater, so everything flows to the treatment plants. That means when it rains, these plants see enormous influxes of water that must be treated before it can be released into rivers or the ocean. One of the largest civil engineering projects on earth has been in design and construction in Chicago since the 1970s and isn’t scheduled for completion until 2029. The Tunnel and Reservoir Plan (or TARP) includes four separate tunnel systems that combine with a number of storage reservoirs to keep Chicago’s sewers from overflowing into and polluting local waterways. And we keep finding value in tunnels where other projects wouldn’t be feasible. After record-breaking floods from Hurricane Harvey in 2017, Houston started looking into the viability of using tunnels to reduce the impacts from future downpours. A 2.5-million-dollar engineering study was finished in 2022 suggesting that a system of tunnels might be a feasible solution to remove tens of thousands of structures from the floodplain. If they do move forward with any of the eight tunnels evaluated, that will complete the superfecta of major metropolitan areas in Texas with large flood bypass tunnels, but represent just one more of the many cities across the world that have maximized the use of valuable land on earth’s surface by taking advantage of the space underneath.

June 06, 2023 /Wesley Crump

Merrimack Valley Gas Explosions: What Really Happened?

May 16, 2023 by Wesley Crump

[Note that this article is a transcript of the video embedded above.]

On September 13, 2018, a pipeline crew in the Merrimack Valley in Massachusetts was hard at work replacing an aging cast iron natural gas line with a new polyethylene pipe. Located just north of Boston, the original cast iron system was installed in the early 1900s and due for replacement. To maintain service during the project, the crew installed a small bypass line to deliver natural gas into the downstream pipe while it was cut and connected to the new plastic main line. By 4:00 pm, the new polyethylene main had been connected and the old cast iron pipe capped off. The last step of the job was to abandon the cast iron line. The valves on each end of the bypass were closed, the bypass line was cut, and the old cast iron pipe was completely isolated from the system. But it was immediately clear that something was wrong.

Within minutes of closing those valves, the pressure readings on the new natural gas line spiked. One of the fittings on the new line blew off into a worker's hand. And as they were trying to plug the leak, the crew heard emergency sirens in the distance. They looked up and saw plumes of smoke rising above the horizon. By the end of the day, over a hundred structures would be damaged by fire and explosions, several homes would be completely destroyed, 22 people (including three firefighters) would be injured, and one person would be dead in one of the worst natural gas disasters in American history. The NTSB did a detailed investigation of the event that lasted about a year. So let’s talk about what actually happened, and the ways this disaster changed pipeline engineering so that hopefully something like it never happens again. I’m Grady, and this is Practical Engineering. In today’s episode, we’re talking about the 2018 Merrimack Valley natural gas explosions.

As in many parts of the world, natural gas is an important source of energy for homes and businesses in the United States. It’s a fossil fuel composed mostly of methane gas extracted from geologic formations using drilled wells. The US has an enormous system of natural gas pipelines that essentially interconnect the entire lower 48 states. Very generally, gathering lines connect lots of individual wells to processing plants, transmission lines connect those plants to cities, and then the pipes spread back out again for distribution. Compressor stations and regulators control the pressure of the gas as needed throughout the system. Most cities in the US have distribution systems that can deliver natural gas directly to individual customers for heating, cooking, hot water, laundry, and more. It’s an energy system that is in many ways very similar to the power grid, but in many ways quite different, as we’ll see.

Just like a grid uses different voltages to balance the efficiency of transport with the complexity of the equipment, a natural gas network uses different pressures. In transmission lines, compressor stations boost the pressure to maximize flow within the pipes. That’s appropriate for individual pipelines where it’s worth the costs for higher pressure ratings and more frequent inspections, but it’s a bad idea for the walls of homes and businesses to contain pipes full of high-pressure explosive gas. So, where safety is critical, the pressure is lowered using regulators.

Just a quick note on units before we get too far. There are quite a few ways we talk about system pressures in natural gas lines. Low pressure systems often use inches or millimeters of water column as a measure of pressure. For example, a typical residential natural gas pressure is around 12 inches (or 300 millimeters) of water, basically the pressure at which you would have to blow into a vertical tube to get water to rise that distance: roughly half a psi or 30 millibar. You also sometimes see pressure units with a “g” at the end, like “psig.” That “g” stands for gauge, and it just means that the measurement excludes atmospheric pressure. Most pressure readings you encounter in life are “gauge” values that ignore the pressure from earth’s atmosphere, but natural gas engineers prefer to be specific, since it can make a big difference in low pressure systems.
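
Those conversions are easy to mix up, so here’s a small sketch in Python that walks through them. The conversion factors are standard; the 14.7 psi of atmosphere is only there to show the difference between gauge and absolute readings.

    # Converting a low-pressure gas reading between common units.
    PSI_PER_INCH_OF_WATER = 0.0361   # approximate, near room temperature
    MBAR_PER_PSI = 68.95
    ATMOSPHERE_PSI = 14.7            # standard atmospheric pressure

    service_inwc = 12                             # typical residential service pressure
    service_psig = service_inwc * PSI_PER_INCH_OF_WATER
    service_mbar = service_psig * MBAR_PER_PSI
    service_psia = service_psig + ATMOSPHERE_PSI  # absolute pressure includes the atmosphere

    print(f"{service_inwc} inches of water column = {service_psig:.2f} psig "
          f"= {service_mbar:.0f} millibar = {service_psia:.1f} psia")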

The natural gas main line being replaced in the Merrimack Valley had a nominal pressure of 75 psi or about 5 bar, although that pressure could vary depending on flows in the system. Just for comparison, that’s 173 feet or more than 50 meters of water column. But the distribution system, the network of underground pipes feeding individual homes and businesses, needed a consistent half a psi or 30 millibar, no matter how many people were using the system. The device that made this possible was a regulator. There are lots of different types of regulators used in natural gas systems, but the ones in the Merrimack Valley used pilot-operated devices, which are pretty ingenious. It’s basically a thermostat, but for pressure instead of temperature. The pilot is a small pressure regulating valve that supports the opening or closing of the larger primary valve. If the pilot senses an increase or decrease in pressure from the set point, it changes the pressure in the main valve diaphragm, causing it to open or close. This all works without any source of outside power, just using the pressure of the main gas line.

Columbia Gas’s Winthrop station was just a short distance south of where the tie-in work was being done on the day of the event. Inside, a pair of regulators in series was used to control the pressure in the distribution system. One of these regulators, known as the worker, was the primary regulator that maintained gas pressure. A second device, called the monitor, added a layer of redundancy to the system. The monitor regulator was normally open with a setpoint a little higher than the worker so it could kick in if the worker ever failed, and, at least in theory, make sure that the low-pressure system never got above its maximum operating level of about 14 inches of water column or 35 millibar. But, in this worker/monitor configuration, the pilots on the two regulators can’t use the downstream pressure right at the main valve. For one, the reading at the worker would be affected by any changes in the downstream monitor. And for two, measuring pressure right at the valve can be inaccurate because of flow turbulence generated by the valve itself. It would be kind of like putting your thermostat right in front of a register; it wouldn’t be getting an accurate reading. So, the pilots were connected to sensing lines that could monitor the pressure in the distribution system a little ways downstream of the regulator station.

The worker and monitor regulators were both functioning as designed on September 13, and yet, they allowed high pressure gas to flood the system, leading to a catastrophe. How could that happen? The NTSB’s report is pretty clear. Tying in a natural gas line while it’s still in service, called a hot tie-in, is a pretty tricky job that requires strict procedures. Here are the basic steps: First, a bypass line was installed across the upstream and downstream parts of the main line. Then balloons were inserted into the main to block gas from flowing into the section to be cut. Once the gas was purged from the central section, it was cut out and removed while the bypass line kept gas flowing from upstream to downstream. The line to be abandoned got a cap, and the new plastic tie-in was attached to the downstream main. Once the tie-in was complete, the crew switched the upstream gas service from the old cast iron line over to the new plastic line and deflated the last balloon so that gas could flow. The upstream cast iron line was still pressurized, since it was still connected to the in-service line through the bypass. But as soon as the crew closed the valves on the bypass, the old cast iron line was fully isolated, and the pressure inside the line started to drop, as planned.

What that crew didn’t know is that when that plastic main line was installed 2 years back, a critical error had been made. The main discharge line at the regulator station had been attached to the new polyethylene pipe, but the sensing lines had been left on the old cast iron main. It hadn’t been an issue for the previous 2 years, since both lines were being used together, but this tie-in job was the first of the entire project that would abandon part of the original piping. Within minutes of isolating the old cast iron pipe, its pressure began to drop. To a regulator, there’s no difference between a pressure drop from high demands on the gas system and a pressure drop from an abandoned line, and they respond the same way in both cases: open the valves. In a normal situation, the increased gas flow would result in higher pressure in the sensing lines, creating a feedback loop. But this was not a normal situation. It’s the equivalent of putting your thermostat in the freezer. Even as pressure in the distribution system rose, the pressure in the sensing lines continued to drop with the abandoned line. The regulators, not knowing any better, kept opening wider and wider, eventually flooding the distribution system with gas at pressures well above its maximum rating.
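
To make that feedback loop concrete, here’s a toy simulation in Python of a regulator whose sensing line is stuck on the abandoned pipe. Every number in it (pressures, gains, time steps) is invented for illustration; the point is only the direction of the feedback, not the actual values from the event.

    # Toy model of a pressure regulator reacting to a sensing line that is
    # connected to the wrong (abandoned) pipe. All numbers are illustrative.
    SETPOINT = 12.0       # desired distribution pressure, inches of water column
    MAX_RATED = 14.0      # maximum operating pressure of the low-pressure system

    valve_opening = 0.2            # fraction open, 0 to 1
    sensed_pressure = 12.0         # what the sensing line on the abandoned main reads
    actual_pressure = 12.0         # what the new plastic main actually experiences

    for minute in range(1, 7):
        sensed_pressure = max(0.0, sensed_pressure - 2.0)       # isolated pipe bleeds down
        error = SETPOINT - sensed_pressure                       # regulator thinks pressure is low
        valve_opening = min(1.0, valve_opening + 0.02 * error)   # so it opens further
        actual_pressure = 12.0 + 30.0 * (valve_opening - 0.2)    # real pressure climbs instead

        warning = "  <-- above max rating" if actual_pressure > MAX_RATED else ""
        print(f"minute {minute}: sensed {sensed_pressure:4.1f}, valve {valve_opening:.2f}, "
              f"actual {actual_pressure:4.1f} in. w.c.{warning}")

In a healthy system, the sensed and actual pressures are the same pipe, so opening the valve raises the sensed value and the loop settles back to the setpoint. With the sensing line on the abandoned main, that corrective path is gone, and the valve only ever opens wider.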

By the time things went sideways, the crew at the tie-in had taken most of their equipment out of the excavation. But as one worker was removing the last valve, it blew off into his hand as gas erupted from the hole. The crew heard firefighters racing throughout the neighborhood and saw the smoke from fires across the horizon. The overpressure event had started a chain of explosions, mostly from home appliances that weren’t designed for such enormous pressures. The emergency response to the fires and explosions strained the resources of local officials. Within minutes, the fire departments of Lawrence, Andover, and North Andover had deployed well over 200 firefighters to the scenes of multiple explosions and fires, and help from neighboring districts in Massachusetts, New Hampshire, and Maine would quickly follow. The Massachusetts Emergency Management Agency activated the statewide fire mobilization plan, which brought in over a dozen task forces in the state, 180 fire departments, and 140 law enforcement agencies. The electricity was shut off to the area to limit sources of ignition to help prevent further fires, and of course, natural gas service was shut off to just under 11,000 customers.

By the end of the day, one person was dead, 22 were injured, and over 50,000 people were evacuated from the area. And while they were allowed back into their homes after three days, many were uninhabitable. Even those lucky enough to escape immediate fire damage were faced with a lack of gas service as miles of pipelines and appliances had to be replaced. That process ended up taking months, leaving residents without stoves, hot water, and heaters in the chilly late fall in New England.

Several NTSB recommendations stemmed from the investigation. At the time of the disaster, gas companies were exempt from state rules that required the stamp of a licensed professional engineer on project designs. Less than three months after NTSB recommended the exemption be lifted, a bill was passed requiring a PE stamp on all designs for natural gas systems, providing the public with better assurance that competent and qualified engineers would be taking responsibility for these inherently dangerous projects. And actually, NTSB issued the same recommendation and sent letters to the governors of 31 states with PE license exemptions, but most of those states still don’t require a PE stamp on natural gas projects today. There were recommendations about emergency response as well, since this event put the area’s firefighters through a stress test beyond what they had ever experienced.

NTSB also addressed the lack of robustness of low pressure gas systems where the only protection against overpressurization is sensing lines on regulators. It’s easy to see in this disaster how a single action of isolating a gas line could get past the redundancy of having two regulators in series and quickly lead to an overpressure event. This situation of having multiple system components fail in the same way at the same time is called a common mode failure, and you obviously never want that to happen on critical and dangerous infrastructure like natural gas lines. Interestingly, and somewhat counterintuitively, one solution to this problem is to convert the low-pressure distribution system to one that uses high pressure, because in that kind of system, every customer has their own regulator, essentially eliminating the chance of a common mode failure and widespread overpressure event.

Most importantly, the NTSB did not mince words on who they found at fault for the disaster. They were clear that neither the training and qualification of the construction crew nor the condition of the equipment at the Winthrop Avenue regulator station was a factor in the event. Rather, they found that the probable cause was Columbia Gas of Massachusetts’ weak engineering management that did not adequately plan, review, sequence, and oversee the project.

To put it simply, they just forgot to include moving the sensing lines when they were designing the pipeline replacement project, and the error wasn’t caught during quality control or constructability reviews. NiSource, the parent company of Columbia Gas of Massachusetts, estimated that claims related to the disaster exceeded $1 billion, an incredible cost for weak engineering management. Ultimately, Columbia Gas pleaded guilty to violating federal pipeline safety laws and sold their distribution operations in the state to another utility. They also did a complete overhaul of their engineering program and quality control methods.

All those customers hooked up to natural gas lines didn’t have a say in how their gas company was managed; they didn’t have a choice but to trust that those lines were safe; and they probably didn’t even understand the possibility that those lines could overpressurize and create a dangerous and deadly condition in the place where they should have felt most safe: their own homes. The event underscored the crucial responsibility of engineers and (more importantly) the catastrophic results when engineering systems lack rigorous standards for public safety.

May 16, 2023 /Wesley Crump

Why Bridges Need Sensors (and other structures too)

May 02, 2023 by Wesley Crump

[Note that this article is a transcript of the video embedded above.]

Almost immediately after I started making videos about engineering, people started asking me to play video games on the channel. Apparently there are roughly a billion people who watch online gaming these days, and some of them watch silly engineering videos too! And there’s one game that I get recommended even more than Minecraft: Poly Bridge. So I finally broke down one evening after the kids went to bed and gave it a try. I’m really not much of a gamer, but I have to admit that I got a little addicted to this game (hashtag not-an-ad). I admit too that there really is a lot of engineering involved. You have different materials that give your structure different properties. The physics are RELATIVELY accurate. You get a budget to spend on each project. And your score is based on the efficiency of your design. But there’s one way this game is not like real structural engineering at all: if your bridge collapses, you get to try again!

In the real world, we can’t design a dam, a building, a transmission line pylon, or a bridge, spend all that money to build it, watch how it performs, tear it down, and build it back better if we’re not happy with the first iteration. Structures have to work perfectly on the first try. Of course we have structural design software that can simulate different scenarios, but it’s only as powerful as your inputs, which are often just educated guesses. We don’t know all the loads, all the soil conditions, or all the ways materials and connections will change over time from corrosion, weathering, damage, or loading conditions. There are always going to be differences between what we expect a structure to do and what actually happens when it gets built. Hopefully engineers use factors of safety to account for all that uncertainty, but you don’t have to dig too deep into the history books to find examples where an engineer neglected something that turned out to matter a lot, sometimes to the detriment of public safety. So what do you do?

We can’t build a project and then watch the cars and trucks drive over it with the pretty green and red colors on each structural member to see how they’re performing in real time… except you kind of can, with sensors. It turns out that plenty of types of infrastructure, especially those that have serious implications for public safety, are equipped with instruments to track their performance over time and even save lives by providing an early warning if something is going wrong.

I love sensors. To me, it’s like a superpower to be able to measure something about the world that you can’t detect with just your human senses. Plus I’m always looking for an opportunity to exercise my inalienable right to take measurements of stuff and make cool graphs of the data. So I have a bunch of demonstrations set up to show you how engineers employ these sensors to compare the predicted and actual performance of structures, not just for the sake of delightful data visualization, but sometimes even to save lives. I’m Grady, and this is Practical Engineering. In today’s episode, we’re talking about infrastructure instrumentation.

And what better place to start than with a big steel beam? In fact, this is the biggest steel beam that my local metals distributor would willingly load on top of my tiny car. One of the biggest questions in Poly Bridge and real-world engineering is this: How much stress is each structural member experiencing? Of course, this is something we can estimate relatively quickly. So let’s do the engineer thing and predict it first. Beam deflection calculations are structural engineering 101, so we can do some quick recreational math to predict how much this thing flexes under different amounts of weight. And we can use my weight as an example: about 180 pounds or 82 kilograms. The calculation is relatively simple. You can choose your preferred unit system and pause here if you want to go through them. Standing at the beam’s center, I should deflect it by about 2 thousandths of an inch or about 60 microns, around the diameter of the average human hair. In other words, I am a fly on the wall of this beam (or really a fly on the flange). I’m barely perceptible. In fact, it would take more than 100 of me to deflect this beam beyond what would normally be allowed in the structural code. And it would take a lot more than that to permanently bend it. But 2 thousandths of an inch isn’t nothing, so let’s check our math.
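
For the curious, the textbook formula for a simply supported beam with a point load at midspan is deflection = PL³/48EI. The little Python check below uses placeholder properties (the transcript doesn’t list the actual beam size or span), picked only because they land in the same neighborhood as the numbers quoted above.

    # Midspan deflection of a simply supported beam under a center point load:
    # delta = P * L^3 / (48 * E * I). Section and span are assumed placeholders.
    P = 180            # point load, pounds (about my weight)
    L = 96             # span between supports, inches (assumed 8 feet)
    E = 29_000_000     # elastic modulus of steel, psi
    I = 48             # moment of inertia, in^4 (roughly a W8x15 shape, assumed)

    delta_in = P * L**3 / (48 * E * I)
    print(f"Predicted midspan deflection: {delta_in:.4f} inches "
          f"({delta_in * 25_400:.0f} microns)")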

I put my dial indicator underneath the beam, and added some weight. I started with 45 pound or 20 kilogram plates. Each time I add one, you see the beam deflect downward just a tiny bit. After three plates, I added myself, bringing the total up to around 315 pounds or 143 kilos of weight. And actually, the deflection measured by the dial indicator came pretty close to the theoretical predictions made with the simple formula. Here they are on a graph, and there’s the point at my weight, with a deflection of around 2 thousandths of an inch or 60 microns, just like we said. But, we can’t always use dial indicators in the real world because they need a reference point, in this case, the floor. Up on the superstructure of a bridge, there’s no immovable reference point like that. So an alternative is to use the beam itself as a reference. That’s how a strain gauge works, and that’s the cylindrical device that I’m epoxying to the bottom flange of my beam.

A strain gauge works by measuring the tiny change in distance between two parts of the steel. You might know that when you apply a downward load to a beam, it creates internal stress. At the top, the beam feels compression, and at the bottom it feels tension. But it doesn’t just feel the stress, it also reacts to it by changing in shape. Let me show you what I mean. When I put one of the plates on top of the beam, we can see a change in the readout for the strain gauge. (Of course, I had the gauge set to the wrong unit, so let me overlay the proper one with the magic of video compositing.) For each plate I add to the beam, we see that the flange actually lengthens, in this case by about 3 microstrain. That’s probably not a unit of measure you’re familiar with, but it really just means the bottom of the beam increased in length by 0.0003%. When I add another weight, we make it 0.0003% longer again. Same with the third weight. And then when I stand on top of the whole stack, we get a total strain of about 0.002%, a completely imperceptible change in shape to the human eye, but the strain gauge picked it up no problem.
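
If you want to see why engineers love this unit, here’s a tiny Python conversion. One microstrain is one part per million of elongation, and multiplying by steel’s elastic modulus (Hooke’s law) turns a reading into an approximate bending stress in the flange. The pairing of readings to loads below just follows the numbers quoted in the video.

    # Turning strain-gauge readings into percent elongation and approximate stress.
    E_STEEL_PSI = 29_000_000   # elastic modulus of structural steel

    def describe(microstrain):
        strain = microstrain * 1e-6          # dimensionless
        percent = strain * 100               # as a percentage
        stress_psi = E_STEEL_PSI * strain    # Hooke's law, uniaxial approximation
        return percent, stress_psi

    for label, reading in [("one 45 lb plate", 3), ("three plates plus me", 20)]:
        percent, stress = describe(reading)
        print(f"{label}: {reading} microstrain = {percent:.4f}% elongation "
              f"~ {stress:.0f} psi of stress")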

Imagine how valuable it would be to an engineer to have many of these gauges attached to the myriad of structural members in a complicated bridge or building and be able to see how each one responds to changes in loading conditions in real time. You could quickly and easily check your design calculations to make sure the structure is behaving the way you expected. In my simple example in the studio, the gauge is measuring pretty much exactly what the predictions would show, but consider a structure far more complicated than a steel beam across two blocks, in other words, any other structure. What factors get neglected in that simple equation I showed earlier?

We didn’t consider the weight of the beam itself; I’m not actually a one-dimensional single point load, like the equation assumes, but rather my weight is spread out unevenly across the area of my sneakers; Is the length exactly what we entered into the equation? And, what about three-dimensional effects? For example, I put another strain gauge on the top flange of the beam. If you just follow the calculations, you would assume this flange would undergo compression, getting a tiny bit shorter with increased load. But really what happens in this flange depends entirely on how I shift my weight. I can make the strain go up or down simply by adjusting the way I stand on top, creating a twisting effect in the beam, something that would be much more challenging for an engineer to predict with simple calculations. Putting instruments on a structure not only helps validate the original design, but provides an easy way to identify if a member is overloaded. So it’s not unusual for critical structures to be equipped with instruments just like this one, with engineers regularly reviewing the data to make sure everything is working correctly.

Of course, we don’t only use steel in infrastructure projects, but lots of concrete too. And just like steel, concrete structures undergo strain when loaded. So I took a gauge and cast it into some concrete to measure the internal strain of the material. This is just a typical concrete beam mold and some ready-mix concrete from the hardware store. And even before we applied any load, the gauge could measure internal strain of the concrete from the temperature changes and chemical reactions of the curing process. Shrinkage during curing is one of the reasons that concrete cracks, after all. Luckily my beam stayed in one piece. Once the beam had cured and hardened for a few weeks, I broke it free from the mold. Compared to steel, concrete is a really stiff material, meaning it takes a lot of stress to cause any kind of measurable strain. So I got out my trusty hydraulic press for this one. I slowly started adding force from the jack, then letting the beam sit so the data logger could take a few readings from the strain gauge inside. After the fourth step, at just over 50 microstrain, the beam completely broke. Hopefully you can see how useful it might be to have an embedded sensor inside a concrete slab or beam, tracking strain over time, and especially when you know about the amount of strain that corresponds to the strength of the material. This is information that would be impossible to know without that sensor cast into the concrete, and there’s something almost magical about that. It’s like the civil engineering equivalent of x-ray vision.

One of the most amazing things about these sensors is their ability to measure tiny distances. 1 microstrain means one millionth of the original length, which, on the scale of most structures, is a practically impossible distance for a human to perceive. But in addition to tiny distances, they’re also excellent at measuring changes that happen over long periods of time. A perfect example is a crack in a concrete structure. You can look at grass, but you probably can’t perceive it growing, and you can watch paint, but you won’t perceive it drying. And you can watch a crack in a concrete slab, like this one in my garage, but you’ll probably never see it grow or shrink over time. So how do you know if it’s changing? You could use a crack meter like this one, and take readings manually over the course of a month or year or decade. But in many cases, that’s not a good use of any person’s time, especially when the crack is somewhere difficult or dangerous to access. So, just like strain gauges measure distance, you can also get crack meters that measure distance electronically. I put this one across the crack in my garage slab and recorded the changes over the course of a few months.

And, I know why this crack exists. It’s because the soil under the slab is expansive clay that shrinks and swells according to its moisture content. I thought it would be fun to use some soil moisture sensors to see if I could correlate the two, but my sensors weren’t quite sensitive enough. However, just looking at the rainfall in my city, you can get a decent idea about what might be driving changes in the width of this crack, which grew by about half a millimeter over the course of this demonstration. Cracking concrete isn’t always something to be concerned about, but if cracks increase in size over time, it can be a real issue. So, using sensors to track the movement of cracks over long durations can help engineers assess whether to take remedial measures.

And, there are a lot of parameters in engineering that change slowly over time. Dams are among the most dangerous civil structures because of what can happen when one fails. Because of that, they’re often equipped with all kinds of instruments as a way to monitor performance and make sure they are stable over the long term. One parameter I’ve talked about before is subsurface water pressure. When water seeps into the soil and rock below a dam, it can cause erosion that leads to sinkholes and voids, and it also causes uplift pressure that adds a destabilizing force to a dam. Instruments used to measure groundwater pressure are called piezometers. They often resemble a water well with a long casing and a screen at the bottom, but instead of taking water out, we just measure the depth to the water level. That’s made a lot easier with electronic sensors, like this one, but I don’t have a piezometer in my backyard. So, to show you how this works, I’m just hooking my pressure transducer to the tap so we can see how the city’s water pressure changes over time. I hooked this up to a laptop and let it run for about a day and a half, and here are the results.

The graph is a little messy because of the water use in my house throwing off the readings every so often, but you can see a clear trend. The pressure is lowest when water demands are high, especially during the evenings when people are watering lawns, cooking, and showering. In the middle of the night, the pumps fill up the water towers, increasing the local pressure in the pipes. This information isn’t that useful, except that it gives you a new perspective on real-world measurements. Recently I had a plumber at my house who took a pressure reading at the tap, which seemed like a totally normal thing at the time. But now, seeing that the pressure changes by around half a bar (or nearly 10 psi) over the course of a day, it seems kind of silly to just take a single measurement. And that’s the value of sensors, giving engineers more information to make important decisions and keep people safe after a structure is built.

By the way, the engineering of these instruments is pretty interesting on its own. Most of the sensors I’ve used in the demos were sent to us by our friends at Geokon, not as a sponsorship but just because they enjoy the channel and wanted to help out. These devices rely on a wire inside the case whose tension is related to the force or strain on the sensor. The readout device sends an electrical pulse that plucks the wire and then listens to the frequency that comes back. You can see the pluck and the return signal on my oscilloscope here. Just like plucking a guitar string, the wire inside the instrument will vibrate at a different frequency depending on the tension, and you can even hear the sound of the vibration if you get close enough. Of course civil engineers use lots of different kinds of sensors, but vibrating wire instruments are particularly useful in long-term applications because they are incredibly reliable and they don’t drift much over time. They’re also less vulnerable to interference and issues with long cables, since they work in the frequency domain. In fact, there are vibrating wire instruments that have been installed and functioning for decades with no issues or drift.
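
To give a sense of what the readout box is doing with that plucked frequency: for vibrating-wire gauges, the measured strain tracks the square of the wire’s resonant frequency. The sketch below, in Python, uses a generic gauge factor and made-up frequencies, not the calibration of any real Geokon instrument.

    # Converting a vibrating-wire frequency reading into a change in strain.
    GAUGE_FACTOR = 3.9e-3   # microstrain per Hz^2, generic illustrative value

    def strain_change(frequency_hz, baseline_hz):
        """Strain change (microstrain) relative to the baseline frequency."""
        return GAUGE_FACTOR * (frequency_hz**2 - baseline_hz**2)

    baseline = 800.0   # frequency logged when the gauge was installed, Hz
    reading = 812.0    # frequency after the structure picked up some load, Hz
    print(f"Strain change: {strain_change(reading, baseline):.1f} microstrain")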

And the demos I’ve shown in this video just scratch the surface. We’ve come up with creative ways to measure all kinds of things in civil engineering that don’t necessarily lend themselves to garage experiments, but are still critical in performance monitoring of structures. Borehole extensometers are used to measure settlement and heave at excavations, dams, and tunnels. Load cells measure the force in anchors to make sure they don’t lose tension over time. Inclinometers detect subtle shifts in embankments or slopes by measuring the angle of tilt in a borehole along its length. Engineers keep an eye on vibrations, temperature, pressure, tilt, flow rate, and more to make sure that structures are behaving like they were designed and to keep people safe from disaster.

May 02, 2023 /Wesley Crump

East Palestine Train Derailment Explained

April 18, 2023 by Wesley Crump

[Note that this article is a transcript of the video embedded above.]

On the evening of Friday, February 3, 2023, 38 of 149 cars of a Norfolk Southern Railway freight train derailed in East Palestine, Ohio. Five of the derailed cars were carrying vinyl chloride, a hazardous material that built up pressure in the resulting fires, eventually leading Norfolk Southern to vent and burn it in a bid to prevent an explosion. The ensuing fireball and cloud brought the normally unseen process of hazardous cargo transportation into a single chilling view, and the event became a lightning rod of controversy over rail industry regulations, federal involvement in chemical spills, and much more. I don’t know about you, but in the flurry of political headlines and finger pointing, I kind of lost the story of what actually happened. Freight trains, like the one that derailed in East Palestine, are fascinating feats of engineering, and the National Transportation Safety Board (or NTSB) and others have released preliminary reports that contain some really interesting details. I’m not the train kind of engineer, but I think I can help give some context and clarity to the story, now that some of the dust has settled. I’m Grady, and this is Practical Engineering. In today’s episode, we’re talking about the East Palestine Train Derailment.

Modern freight trains are integral to daily life for pretty much everybody. Look around you, and chances are nearly every human-made object you see has, either as bulk raw materials or even as finished goods, spent time on the high iron. One of the reasons trains are so integral to our lives is that there’s nothing else that comes even close to their efficiency in moving cargo over land at such a scale. Steel wheels on steel rails waste little energy to friction (especially compared to rubber tires on asphalt). Locomotives may look huge, but their engines are almost trivial compared to the enormous weight they move. If a car were so efficient, its engine could practically fit in your pocket. And yet, the trains those locomotives pull are not so much a vehicle as they are a moving location, larger and heavier than most buildings.

With this scale in mind, you can see why the crew in a locomotive can’t monitor the condition of all the cars behind them without some help. A rear-view mirror doesn’t do you much good when part of your vehicle is a half hour’s walk behind you. There was a time not too long ago when every freight train had a caboose. Part of their purpose was to have a crew at the end of the train who could help keep a lookout for problems with the equipment. Now modern railways have replaced that crew with wayside defect detectors. These are computerized systems that can monitor passing trains and transmit an automated message to the crew over the radio letting them know the condition of their train in real time. Defect detectors look for lots of issues that can lead to derailment or damage, including dragging equipment, over height or over width cars (a hazard if the train will be passing through tunnels or under bridges), and, important in this case, overheating axles and bearings. Depending on the railway operator and line, these detectors are often spaced every 10 or 20 miles (or 15 to 30 kilometers).

The freight train that derailed in East Palestine, designated 32N, passed several defect detectors along its way, and the NTSB collected the data from each one. The suspected wheel bearing responsible for the crash was located on the 23rd car of the train. At milepost 79.9, it registered a temperature of 38 degrees Fahrenheit above the ambient temperature. Ten miles later, the bearing’s recorded temperature was 103 degrees above ambient. That might seem kind of high, but it is still well below the threshold set by Norfolk Southern that would trigger the train to stop and inspect the bearing. Twenty miles later, the train passed another defect detector that recorded the bearing’s temperature at 253 degrees above ambient (greater than the 200-degree threshold), triggering an alarm instructing the crew to stop the train. But, it was too late.
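
To make those numbers a little more concrete, here’s a toy sketch of the kind of comparison a detector system makes as a train rolls by. The 200-degree stop threshold comes from the reporting above; the “rising fast” check is purely illustrative and not an actual railroad rule.

```python
# Toy sketch of evaluating bearing temperatures from successive defect detectors.
# Readings (milepost, degrees F above ambient) follow the values reported above;
# the 200-degree stop threshold is the one cited in the text. The trend warning
# is illustrative only, not a real railroad criterion.

STOP_THRESHOLD_F = 200.0  # degrees above ambient that triggers a stop-and-inspect alarm

readings = [(79.9, 38.0), (69.9, 103.0), (49.8, 253.0)]  # approximate mileposts

previous_temp = None
for milepost, temp in readings:
    if temp >= STOP_THRESHOLD_F:
        print(f"MP {milepost}: {temp:.0f} F over ambient -> STOP and inspect")
    elif previous_temp is not None and temp > 2 * previous_temp:
        print(f"MP {milepost}: {temp:.0f} F over ambient -> rising fast (illustrative warning)")
    else:
        print(f"MP {milepost}: {temp:.0f} F over ambient -> below alarm threshold")
    previous_temp = temp
```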

Freight trains are equipped with a fail-safe braking system powered by compressed air. There are two main connections between cars on a train: one is the coupler that mechanically joins each car, and the other is the air line that transmits braking control pressure. As long as this line is pressurized, the brakes are released, and the cars are free to move. But if one of these air lines is severed, like it would be during a derailment, the loss of pressure triggers the brakes to engage on every single car of the train. That’s what happened shortly after that defect detector recorded the over-temperature bearing. When the defect detector notified the crew of an issue, they immediately applied the brakes to slow the train. But before they could reach a controlled stop, the train’s emergency braking system activated.  A security camera nearby caught this footage showing significant sparks from what is presumably the failing car moments before the derailment. Understanding the severity of the situation, the crew immediately notified their dispatcher of the possible derailment. They applied handbrakes to the two railcars at the head of the train, uncoupled, and moved the two locomotives at the head end (and themselves) about a mile down the line away from the fire and damage, not knowing the events that would quickly follow.

A train’s “consist” is the collection of locomotives and cars that make it up. 32N’s consist included 2 locomotives at the head, a locomotive near the center of the train called distributed power, and 149 railcars. 38 of those 149 railcars had come off the tracks, forming a burning pile of steel and cargo. Of those 38 cars that derailed, 11 were carrying hazardous materials including isobutylene, benzene, and vinyl chloride. Local fire crews and emergency responders worked to put out the fires and address the immediate threats resulting from the derailed cars. But despite the firefighting efforts, five of the derailed cars transporting vinyl chloride continued to worry authorities due to rising temperatures. Norfolk Southern suspected that the chemical was undergoing a reaction that would continue to increase in temperature and pressure within the tanks, eventually leading to an uncontrolled explosion and making an already bad situation much worse.

The cars carrying vinyl chloride were DOT-105 tank cars. These are not just steel cylinders on wheels. The US Department of Transportation actually has very specific requirements for tank cars that carry hazardous materials. DOT-105 cars have puncture-resistant systems at either end to keep adjacent cars from punching a hole through the tank. They have a thermal protection system with insulation and an outer steel jacket to protect against fires. They are tested to pressures much higher than they would normally see, and they include pressure relief devices, or PRDs, that automatically open to keep the tank from reaching its bursting pressure. The PRDs on some of the vinyl chloride cars did operate to limit the pressure inside the tanks, but the temperature continued to increase.

As fires continued to burn, state and federal officials noted the temperature in one of the vinyl chloride cars was reaching a critical level. Rather than trust the PRDs to keep the tanks safe from bursting, they decided to perform a controlled release of the chemical to prevent an explosion. While they were still making the decision, the Ohio National Guard and the Federal Emergency Management Agency were running atmospheric models to estimate the extent of the resulting plume. Local emergency managers used these models to evacuate the area most likely to be affected by the release. On February 6, crews dug a large trench in the ground, vented the five vinyl chloride tanks into the trench and set the chemical on fire to burn it off. Despite being done on purpose to reduce the danger of the situation, the resulting fireball and pillar of smoke have become symbolic of the disaster itself.

You might be wondering, like I did, why the controlled burn was necessary if the tank cars were fitted with PRDs. While the NTSB’s full report hasn’t been released yet, they have released some details about their inspections of the vinyl chloride cars. Three of the cars were manufactured in the 1990s with aluminum hatches that cover the valves (as opposed to the steel hatches in the more recent standard). During the initial fires and “energetic pressure reliefs”, it seems that the aluminum may have melted and obstructed the relief valves, limiting their ability to relieve the building pressure.

You might also be wondering why a train passing through a populated area would be carrying so much vinyl chloride in the first place. Vinyl chloride might sound familiar to some of you as it is the ‘VC’ in PVC. This channel makes a lot of use of PVC demonstrations. It’s a material used in a lot of applications, so we produce it in vast quantities, and railways are usually how we move vast quantities of bulk materials and chemicals. But, vinyl chloride is a toxic, volatile, and flammable liquid, not something you want a big pool of near your city, so officials decided to burn it off. Flaring or burning chemicals is a pretty common practice for dealing with dangerous gases or liquids that can’t easily be stored. It’s essentially a lesser evil, a way to quickly convert a hazardous material to something less hazardous or at the very least, easier to dilute. 

While the byproducts of burning vinyl chloride are far from ideal, combusting it into the atmosphere was intended to be a way to quickly address the concern of it harming people on the ground or polluting a larger area. In fact, the US Environmental Protection Agency flew a specially-equipped airplane after the burnoff to measure the chemical constituents of the resulting plume. They detected only low levels of the chemicals of concern and concluded in their report that the controlled burn of the railcars was a success.

But “success” is a strong word for an event like this, and I might have chosen a different word. While there were no immediate fatalities resulting from the crash, the impacts are far-reaching. Chemical pollutants were not only released into the air, but also washed into local waterways during the firefighting efforts. Hazardous substances reached all the way to the Ohio River, and the Ohio Department of Natural Resources estimated that roughly 40,000 small fish and other aquatic life were killed in the local creek that flows away from East Palestine. Between the contamination of water and soil, it’s impossible to say what the long term impact on the local ecology will be.

As for the residents, both the state and federal EPAs have been heavily involved in all aspects of the cleanup, monitoring air quality and water samples from wells and the city’s fresh water supply. So far, they haven’t detected any pollutants in the air at levels of health concern since the derailment. As for the area’s groundwater, out of 126 wells tested, none have shown evidence of significant contamination. But as you’ve seen in some of my previous videos, it can take a while for contamination to move through groundwater.

The EPA has ordered Norfolk Southern to conduct all cleanup actions associated with the East Palestine train derailment. The company itself has pledged to “meet or exceed” regulatory requirements with regard to the cleanup. Cleaning up after such a disaster is no easy feat. From air, water, and soil testing to the disposal of huge volumes of contaminated water and soil, the whole thing is, quite literally, a mess. The cleanup is still underway as I’m releasing this video, but so far they’ve removed over 5,000 tons of contaminated soil and collected about 7 million gallons or 26 million liters of contaminated water from rain falling on the site and washing off trucks working on the cleanup. The response has been robust, but we know how these cleanups can go. The EPA’s list of almost 1,800 hazardous waste sites of highest priority has only around 450 sites that have been cleaned up enough to be taken off of the list!

The whole situation has also sparked policy discussions among several agencies. The NTSB is opening a special investigation into the safety culture and practices of Norfolk Southern. From congressional testimony, to public statements from the Department of Transportation, to political posturing from a huge variety of public officials, one thing seems clear to me: this disaster will have an impact on the way railroading is conducted in America for years to come.

The residents of East Palestine have a long road ahead of them. While all the preliminary testing so far paints a relatively safe and healthy picture of the town after the event, many residents have reported symptoms and health effects. Even if there really are no residual compounds present at dangerous levels, the anxiety and unease of living near a high-profile chemical spill is hard to escape. The economic impact of just the perception of contamination is also very real, and things like home values and local agricultural businesses have already taken a direct hit. I live really close to a freight line myself, something that is a unique joy for my two-year-old. But now, when I see those tanker cars roll by, I can’t stop myself from wondering what’s inside them and what might happen if they came off the rails in my neighborhood.

But I also recognize that much of the lifestyle I enjoy depends on those trains rolling by my house, and despite the tragedy of events like East Palestine, the DOT recognizes rail transportation to be the safest overland method of moving hazardous materials. Even with the bulk of hazardous materials being transported over rails, highway hazmat accidents result in more than 8 times as many fatalities! So, freight rail isn’t going away anytime soon. It’s the only feasible way to move the mountains of materials required for all of the industries in the US, and really, the world. And the fact that we rarely have to consider the incredible engineering details of tanker cars, defect detectors, and hazardous material cleanup operations is a testament to the hard work that goes into regulating and operating these lines.


But freight rail in the US is unlike any other industry. Only seven companies operate the Class I railroads that make up the vast majority of rail transportation in the country. The US rail market essentially consists of two duopolies: CSX and Norfolk Southern in the east and Union Pacific and BNSF in the west. That gives these companies enormous political power, as we’ve seen in recent news. So, we have to ask ourselves, are accidents like East Palestine, however rare they may be, just a part of doing business, or is there more that can be done? And I think the answer in this case is clear. I expect we’ll see some changes to safety regulations in the future to make sure something like this never happens again. And hopefully the next Practical Engineering video on railway engineering will get to tell a more positive story.

April 18, 2023 /Wesley Crump

Why Engineers Can't Control Rivers

April 04, 2023 by Wesley Crump

[Note that this article is a transcript of the video embedded above.]

This is the Old River Control Structure, a relatively innocuous complex of floodgates and levees off the Mississippi River in central Louisiana. It was built in the 1950s to solve a serious problem. Typically rivers only converge; tributaries combine and coalesce as they move downstream. But the Mississippi River is not a typical river. It actually has one place where it diverges into a second channel, a distributary, named the Atchafalaya. And in the early 1950s, more and more water from the Mississippi River was flowing not downstream to New Orleans in the main channel, but instead cutting over and into this alternate channel. 

The Army Corps of Engineers knew that if they didn’t act fast, a huge portion of America’s most significant river might change its path entirely. So they built the Old River Control Structure, which is basically a dam between the Mississippi and Atchafalaya Rivers with gates that control how much water flows into each channel on the way to the Gulf of Mexico. It was certainly an impressive feat, and now millions of people and billions of dollars of economic activity rely on the stability created by the project, the now-static nature of a Mississippi River that once meandered widely across the landscape. That’s why Dr. Jeff Masters called it America’s Achilles’ Heel in his excellent 3-part blog on the structure.

You see, the Atchafalaya River offers both a shorter and a steeper path to the gulf. That means, if the structure were to fail (and it nearly did during a flood in 1973), a major portion of the mighty Mississippi would be completely diverted, grinding freight traffic to a halt, robbing New Orleans and other populated areas of their water supply, and likely creating an economic crisis that would make the Suez Canal obstruction seem like a drop in the bucket. Mark Twain famously said that "ten thousand river commissions, with all the mines of the world at their back, cannot tame that lawless stream, cannot curb it or confine it, cannot say to it, Go here, or Go there, and make it obey." And engineers have spent the better part of the last 140 years trying to prove him wrong.

In my previous video on rivers, we talked about the natural processes that cause them to shift and meander over time. Now I want to show you some examples of where humans try to control mother nature’s rivers and why those attempts often fail or at least cause some unanticipated consequences. We’ve teamed up with Emriver, maker of these awesome stream tables, to show you how this works in real life. And we’re here on location at their headquarters. I’m Grady, and this is Practical Engineering. On today’s episode, we’re talking about the intersection between engineering and rivers.

One of the most disruptive things that humans do to rivers is build dams across them, creating reservoirs that can be kept empty in anticipation of a flood or be used to store water for irrigation and municipal supplies. But rivers don’t just move water. They move sediment as well, and just like an impoundment across a river stores water, it also becomes a reservoir for the silt, sand, and gravel that a river carries along. That’s pretty easy to see in this flume model of a dam. Fast flowing water can carry more sediment suspended in it than slow water. The flow of water rapidly slows as it enters the pool, allowing sediment to fall out of suspension. Over time, the sediment in the reservoir builds and builds. This causes some major issues. First, the reservoir loses capacity over time as it fills up with silt and sand, making it less useful. Next, water leaving on the other side of the dam, whether through a spillway or outlet works, is mostly sediment-free, giving it more capability to cause erosion to the channel downstream. But there’s a third impact, maybe more important than the other two, that happens well away from the reservoir itself. Can you guess what it is? 

In the previous video of this series, we talked about the framework that engineers and the scientists who study rivers (called fluvial geomorphologists) use to understand the relationship between the flow of water and sediment in rivers. This diagram, called Lane’s Balance, simplifies the behavior of rivers into four parameters: sediment volume, sediment size, channel flow, and channel slope. You can see when we reduced the volume of sediment in a stream, like we would by building a dam, Lane’s Balance tips out of equilibrium into an erosive condition. In fact, according to Lane’s Balance, any time we change any of these four factors, it has a consequence on the rest of the river as the other three factors adjust to bring the stream back into equilibrium through erosion or deposition of sediments. And we humans make a lot of changes to rivers. We want them to stay in one place to allow for transportation and avoid encroaching on property; we want them to drain efficiently so that we don’t get floods; we want them to be straight so that the land on either side has a clean border; we want to cross over them with embankments, utilities, electrical lines, and bridges; we want to use them for power and for water supply. Oh and rivers and streams also serve as critical habitat for wildlife that we both depend on and want to preserve. All those goals are important and worthwhile, but, as we’ll see (with the help of this awesome demonstration that can simulate river responses), they often come at a cost. And sometimes that cost is borne by someone or someplace much further upstream or downstream than from where the changes actually take place.

One of the classic examples of this is channel straightening. In cities, we often disentangle streams to get water out faster, reduce the impacts of floods, and force the curvy lines of natural rivers to be neater so that we can make better use of valuable space. I can show it in the stream table by cutting a straight line that bypasses the river’s natural meanders.

The impact of straightening a river is a reduction in a channel’s length, necessarily creating an increase in its slope. Water flows faster in a steeper channel, making it more erosive, so the practical result of straightening a channel is that it scours and cuts down over time. It’s easy to see the results in the model. This is compounded by the fact that cities have lots of impermeable surfaces that send greater volumes of runoff into streams and rivers. That’s why you often see channels covered in concrete in urban areas - to protect against the erosion brought on by faster flows. And this works in the short term. But, making channels straight, steep, and concrete-covered ruins the stream or river as a habitat for fish, amphibians, birds, mammals, and plants. It also has the potential to exacerbate flooding downstream, because instead of floodwaters being stored and released slowly from the floodplain, it all comes rushing through as a torrent at once. And it’s not just cities. Channels are straightened in rural areas, too, to reduce flooding impacts to crops and make fields more contiguous and easy to farm. But over the long term, channelizing streams reduces the influx of nutrients to the soils in the floodplain by reducing the frequency of a stream coming out of its banks, slowly making the farmland less productive.

Stream restoration is big business right now as we have begun to recognize the long-term impacts that straightening and deepening natural channels have and reap the consequences of the mistakes of yesteryear. In the US alone, communities and governments spend billions of dollars per year undoing the damage that channelization projects have caused. Even the most famous of the concrete channels, the Los Angeles River, is in the process of being restored to something more like its original state. The LA River Ecosystem Restoration project plans to improve 11 miles (18 km) of the well-known concrete behemoth featured in popular films like Grease and The Dark Knight Rises. The project will involve removing concrete structures to establish a soft-bottom channel, daylighting streams that currently run in underground culverts, terracing banks with native plants, and restoring the floodplain areas, giving the river space to overbank during floods.

Another impactful place is at road crossings. Bridges are often supported on intermediate piers or columns that extend up from a foundation in the river bed. Water flows faster around the obstruction created by these piers, making them susceptible to erosion and scour. Engineers have to estimate the magnitude of this scour to make sure the piers can handle it. You don’t have to scour the internet very hard to find examples where bridges met their demise because of the erosion that they brought on themselves. In fact, the majority of bridges that fail in the United States don’t collapse from structural problems or deterioration; they fail from scour and erosion of the river below.
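
For a rough sense of how engineers put a number on pier scour, one widely used relationship is the CSU equation from the FHWA's HEC-18 guidance. The sketch below is a simplified version with the correction factors set to 1.0 and made-up flow numbers; it's an illustration of the form of the calculation, not a design computation.

```python
# Simplified sketch of the CSU pier scour equation (FHWA HEC-18):
#   ys = 2.0 * y1 * K1 * K2 * K3 * (a / y1)**0.65 * Fr**0.43
# where y1 = approach flow depth, a = pier width, Fr = Froude number of the flow.
# Correction factors are set to 1.0 here, and the inputs are illustrative only.
import math

G = 9.81  # gravitational acceleration, m/s^2

def pier_scour_depth(flow_depth_m: float, velocity_ms: float, pier_width_m: float,
                     k1: float = 1.0, k2: float = 1.0, k3: float = 1.0) -> float:
    """Estimate local scour depth at a bridge pier (simplified CSU equation)."""
    froude = velocity_ms / math.sqrt(G * flow_depth_m)
    return 2.0 * flow_depth_m * k1 * k2 * k3 * (pier_width_m / flow_depth_m) ** 0.65 * froude ** 0.43

# Hypothetical flood flow: 3 m deep, moving at 2.5 m/s, around a 1.5 m wide pier.
print(f"Estimated scour depth: {pier_scour_depth(3.0, 2.5, 1.5):.1f} m")
```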

But, it’s not just piers that create erosion. Both bridges and embankments equipped with culverts often create a constriction in the channel as well. Bridge abutments encroach on the channel, reducing the area through which water can flow, especially during a flood, causing it to contract on the upstream side and expand on the downstream side. Changes in the velocity of water flow lead to changes in how much sediment it can carry. Often you’ll see impacts on both sides of an improperly designed bridge or culvert; Sediment accumulates on the upstream side, just like for a dam, and the area downstream is eroded and scoured. Modern roadway designs consider the impacts that bridges and culverts might have on a stream to avoid disrupting the equilibrium of the sediment balance and reduce the negative effects on habitat too. Usually that means bridges with wider spans so that the abutments don’t intrude into the channel and culverts that are larger and set further down into the stream bed.

Just like bridges or culvert road crossings, dams slow down the flow of water upstream, allowing sediment to fall out of suspension as we saw in the flume earlier in the video. The consequences include sediment accumulation in the reservoir and potential erosion in the downstream channel, but there’s one more consequence. All that silt, sand, and gravel that a dam robs from the river has a natural destination: the delta. When a river terminates in an ocean, sea, estuary, or lake, it normally deposits all that sediment. Let’s watch that process happen in the river table. River deltas are incredibly important landscape features because they enable agricultural production, provide habitat for essential species, and supply the sand that builds beaches, which act as a defensive buffer for coastal areas. Wind and waves create nearly constant erosion along the coastlines, and if that erosion is not balanced with a steady supply of sediment, beaches scour away, landscapes are claimed by the sea, habitat is degraded, and coastal areas have less protection against storms.

And hopefully you’re seeing now why it’s so difficult, and some might even say impossible, to control rivers. Because any change you make upsets the dynamic equilibrium between water and sediment. And even if you armor the areas subject to erosion and continually dredge out the areas subject to deposition, there’s always a bigger flood around the corner ready to unravel it all over again. So many human activities disrupt the natural equilibrium of streams and rivers, causing them to either erode or aggrade, or both, and often the impacts extend far upstream or downstream. It’s not just dams, bridges, and channel realignment projects either. We build levees and revetments, dredge channels deeper, mine gravel from banks, clear cut watersheds, and more. Historically we haven’t fully grasped the impacts those activities will have on the river in 10, 50, or 100 years.

In fact, the first iteration of the stream tables we’ve been filming was built by Emriver’s late founder, Steve Gough (goff), in the 1980s. At the time, he was working with the state of Missouri trying to teach miners, loggers, and farmers about the impacts they could have on rivers by removing sediment or straightening channels. These people who had observed the behavior of rivers their entire lives were understandably reluctant to accept new ideas. But, seeing a model that could convey the complicated processes and responses of rivers was often enough to convince those landowners to be better stewards of the environment. Huge thanks to Steve’s wife, Katherine, and the whole team here at Emriver who continue his incredible legacy of using physical models to shrink the enormous scale of river systems, and the lengthy time scales over which they respond to changes, down to something anyone can understand, helping people around the world learn more about the confluence of engineering and natural systems. Thank you for watching, and let me know what you think!

April 04, 2023 /Wesley Crump

Why Construction Projects Always Go Over Budget

March 21, 2023 by Wesley Crump

[Note that this article is a transcript of the video embedded above.]

Boston, Massachusetts is one of the oldest cities in America, founded in 1630, more than a few years before the advent of modern motor vehicles. In the 1980s, traffic in downtown Boston was nearly unbearable from the tangled streets laid out centuries ago, so city planners and state transportation officials came up with what they considered a grand plan. They would reroute the elevated highway and so-called “central artery” of Interstate 93 into a tunnel below downtown and extend Interstate 90 across the inner harbor to the airport in another tunnel. Construction started in 1991, and the project was given the nickname Big Dig because of the sheer volume of excavation required for the two tunnels. In terms of cost and complexity, the Big Dig was on the scale of the Panama Canal or Hoover Dam. It featured some of the most innovative construction methods of the time, and after 16 years of work, the project was finished on time and under budget…[Grady makes a skeptical face into the camera]

Actually, no. You might know this story already. Of course, the Big Dig did make a big dent in the traffic problem in Boston, but that came at a staggering price. The project was plagued with problems, design flaws, fraud, delays, and of course, cost overruns. When construction finished in 2007, the final price tag was around fifteen-billion dollars, about twice the original cost that was expected.

It’s a tale as old as civil engineering: A megaproject is sold to the public as a grand solution to a serious problem. Planning and design get underway, permits are issued, budgets are allocated (all of which takes a lot longer than we expect), construction starts, and then there are more problems! Work is delayed, expenses balloon, and when all the dust settles, it’s a lot less clear whether the project’s benefits were really worth the costs.

Not many jobs go quite as awry as the Big Dig, but it’s not just megaprojects that suffer from our inability to accurately anticipate the expense and complexity of construction. From tiny home renovations to the largest infrastructure projects in the world, it seems like we almost always underestimate the costs. And the consequences of missing the mark can be enormous. Well, I’ve been one of those engineers trying to come up with cost estimates for major infrastructure projects, and I’ve been one of those engineers who underestimated. So I have a few ideas about why we so consistently get this wrong.  I’m Grady, and this is Practical Engineering. In today’s episode, we’re trying to answer the question of why construction projects always seem to go over budget.

Major projects are often paid for with public funds, so it’s important (it’s vital) that the benefits we derive from them are worth the costs. And the only way we can judge if any project is worth starting is to have an accurate estimate of the costs first. And, of course, this is not just a problem with civil infrastructure but with all types of large projects paid for with public funds like space programs and defense projects. They have to be justified. Most projects have benefits, and you do get those benefits at the end, no matter the cost, but if they aren’t worth the costs, you’d rather not go through with the project at all. This is especially true for projects like streets and highways where not only costs get underestimated but the benefits are often overestimated too. Check out my friend Jason’s videos on the Not Just Bikes channel for more information about that.

One of the biggest issues we face with large projects is a chicken-and-egg problem: you don’t know how much they’ll cost until you go through the design, but you don’t want to go through a lengthy and expensive design phase and end up with a project you can’t afford. Budgeting and securing funds are usually slow processes, plus you need to know if the job is even worth doing in the first place, so you can’t just wait until the bids come in to find out how much a project is going to cost. You need to know sooner than that, which usually means you need your design professional to estimate the cost. For an infrastructure project, that’s the engineer, and engineers are notoriously not good at estimating costs. 

We don’t know which contractors are busy and which ones aren’t, what machinery they have, or whether or not they’ll bid on your project. We don’t know the sales reps at the concrete and asphalt plants or keep track of the prices of steel, aggregates, pumps, and piping. We don’t have a professional network full of subcontractors, material suppliers, and equipment rental companies. We didn’t study construction cost estimating in college, and most of us have never built anything in the field. And the people who have, those who are most qualified to do this job (the contractors that will actually bid on the project), usually aren’t allowed to participate in the cost estimating during design because it would spoil the fair and transparent procurement process. It would give one or more contractors a leg up on their competition. Because, (here’s a little secret), they aren’t always so good at estimating costs either. When those bids come in, there’s often a huge spread between them, meaning one of the most significant uncertainties of an entire project is sometimes simply which contractors will decide to bid the job.

Of course, there are some alternatives to the normal bidding process that many infrastructure projects use, but even those often require early cost estimates from people who are necessarily limited in their ability to develop cost estimates. In fact, the industry term for the cost estimate that comes from an engineer is the Opinion of Probable Construction Cost or OPCC. Take a look at that mouthful. Two qualifiers: opinion of probable construction cost. And still, agencies and municipalities and DOTs will write down that number on a folded piece of paper, slide it surreptitiously to their governing board, and whisper, “This is how much we need.” And the next day, the journalists who were at the meeting will publish that number in the news. And now, every future prediction of the project’s cost will be compared to that OPCC, no matter how early in the process it was developed. All this to say: estimating the cost of a construction project is hard work (especially early on in the project’s life cycle), it takes highly skilled and knowledgeable people to do well, and even then, it is a process absolutely chock full of uncertainties and risks that are really hard to distill down to a single dollar value. But construction cost estimates aren’t just imprecise. If that were true, you would expect us to overestimate as frequently as we come under. And we know that’s not the case. Why is it always an underestimate?

One hint is in the fact that you often just hear a single number for a project’s cost. What’s included in that 15 billion dollars for the Big Dig or the cost estimate you see for a major project in the news? The truth is that it’s different for every job, to the point where it’s almost a meaningless number without further context. Large infrastructure projects are essentially huge collaborations between public and private organizations that span years, and sometimes decades, between planning, design, permitting, and construction. Land acquisition, surveying, environmental permitting, legal services, engineering and design, and the administration to oversee that whole process all cost money (sometimes a lot of money), and that’s before construction even starts. So if you think that bid from a contractor is the project’s cost, you’re missing out on a lot. And if those pre-construction costs get included in one estimate (for example, the final tally of a project’s cost) when they weren’t included in an earlier estimate (like the engineer’s OPCC), of course it’s going to look like the project came in over budget. You’re not comparing apples to apples.

Another reason for underestimation is inflation. The main method we use to estimate how much something will cost is to look back at similar examples. We consult the Ghost of Construction Past to try and predict the future. It’s not unusual to look at the costs of projects 5 or 10 years old to try and guess the cost of a different project 5 or 10 years into the future. The problem with that is dollars or euros or yen or pounds sterling don’t buy the same amount of stuff in the future that they did in the past. The cost of anything is a moving target, and it’s usually moving up. That’s okay, you might think, just adjust the costs. There are even inflation calculators online, but they normally use the consumer price index. That’s a figure that tracks the cost of a basket of goods and services that a typical individual might buy. Prices vary widely across locations and types of goods, so the idea is that, if you monitor the dollar price of groceries, electricity, clothing, gasoline, et cetera, it can give you a broad measure of how the value of money changes over time for a normal consumer. But there’s not much concrete and earthwork in that basket of goods, which means the consumer price index is generally not a good measure of how construction costs change over time. 

There are a few price indices that track baskets full of labor hours, structural steel, lumber, and cement and even separate those baskets by major city. You have to pay to get access to the data, and they can help a wayward engineer adjust past construction costs to the present day. But they can’t help them predict how those prices will change in the future. And that’s important because large infrastructure projects take a long time to design, permit, and fund. So if there are 2 or 5 or 10 years between when an estimate was prepared and when it’s being used or even discussed, there’s a good chance that it’s an underestimate simply because the value of money itself slid out from underneath it. Cost estimates have an expiration date, a concept that gets overlooked, sometimes even by owners, and often by the media who report these numbers.
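
In its simplest form, that adjustment is just a ratio of index values. Here's a minimal sketch; the index numbers and project cost are made up, and in practice you'd pull values from a published series like ENR's Construction Cost Index.

```python
# Minimal sketch: escalate a past construction cost to present-day dollars
# using the ratio of a construction cost index. All values are hypothetical.

def escalate(cost_then: float, index_then: float, index_now: float) -> float:
    """Adjust a historical cost to the present using an index ratio."""
    return cost_then * (index_now / index_then)

# A project bid at $42 million when the index stood at 11,200,
# escalated to a year when the index has climbed to 13,400.
cost_2018 = 42_000_000
print(f"${escalate(cost_2018, 11_200, 13_400):,.0f} in today's dollars")
```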

That slow time scale for construction projects creates another way that costs go up. Designing a big project is just like navigating a big ship. If things start moving in the wrong direction, the time to fix it is already past. So, we don’t do it all in one fell swoop. You have to have a bunch of milestones where you stop and check the progress because going back to the drawing board is time-consuming and expensive. The issue with this process is that, the further a project matures, the more people get involved. Once you’ve established feasibility, the bosses and boss’ bosses start to weigh in with their advice. Once you have a preliminary design, it gets sent out to regulators and permitting agencies. Once you have some nice renderings, you hold public meetings and get citizens involved. And with all those cooks in the kitchen participating in the design process, does the project get simpler and more straightforward? Almost never.

There is no perfect project that makes everyone happy. So, you end up making compromises and adding features to appease all the new stakeholders. This may seem like a bunch of added red tape, but it really is a good thing in a lot of ways. There was a time when major infrastructure projects didn’t consider all the stakeholders or the environmental impacts, and, sure, the projects probably got done more quickly, efficiently, and at a lower cost (on the surface). But the reality is that those costs just got externalized to populations of people who had little say in the process and to the environment. I’m not saying we’re perfect now, but we’re definitely more thoughtful about the impacts projects have, and we pay the cost for those impacts more directly than we used to. But, often, those costs weren’t anticipated during the planning phase. They show up later in design when more people get involved, and that drives the total project cost upward.

And the thing about project maturity is that, even when you get to the end of design, the project still only exists as a set of drawings on pieces of paper. There are still so many unanswered questions, the biggest one being, “How do we build this?” Large projects are complex, putting them at the mercy of all kinds of problems that can crop up during construction: material shortages, shipping delays, workforce issues, bad weather, and more. Then there are the unexpected site conditions. An engineer can only reasonably foresee so much while coming up with a design on paper or in computer software. A good example is the soil or rock conditions at the site. During design, we drill boreholes, take samples, and do tests on those samples. That lets you characterize the soil or rock in one tiny spot. Of course, you can drill lots of holes, but those holes and those tests are expensive, so it’s a guessing game trying to balance the cost of site investigations with the consequences of mischaracterizing the underlying materials.

If the engineer guesses wrong, it can mean that excavation is more time-consuming because the contractor expected soil and got rock, or that backfill material has to be brought in from somewhere else because the stuff on site isn’t any good. In the worst cases, projects have to be redesigned when the conditions at the site turn out to be different from what was assumed in the design phase. And that’s just the dirt. While it might be great for science or history, imagine the cost of your project if you find historical artifacts or endangered species that you didn’t know were there. It’s a simple reality that there is a lot of uncertainty moving from design into construction, and there just aren’t that many unexpected conditions that make a construction project simpler and cheaper. Of course, opportunities for cost savings do crop up from time to time, but usually those savings get pocketed by the contractor, not passed along to the owner. That’s intentional: the contractor takes on a lot of the risk, both good and bad. But you can’t saddle a contractor with all the risk of something unexpected showing up, and nearly all large contracts have change orders during construction that drive up the cost of the project.

Of course, you can’t ignore the more nefarious ways that costs go up. Any industry that has a lot of money moving around has to contend with fraud, and you don’t have to look too hard through the news to find examples of greed. And there are also plenty of examples where politicians or officials misrepresented the expected cost of a project to avoid public scrutiny. But, in most cases, the reasons for going over budget are much less villainous and far more human: we are just too darned optimistic and short-sighted. But that’s not a good excuse, and I think there’s a lot of room for improvement here. So what do we do? How can we get the actual project cost closer to the budget?

Of course, we can bring construction costs down, but that’s a whole discussion in and of itself. Maybe we’ll table that topic for a future video. I can hear people screaming at the monitor to just add contingency to the budget. Anyone who’s ever guesstimated the cost of anything knows to tack on an extra 15% for caution. Of course, contingency is a tool in the toolbox, but even that has to be justified. We know that the final cost of a project can be more than twice the preliminary estimates, but if you tell a client you added 100% to your estimate for safety, most likely, you’re going to get fired. No one wants to believe there’s that much uncertainty, and also it might not be true. You can’t set aside a billion dollars for a project that costs a hundred thousand, give or take a few K. Sure, you’ll come in under budget, but you just tied up a huge pile of public resources for no good reason.

It turns out a lot of the research suggests spending more money during the planning and design phases. Of course the paper-pushing engineer is saying to spend more money on engineering. But really, construction is where the majority of project costs are, so the theory is that if you can reduce the risks and uncertainty going into construction by spending a little more time in the preconstruction phases, you’ll often earn more than that cost back in the long run. Take three to five percent of those dollars you would have spent on construction, and spend them on risk assessment and contingency planning, and see if it doesn’t pay off. Honestly, even most contractors would prefer this. I know their insurance carriers would.

But, all that considered, I think the biggest place for improvement in budgeting for large construction projects is simply how we communicate those budgets. A single dollar number is easy to understand and easy to compare to some future single dollar number, but really it’s meaningless without more context about when it was developed and what it includes. Because, what is a budget anyway? It’s a way to manage expectations. And if you’re early on in the planning or design phase of a big project, you should expect the unexpected. There’s uncertainty in big projects, and it should be okay to admit that to the public. It should be okay to say, we think it’s going to cost X, but there are still a lot of unknowns. And we think the project will still be worth doing, even if the cost climbs up to Y. And if it goes beyond that, we’re not just going to keep pressing on. We’re going to regroup and find a way to make the benefits worth the costs. There is a ton of room to improve how we develop cost estimates for projects, but there’s tons of room to improve how we communicate about them too.

March 21, 2023 /Wesley Crump

Why Rivers Move

March 07, 2023 by Wesley Crump

[Note that this article is a transcript of the video embedded above.]

This is a map of the Mississippi River drafted by legendary geologist Harold Fisk. It’s part of a fairly unassuming geological report that he wrote in 1944 for the Army Corps of Engineers, but the maps he produced are anything but run of the mill. They’re strikingly beautiful representations of not just the 1944 path of the Mississippi, but of all the historical paths it’s cut through the landscape over thousands of years. Although astonishing to see on a map, that meandering path represents a major challenge, not just for the people who live and work near the river, but the people around the world who depend on the goods and services that it supports. And that’s a lot of people. What the Native Americans called the “Father of Waters” is one of the most important freight corridors in the entire United States, and a huge proportion of the grain we export to other countries is transported on barges along the Mississippi. A change in the river’s course could bottleneck freight traffic, cripple the economy, and potentially even result in a global food crisis. In the 80 or so years since Harold Fisk’s report was written, we’ve spent billions of dollars on infrastructure just to coerce the mighty Mississippi to stay within its current channel. And that’s only a single case study in a battle that’s happening nonstop around the world between human activity on Earth and the dynamic nature of the rivers that form its landscape.

Even though the natural shifting and meandering of rivers and streams can seriously threaten our infrastructure, our economy, and even the environment, it’s not something that many people pay attention to or even know about at all! Because the timescale is slow and gradual, you don’t see it in the headlines until it becomes a serious problem. And the factors that affect how rivers move don’t really follow our intuitions. So, we’ve teamed up with Emriver, a company that makes physical river models called stream tables, to create a two-part series on the science and engineering behind why river channels shift and meander, and what tools engineers use to manage the process. We’re on location at their facility in Carbondale, Illinois, and I’m so excited to show you these models. I’m Grady, and this is Practical Engineering. On today’s episode, we’re talking about fluvial geomorphology, or the science behind the shape of rivers.

If someone asked you to engineer a channel for water to flow between two locations, what path would you choose? Probably a straight line between them, right? It’s the simplest and most cost effective choice. So why doesn’t mother nature choose it? This river table is full of media that represents earthen materials like silt, sand, and gravel. Each particle size has a different color to make it easier to differentiate. (And the online video compression algorithms love this stuff.) Water flows in at the top of the table and out at the bottom, so we can witness the actual physical processes that happen in real rivers. In the real world, this river system would be tens or hundreds of miles long, and what happens in this model over the course of a few hours might take hundreds or thousands of years as well. Let’s create that straight path in the earth connecting the inlet and outlet of the stream table, set the water flowing through it, and just see what happens. [Beat for time lapse]. Did the channel behave like you expected, or did you find the formation of the meandering path a little bit unintuitive? Hopefully by the end of this video, it will make perfect sense.

We learn about the process of erosion even when we’re really young. Wind and water carve at the earth, transporting the material from one location to another. In most places, erosion happens so slowly that you could never watch it in action, like growing grass or drying paint. But take a look at a river and you immediately see erosion underway. All you have to do is dip below the surface of the water and look. We usually think of rivers as highways for water, but they also transport another material in enormous quantities: sediment. All that silt, sand, gravel, and rock that erodes from the earth cascades and concentrates in rivers and streams, where it’s carried through valleys and eventually out to the lakes and oceans. Because of their power to move rock and soil, the shape of earth’s landscape, the geomorphology, is hugely influenced by river systems.

Maybe because the processes themselves happen so slowly, it took a long time for science to develop around how and why rivers change their paths through the landscape. But, in the 1950’s, a civil engineer and hydrologist by the name of Emory Lane quit his job at the US Bureau of Reclamation to serve as a professor at Colorado State University. Through his time at the Bureau, he worked in hydraulic laboratories studying the interactions between water, soil, and rock. By the time he accepted his appointment, he was well on the way to developing a unified theory of sediment transport. In 1955, he published his landmark equation that is still used today by engineers, geologists, and other professionals in the river sciences. And just like a lot of the most famous equations in history, it doesn’t look too complicated. It says that, in a stable stream, the flow of water multiplied by the slope of that stream is proportional to the flow of sediment in the stream multiplied by the size of that sediment. It seems simple - just four parameters - but, you know, it’s also a funny looking equation with zero context, so maybe you’re not feeling like an expert just yet. But, with the help of the stream table, I can show you the beauty of this relationship and how simple it makes predictions about how rivers will behave.
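
Written in the notation you’ll usually see for it (a proportionality, not a strict equality), with $Q_s$ the sediment discharge, $D_{50}$ the median sediment size, $Q_w$ the water discharge, and $S$ the channel slope, Lane’s relationship is:

$$ Q_s \cdot D_{50} \propto Q_w \cdot S $$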

Let’s just look at some examples. Say that a large area is hit by wildfire that burns all the trees and vegetation. Where before you had a lush and verdant landscape with plants, bushes, and trees to stabilize the soil, now it’s mostly just bare earth. When it rains, the water that runs off the burned area erodes the unprotected landscape, washing more sediment into the river than it would have before the fire. We can demonstrate this by simply adding media to the upstream part of the stream table. Can you predict how the river will respond? Let’s look back at Lane’s Equation. We’ve increased the flow of sediment in the river, but we haven’t changed any of the other variables. We didn’t change the size of the sediment, we didn’t change the flow in the river, and we didn’t change its slope. That means the two sides are imbalanced. Lane’s Equation no longer holds true, and the river is out of equilibrium. In other words, this is no longer a stable channel. In fact, we can convert Lane’s equation into a diagram to make this much simpler to understand.

On one side of this balance is the sediment load and the other side is the volume of flow in the stream. Add more flow and you can transport more sediment. Reduce the flow of water, and you reduce the flow of sediment accordingly. Pretty straightforward, right? But we still have to include the other two parameters, sediment size and stream slope. Now you can see how things get a little more complicated to keep in balance. Any disturbance to any of these four parameters causes the scale to get out of balance, affecting the stream’s equilibrium. When that happens, you have short term consequences, and long term ones too. For the wildfire example where we increased the sediment load in the stream, the top of the balance swings left toward deposition. There’s not enough water to keep the sediment in suspension, so it’s going to deposit within the bed of the river like we’re seeing here in the model. The flow in this example just can’t hold all the sediment we’re washing into it, so it accumulates in the bed and banks of the channel over time.

Here’s another example of a natural disruption to a river system that’s easier to demonstrate in the flume. Beavers build a small dam across the channel, creating a pond that slows down the flow. As the velocity of the stream reduces, heavier sediment settles out. That means that the water below the beaver dam only carries the fine particles of silt and clay downstream. You can see the lighter white particles being carried away while the darker, heavier ones get caught behind the dam. Let’s take a look at Lane’s Balance to predict what will happen to the stream. When we reduce the size of the sediment load in the river, it shifts the left side of the balance inward, and again we lose our equilibrium. But this time, instead of deposition, we can expect the stream to erode downstream, and downstream of a dam, human- or beaver-made, is a common place to find erosion occurring.

Let’s look at one more example of a natural disturbance to a river, changes in the flow. After all, rivers rarely carry a constant volume of water. Their flows change with the seasons and the weather with tremendous variability. That includes floods where heavy precipitation within a watershed converges toward valleys to swell the rivers and streams. We can simulate a flood in our model channel just by turning up the flow, and hopefully at least this parameter matches your intuitions. You can easily see the sediment being carried downstream by the increased flow of water. The banks of the river erode and the material is carried away by the flood. Looking at our diagram, it’s easy to see why. If we increase the flow of water, the scale is out of balance, leading to erosion of the channel.
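
If you prefer to see it in code, here’s a toy sketch that treats Lane’s proportionality as a simple ratio of transport capacity to sediment supply and classifies the three disturbances above. The baseline numbers are arbitrary; only the direction of each change matters.

```python
# Toy sketch of Lane's balance: compare transport capacity (Qw * S) against
# sediment supply (Qs * D50). A ratio above 1 leans toward erosion, below 1
# toward deposition. All numbers are arbitrary; only the relative change matters.

def lane_state(qw: float, slope: float, qs: float, d50: float) -> str:
    ratio = (qw * slope) / (qs * d50)
    if ratio > 1.05:
        return "erosion"
    if ratio < 0.95:
        return "deposition"
    return "near equilibrium"

baseline = dict(qw=1.0, slope=1.0, qs=1.0, d50=1.0)

scenarios = {
    "stable channel": baseline,
    "wildfire (more sediment washed in)": {**baseline, "qs": 1.5},
    "beaver dam (only fines pass downstream)": {**baseline, "qs": 0.7, "d50": 0.5},
    "flood (more water)": {**baseline, "qw": 1.8},
}

for name, params in scenarios.items():
    print(f"{name}: {lane_state(**params)}")
```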

These disturbances to a channel’s equilibrium seem relatively benign, and even beautiful, in the stream table, but they can represent a serious threat to property, infrastructure, and even the environment. Erosion can cause rivers to shift, washing away roads, underground utilities, and even destabilizing structures. I worked on a project once with a river running alongside a cemetery. Imagine the haunting headlines that a little erosion could create. On the other hand, deposition in a river channel can also create serious issues. Sediment can choke a navigation channel, reducing its capacity for freight traffic, and fill up reservoirs, reducing their storage volume. It can damage the habitat of native fish and other wildlife. And, deposition can reduce the ability of a river channel to carry water, increasing the impacts and inundation during a flood.

Of course, floods and many other disturbances to channels are usually short-term events, so the scale naturally balances itself once the river returns to normal conditions. But consider something longer term, like the beaver pond we discussed or a change in climate that means a river is receiving greater flows year over year. At first the balance swings toward erosion or deposition, but a central part of Lane’s theory is that natural forces will gradually adjust the factors to bring the river back into equilibrium. That’s mostly a result of the fourth parameter that we haven’t touched on yet: slope. Erosion and deposition have a natural feedback mechanism with the slope of a river. But how can a river change its slope? After all, the starting and ending points are relatively fixed. Slope is defined as the change in elevation of a line divided by its length (the rise over the run, if you remember from algebra class). A river really can’t change the rise (or fall) between its source and mouth, but it can change the run, its length.

Consider the original example I gave you at the beginning of the video. Its Lane balance was all out of whack. Too much water and too much slope created a situation where it eroded out significantly at first. But over the course of a few hours, a new pattern started to emerge. The river started to meander, to lengthen itself by curving back and forth, creating a sinuous path from start to finish. That lengthening led to a reduction in the river’s slope, naturally bringing the channel back closer to its equilibrium condition.
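
To put purely illustrative numbers on that: a channel that falls 5 meters over a 10-kilometer path has a slope of 0.0005. If meandering stretches that same path to 15 kilometers, the identical 5-meter drop produces a slope about a third smaller, without either endpoint moving:

$$ S_{\text{straight}} = \frac{5\ \text{m}}{10{,}000\ \text{m}} = 0.0005, \qquad S_{\text{meandering}} = \frac{5\ \text{m}}{15{,}000\ \text{m}} \approx 0.00033 $$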

But look closely and you’ll still see sediment moving. It erodes from the outside of bends where flow is most swift, called cut banks, and it deposits on the inside of bends where the flow is slower, called point bars. This creates natural meandering of rivers and geographic features like oxbow lakes where a river cuts itself off at a bend, leaving a curved depression behind. You also see natural aggradation where a river discharges into an ocean or lake: the sediment falls out of suspension and builds up into a delta. These phenomena happen for most rivers and streams, even those that are quote-unquote “balanced” according to Lane’s theory. In reality, there’s no such thing as a static state for a river. All the variables are changing over time. Floods, droughts, fires, debris jams, animal activity, and many other natural processes ping the balance this way and that, and we haven’t mentioned the human activities that affect rivers at all. That’s the topic of the next video in this series (by the way) so make sure you subscribe so you don’t miss it. In addition to the constant shifting of flow and sediment load, the natural processes that pull a river toward equilibrium are not very precise or predictable as we can easily see in the stream table. In practice, Lane’s scale is always in motion, bouncing between erosion and deposition states at every point along a river or stream. We call this a dynamic equilibrium because even when all the factors of sediment transport are in balance, rivers still shift and meander. In that way, Lane’s equation is more a way to characterize the magnitude of change than a binary measure of whether a stream channel is in motion or not.
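
If it helps to see it written out, Lane’s balance is usually expressed as a proportionality rather than a true equation, something along these lines:

```latex
% Lane's qualitative balance for a stream channel:
% sediment load and sediment size on one side,
% water discharge and channel slope on the other.
Q_s \, D_{50} \;\propto\; Q_w \, S
% Q_s    = sediment load (how much material the stream carries)
% D_{50} = median grain size of that sediment
% Q_w    = water discharge (flow)
% S      = channel slope
```

Tip either side of that relationship up or down and the channel erodes or deposits until the four factors settle back into rough balance.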

And of course, it’s a simplification. I’ve been calling it an equation, but there’s no equal sign to be found. It’s really just a qualitative relationship that can’t tell you exactly how fast a river will meander or to what extent. There are also factors that it doesn’t consider, like vegetation or pulsed flow. For example, imagine a scenario where the climate shifts toward more extreme periods of droughts and floods. Lane’s relationship looks at averages. So, if one river has a relatively constant flow while an identical river has pulses of high and low flows, as long as their average flow is the same, Lane’s relationship would assume they would behave identically. Well, we decided to try it out. See if you can spot the difference.

Even if Lane would predict similar behavior between the two models, it’s easy to see that the pulsed flow model experiences much more erosion and faster movements of the channel. Clearly, we still have progress to make in our understanding of how rivers and streams behave over time under the wide variety of conditions that rivers face. From the tiniest urban drainage ditches to the mighty Mississippi, rivers and streams have enormous consequences for humans. And, like pretty much everything in life, rivers are complicated. Even when all those conditions are perfectly balanced, they never stop moving and changing.

March 07, 2023 /Wesley Crump

The Only State Capital Where You Can’t Drink the Water

February 21, 2023 by Wesley Crump

[Note that this article is a transcript of the video embedded above.]

As a blast of bitter Arctic air poured into North America around Christmastime in December 2022, weather conditions impacted nearly every aspect of life, from travel to electricity to just trying to get out the front door. But the frigid temperatures kicked one American city while it was already down. For many people, the idea of not being able to drink the water in their own house is unimaginable. But for the residents of Jackson, Mississippi, it was just another day…or twelve. Last August, a flood took out the aging water system, leaving nearly everyone in the City without water for more than a week. Only a few months later, that arctic weather spell broke so many pipes in the city that residents again lost access to water, some for nearly two weeks, continuing one of the worst water crises in American history. It’s a stark reminder of the massive undertaking involved in providing clean water, day after day, to an entire city of people at once and the enormous stakes of getting it wrong. What does it really take to run a public water supply, what happened in Jackson, and what does its future hold? I’m Grady, and this is Practical Engineering. In today’s episode, we’re talking about the Jackson water crisis.

Jackson is not only the capital of Mississippi but also its largest city. Its water utility has around 70,000 connections to homes and businesses and services about 170,000 people through two surface water treatment plants. The OB Curtis plant is the larger of the two, with a rated capacity of 50 million gallons per day or 190,000 cubic meters per day. An intake structure collects raw water from a nearby reservoir. That’s the term for untreated surface water. From there, two large-diameter pipelines carry the water to the headworks of the plant. At the headworks, pumps send the raw water through various treatment processes to clean it up and, ideally, make it safe for drinking. Closer to downtown, the JH Fewell Plant has a rated capacity of about half the OB Curtis plant. It draws raw water directly from the Pearl River downstream of the reservoir. The City also has a few groundwater wells to supplement the surface water system.

In normal conditions, clean and drinkable water flows from both plants into a network of pipes and elevated tanks that deliver that water to each building in the City of Jackson. This is a public water supply, something that might sound kind of obvious, but that term has a specific meaning. Because not just anyone can hook a bunch of pipes up to customers and sell them water. Water is both an immediate necessity for life on this blue earth and a powerful agent of disease transmission, so we have rules that regulate those who would collect it and deliver it to others. Specifically, we have the Safe Drinking Water Act, and each state also has its own rules that govern contaminants, monitoring, and public notifications. The goal of drinking water regulation is to make sure that no matter where you are in the United States, you can open the tap and use that water to cook, bathe, or drink, and not have to worry about getting sick. This might sound like a relatively ordinary endeavor, but designing, building, operating, and maintaining a public water system - even for a relatively small city like Jackson - is a monumental enterprise that requires a lot of money, a lot of people, a lot of oversight, and a lot of infrastructure.

Unfortunately, Jackson has gone without many of those necessities for decades, creating issues in the City that eventually led the federal government, specifically the Environmental Protection Agency or EPA, to conduct an inspection of the system in 2020. What they saw shocked them. The City’s water system was in such a state of disrepair and mismanagement that the EPA immediately issued an emergency order. Regulation of a public water system usually falls to the state, in this case the Mississippi State Department of Health, but the federal government can step in if there is an imminent and substantial threat to public health, and in this case, the EPA decided there was. The emergency order required the City to create a plan to fix all the broken equipment and bring the system back into working order, but it was too little, too late. From that time in early 2020 until nearly the present day, Jackson’s water system faced a seemingly unending cascade of challenges, bringing to light just how bad things had gotten.

In February of 2021, the same winter storm that nearly took out the Texas power grid hit the City of Jackson too. The unseasonably cold weather affected water mains below the city streets, causing them to break and leak. So many water mains broke that the pumps at the water treatment plants couldn’t keep up. As a result, pressure in the system dropped, in some places so low that customers had no water pressure at all. In other words, they had no water. Like it had done so many times before, the City issued a system-wide boil water notice. You may have heard this term before but not quite understood the implications. Water systems are pressurized well beyond what’s needed to move the water through the pipes. That’s done for a reason. High pressure keeps unwanted contaminants out. If the system loses pressure, pollutants can be drawn into the pipes through cracks, breaks, or joints, contaminating the water inside. So if a main breaks or a pump stops working or a treatment plant has to shut down, the operator sends out a boil water notice to affected customers, letting them know that their water might be contaminated, and that it should be boiled to kill any potential pathogens before using it for drinking or cooking. This notice in February 2021 lasted for an entire month. Imagine not being able to trust the water from your tap for that long. But even though that particular notice was eventually lifted, residents of Jackson have lived under a practically constant recommendation to boil the water that comes out of the tap, and that’s if they even have any water to boil in the first place.

Only a few months later, in April of 2021, an electrical fire in the OB Curtis plant took out all five of the high-service pumps, the ones that deliver fresh water into the distribution system. Again the pipes lost pressure, and again a boil water notice was issued, this one lasting for four days. It would be another year before the electrical panel for the pumps would be replaced, crippling the treatment plant’s ability to pressurize the water system. That November, chemical feed issues forced operators to shut down the OB Curtis plant, once again causing the system pressure to drop. That boil water notice lasted another four days. In April 2022, water hammer broke a pipe in the OB Curtis plant, again requiring a shutdown. In June, filters at the plant failed, requiring yet another shutdown and yet another system-wide boil water notice (this one for two weeks while the City worked to fix the problem).

In July, the EPA issued a report summarizing the litany of problems faced by the Jackson water distribution system, and the list would be impressive if it didn’t represent such an injustice to the people the system is meant to serve. Water mains were constantly breaking. The City had an annual rate of 55 breaks per 100 miles of pipe when the industry benchmark is 15. There was no monitoring of pressure, meaning the City had no way to identify or address problem areas in the system. There was no map of the system pipes or valves, making it difficult or impossible to implement repairs. Water towers weren’t getting enough flow, causing the water inside to stagnate. Monitoring equipment in the treatment plants wasn’t working, and if it was working, it wasn’t calibrated. And if it was calibrated, there wasn’t enough staff to keep an eye on it.

For an extended period, the utility had no manager, and it almost never had enough operators to staff the plants. A treatment plant operator has to be licensed and know a lot about chemistry and hydraulics and the various equipment used to clean water. It’s usually a great career because it doesn’t require a college degree, the work is rewarding, and the hours are consistent, but that wasn’t the case in Jackson. The City couldn’t pay enough to keep the three shifts at each treatment plant staffed 7 days per week, so the operators that were there were working lots of overtime, and occasionally not being paid for it.

Over the course of 4 years, the City had issued over 750 boil water notices because of the numerous losses of pressure. Water meters throughout the City were broken or misconfigured, meaning people weren’t being billed correctly or billed at all. In fact, the City estimated its non-revenue water, that’s the water that isn’t being paid for because of leaks or bad metering, to be 50 percent! Half of all the water treated and delivered into the distribution system was just being lost; it wasn’t generating revenue that could be used to maintain infrastructure and pay the staff. On top of that, many of the large institutions that should be the utility’s biggest customers, including local schools and hospitals, had opted to drill their own wells rather than rely on the failing city system, cutting off even more revenue. It’s not hard to imagine why the system was having trouble keeping up.

Throughout the entire year after the federal inspection, the City had been in constant negotiations with the state and the EPA trying to plot a path forward to bringing their ailing water system back into compliance. Biweekly meetings that included representatives from nearly every side of the issue were held to keep track of progress, but the progress was slow. In August of 2022, the OB Curtis plant switched the chemicals used for corrosion control, resulting in a boil water notice that lasted nearly a month. As the city worked to get the treatment process under control, the mayor said in a press conference, “Even when we come out of this boil water notice, I want to be clear that we are still in a state of emergency.” Then came the flood.

In late August, a deluge of heavy rainfall swept across Mississippi, dropping enormous volumes of precipitation across the state. The Ross Barnett Reservoir was already full of water, meaning all the inflows had to be released through the spillway. That swelled the Pearl River downstream, flooding streets and homes throughout the city. You might think a flood would be a good thing for a water system; after all, a flood is just a lot of water. But the problem with flooding is sediment. Heavy runoff carries soil, making the water muddy and much more difficult to treat. Several raw water pumps at OB Curtis quickly failed as they tried to deliver the sediment-laden water to the plant. And the fraction of water that did make it all the way to the plant was still a muddy mess. Any operator will tell you that slow changes to treatment processes and chemical feeds are best. When there’s a sudden shift in water quality that requires rapid adjustments to the processes, problems are bound to occur, and they did. Muddy water clogged filters and upset the various other processes to the point where the plant was utterly unable to treat it, resulting in a complete collapse of the system. The downstream surface water treatment plant suffered a similar fate. Nearly everyone in Jackson lost the ability to use water for basic safety and hygiene. Schools, restaurants, and businesses were closed. From washing hands to fighting fires to just having water to drink, the City was incapacitated.

The flooding threw the water crisis into the national spotlight. The Mayor, the governor, and eventually President Biden all issued disaster declarations, freeing up emergency resources. Federal officials and emergency workers flooded into Jackson to deliver bottled water to the residents and help restore the water supply. A team of engineers and drinking water experts worked to tackle miscellaneous projects at both treatment plants and most importantly, staff the facilities. By September 6, water pressure had been restored to customers, and a week later, the boil water notice was lifted. But it wasn’t the end of the emergency. The water system was barely functioning, and the relief team couldn’t stay in Jackson indefinitely. An emergency contract was issued for an outside company to take over operations of the water system for a year, but it was clear to everyone involved that an enormous capital improvement program and a huge influx of cash was the only way Jackson could pull itself out of the crisis. The City continued negotiating with the state, the EPA, and the Department of Justice, and in November of 2022, they appointed a third-party manager to take over control of the water system.

Ted Henifin, a licensed engineer and former public works director from Virginia, had already been involved in the emergency work, and was now tasked with 13 priority projects to bring Jackson’s system back, if not into perfect working order, at least into compliance with the drinking water laws. He was given broad power and freedom from normal contracting rules to hire, purchase, and contract as needed to get the work done, and he said in November that he planned to wrap up his priority projects within a year. And then the cold came.

The Christmas polar vortex event created a repeat of February 2021, cracking water mains with freezing weather, and creating so many leaks that the water treatment plants just couldn’t keep up. Jackson’s reliance on surface water creates a challenge because, unlike groundwater, the surface water supply is affected by ambient temperature. Freezing weather means chilly water in the reservoir, and when you’re sending very cold water through the underground mains, it causes them to shrink, and in some cases, to break. The cold weather also affected the chemistry of the raw water entering the plant, causing issues with the treatment processes and forcing the OB Curtis Plant to shut down. Twenty-two schools had little to no water pressure and had to move to virtual learning as they returned from the winter break. For many, it was the last straw, and at least one restaurant decided to close its doors for good after having no water pressure for more than 40 days over the past two years, and many more days than that under boil water notices. The Christmas outage left customers without water for two weeks in the latest, but probably not the last, event in the saga of underinvestment, misfortune, and utter failure to deliver a basic necessity to the residents of Jackson.
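
To put some very rough numbers on that shrinking, here’s a minimal sketch. The material properties are assumed handbook values for cast iron pipe, not data from Jackson’s actual system, so treat it as an illustration of the mechanism rather than an analysis of what happened:

```python
# Why a cold snap is hard on old, buried water mains: a rough illustration.
# ALPHA and E are assumed handbook values for cast iron, not Jackson-specific data.
ALPHA = 11e-6   # coefficient of thermal expansion, 1/degC (assumed)
E = 100e9       # elastic modulus, Pa (assumed)

pipe_length_m = 100.0   # a hypothetical 100-meter run of main
delta_T = 15.0          # a hypothetical drop in water temperature, degC

# If the pipe were free to move, it would shorten by about:
free_contraction_mm = ALPHA * pipe_length_m * delta_T * 1000
print(f"Unrestrained contraction: {free_contraction_mm:.1f} mm")   # ~16.5 mm

# But a buried pipe is restrained by the surrounding soil, so instead of
# shortening, it picks up tensile stress on the order of:
thermal_stress_MPa = E * ALPHA * delta_T / 1e6
print(f"Restrained thermal stress: {thermal_stress_MPa:.1f} MPa")  # ~16.5 MPa
```

That extra tension is nothing for new pipe, but for a corroded, decades-old main that’s already near its limit, it can be the push that finally cracks it.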

I talk about the engineering behind catastrophes like this, but in so many cases, it’s impossible to ignore the larger issues driving the story. Like all infrastructure, fresh water systems require investment. In an ideal world, those resources come from the water rates, the money that people pay for the water they receive, so the system supports itself. But, when those rates aren’t enough, something has to be done quickly and decisively, because chronic underinvestment creates a vicious cycle. The infrastructure fails, the billing system doesn’t work, customers leave, staff positions can’t be filled, and things just spiral downhill. So infrastructure funding is often supplemented by debt, by grants, by state and federal investment programs, all resources that require more than good management; they often require politicians. The people in charge of the water system in Jackson have been trying to sound the alarms for years. The Mayor even said the city was in a state of emergency the week before the flood hit and completely collapsed the water system. But it took that flood to convince politicians to free up resources. It’s also impossible to ignore the history of Jackson as a part of this story, including a legacy of racism that isolated and separated minorities, gutted the community tax base, and ultimately led to the failing infrastructure we see now. If you want to learn more about that history, there are far more qualified voices than mine to tell it, so I’ll leave some links to the best sources I found below.


In January, the mayor of Jackson announced that they had secured $800 million in federal funding to tackle the city’s issues with water and sewer infrastructure. Those funds will take years to allocate and spend, but it’s another step forward for a city whose water system has fallen so far behind. Clean water is a human right, and the fact that the citizens of a major city in the US don’t have access to it is more than a shame. I’m sharing the story because I think it’s important for all of us to see the consequences of mismanagement, disregard, and discrimination and hopefully learn from those mistakes so that we can be better managers, leaders, or just advocates for the infrastructure that we rely on every day.

February 21, 2023 /Wesley Crump

Why Some Roadways Are Made of Styrofoam

February 07, 2023 by Wesley Crump

[Note that this article is a transcript of the video embedded above.]

If you’ve ever driven or ridden in an automobile, there’s a near 100% chance you’ve hit a bump in the road as you transition onto or off of a bridge. In fact, some studies estimate that it happens on a quarter of all bridges in the US! It’s dangerous to drivers and expensive to fix, but the reason it happens isn’t too complicated to understand. It’s a tale (almost) as old as time: You need a bridge to pass over another road or highway. But, you need a way to get vehicles from ground level up to the bridge. So, you design an embankment, a compacted pile of soil that can be paved into a ramp up to the bridge. But, here’s the problem. Even though the bridge and embankment sit right next to each other, they are entirely different structures with entirely different structural behavior. A bridge is often relatively lightweight and supported on a rigid foundation like piles driven or drilled deep into the ground. An embankment is - if the geotechnical engineers will forgive me for saying it - essentially just a heavy pile of dirt. And when you put heavy stuff on the ground, particularly in places that have naturally soft soils like swamps and coastal plains, the ground settles as a result. If the bridge doesn’t settle as much or at the same rate, you end up with a bump. Over the years, engineers have come up with a lot of creative ways to mitigate the settlement of heavy stuff on soft soils, but one of those solutions seems so simple, that it’s almost unbelievable: just make embankments less heavy. Let’s talk about some of the bizarre materials we can use to reduce weight, and a few of the reasons it’s not quite as simple as it sounds. I’m Grady and this is Practical Engineering. In today’s episode, we’re talking about lightweight fills.

The Latin phrase for dry land, “terra firma,” literally translates to firm earth. It’s ingrained in us that the ground is a solid entity below our feet, but geotechnical engineers know better. The things we build often exceed the earth’s capacity to withstand their weight, at least without some help. Ground modification is the technical term for all the ways we assist the natural soil’s ability to bear imposed loads, and I’ve covered quite a few of them in previous videos, including vertical drains that help water leave the soil; surcharge loading to speed up settlement so it happens during construction instead of afterwards; soil nails used to stabilize slopes; and one of the first videos I ever made: the use of reinforcing elements to create mechanically stabilized earth walls.

One of the simplest definitions of design engineering is just making sure that the loads don’t exceed the strength of the material in question. If they do, we call it a failure. A failure can be a catastrophic loss of function, like a collapse. But a failure can also be a loss of serviceability, like a road that becomes too rough or a bridge approach that develops a major bump. Ground modification techniques mostly focus on increasing the strength of the underlying soil, but one technique instead involves decreasing the loads, allowing engineers to accept the natural resistance of a soft foundation.

Let me put you in a hypothetical situation to give you a sense of how this works: Imagine you’re a transportation engineer working on a new highway bridge that will replace an at-grade intersection that uses a traffic signal, allowing vehicles on the highway to bypass the intersection. This is already a busy intersection, hence the need for the bypass, and now you’re going to mess it all up with a bunch of construction. You design the embankments that lead up to the bridge to be built from engineered fill - a strong soil material that’s about as inexpensive as construction gets. You hand the design off to your geotechnical engineer, and they come back with this graph: a plot of settlement over time. Let’s just say you want to limit the settlement of the embankment to 2 inches or 5 centimeters after construction is complete. That’s a pretty small bump. This graph says that, to do that, you’ll have to let your new embankment sit and settle for about 3 years before you pave the road and open the bridge. If you put this up on a powerpoint slide at a public meeting in front of all the people who use this intersection on a daily basis, what do you think they’ll say?

Most likely they’re going to ask you to find a way to speed up the process (politely or otherwise). From what I can tell from my inbox, a construction site where no one’s doing any work is a commuter’s biggest pet peeve. So, you start looking for alternative designs and you remember a key fact about roadway embankments: the weight of the traffic on the road is only a small part of the total load experienced by the natural ground. Most of the weight is the embankment itself. Soil is heavy. They teach us that in college. So what if you could replace it with something else? In fact, there is a litany of granular materials that might be used in a roadway embankment instead of soil to reduce the loading on the foundation, and all of them have unique engineering properties (in other words, advantages and disadvantages).

Wood fibers have been used for many years as a lightweight fill with a surprisingly robust service life of around 50 years before the organic material decays. Similarly, roadway embankments have been seen as a popular way to reuse waste materials. In particular, the State of New York has used shredded tires as a lightweight fill with success, so far avoiding the spontaneous combustions that have happened in other states. There are also some very interesting materials that are manufactured specifically to be used as lightweight fills.

Expanded shale and clay aggregates are formed by heating raw materials in a rotary kiln to temperatures above 1,000 degrees Celsius. The gasses in the clay or shale expand, forming thousands of tiny bubbles. The aggregate comes out of the kiln in this round shape, and it has a lot of uses outside heavy civil construction like insulation, filtration, and growing media for plants. But round particles like this don’t work well as backfill because they don’t interlock. So, most manufacturers send the aggregate through a final crushing and screening process before the material is shipped out. Another manufactured lightweight fill is foamed glass aggregate. This is created in a similar way to the expanded shale where heating the raw material plus a foaming agent creates tiny bubbles. When the foamed glass exits the kiln, it is quickly cooled, causing it to naturally break up into aggregate-sized pieces. You can see in my graduated cylinders here that I have one pound or about half a kilogram of soil, sand, and gravel. It takes about twice as much expanded shale aggregate to make up that weight since its bulk density is about half that of traditional embankment building materials. And the foamed glass aggregate is even lighter.
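
To get a feel for what that density difference means for the soft ground underneath, here’s a minimal sketch comparing the vertical stress at the base of an embankment built from each material. The unit weights are assumed, rounded values for illustration, not numbers from any specific product or project:

```python
# Vertical stress under an embankment is roughly density * gravity * height.
# Densities below are assumed, illustrative values.
G = 9.81  # gravity, m/s^2

fills = {
    "compacted soil fill": 2000,        # kg/m^3 (assumed typical)
    "expanded shale aggregate": 1000,   # kg/m^3 (about half, per the demo above)
    "foamed glass aggregate": 250,      # kg/m^3 (assumed; lighter still)
}

height_m = 6.0  # a hypothetical 6-meter-tall bridge approach embankment

for name, density in fills.items():
    stress_kpa = density * G * height_m / 1000
    print(f"{name}: about {stress_kpa:.0f} kPa on the foundation soil")

# compacted soil fill: about 118 kPa
# expanded shale aggregate: about 59 kPa
# foamed glass aggregate: about 15 kPa
```

Less load on a soft foundation means less settlement, and less waiting around for that settlement to happen, which is the whole point.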

All these different lightweight fills can be used to reduce the loading on soft soils below roadways and protect underground utilities from damage, but they also have a major advantage when used with retaining walls: reduced lateral pressure. I’ve covered retaining walls in a previous video, so check that out after this if you want to learn more, but here’s an overview. Granular materials like soil aren’t stable on steep slopes, so we often build walls meant to hold them back, usually to take fuller advantage of a site by creating more usable spaces. Retaining walls are everywhere if you know where to look, but they also represent one of the most underappreciated challenges in civil engineering. Even though soil doesn’t flow quite as easily as water does, it is around twice as dense. That means building a wall to hold back soil is essentially like building a dam. The force of that soil against the wall, called lateral earth pressure, can be enormous; the pressure at any depth is proportional to both that depth and the density of the material being held back, so the total force grows with the square of the wall’s height. Here’s an example:

When Port Canaveral in Florida decided to expand terminal 3 to accommodate larger cruise ships, they knew they would need not only a new passenger terminal building but also a truly colossal retaining wall to form the wharf. The engineers were tasked with designing a wall that would be around 50 feet (or 15 meters) tall to allow the enormous cruise ships to dock directly alongside the wharf. The port already had stockpiles of soil leftover from previous projects, so the new retaining wall would get its backfill for free. But, holding back 50 feet of heavy fill material is not a simple task. The engineers proposed a combi-wall system that is made from steel sheet piles supported between large pipe piles for added stiffness, in addition to a complex tie-back structure to provide additional support at the top of the wall. When the design team considered using lightweight fill behind the retaining wall, they calculated that they could significantly reduce the size of the piles of the combi-wall, use a more-commonly available grade of steel instead of the specialty material, and simplify the tie-back system.
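
To see roughly why the lighter backfill lets so much of the structure shrink, here’s a minimal sketch using the classic Rankine active earth pressure formula. The wall height matches the one above, but the soil properties are assumed round numbers for illustration, not the project’s actual design values:

```python
# Lateral force on a retaining wall from Rankine active earth pressure:
# resultant P = 0.5 * Ka * unit_weight * H^2, per meter length of wall.
# This ignores water, surcharge, and seismic loads; properties are assumed.
import math

H = 15.0                 # wall height in meters (roughly the 50 feet at Port Canaveral)
phi = math.radians(34)   # assumed friction angle of the backfill
Ka = (1 - math.sin(phi)) / (1 + math.sin(phi))  # Rankine active pressure coefficient

for name, unit_weight in [("ordinary sand backfill", 19.0),   # kN/m^3, assumed
                          ("lightweight aggregate", 9.0)]:    # kN/m^3, assumed
    resultant = 0.5 * Ka * unit_weight * H**2
    print(f"{name}: about {resultant:.0f} kN per meter of wall")

# ordinary sand backfill: about 604 kN per meter of wall
# lightweight aggregate: about 286 kN per meter of wall
```

Cut the lateral force roughly in half, and the piles, the steel grade, and the tie-back system can all get smaller along with it.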

Even though the lightweight fill was significantly more expensive than the free backfill available at the site, it still saved the project about $3 million compared to the original design. The fill at Port Canaveral, like all the lightweight fills we’ve discussed so far, is a granular material that essentially behaves like normal soil, sand, or gravel fill (just with a lower density). It still has to be handled, placed, and compacted to create an embankment or retaining wall backfill just like any typical earthwork project. But, there are a couple of lightweight fills that are installed much differently. Concrete can also be made lightweight using some of the aggregates mentioned earlier in place of normal stone and sand, or by injecting foam into the mix, often called cellular concrete. On projects where it’s difficult or time consuming to place and compact granular fill, you can just pump this stuff right out of a hose and place it right where it needs to be, speeding up construction and eliminating the need for lots of heavy equipment. There are a few companies that make cellular concrete, and they can tailor the mix to be as strong or lightweight as needed for the project. You can even get concrete with less density than water, meaning it floats!

This test cylinder was graciously provided by Cell-Crete so I could give you a close up look at how the product behaves. Of course we should try and break it. Let’s put it under the hydraulic press and see how much force it takes. The pressure gauges on my press showed a force of just under a ton to break this sample. That is equivalent to a pressure of around 200 psi or 1.4 megapascals, much stronger than most structural backfills. You’re not going to be making skyscraper frames or bridge girders from cellular concrete, but it’s more than strong enough to hold up to traffic loads without imposing tons of weight into a retaining wall or the soft soils below an embankment.

The last lightweight fill used in heavy civil construction is also the most surprising: expanded polystyrene foam, also known as EPS and colloquially as styrofoam. When used in construction, it’s often called geofoam, but it’s the same stuff that makes up your disposable coffee cups, mannequin heads, and packaging material. EPS seems insubstantial because of how little it weighs, but it’s actually a pretty strong material in compression. About 7 years ago I used my car to demonstrate the compressive strength of mechanically stabilized earth. Well, I still have that jack and I still drive that car, so let’s try the experiment with EPS foam. This is probably around 500 to 600 pounds, and there is some deflection, but the block isn’t struggling to hold the weight. In an actual embankment, the pavement spreads out traffic loads so they aren’t concentrated like in my demonstration, to the point where you would never know that you’re driving on styrofoam.

EPS foam has some cool benefits, including how easy it is to place. The blocks can be lifted by a single worker, placed in most weather conditions, don’t require compaction or heavy equipment, and can be shaped as needed using hot wires. But it has some downsides too. This material won’t work well for embankments that see standing water or high groundwater, because of the buoyancy. The embankment could literally float away. They’re also so lightweight that you have to consider a new force that most highway engineers don’t think about when designing embankments: the wind. Also, because EPS foam is such a good insulator, it creates a thermal disconnect between the pavement and the underlying ground, making the road more susceptible to icing. Finally, EPS foam has a weakness to a substance that is pretty regularly spilled onto roadways: it dissolves in fuel. If a crash, spill, or leak were to happen on an embankment that uses EPS foam without a properly designed barrier, the whole thing could just melt away.
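
The buoyancy problem is easy to put numbers on. Here’s a minimal sketch using an assumed EPS density; real geofoam grades vary, but all of them are a tiny fraction of the density of water:

```python
# Net uplift on a submerged block of EPS geofoam.
# The EPS density is an assumed typical value, not a specific product spec.
WATER_DENSITY = 1000.0   # kg/m^3
EPS_DENSITY = 20.0       # kg/m^3 (assumed; roughly 2 percent of water)

block_volume_m3 = 1.0 * 1.0 * 0.5   # a hypothetical block, half a cubic meter

block_weight_kg = EPS_DENSITY * block_volume_m3
displaced_water_kg = WATER_DENSITY * block_volume_m3
net_uplift_kg = displaced_water_kg - block_weight_kg

print(f"The block weighs about {block_weight_kg:.0f} kg,")               # ~10 kg
print(f"but submerged it pushes up with about {net_uplift_kg:.0f} kg")   # ~490 kg
```

That’s why geofoam embankments in flood-prone spots generally need enough pavement and soil cover on top, or drainage details that keep water from ever ponding around the blocks.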


Even with all those considerations, EPS foam is a popular choice for lightweight fills. We even have a nice government report on best practices called Guideline and Recommended Standard for Geofoam Applications in Highway Embankments (if you’re looking for some lightweight bedtime reading). It was used extensively in Seattle on the replacement of the Alaskan Way Viaduct to avoid overstressing the landfill materials that underlie major parts of the city. Thousands of drivers in Seattle and millions of people around the world drive over lightweight embankments, probably without any knowledge of what’s below the pavement. But the next time you pass over a bridge and don’t feel a bump transitioning between the deck and roadway embankments, it might just be lightweight aggregate, cellular concrete, or geofoam below your tires working to make our infrastructure as cost-effective and long-lasting as possible.

February 07, 2023 /Wesley Crump

What Really Happened with the Substation Attack in North Carolina?

January 17, 2023 by Wesley Crump

[Note that this article is a transcript of the video embedded above.]

At around 7PM on the balmy evening of Saturday, December 3, 2022, nearly every electric customer in Moore County, North Carolina was simultaneously plunged into darkness. Amid the confusion, the power utility was quick to discover the cause of the outage: someone or someones had assaulted two electrical substations with gunfire, sending a barrage of bullets into the high voltage equipment. Around 45,000 customers were in the dark as Duke Energy began work to repair the damaged facilities, but it wouldn’t be until Wednesday evening, four days after the shooting, that everyone would be back online. That meant schools were shuttered, local businesses were forced to close during the busy holiday shopping season, a curfew was imposed, and the county declared a state of emergency to free up resources for those affected. The attack came as other utilities around the United States were reporting assaults on electrical substations, including strikingly similar instances in Oregon and Washington. Let’s talk about what actually happened and try to answer the question of what should be done. We even have exclusive footage of the substations that I’m excited to show you. I’m Grady and this is Practical Engineering. In today’s episode, we’re talking about the Moore County substation attacks.

Right in the geographic center of North Carolina, Moore County is home to just under 100,000 people. The county is maybe most famous for the Pinehurst Resort, a historic golf course that has hosted the US Open on several occasions. It also sits near Fort Bragg, one of the largest Army bases in the world. And here’s an overlay of Moore County’s transmission grid. Taking a look at this layout will help us understand this event a little better. By the way, this information is not secret - it’s publicly available, at least for now, in a few locations including the Energy Information Administration website and OpenStreetMap, and I’ll discuss the implications of that later in the video.

Two 230 kilovolt (or kV) transmission lines come into Moore County from the southwest and connect to the West End Substation near Pinehurst. One of the lines terminates here while the other continues to the northwest, without making any other connections in Moore County. These two 230 kV lines are the only connection to the rest of the power grid in the area. At the West End Substation, two power transformers drop the voltage to 115kV. From there, two 115kV lines head out in opposite directions to form a loop around Moore County. Distribution substations, the ones with transformers that lower the voltage further to connect to customers, are mostly spread out along this 115kV loop. So, essentially, most of Moore County has two links to the area power grid, and both of them are at a single substation, West End. And you might be able to guess one of the two substations that was attacked that Saturday evening. Interestingly, the other substation attacked was here in Carthage. Just looking at a map of the transmission lines, it would be easy to assume that Carthage provides a second link to the 230 kV transmission grid, but actually, it’s just a distribution substation on the 115 kV loop. The 230 kV line passes right by it.
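
To make that single-point-of-failure idea concrete, here’s a toy sketch of the layout as I’ve just described it, boiled down to a handful of nodes and lines. It’s a simplified illustration based on the public maps, not Duke Energy’s actual network model:

```python
# Toy connectivity model of the Moore County transmission layout described above:
# the 115 kV loop only reaches the outside 230 kV grid through West End.
from collections import defaultdict

edges = [
    ("outside 230 kV grid", "West End"),          # the two 230 kV lines land here
    ("West End", "115 kV loop"),                  # step-down transformers
    ("115 kV loop", "Carthage"),                  # distribution substations hang
    ("115 kV loop", "other distribution subs"),   # off the 115 kV loop
]

def connected(source, target, offline=None):
    """Simple graph search, optionally pretending one substation is offline."""
    graph = defaultdict(set)
    for a, b in edges:
        if offline not in (a, b):
            graph[a].add(b)
            graph[b].add(a)
    seen, stack = {source}, [source]
    while stack:
        node = stack.pop()
        for neighbor in graph[node]:
            if neighbor not in seen:
                seen.add(neighbor)
                stack.append(neighbor)
    return target in seen

print(connected("outside 230 kV grid", "Carthage"))                      # True
print(connected("outside 230 kV grid", "Carthage", offline="West End"))  # False
```

Take West End offline and everything hanging off the 115 kV loop loses its path to the rest of the grid, which is more or less what the county experienced.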


Duke Energy (the owner of the substations) hasn’t shared many details about the attack. In their initial press release, they simply stated that “several large and vital pieces of equipment were damaged in the event.” Those investigating the attack, including the FBI, are also keeping details close to their chest. Our drone photographer had to have a police escort just to get this footage. But, we can use photos and clips of the substations to hypothesize some details of the event. Just take what I say with a grain of salt, because the folks in charge haven’t confirmed many details. It really looks like the attacker or attackers were specifically targeting the transformers. These are typically the largest and most expensive pieces of equipment (and the hardest to replace) in a substation. They do the job of changing the voltage of electricity as needed to move power across the network. And, even more specifically, it looks like the attackers went after the thin metal radiators of the transformers. Just like the radiator in your car, these are used on transformers to dissipate the heat that builds up within the main tank. But unlike the coolant system in a car, wet-type power transformers are filled with oil. If all that oil drains out of the transformer tank, it can cause the coils to overheat or arc, leading to substantial permanent damage.

Disabling the transformers was presumably the goal of the attack, but obviously, with power transformers being both so important and so difficult to replace, they are almost always equipped with protective devices. We don’t have to do a deep dive into the classic Recommended Practice for the Protection of Transformers Used in Industrial and Commercial Power Systems, but it’s enough to say that utilities put quite a bit of thought into minimizing the chance that something unexpected, whether it’s a short circuit or a bullet, can cause permanent damage to a transformer. Sensors can measure oil pressure, gas buildup, liquid levels, and more to send alarms to the utility when an anomaly like an oil leak occurs. And, some protective devices can even trigger the circuit breakers to automatically disconnect the transformer before it sustains permanent damage.
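
As a cartoonishly simplified picture of that alarm-versus-trip hierarchy, here’s a short sketch. The sensor names and thresholds are invented for illustration; real relay settings are engineered for each specific transformer:

```python
# Toy transformer protection logic: minor anomalies raise an alarm for the
# utility, while severe ones trip the breakers before permanent damage occurs.
# Thresholds and readings are made up for illustration only.
from dataclasses import dataclass

@dataclass
class TransformerTelemetry:
    oil_level_pct: float        # oil level as a percent of normal
    top_oil_temp_c: float       # top-oil temperature in degrees C
    combustible_gas_ppm: float  # dissolved combustible gas in the oil

def evaluate(t: TransformerTelemetry) -> str:
    # Severe conditions: disconnect the transformer automatically.
    if t.oil_level_pct < 70 or t.top_oil_temp_c > 110 or t.combustible_gas_ppm > 2500:
        return "TRIP: open the breakers and de-energize the transformer"
    # Abnormal but survivable conditions: tell the operators to investigate.
    if t.oil_level_pct < 90 or t.top_oil_temp_c > 95 or t.combustible_gas_ppm > 500:
        return "ALARM: notify the utility"
    return "NORMAL"

# A punctured radiator shows up first as a falling oil level:
print(evaluate(TransformerTelemetry(85, 70, 100)))  # ALARM
print(evaluate(TransformerTelemetry(55, 70, 100)))  # TRIP
```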

Whether it happened automatically or manually as a result of an alarm, the two 230 kV transformers in the West End substation were disconnected from the grid as a result of the shooting, and in doing so, the entire 115 kV loop that goes around Moore County was de-energized, turning out the lights for the roughly 45,000 connected households and businesses. Aerial footage taken the day after the attacks shows the disconnect switches for the 230 kV lines open, an easy visual verification that the transformers are de-energized. You can also see some disassembled radiators on site, presumably to replace the damaged ones on the transformers. It seems that the gunfire only damaged the transformer radiators, which is a good thing because those can usually be replaced and put back into service relatively easily. If the windings within the transformer itself were damaged, it would probably require replacement of the equipment. Transformers of this scale are rarely manufactured without an order, which means we don’t have a lot of spares sitting around, and the lead time can be months or years to get a new one delivered, let alone installed.

With at least three damaged transformers, the utility began working 24-hour shifts on a number of parallel repairs to restore power as quickly as possible. Again, they didn’t share many details of the restoration plans, so we can only talk about what we see in the footage. One of the more interesting parts of restoration involved bringing in this huge mobile substation. It seems that crews temporarily converted the Carthage substation so it could tap into the adjacent 230 kV line. The power passes through mobile circuit switches, a truck-mounted transformer, secondary circuit breakers, voltage regulators, and disconnects to feed the 115 kV loop. You can also see the cooling system of the mobile transformer is mounted at the back of the trailer to save space. With this temporary fix, and presumably some permanent repairs to the transformer radiators at West End, Duke Energy was able to restore service to all customers by the end of Wednesday, about 4 days after all this started. Knowing the extent of the damage, that’s an impressive feat! But they still have some work ahead of them. In this footage taken two weeks after the attack, you can see that one of the 230 kV transformers is back online while the other is still disconnected with all its radiators dismantled.

The FBI and local law enforcement are still working to find those involved in the incident, and there’s currently a $75,000 reward out for anyone who can help. Officials have stopped short of calling it an act of terrorism, presumably because we don’t know the motive of whoever perpetrated the act. The local sheriff said this person, “knew exactly what they were doing,” and I tend to agree. It doesn’t take a mastermind to take some pot shots at the biggest piece of equipment in a switchyard, but this attack shows some sophistication. They targeted multiple locations, they specifically targeted transformers, and one of the substations they chose was critical to the distribution of power to nearly all of Moore County. It’s fairly safe to say that this person or persons had at least some knowledge about the layout and function of power infrastructure in the area… but that’s not necessarily saying much.

To an unknown but significant extent, power infrastructure gets its security through obscurity. It’s just not widely paid attention to or understood. But, almost all power infrastructure in the US is out in the open, on public display, a fact that is a great joy for people like me who enjoy spotting the various types of equipment. But, it also means that it’s just not that hard for bad actors to be deliberate and calculated about how and where to cause damage. With its sheer size and complexity, it would be impossible to provide physical security to every single element of the grid. But, protecting the most critical components, including power transformers, is prudent. That’s especially true for substations like West End that provide a critical link to the grid for a large number of customers. They already have a new gate up, but that’s probably just a start. I think it’s likely that ballistic resistant barriers will become more common at substations over time, and, of course, those added costs for physical security will be passed down to ratepayers in one way or another.


But it’s important to put this event in context as well. Attacks on the power grid are relatively rare, and they fall pretty low on the list of threats, even behind cybersecurity and supply chain issues. The number one threat to the grid in nearly every place in the US? The weather. If you experience an outage of any length, it's many times more likely to be mother nature than a bad actor with a gun. That’s not to say that there’s not room for improvement though, and this event highlights the need for making critical substations more secure and also making the grid more robust so that someone can’t rob tens of thousands of people of their lights, heat, comfort, and livelihood for four days with just a few well-placed bullets.

January 17, 2023 /Wesley Crump

How Different Spillway Gates Work

January 03, 2023 by Wesley Crump

[Note that this article is a transcript of the video embedded above.]

In the heart of Minneapolis, Minnesota on the Mississippi River is the picturesque Upper Saint Anthony Falls Lock and Dam, which originally made it possible to travel upstream on the river past the falls starting in 1937. It’s a famous structure with a fascinating history, plus it has this striking overflow spillway with a stilling basin at the toe that protects the underlying sandstone from erosion. But there’s another dam just downstream that is a little less well-known and a little less scenic, aptly called the Lower Saint Anthony Falls Lock and Dam. Strangely, the spillway for the lower dam is less than half the width of the one above, even though they’re on the exact same stretch of the Mississippi River, subject to the same conditions and the same floods. That’s partly because, unlike its upstream cousin, the Lower Saint Anthony Falls dam is equipped with gates, providing greater control and capacity for the flow of water through the dam. In fact, dams all over the world use gates to control the flow of water through spillways.

If you ask me, there’s almost nothing on this blue earth more fascinating than water infrastructure. Plus I’ve always wanted to get a 3D printer for the shop. So, I’ve got the acrylic flume out, I put some sparkles in the water, and I printed a few types of gates so we can see them in action, talk about the engineering behind them, and compare their pros and cons. And I even made one type of gate that’s designed to raise and lower itself with almost no added force. But this particular type of gate was made famous in 2019, so we’ll talk about that too. I’m Grady, and this is Practical Engineering. On today’s episode, we’re talking about spillway gates.

Almost all dams need a way to release excess water when the reservoir is full. If you’ve ever tried to build an impoundment across a small stream or channel, you know how powerful even a small amount of flowing water can be. Modern spillways are often the most complex part of a dam because of the high velocities of flow. If not carefully managed, that fast-moving water can quickly tear a dam apart. The incredible damage at Oroville Dam in 2017 is a striking example of this. Although many dams use uncontrolled spillways where the water naturally flows through once the reservoir rises to reach a certain level, gated spillways provide more control over the flow, and so can allow us to build smaller, more cost-effective structures. There are countless arrangements of mechanical devices that have been used across the world and throughout history to manage the flow of water. But, modern engineering has coalesced around variations on only a few different kinds of gates. One of the simplest is the crest gate that consists of a hinged leaf on top of a spillway.

A primary benefit of the crest gate is that ice and debris flow right over the top, since there’s nothing for the flow to get caught on. Another advantage of crest gates is that they provide a lot of control over the upstream level, since they act like a weir with an adjustable top. So, you’ll often see crest gates used on dams where the upstream water level needs to be kept within a narrow range. For example, here in San Antonio we have the RiverWalk downtown. If the water gets too low, it won’t be very attractive, and if it gets too high, it will overtop the sidewalks and flood all the restaurants. So, most of the dams that manage the flow of water in the San Antonio River downtown use steel crest gates like this one. Just up the road from me, Longhorn Dam holds back Lady Bird Lake (formerly Town Lake) in downtown Austin. Longhorn Dam has vertical lift gates to pass major floods, but the central gates on the dam that handle everyday flows are crest gates. Finally, the dam that holds back Town Lake in Tempe, Arizona uses a series of crest gates that are lowered during floods.

Crest gates are attached to some kind of arm that raises or lowers the leaf as needed. Most use hydraulic cylinders like the one in Tempe Town Lake Dam. The ones here in San Antonio actually use a large nut on a long threaded rod like the emergency jack that comes in some cars. You might notice I’m using an intern with a metal hook to open and close the model crest gate, but most interns aren’t actually strong enough to hold up a crest gate at a real dam. In fact, one of the most significant disadvantages of crest gates is that the operators, whether hydraulic cylinders or something else, not only have to manage the weight of the gate itself but also the hydrostatic force of the water behind the gate, which can be enormous. Let’s do a little bit of quick recreational math to illustrate what I mean:

The gates at Tempe Town Lake are 32 meters or about 106 feet long and 6.4 meters or 21 feet tall. If the upstream water level is at the top of one of these gates, that means the average water pressure on the gate is around four-and-a-half pounds for every square inch or about 31,000 newtons for every square meter. Doesn’t sound like a lot, but when you add up all those square inches and square meters of such a large gate, you get a total force of nearly one-and-a-half million pounds or 660,000 kilograms. That’s the weight of almost two fully-loaded 747s, and by the way, Tempe Town Lake has eight of these gates. The hydraulic cylinders that hold them up have to withstand those enormous forces 24/7. That’s a lot to ask of a hydraulic or electromechanical system, especially because when the operating mechanism fails on a crest gate, gravity and hydrostatic pressure tend to push the gate open, letting all the water out and potentially creating a dangerous condition downstream. The next kind of spillway gate solves some of these problems.
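
But first, for anyone who wants to check that recreational math, here it is written out in a few lines (assuming fresh water and the gate dimensions above):

```python
# Hydrostatic load on one Tempe Town Lake crest gate, water at the top of the gate.
RHO = 1000.0   # density of fresh water, kg/m^3
G = 9.81       # gravity, m/s^2

length = 32.0  # gate length, m
height = 6.4   # gate height, m

# Pressure grows linearly from zero at the surface to rho*g*h at the bottom,
# so the average pressure over the gate face is rho*g*h/2.
avg_pressure_pa = RHO * G * height / 2
total_force_n = avg_pressure_pa * length * height

print(f"Average pressure: {avg_pressure_pa / 1000:.0f} kPa "
      f"({avg_pressure_pa / 6895:.1f} psi)")                   # ~31 kPa, ~4.6 psi
print(f"Total force: {total_force_n / 1e6:.1f} MN "
      f"({total_force_n / 4.448 / 1e6:.2f} million pounds)")   # ~6.4 MN, ~1.45 million lb
```

Same answer as above, just in units you can check with a calculator.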

Radial crest gates, also known as Tainter gates, use a curved face connected to struts that converge downstream toward a hinge called a trunnion. A hoist lifts the gate using a set of chains or cables, and water flows underneath. My model being made from plastic means it kind of stays where it’s put due to friction, but full-scale radial gates are heavy enough to close under their own weight. That’s a good thing, because, unlike most crest gates, if the hoist breaks, the gate fails closed. The hoist is also mostly just lifting the weight of the gate itself, with the trunnion bearing the hydrostatic force of the water being held back. These features make radial gates so reliable that they’re used in the vast majority of gated spillways at large dams around the world. If you go visit a dam or see a swooping aerial shot of a majestically flowing spillway, there’s a pretty good chance that the water is flowing under a radial gate.

The trunnion that holds back all that pressure while still allowing the gate to pivot is a pretty impressive piece of engineering. I mean, it’s a big metal pin, but the anchors that hold that pin to the rest of the dam are pretty impressive. Water pressure acts perpendicular to a surface, so the hydrostatic pressure on a radial gate acts directly through this pin. That keeps the force off the hoist, providing low-friction movement. But it’s not entirely friction-free. In fact, the design of many older radial gates neglected the force of friction within the trunnion and needed retrofits later on. I mentioned the story of California’s Folsom Dam in a prior video. That one wasn’t so lucky to get a structural retrofit before disaster struck in 1995. Operators were trying to raise one of the gates to make a release through the spillway when the struts buckled, releasing a wave of water downstream. Folsom Reservoir was half empty by the time they closed the opening created by the failed gate.

How did they do it? Stoplogs, another feature you’re likely to see on most large dams across the world. Just like all mechanical devices that could cause dangerous conditions and tremendous damage during a failure, spillway gates need to be regularly inspected and maintained. That’s hard to do when they’re submerged. The inspecting part is possible, but it’s hard to paint things underwater. In fact, it’s much simpler, safer, and more cost effective to do most types of maintenance in the dry. So we put gates on our gates. Usually these are simpler structures, just beams that fit into slots upstream of the main gate. Stoplogs usually can’t be installed in flowing water and are only used as a temporary measure to dewater the main gate for inspection or maintenance. I put some stoplog slots on my model so you can see how this works. I can drop the stoplogs into the slots one by one until they reach the reservoir level. Then I crack the gate open and the space is dewatered. You can see there’s still some leakage of the stoplogs, but that’s normal and those leaks can be diverted pretty easily. The main thing is that now the upstream face of the gate is dry so it can be inspected, cleaned, repaired, or repainted.

And if you look closely, it’s not just my model stoplogs that leak, but the gates too. In fact, all spillway gates leak at least a little bit. It’s usually not a big issue, but we can’t have them leaking too much. After all, there’s not much point in having a gate if it can’t hold back water. The steel components on spillway gates don’t just ride directly against the concrete surface of the spillway. Instead, they are equipped with gigantic rubber seals that slide on a steel plate embedded in the concrete. Even these seals have a lot of engineering in them. I won’t read you the entire Hydraulic Laboratory Report No. 323 - Tests for Seals on Radial Gates or the US Army Corps of Engineers manual on the Design of Spillway Tainter Gates, but suffice it to say, we’ve tried a lot of different ways to keep gates watertight over the years and have it mostly sealed up to a science now. Most gates use a j-bulb seal that’s oriented so that the water pressure from upstream pushes the seal against the embedded plate, making the gate more watertight. Different shapes of rubber seals can be used in different locations to allow all parts to move without letting water through where it’s not wanted.

In fact, there’s one more type of spillway gate I want to share where the seals are particularly important. Bear trap gates are like crest gates in that they have a leaf hinged at the bottom, but bear trap gates use two overlapping hinged leaves, and they open and close in an entirely different way. The theory behind a bear trap gate is that you can create a pressurized chamber between the two leaves. If you introduce water from upstream into this chamber, the resulting pressure will float the bottom leaf, pushing it upward. That, in turn, raises the upper leaf. The upstream water level rises as the gate goes up, increasing the pressure within the chamber between the gates. The two leaves are usually designed so that, once fully open, they can be locked together. To lower the gates, the conduit to the upstream water is closed, and the water in the chamber is allowed to drain downstream, relieving the upward pressure on the lower leaf so it can slowly fall back to its resting position. It sounds simple in theory, but in practice this is pretty hard to get right.

I built a model of a bear trap gate that mostly works. If I open this valve on the upstream side, I subject the chamber to the upstream water pressure. In ideal conditions with no friction and watertight seals, this would create enough pressure to lift both leaves. In reality, it needs a little bit of help from the intern hook. But you can see that, as the water level upstream increases, the lower leaf floats upward as well. When the gates are fully opened, the leaves lock together to be self-supporting. Some old bear trap gates used air pressure in the chamber to give the gates a little bit of help going up. I tried that in my model and it worked like a charm. It took a few tries to figure out how much pressure to send, but eventually I got it down.

It’s not just my model bear trap gate that’s finicky, though. Despite the huge benefit of not needing any significant outside force to raise and lower the gates, this type of system has never been widely used. 

The chamber between the leaves is the perfect place for silt and sand to deposit. Bear trap gates were also quite difficult to inspect and maintain because you had to dewater the entire chamber and reroute flows. And because they weren’t widely used, there were never any off-the-shelf components, so anytime something needed to be fixed, it was a custom job. The world got to see a pretty dramatic example of the challenges associated with maintaining old bear trap gates in 2019 when one of the gates at Dunlap Dam near New Braunfels, Texas, completely collapsed.

This dam was one of five on the Guadalupe River built in the 1930s to provide hydropower to the area. But over nearly a century that followed, power got a lot cheaper, and replacing old dams got a lot more expensive. Since the dam wasn’t built with maintenance in mind, it was nearly impossible to inspect the condition of the steel hinges of the gate. But that lack of surveillance caught up with the owner on the morning of May 14, 2019 when a security camera at the dam caught the dramatic failure of one of the gate’s hinges. The lake behind the dam quickly drained and kicked off a chain of legal battles, some of which are still going on today. Luckily, no one was hurt as a result of the failure. Eventually, the homeowners around the lake upstream banded together to tax themselves and rebuild the structure, a task that is nearly complete now more than three years later. Of course, there’s a lot more to this fascinating story, but it’s a great reminder of the importance of spillway gates in our lives and what can go wrong if we neglect our water infrastructure.

January 03, 2023 /Wesley Crump

How This Bridge Was Rebuilt in 15 Days After Hurricane Ian

December 20, 2022 by Wesley Crump

[Note that this article is a transcript of the video embedded above.]

On September 28, 2022, Hurricane Ian made landfall on the western coast of Florida as a Category 4 storm, bringing enormous volumes of rainfall and extreme winds to the state. Ian was the deadliest hurricane to hit Florida since 1935. Over 100 people died as a result of flooding and over 2 million people lost power at some point during the storm. The fierce winds that sucked water out of Tampa Bay, also forced storm surge inland on the south side of the hurricane, causing the sea to swell upwards of 13 feet or 4 meters above high-tide. And that doesn’t include the height of the crashing waves. One of the worst hit parts of the state became a symbol for the hurricane’s destruction: the barrier island of Sanibel off the coast of Fort Myers. The island’s single connection to the mainland, the Sanibel Causeway, was devastated by Hurricane Ian to the point where it was completely impassable to vehicles. Incredibly, two weeks after hiring a contractor to perform repairs, the causeway was back open to traffic. But this fix might not last as long as you’d expect. How did they do it? And why can’t all road work be finished so quickly? Let’s discuss. I’m Grady, and this is Practical Engineering. In today’s episode, we’re talking about the Sanibel Causeway Hurricane Repairs.

What is a causeway anyway? You might think the only two options to get a road across a body of water are a bridge or a tunnel, but there’s actually a third option. You can build an embankment from compacted soil or rock that sits directly on the seabed and then construct a roadway on top of that. A path along reclaimed land like this is called a causeway, and the one between Fort Myers and Sanibel Island in Florida was first built in 1963 and rebuilt in 2007. But, a causeway has a major limitation compared to a bridge or tunnel: it doesn’t allow crossing of maritime traffic because it divides the waterway in two. So, the Sanibel Causeway has some bridges. And actually, for a structure called a causeway, it’s mostly bridges, three to be exact. Bridges C and B are long, multi-span structures that sit relatively low above the water. Bridge A, closest to the mainland, is a high-span structure to allow for tall sailboats to pass underneath. Islands 1 and 2 are the actual causeway parts of the causeway where the road sits at grade (or on the ground). Overall, the causeway is about 3 miles or 5 kilometers long, carries over three million vehicles a year, on average, and, critically, is the only way to drive a vehicle on or off Sanibel Island, which is home to about 6,000 people.

Each of the two causeway islands serves as a county park with beaches and places for fishing. The islands aren’t natural. They were built up in the 1960s by dredging sand and silt from the bay and piling it up above the water level. It’s pretty easy to see this on the aerial photos of the islands. They really are just slender stretches of dredged sediment sitting in the middle of the bay. But, they didn’t pile the sediment that high above the water. The top of the roadway along the islands is only around 7 feet or 2 meters above sea level. And here’s the thing about sand and silt. If you look at the range of earthen materials by particle size, the large ones like gravel and even coarse sand don’t erode quickly because they’re heavy, and the tiny ones like clay don’t erode quickly either because they’re sticky (they have cohesion), but right in the middle are the fine sands and silts that aren’t heavy or sticky, so they easily wash away. The storm surge and waves brought on by Hurricane Ian breached both of the causeway islands, violently eroding huge volumes of sand out to sea and leaving the roadways on top completely destroyed. But that wasn’t the only damage.
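
If it helps to see that size reasoning laid out explicitly, here’s a rough sketch in Python. The cutoff diameters are my own approximate, textbook-style values, not numbers from this article:

```python
# Rough illustration of why mid-sized particles erode most easily.
# The size cutoffs (in millimeters) are approximate, illustrative values.

def erodibility(particle_diameter_mm: float) -> str:
    """Classify how easily a loose particle washes away in flowing water."""
    if particle_diameter_mm < 0.004:
        return "clay: tiny but cohesive (sticky), so it resists erosion"
    elif particle_diameter_mm < 0.5:
        return "silt / fine sand: not heavy, not sticky -- erodes easily"
    elif particle_diameter_mm < 2.0:
        return "coarse sand: heavier, erodes more slowly"
    else:
        return "gravel and larger: heavy enough to stay put in most flows"

for d in [0.001, 0.05, 0.2, 1.0, 10.0]:
    print(f"{d:6.3f} mm -> {erodibility(d)}")
```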

In between the island sections of roadway and the bridges are the approach ramps: compacted soil structures that transition from the low causeway islands up to and down from the elevated sections. Instead of using traditional earthen embankments as the approaches for each bridge, the 2007 project included retaining walls built using mechanically stabilized earth, or MSE. I have a few videos about how these walls work you can check out after this if you want to learn more. Basically, reinforcing elements within the soil allow the slopes to stand vertically on the bridge approaches, saving precious space on the small causeway islands and reducing the total load on the dredged sand below each approach. Concrete panels are used as a facing system to protect the vulnerable earthen structures from erosion. But, you know, these are meant to protect against rainfall and strong winds, not hurricane force waves and 10 foot storm surge. With the full force of Hurricane Ian bearing down on them, three of the causeway’s approach ramps were heavily damaged. The one on the mainland side, and the ones on the north side of each causeway island. The bridges themselves largely withstood the hurricane with minimal damage, thanks to good engineering. But, with the approaches and causeway sections ruined, Sanibel Island was completely cut off from vehicle access, making rescue operations, power grid repairs, and resupplies practically impossible.

Within only a few days of the hurricane’s passing, state and county officials managed to pull together a procurement package to solicit a contractor for the repairs. On October 10, they announced their pick of Superior Construction and Ajax Paving and their target completion date of October 31st. Construction crews immediately sprang into action with a huge mobilization of resources, including hundreds of trucks, earth moving machines, cranes, barges, dredges, and more than 150 people. Major sections of the job were inaccessible by vehicle, so crews and equipment had to be ferried to various damaged locations along the causeway. The power was still out in many places, and cell phone and internet coverage were spotty. Even coordinating meals and places to sleep for the crew was a challenge.

For the most part, the repairs were earthwork projects, replacing the lost soil and sand along the causeway islands and bridge approaches. A lot of the material was dredged back from the seabed to rebuild each of the two islands, but over 2,000 loads of rock and 4,000 tons (3,600 metric tons) of asphalt were brought in from the mainland. Just coordinating that many crews and resources was an enormous challenge both for FDOT and the contractor. Both made extensive use of drones to track the quantities of materials being transported and placed and to keep an eye on the progress across the 3-mile-long construction site. Progress continued at a breakneck pace at each of the damaged areas of the causeway to bring the subgrade back up to the correct level. Once the eroded soil was replaced, all the damaged sections were paved with asphalt to provide a durable driving surface. With the incredible effort and hard work of the contractor and its crews, the designers, FDOT and their representatives, emergency responders, relief workers, and many more, the causeway was reopened to the public on October 19th, a short 15 days after the project started and well ahead of the original estimated completion date.

You might be wondering, “If they can fix a hurricane-damaged road in two weeks, why does the road construction along my commute last for years?” And it’s a good question, because you actually sacrifice quite a lot to get road work done so quickly. First, you sacrifice the quality of the work. And that's not a dig on the contractor, but a simple reality of the project. These temporary repairs aren’t built to last; they’re built to a bare minimum level needed to get vehicles safely across the bay. Look closely and you won’t see the conveniences and safety features of modern roadways like pavement markings and stripes, guard rails, or shoulders.                                                          

These embankments constructed as bridge approaches are also not permanent. Something happens when you make a big pile of soil like this (even if you do a good job with compaction and keeping the soil moisture content just right): it settles. Over time and under the weight of the embankment, the grains of soil compress together and force out water, causing the top of the embankment to sink. But the bridge sits on piles that aren’t subjected to these same forces. So, over time, you end up with a mismatch in elevation between the approach and bridge. If you’ve ever felt a bump going up to or off a bridge, you know what I mean. In fact, this is one of the many reasons why you might see a construction site sitting empty. They’re waiting for the embankments to settle before paving the roadway. Oftentimes, a concrete approach slab is used to try and bridge the gap that forms over time, but I don’t see any approach slabs in the photos of the repair projects. That means it’s likely these approaches will have to be replaced or repaired fairly soon. In addition, the slopes of the approaches are just bare soil right now, susceptible to erosion and weathering until they get protected with grass or hard armoring.

The other sacrifice you make for a fast-track project like this is cost. We don’t know the details of the contract right now, but just looking at all the equipment at the site, we know it wasn’t cheap. It’s expensive to mobilize and operate that much heavy equipment, and the rental fees come due whether they sit idle or not. It’s expensive to pay overtime crews to maintain double shifts. It’s expensive to get priority from material suppliers, equipment rentals, work crews, fuel, et cetera, especially in a setting like a hurricane recovery where all those things are already in exceptionally high demand. And, it’s expensive to keep people and equipment on standby so that they can start working as soon as the crew before them is finished. Put simply, we pay a major premium for fast-tracked construction and an even bigger one for emergency repairs where the conditions require significant resources under high demands.


Of course, it wasn’t just the roadways damaged on Sanibel Island. The power infrastructure and many many buildings were damaged or destroyed as well. And it wasn't just Sanibel Island affected, but huge swaths of coastal Florida too (including nearby Pine Island that had an emergency bridge project of its own). There’s a long way to go to restore not just the roadway to Sanibel Island, but also the island itself. And that will involve a lot of tough decisions about where, how much, and how strong to rebuild. After all, Sanibel is a barrier island, a constantly changing deposit of sand formed by wind and waves. These islands are critical to protecting mainland coasts by absorbing wave energy and bearing the brunt of storms. In fact, many consider barrier islands to be critical infrastructure, but development on the islands negates that critical purpose. That doesn’t mean the community doesn’t belong there; nearly every developed area is subject to disproportionate risk from some kind of destructive natural phenomenon. But it does obligate the planners and engineers involved in rebuilding to be thoughtful about the impacts hurricanes can have and how infrastructure can be made more resilient to them in the future.

December 20, 2022 /Wesley Crump

What Is A Black Start Of The Power Grid?

December 06, 2022 by Wesley Crump

[Note that this article is a transcript of the video embedded above.]

November 1965 saw one of the most widespread power outages in North American history. On the freezing cold evening of the 9th, the grid was operating at maximum capacity as people tried to stay warm when a misconfigured relay tripped a breaker on a key transmission line. The loss of that single line cascaded into a loss of service for over 30 million people in the northeast US plus parts of Ontario in Canada. Restoring electricity to that many people is no simple task. In this case, the startup began with a little 12 megawatt gas generator in Southampton, New York. That’s about the capacity of four wind turbines, but it was enough to get power plants in Long Island back online, which were able to power up all of New York City, eventually returning service to all those 30 million people.

The grid is a little bit of a house of cards. It’s not necessarily flimsy, but if the whole thing gets knocked down, you have to rebuild it one card at a time and from the ground up. Restoring power after a major blackout is one of the most high stakes operations you can imagine. The consequences of messing it up are enormous, but there’s no way to practice a real-life scenario. It seems as simple as flipping a switch, but restoring power is more complicated than you might think. And I built a model power grid here in the studio to show you how this works. This is my last video in a deep dive series on widespread outages to the power grid, so go back and check out those other videos if you want to learn more. I’m Grady and this is Practical Engineering. In today’s episode we’re talking about black starts of the grid.

An ideal grid keeps running indefinitely. Maybe it sustains localized damage from lightning strikes, vehicle accidents, hurricanes, floods, and wayward squirrels, but the protective devices trigger circuit breakers to isolate those faults and keep them from disrupting the rest of the system. But, we know that no grid is perfect, and occasionally the damage lines up just right or the protective devices behave in unexpected ways that cascade into a widespread outage. I sometimes use the word blackout kind of freely to refer to any amount of electrical service disruption, but it’s really meant to describe an event like this: a widespread outage across most or all of an interconnected area. Lots of engineering, dedicated service from lineworkers, plenty of lessons learned from past mishaps, and a little bit of good fortune have all meant that we don’t see too many true blackouts these days, but they still happen, and they’re still a grid operator’s worst nightmare. We explored the extreme consequences that come from a large-scale blackout in a previous video. With those consequences in mind, the task of bringing a power grid back online from nothing (called a black start) is frightfully consequential, with significant repercussions if things go wrong.

The main reason why black starts are so complicated is that it takes power to make power. Most large-scale generating plants - from coal-powered, to gas-powered, to nuclear - need a fair amount of electricity just to operate. That sounds counterintuitive, and of course configurations and equipment vary from plant to plant, but power generating stations are enormous industrial facilities. They have blowers and scrubbers, precipitators and reactors, compressors, computers, lights, coffee makers, control panels and pumps (so many pumps): lubrication pumps, fuel pumps, feedwater pumps, cooling water pumps, and much much more. Most of this equipment is both necessary for the plant to run and requires electricity. Even the generators themselves need electricity to operate.

I don’t own a grid scale, three-phase generator (yet), but I do have an alternator for a pickup truck, and they are remarkably similar devices. You probably already know that moving a conductor through a magnetic field generates a current. This physical phenomenon, called induction, is the basis for almost all electricity generation on the grid. Some source of motion we call the prime mover, often a steam-powered turbine, spins a shaft called a rotor inside a set of coils. But you won’t see a magnet on the rotor of a grid-scale generator, just like (if you look closely inside the case) you won’t see a magnet inside my alternator. You just see another winding of copper wire. Turns out that this physical phenomenon works both ways. If you put a current through a coil of wire, you get a magnetic field. If that coil is on a rotor, you can spin it like so.

This is my model power plant. I got this idea from a video by Bellingham Technical College, but their model was a little more sophisticated than mine. Let me give you a tour. On the right we have the prime mover. Don’t worry about the fact that it’s an electric motor. My model power plant consumes more energy than it creates, but I didn’t want to build a mini steam turbine just for this demonstration. The thing that’s important is that the prime mover drives a 3-phase generator, in my case through this belt. And the generator you already saw is a car alternator that I “modified” to create alternating current instead of the direct current used in a vehicle. The alternator is connected to some resistors that simulate loads on the grid. And I have an oscilloscope hooked up to one of the phases so we can see the AC waveform. Yeah, all this is so we can just see that sine wave on the oscilloscope. It could have been a couple of tiny 3-phase motors; it could even have just been a signal generator. But, you guys love these models so I thought you deserved something slightly grander in scale. There are a few other things here too, including a second model power plant, but we’ll get to those in a minute.

The alternator I used in my model has two brushes of graphite that ride along the rotor so that we can supply current to the coil inside to create an electromagnet. This is called excitation, and it has a major benefit over using permanent magnets in a generator: it’s adjustable. Let’s power up the prime mover to see how it works. If there’s no excitation current, there’s no magnetic field, which means there’s no power. We’re just spinning two inert coils of wire right next to each other. But watch what happens when I apply some current to the brushes. Now the rotor is excited, and I have to say, I’m pretty excited too, because I can see that we’re generating power. As I increase the excitation current, we can see that the voltage across the resistor is higher, so we’re generating more power. Of course, this additional power doesn’t come for free. It also puts more and more mechanical load on the prime mover. You can see when I spin the alternator with no excitation current, it turns freely. But when I increase the current, it becomes more difficult to spin. Modern power plants adjust the excitation current in a generator to regulate the voltage of electricity leaving the facility, something that would be much harder to do in a device that used permanent magnets that don’t need electricity to create a magnetic field.

The power for the excitation system can come from the generator, but, like the other equipment I mentioned, it can’t start working until the plant is running. In fact, power plants often use around 5 to 10 percent of all the electricity they generate. That’s why a black start of a large power plant is often called bootstrapping, because the facility has to pick itself up by the bootstraps. It needs a significant amount of power both to start and maintain its own creation of power, and that poses an obvious challenge. You might be familiar with the standby generators used at hospitals, cell phone towers, city water pumps, and many other critical facilities where a power outage could have severe consequences. Lots of people even have small ones for their homes. These generators use diesel or natural gas for fuel and large banks of batteries to get started. Imagine the standby generator capacity that would be needed at a major power plant. Five percent of the nearest plant to my house, even at a quarter of its nameplate capacity, is 18 megawatts. That’s more than 100 of these.
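
If you want to check that math yourself, here’s the back-of-the-envelope version in Python. The nameplate rating and portable generator size are hypothetical round numbers I picked to be consistent with the 18 megawatt figure above, not data for any specific plant:

```python
# Back-of-the-envelope check on the standby-generation figure above.
# The 1,440 MW nameplate and 150 kW genset size are assumed values,
# chosen only to be consistent with the 18 MW number in the text.

nameplate_mw = 1440                         # hypothetical large power plant
derated_output_mw = nameplate_mw * 0.25     # running at a quarter of nameplate
station_load_mw = derated_output_mw * 0.05  # ~5% of output feeds the plant itself

portable_genset_kw = 150                    # a big towable generator (assumed size)
gensets_needed = station_load_mw * 1000 / portable_genset_kw

print(f"Station load just to run the plant: {station_load_mw:.0f} MW")
print(f"Roughly {gensets_needed:.0f} portable {portable_genset_kw} kW generators")
```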

It’s just not feasible to maintain that amount of standby generation capacity at every power plant. Instead, we designate black start sources that can either spin up without support using batteries and standby devices or that can remain energized without a connection to the rest of the grid. Obviously, these blackstart power plants are more expensive to build and maintain, so we only have so many of them spread across each grid. Their combined capacity can only supply a small fraction of electricity demands, but we don’t need them for that during a blackout. We just need them to create enough power so that larger base load plants can spin up. Hydropower plants are often used as blackstart sources because they only need a little bit of electricity to open the gates and excite the generators to produce electricity. Some wind turbines and solar plants could be used as blackstart sources, but most aren’t set up for it because they don’t produce power 24-7.

But, producing enough power to get the bigger plants started is only the first of many hurdles to restoring service during a blackout. The next step is to get the power to the plants. Luckily, we have some major extension cords stretched across the landscape. We normally call them transmission lines, but during a blackout, they become cranking paths. You can’t just energize those lines with a blackstart source right away, though. First they have to be isolated so that you don’t inadvertently try to power up cities along the way. All the substations along a predetermined cranking path disconnect their transformers to isolate the transmission lines and create a direct route. Once the blackstart source starts up and energizes the cranking path, a baseload power plant can draw electricity directly from the line, allowing it to spin up.

One trick to speed up recovery is to blackstart individual islands within the larger grid. That provides more flexibility and robustness in the process. But it creates a new challenge: synchronization. Let’s go back to the model to see how this works. I have both generating stations running now, each powering their own separate grid. This switch will connect the two together. But you can’t just flip it willy nilly. Take a look at my oscilloscope and it’s easy to see that these two grids aren’t synchronized. They’re running at slightly different frequencies. If I just flip the switch when the voltage isn’t equal between the two grids, there’s a surge in current as the two generators mechanically synchronize. We’re only playing with a few volts here, so it’s a little hard to see on camera. If I flip the switch when the two generators are out of sync, they jerk as the magnetic fields equalize their current. If the difference is big enough, the two generators actually fight against each other, essentially trying to drive each other like motors. It’s kind of fun with this little model, but something like this in a real power plant would cause tremendous damage to equipment. So during a black start, each island, and in fact each individual power plant that comes online, has to be perfectly synchronized (and this is true outside of black start conditions as well).

I can adjust the speed of my motors to get them spinning at nearly the exact same speed, then flip the switch when the waveforms match up just right. That prevents the surges of power between the two systems at the moment they’re connected. You can see that the traces on the oscilloscope are identical now, showing that our two island grids are interconnected. One way to check this is to simply connect a light between the same phase on the two grids. If the light comes on, you know there’s a difference in voltage between them and they aren’t synchronized. If the light goes off and stays off, there’s no voltage difference, meaning you’re good to throw the breaker. Older plants were equipped with a synchroscope that would show both whether the plant was spinning at the same speed as the grid (or faster or slower) and whether the phase angle was a match. I bought an old one for this video, but it needs much higher voltages than I’m willing to play with in the studio, so let’s just animate over the top of it. Operators would manually bring their generators up to speed, making slight adjustments to match the frequency of the rest of the grid. But matching the speed isn’t enough, you also have to match the phase, so this was a careful dance. As soon as the synchroscope needle both stopped moving and was pointing directly up, the operator could close the breaker.
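
That whole careful dance boils down to a few comparisons. Here’s a minimal sketch of the checks a sync-check scheme might perform before allowing a breaker to close; the tolerance values are ballpark assumptions on my part, not settings from any particular relay:

```python
# Minimal sketch of a sync-check before closing a breaker between a
# generator (or island) and the rest of the grid. Tolerances are
# assumed ballpark values, not settings from a specific relay.

def ok_to_close(gen_freq_hz, grid_freq_hz,
                gen_volts, grid_volts,
                phase_angle_deg,
                max_slip_hz=0.1, max_volt_diff_pct=5.0, max_angle_deg=10.0):
    """Return True only if frequency, voltage, and phase all match closely."""
    slip_ok = abs(gen_freq_hz - grid_freq_hz) <= max_slip_hz
    volt_ok = abs(gen_volts - grid_volts) / grid_volts * 100 <= max_volt_diff_pct
    angle_ok = abs(phase_angle_deg) <= max_angle_deg
    return slip_ok and volt_ok and angle_ok

# Slightly fast and 20 degrees out of phase: don't close yet.
print(ok_to_close(60.3, 60.0, 13800, 13800, 20.0))   # False
# Matched speed, voltage, and phase: safe to close the breaker.
print(ok_to_close(60.02, 60.0, 13750, 13800, 3.0))   # True
```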

During a black start, utilities can start restoring power to their customers, slowly matching generation capacity with demand as more and more power plants come online. Generally, the most critical loads will be prioritized during the recovery like natural gas infrastructure, communications, hospitals, and military installations. But even connecting customers adds complexity to restoration.

Some of our most power-hungry appliances only get more hungry the longer they’ve been offline. For example an outage during the summer means all the buildings are heating up with no access to air conditioning. When the power does come back on, it’s not just a few air conditioners ready to run. It’s all of them at once. Add that to refrigerators, furnaces, freezers, and hot water heaters, and you can imagine the enormous initial demand on the grid after an extended outage. And don’t forget that many of these appliances use inductive motors that have huge inrush currents. For example, here’s an ammeter on the motor of my table saw while I start it up. It draws a whopping 28 amps as it gets up to speed before settling down to 4 amps at no load. Imagine the demand from thousands of motors like this starting all at the exact same instant. The technical term for this is cold load pickup, and it can be as high as eight to ten times normal electrical demands before the diversity of loads starts to average out again, usually after about 30 minutes. So, grid operators have to be very deliberate about how many customers they restore service to at a time. If you ever see your neighbor a few blocks away getting power before you, keep in mind this delicate balancing act that operators have to perform in order to get the grid through the cold load pickup for each new group of customers that go online.
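
To put rough numbers on why operators restore customers in small blocks, here’s a quick sketch with made-up values; the feeder size, household demand, and pickup factor are illustrative assumptions, not utility data:

```python
# Rough illustration of cold load pickup, using assumed numbers.
# A feeder that normally carries a modest load can momentarily demand
# several times that when every thermostat and compressor starts at once.

homes_on_feeder = 2000
normal_demand_kw_per_home = 2.0     # assumed average household demand
cold_pickup_factor = 8              # text cites roughly 8-10x for ~30 minutes

normal_feeder_load_mw = homes_on_feeder * normal_demand_kw_per_home / 1000
cold_pickup_mw = normal_feeder_load_mw * cold_pickup_factor

print(f"Normal feeder load:      {normal_feeder_load_mw:.1f} MW")
print(f"Cold load pickup demand: {cold_pickup_mw:.1f} MW for roughly 30 minutes")
```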


The ability to black start a power grid quickly after a total collapse is so important because electricity is vital to our health and safety. After the 2003 blackout in the US, new reliability standards were issued, including one that requires grid operators to have detailed system restoration plans. That includes maintaining blackstart sources, even though it’s often incredibly expensive. Some standby equipment mostly does just that: stands by. But it still has to be carefully maintained and regularly tested in the rare case that it gets called into service. Also, the grid is incredibly vulnerable during a blackstart, and if something goes wrong, breakers can trip and you might have to start all over again. Utilities have strict security measures to try and ensure that no one could intentionally disable or frustrate the black start process. Finally, they do detailed analysis to make sure they can bring their grid up from scratch, including testing and even running drills to practice the procedures. All this cost and effort and careful engineering just to ensure that we can get the grid back up and running to power homes and businesses after a major blackout.

December 06, 2022 /Wesley Crump

How Long Would Society Last During a Total Grid Collapse?

November 22, 2022 by Wesley Crump

[Note that this article is a transcript of the video embedded above.]

In February 2021, a winter storm that swept through Texas caused one of the most severe power crises in American history. The cold weather created shockingly high electricity demands as people tried to keep their homes warm. But it also caused problems with the power supply because power plants themselves and their supporting infrastructure weren’t adequately protected against freezing weather. The result was that Texas couldn’t generate enough power to meet demand. Instead they would have to disconnect customers to reduce demands down to  manageable levels. But before grid operators could shed enough load from the system, the frequency of the alternating current dropped as the remaining generators were bogged down, falling below 59.4 hertz for over 4 minutes.

It might not seem like much, but that is a critical threshold in grid operations. It’s 1% below nominal. Power plants have relays that keep track of grid frequency and disconnect equipment if anything goes awry to prevent serious damage. If the grid frequency drops below 59.4 hertz, the clock starts ticking. And if it doesn’t return to the nominal frequency within 9 minutes, the relays trip! That means the Texas grid came within a bathroom break of total collapse. If a few more large power plants had tripped offline, or if not enough customers had been shed from the system in time, it’s likely that the frequency would have continued to drop until every single generator on the grid was disconnected.
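
That relay behavior is simple enough to sketch in a few lines. This is a simplified illustration of the 59.4 hertz timer described above, not ERCOT’s actual protection settings (real schemes have multiple thresholds and much faster stages):

```python
# Simplified sketch of the under-frequency timer described above.
# The 59.4 Hz and 9 minute figures come from the text; the sampling
# and reset behavior here are simplifying assumptions.

NOMINAL_HZ = 60.0
THRESHOLD_HZ = NOMINAL_HZ * 0.99          # 1% below nominal = 59.4 Hz
TRIP_AFTER_SECONDS = 9 * 60               # trip if not recovered within 9 minutes

def check_relay(freq_samples_hz, sample_period_s=1.0):
    """Return the time (in seconds) at which the relay would trip, or None."""
    seconds_below = 0.0
    for i, f in enumerate(freq_samples_hz):
        if f < THRESHOLD_HZ:
            seconds_below += sample_period_s
            if seconds_below >= TRIP_AFTER_SECONDS:
                return i * sample_period_s
        else:
            seconds_below = 0.0            # frequency recovered, reset the clock
    return None

# Four minutes below threshold (like February 2021), then recovery: no trip.
samples = [59.3] * 240 + [59.9] * 60
print(check_relay(samples))   # None -- the grid squeaked by
```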

Thankfully, that nightmare scenario was avoided. Still, despite operators preventing a total collapse, the 2021 power crisis was one of the most expensive and deadly disasters in Texas history. If those four minutes had gone differently, it’s almost impossible to imagine how serious the consequences would be. Let’s put ourselves in the theoretical boots of someone waking up after that frigid February night in Texas, assuming the grid did collapse, and find out. I’m Grady, and this is Practical Engineering. In today’s episode, we’re talking about the impacts of blackouts on other infrastructure.

Every so often some loud noise wakes you from your sleep: a truck backfiring on the street outside, a baby crying, a cat knocking something off a shelf. But it’s a very different thing altogether to be awoken by silence, your unconscious mind telling you that the sounds you should be hearing are gone. It only takes a groggy minute to piece it together. The refrigerator is silent, no air is flowing through the heating register, the ceiling fan above your head is slowly coming to a stop. The power is out. You check your phone. It’s 4AM. Nothing you can really do but go back to sleep and hope they get it fixed by daylight.

Most of us have experienced a power outage at some point, but they’re usually short (lasting on the order of minutes or hours) and they’re mostly local (affecting a small area at a time). A wide area interconnection - that’s the technical term for a power grid - is designed that way on purpose. It has redundancies, multiple paths that power can take to get to the same destination, and power users and producers are spread out, reducing the chance that they could be impacted all at once. But having everyone interconnected is a vulnerability too, because if things go very wrong, everyone is affected. We’re in the midst of a deep dive series on wide scale outages to the power grid, and a mismatch between supply and demand (like what happened in Texas) is only one of the many reasons that could cause a major blackout. Natural disasters, engineering errors, and deliberate attacks can all completely collapse a grid, and - at least for the first few hours of an outage - you might not even know that what you’re experiencing is any more serious than a wayward tree branch tripping the fuse on the transformer outside your house.

You wake up 3 hours later, cold, sunlight peeking in through your bedroom window. The power is still off. You grab your cell phone to try and figure out what’s going on. It has a full battery from charging overnight, and you have a strong signal too. You try to call a friend, but the call won’t go through. You try a few more times, but still, nothing more than a friendly voice saying “All Circuits Are Busy.”

There is a vast array of pathways along which information flows between people across the globe, and they all use grid power to function. Fiber networks use switches and optical terminals distributed throughout the service area. Cable TV and DSL networks have nodes that each serve around 500 to 1,000 customers and require power. Cellular networks use base stations mounted on towers or rooftops. Major telecommunications facilities are usually on prioritized grid circuits and may even have redundant power feeds from multiple substations, but even during a blackout where the entire grid is completely disabled, you might still have service. That’s because most telecommunication facilities are equipped with backup batteries that can keep them running during a power outage for 4 to 8 hours. Critical facilities like cellular base stations and data centers often have an on-site backup generator. These generators have enough fuel to extend the resiliency beyond 24 to 48 hours. That said, major emergencies create huge demands on telecommunication services as everyone is trying to find and share information at once, so you might not be able to get through even if the services are still available. In the US, the federal government works with telecommunications providers to create priority channels so that 911 calls, emergency management communications, and other matters related to public safety can get through even when the networks are congested.
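
Here’s a rough way to see where those ride-through numbers come from. The battery size, site load, and fuel figures below are assumptions for illustration, not data from any particular carrier:

```python
# Rough sketch of how long a telecom site might ride through an outage.
# All equipment sizes here are assumed, illustrative values.

battery_capacity_kwh = 40    # assumed battery bank at a cell site
site_load_kw = 6             # assumed average load of radios, rectifiers, HVAC

battery_hours = battery_capacity_kwh / site_load_kw

generator_fuel_gal = 100     # assumed on-site fuel tank
fuel_burn_gal_per_hr = 2.0   # assumed consumption at this load

generator_hours = generator_fuel_gal / fuel_burn_gal_per_hr

print(f"Battery ride-through: about {battery_hours:.0f} hours")
print(f"Generator extension:  about {generator_hours:.0f} more hours")
```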

Since you’re trying to make a personal call and you aren’t enrolled in the Telecommunications Service Priority program, you’re not getting through. Just then, an emergency alert appears on your screen. It says that there’s a power grid failure and to prepare for an extended outage. The reality of the situation is just starting to set in. Since most people have a cell phone, wireless emergency alerts have become an important addition to the Emergency Alert System that connects various levels of government to tv, radio, satellite, and telephone companies to disseminate public warnings and alerts. During a blackout, sharing information isn’t just for likes on social media. It’s how we keep people safe, connect them with resources, and maintain social order. Two-way communications like cell phones and the internet might not last long during a grid outage, so one-way networks like radio and television broadcasts are essential to keep people informed. These facilities are often equipped with more backup fuel reserves and even emergency provisions for the staff so that they can continue to operate during a blackout for weeks if necessary.

Jump ahead a couple of days. Your circumstances now start to heavily dictate your experience. Even an outage of this length can completely upend your life if you, for example, depend on medication that must be refrigerated or electrically-powered medical equipment (like a ventilator or dialysis machine). But for many, a blackout on the order of a day or two is still kind of fun, a diversion from the humdrum of everyday life. Maybe you’ve scrounged together a few meals from what’s remaining in your pantry, enjoyed some candlelit conversations with neighbors, seen more stars in the night sky than you ever have in your life. But after those first 48 hours, things are starting to get more serious. You ponder how long you can stay in your home before needing to go out for supplies as you head into the kitchen to get a glass of water. You open the tap, and nothing comes out.

A public water supply is another utility highly dependent on a functioning electrical grid. Pumping, cleaning, and disinfecting water to provide a safe source to everyone within a city is a power-intensive ordeal. Water is heavy, after all, and just moving it from one place to another takes a tremendous amount of energy. Most cities use a combination of backup generators and elevated storage to account for potential emergencies. Those elevated tanks, whether they are water towers or just ground-level basins built on hillsides, act kind of like batteries to make sure the water distribution system stays pressurized even if pumps lose power. But those elevated supplies don’t last forever. Every state has its own rules about how much is required. In Texas, large cities must have at least 200 gallons or 750 liters of water stored for every connection to the system, and half of that needs to be in elevated or pressurized tanks so that it will still flow into the pipes if the pumps aren’t working. Average water use varies quite a bit by location and season, but that amount of storage is roughly enough to last a city two days under normal conditions. Combine the backup storage with the backup generation system at a typical water utility, and maybe they can stretch to 3 or 4. Without a huge mobilization of emergency resources, water can quickly become the most critical resource in an urban area during a blackout. But don’t forget the related utility we depend on as well: sewage collection.
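
The two-day figure is easy to reproduce. Here’s the quick arithmetic, with an assumed (not cited) average daily use per connection:

```python
# Quick estimate of how long required storage lasts, using the Texas
# figure from the text (200 gallons per connection) and an assumed
# typical daily use per connection.

storage_per_connection_gal = 200      # Texas minimum cited in the text
daily_use_per_connection_gal = 100    # assumed average use per connection per day

days_of_supply = storage_per_connection_gal / daily_use_per_connection_gal
print(f"Roughly {days_of_supply:.0f} days of supply under normal demand")
```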

Lift stations that pump raw sewage and treatment plants that clean it to a level where it’s safe to release back into the environment are energy-intensive as well. Most states require that lift stations and treatment plants have backup power supplies or enough storage to avoid overflows during an outage, but usually those requirements are for short-term disruptions. When power is lost for more than a day or two, these facilities won’t be able to continue functioning without additional fuel and maintenance. Even in the best case scenario, that means raw wastewater in the sewers will have to bypass treatment plants and be discharged directly into waterways like rivers and oceans. In the worst case, sewers and lift stations will overflow, exposing the people within cities to raw sewage and creating a public health emergency.

Flash forward to a week after the start of the blackout, and any fun from the change of pace is long gone. You still keep your cell phone battery charged from your car, but you rarely get a signal and phone calls almost never connect. Plus, your car’s almost out of gasoline and the fuel at filling stations has long been sent to backup generators at critical facilities. You are almost certainly running low on food and water after a week, even if you’ve been able to share or barter with neighbors or visit one of the rare stores that was willing to open its doors and accept cash. By now, only the most prioritized facilities like hospitals and radio stations plus those with solar or wind charging systems still have a functioning backup power supply. Everything else is just dead. And now you truly get a sense of how complex and interconnected our systems of infrastructure are, because there’s almost nothing that can frustrate the process of restoring power more than a lack of power itself. Here’s what I mean:

Power plants are having trouble purchasing fuel because, without electricity to power data centers and good telecommunications, banks and energy markets are shut down. Natural gas compressors don’t have power, so they can’t send fuel to the plants. Railway signals and dispatch centers are down, so the coal trains are stopped. Public roadways are snarled because none of the traffic signals work, creating accidents and reducing the capacity at intersections. Even if workers at critical jobs like power plants, pipelines, and substations still have gas in their vehicles, they are having a really hard time actually getting to work. And even if they can get there, they might not know what to do. Most of our complicated infrastructure systems like oil and gas pipelines, public water systems, and the electrical grid are operated using SCADA - networked computers, sensors, and electronic devices that perform a lot of tasks automatically… if they have power. Even if you can get people to the valves, switches, pump stations, and tanks to help with manual operations, they might not know under which parameters to operate the system. The longer the outage lasts, the more reserves of water, fuel, food, medicine, and goods deplete, and the more systems break down. Each of these complicated systems is often extremely difficult to bring back online alone, and nearly impossible without the support of adjacent infrastructure.

Electricity is not just a luxury. It is a necessity of modern life. Even ignoring our own direct use of it, almost everything we depend on in our daily lives, and indeed the orderly conduct of a civil society, is undergirded by a functioning electrical grid. Of course, life as we know it doesn’t break down as soon as the lights go out. Having gone without power for three days myself during the Texas winter storm, I have seen first hand how kind and generous neighbors can be in the face of a difficult situation. But it was a difficult situation, and a lot of people didn’t come through on the other side of those three days quite as unscathed as I did.


Natural disasters and bad weather regularly create localized outages, but thankfully true wide-scale blackouts have been relatively few and far between. That doesn’t mean they aren’t possible, though, so it’s wise to be prepared. In general, preparedness is one of the most important roles of government, and at least in the US, there’s a lot we get right about being ready for the worst. That said, it makes sense for people to have some personal preparations for long-duration power outages too, and you can find recommendations for supplies to keep on hand at FEMA’s website. At both an institutional and personal level, finding a balance between the chance of disaster striking and the resources required to be prepared is a difficult challenge, and not everyone agrees on where to draw the line. Of course, the other kind of preparedness is our ability to restore service to a collapsed power grid and get everyone back online as quickly as possible. That’s called a black start, and it sounds simple enough, but there are some enormous engineering challenges associated with bringing a grid up from nothing. That’s the topic we’ll cover in the next Practical Engineering video, so make sure you’re subscribed so you don’t miss it. Thank you for watching, and let me know what you think.

November 22, 2022 /Wesley Crump

How Would a Nuclear EMP Affect the Power Grid?

November 08, 2022 by Wesley Crump


[Note that this article is a transcript of the video embedded above.]

Late in the morning of April 28, 1958, the aircraft carrier USS Boxer was about 70 miles off the coast of the Bikini Atoll in the Pacific Ocean. The crew of the Boxer was preparing to launch a high-altitude helium balloon. In fact, this would be the 17th high-altitude balloon to be launched from the ship. But this one was a little different. Where those first 16 balloons carried some instruments and dummy payloads, attached to this balloon was a 1.7 kiloton nuclear warhead, code named Yucca. The ship, balloon, and bomb were all part of Operation Hardtack, a series of nuclear tests conducted by the United States in 1958. Yucca was the first test of a nuclear blast in the upper limits of earth’s atmosphere. About an hour and a half after the balloon was launched, it reached an altitude of 85,000 feet or about 26,000 meters. As two B-36 Peacemaker bombers loaded down with instruments circled the area, the warhead was detonated.

Of course, the research team collected all kinds of data during the blast, including the speed of the shock wave, the effect on air pressure, and the magnitude of nuclear radiation released. But, from two locations on the ground, they were also measuring the electromagnetic waves resulting from the blast. It had been known since the first nuclear explosions that the blasts generate an electromagnetic pulse or EMP, mainly because it kept frying electronic instruments. But until Hardtack, nobody had ever measured the waves generated from a detonation in the upper atmosphere. What they recorded was so far beyond their expectations, that it was dismissed as an anomaly for years. All that appears in the report is a casual mention of the estimated electromagnetic field strength at one of the monitoring stations being around 5 times the maximum limit of the instruments.

It wasn’t until 5 years later that the US physicist Conrad Longmire would propose a theory for electromagnetic pulses from high-altitude nuclear blasts that is still the widely accepted explanation for why they are orders of magnitude stronger than those generated from blasts on the ground. Since then, our fears of nuclear war have included not only the scenario of a warhead hitting a populated area, destroying cities and creating nuclear fallout, but also the possibility of one detonating far above our heads in the upper atmosphere, sending a strong enough EMP to disrupt electronic devices and even take out the power grid. As with most weapons, the best and most comprehensive research on EMPs is classified. But, in 2019, a coalition of energy organizations and government entities called the Electric Power Research Institute (or EPRI) funded a study to try and understand exactly what could happen to the power grid from a high altitude nuclear EMP. It’s not the only study of its kind, and it’s not without criticism from those who think it leans optimistic, but it has the juiciest engineering details of all the research I could find. And the answers are quite a bit different than Hollywood would have you believe. This is a summary of that report, and it’s the first in a deep dive series of videos about large-scale threats to the grid. I’m Grady, and this is Practical Engineering. In today’s episode, we’re talking about the impact of a nuclear EMP on our power infrastructure.

A nuclear detonation is unwelcome in nearly every circumstance. These events are inherently dangerous and the physics of a blast go way beyond our intuitions. That’s especially true in the upper atmosphere where the detonation interacts with earth’s magnetic field and its atmosphere in some unique ways to create an electromagnetic pulse. An EMP actually has three distinct components, all formed by different physical mechanisms, that can have significantly different impacts here on Earth’s surface. The first part of an EMP is called E1. This is the extremely fast and intense pulse that immediately follows detonation.

The gamma rays released during any nuclear detonation collide with electrons, ionizing atoms and creating a burst of electromagnetic radiation. That’s generally bad on its own, but when detonated high in the atmosphere, earth’s magnetic field interacts with those free electrons to produce a significantly stronger electromagnetic pulse than if detonated within the denser air at lower altitudes. The E1 pulse comes and goes within a few nanoseconds, and the energy is somewhat jokingly referred to as DC to daylight, meaning it’s spread across a huge part of the electromagnetic spectrum.

The E1 pulse generally reaches anywhere within a line of sight of the detonation, and for a high-altitude burst, this can cover an enormous area of land. At the height of the Yucca test, that’s a circle with an area larger than Texas. A weapon at 200 kilometers in altitude could impact a significant fraction of North America. But, not everywhere within that circle experiences the strongest fields. In general, the further from the blast you are, the lower the amplitude of the EMP. But, because of earth’s magnetic field, the maximum amplitude occurs a little bit south of ground zero (in the northern hemisphere), creating this pattern called a smile diagram. But no one will be smiling to find out that they are within the affected area of a high altitude nuclear blast.
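
The “larger than Texas” claim is easy to sanity-check with simple spherical-earth geometry. Here’s the calculation, ignoring atmospheric refraction and treating the footprint as a flat circle out to the horizon distance:

```python
import math

# Line-of-sight footprint of a high-altitude burst, using simple
# spherical-earth geometry (an approximation, with no refraction).

EARTH_RADIUS_KM = 6371

def footprint_area_km2(burst_altitude_km):
    # Approximate ground distance to the horizon as seen from the burst.
    horizon_km = math.sqrt(2 * EARTH_RADIUS_KM * burst_altitude_km)
    return math.pi * horizon_km ** 2

yucca_km = 26      # roughly 85,000 feet
print(f"Yucca-altitude footprint: {footprint_area_km2(yucca_km):,.0f} km^2")
print(f"Texas, for comparison:    {696_000:,} km^2")
print(f"200 km burst footprint:   {footprint_area_km2(200):,.0f} km^2")
```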

Although a weapon like this wouldn’t damage buildings, create nuclear fallout, be felt by humans, or probably even be visible to most, that E1 pulse can have a huge effect on electronic devices. You’re probably familiar with antennas that convert radio signals into voltage and current within a conductor. Well, for a strong enough pulse spread across a huge range of frequencies, essentially any metallic object will act like an antenna, converting the pulse into massive voltage spikes that can overwhelm digital devices. And, the E1 pulse happens so quickly that even devices meant to protect against surges may not be effective. Of course, with just about everything having embedded electronics these days, this has far reaching implications. But on the grid, there are really only a few places where an E1 pulse is a major concern. The first is with the control systems within power plants themselves. The second is communications systems used to monitor and record data to assist grid operators. The EPRI report focused primarily on the third hazard associated with an E1 pulse: digital protective relays.

Most folks have seen the breakers that protect circuits in your house. The electrical grid has similar equipment used to protect transmission lines and transformers in the event of a short circuit or fault. But, unlike the breakers in your house that do both the sensing for trouble and the circuit breaking all in one device, those roles are separate on the grid. The physical disconnecting of a circuit under load is done by large, motor controlled contactors quenched in oil or dielectric gas to prevent the formation of arcs. And the devices that monitor voltage and current for problems and tell the breakers when to fire are called relays. They’re normally located in a small building in a substation to protect them from weather. That’s because most relays these days are digital equipment full of circuit boards, screens, and microelectronics. And all those components are particularly susceptible to electromagnetic interference. In fact, most countries have strict regulations about the strength and frequency of electromagnetic radiation you can foist upon the airwaves, rules that I hope I’m not breaking with this device.

This is a pulse generator I bought off eBay just to demonstrate the weird effects that electromagnetic radiation can have on electronics. It just outputs a 50 MHz wave through this antenna, and you can see when I turn it on near this cheap multimeter, it has some strange effects. The reading on the display gets erratic, and sometimes I can get the backlight to turn on. You can also see the two different types of E1 vulnerabilities here. An EMP can couple to the wires that serve as inputs to the device. And an EMP can radiate the equipment directly. In both cases, this little device wasn’t strong enough to cause permanent damage to the electronics, but hopefully it helps you imagine what’s possible when high strength fields are applied to sensitive electronic devices.

The EPRI report actually subjected digital relays to strong EMPs to see what the effects would be. They used a Marx generator which is a voltage multiplying circuit, so I decided to try it myself. A Marx generator stores electricity in these capacitors as they charge in parallel. When triggered, the spark gaps connect all the capacitors in series to generate very high voltages, upwards of 80 or 90 kilovolts in my case. My fellow YouTube engineer Electroboom has built one of these on his channel if you want to learn more about them. Mine generates a high voltage spark when triggered by this screwdriver. Don’t try this at home, by the way. I didn’t design an antenna to convert this high voltage pulse into an EMP, but I did try a direct injection test. This cheap digital picture frame didn’t stand a chance. Just to clarify, this is in no way a scientific test. It’s just a fun demonstration to give you an idea of what an E1 pulse might be capable of.

The E2 pulse is slower than E1 because it’s generated in a totally different way, this time from the interaction of gamma rays and neutrons. It turns out that an E2 pulse is roughly comparable to a lightning strike. In fact, many lightning strikes are more powerful than the E2 pulses that could be generated by high-altitude nuclear detonations. Of course, the grid’s not entirely immune to lightning, but we do use lots of lightning protection technology. Most equipment on the grid is already hardened against some high voltage pulses such that lightning strikes don’t usually create much damage. So, the E2 pulse isn’t as threatening to our power infrastructure, especially compared to E1 and E3.

The final component of an EMP, called E3, is, again, much different from the other two. It’s really not even a pulse at all, because it’s generated in an entirely different way. When a nuclear detonation happens in the upper atmosphere, earth’s magnetic field is disturbed and distorted. As the blast dissipates, the magnetic field slowly returns to its original state over the course of a few minutes. This is similar to what happens when a geomagnetic storm, triggered by an eruption on the sun, disturbs earth’s magnetic field, and large solar events could potentially be a bigger threat than a nuclear EMP to the grid. In both cases, it’s because of the disturbance and movement of earth’s magnetic field. You probably know what happens when you move a magnetic field through a conductor: you generate a current. We call that coupling, and it’s essentially how antennas work. And in fact, antennas work best when their size matches the size of the electromagnetic waves.

For example, AM radio uses frequencies down to 540 kilohertz. That corresponds to wavelengths that can be upwards of 1800 feet or 550 meters, big waves. Rather than serving as a place to mount antennas like FM radio or cell towers, AM radio towers are the antenna. The entire metal structure is energized! You can often tell an AM tower by looking at the bottom because they sit atop a small ceramic insulator that electrically separates them from the ground. As you can imagine, the longer the wavelength, the larger an antenna has to be to couple well with the electromagnetic radiation. And hopefully you see what I’m getting at. Electrical transmission and distribution lines often run for miles, making them the ideal place for an E3 pulse to couple and generate current. Here’s why that’s a problem.
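
The wavelength figure is just the speed of light divided by the frequency, if you want to check it:

```python
# Wavelength of a radio wave: speed of light divided by frequency.

C_M_PER_S = 299_792_458

def wavelength_m(freq_hz):
    return C_M_PER_S / freq_hz

print(f"AM at 540 kHz:  {wavelength_m(540e3):.0f} m  (~1,800 ft)")
print(f"AM at 1700 kHz: {wavelength_m(1700e3):.0f} m")
print(f"FM at 100 MHz:  {wavelength_m(100e6):.1f} m")
```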

All along the grid we use transformers to change the voltage of electricity. On the transmission side, we increase the voltage to reduce losses in the lines. And on the distribution side, we lower the voltage back down to make it safer for customers to use in their houses and buildings. Those transformers work using electromagnetic fields. One coil of wire generates a magnetic field that passes through a core to induce current to flow through an adjacent coil. In fact, the main reason we use alternating current on the grid is because it allows us to use these really simple devices to step voltage up or down. But transformers have a limitation.

Up to a certain point, most materials used for transformer cores have a linear relationship between how much current flows and the strength of the resulting magnetic field. But, this relationship breaks down at the saturation point, beyond which additional current won’t create much further magnetism to drive current on the secondary winding. An E3 pulse can induce a roughly DC flow of current through transmission lines. So you have DC on top of AC, which creates a bias in the sine wave. If there’s too much DC current, the transformer core might saturate when current moves in one direction but not the other, distorting the output waveform. That can lead to hot spots in the transformer core, damage to devices connected to the grid that expect a nice sinusoidal voltage pattern, and lots of other funky stuff.
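
Here’s a toy numerical sketch of that half-cycle saturation effect. The saturation level and DC offset are made-up, normalized values chosen only to show one polarity clipping while the other doesn’t:

```python
import math

# Toy illustration of half-cycle saturation: a quasi-DC offset pushes the
# transformer core past its saturation limit on one polarity only.
# All values are normalized, illustrative numbers.

SATURATION = 1.1     # assumed core saturation level (per-unit flux)
DC_OFFSET = 0.3      # assumed quasi-DC flux bias from E3-type currents

for step in range(0, 360, 30):
    ac_flux = math.sin(math.radians(step))       # normal AC flux swing
    biased = ac_flux + DC_OFFSET                 # DC bias shifts the whole wave up
    clipped = max(min(biased, SATURATION), -SATURATION)
    marker = "  <-- saturated" if biased != clipped else ""
    print(f"{step:3d} deg: flux {clipped:+.2f}{marker}")
```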

So what are the implications of all this? For the E1 pulse damaging some relays, that’s probably not a big deal. There are often redundant paths for current to flow in the transmission system. That’s why it’s called the grid. But the more equipment that goes offline and the greater the stress on the remaining lines, the greater the likelihood of a cascading failure or total collapse. EPRI ran simulations of a one-megaton bomb detonated at 200 kilometers in altitude. They estimated that about 5% of transmission lines could have a relay that gets damaged or disrupted by the resulting EMP. That alone probably isn’t enough to cause a large-scale blackout of the power grid, but don’t forget about E3. EPRI found that the third part of an EMP could lead to regional blackouts encompassing multiple states because of transformer core saturation and imbalances between supply and demand of electricity. Their modeling didn’t predict widespread damage to the transformers themselves, and that’s a good thing because power transformers are large, expensive devices that are hard to replace, and most utilities don’t keep many spares sitting around. All that being said, their report isn’t without criticism, and many believe that an EMP could result in far more damage to electric power infrastructure.
When you combine the effects of the E1 pulse and the E3 pulse, it’s not hard to imagine how the grid could be seriously disabled. It’s also easy to see how, even if the real damages to equipment aren’t that significant, the widespread nature of an EMP, plus its potential impacts on other systems like computers and telecommunications, has the potential to frustrate the process of getting things back online. A multi-day, multi-week, or even multi-month blackout isn’t out of the question in the worst-case scenario. It’s probably not going to cause a Hollywood-style return to the Stone Age for humanity, but it is certainly capable of causing a major disruption to our daily lives. We’ll explore what that means in a future video.

November 08, 2022 /Wesley Crump

Endeavour's Wild Journey Through the Streets of Los Angeles

October 18, 2022 by Wesley Crump

[Note that this article is a transcript of the video embedded above.]

In May of 1992, the Space Shuttle Endeavour launched to low earth orbit on its very first flight. That first mission was a big one: the crew captured a wayward communications satellite stuck in the wrong orbit, attached a rocket stage, and launched it back into space in time to help broadcast the Barcelona Summer Olympics. Endeavour went on to fly 25 missions, spending nearly a year total in space and completing 4,671 trips around the earth. But even though the orbiter was decommissioned after its final launch in 2011, it had one more mission to complete: a 12 mile (or 19 kilometer) trip through the streets of Los Angeles to be displayed in the California Science Center. Endeavour’s 26th mission was a lot slower and a lot shorter than the previous 25, but it was still full of fascinating engineering challenges. This October marks the 10-year anniversary of the nearly 3-day trip, so let’s reminisce on this incredible feat and dive into what it took to get the orbiter safely to its final home. I’m Grady, and this is Practical Engineering. On today’s episode, we’re talking about the Space Shuttle Endeavour transport project.

As midnight approached on October 11, 2012, the Space Shuttle Endeavour began its harrowing (if somewhat sluggish) journey from LAX airport to the California Science Center near downtown LA. Although Endeavour traveled into space 25 times, launched a number of satellites, visited Mir, helped assemble the International Space Station, and even repaired the Hubble Telescope, it was never designed to navigate the busy streets of an urban area. But, despite spending so much of its career nearly weightless, it was too heavy for a helicopter, and it couldn’t be dismantled without causing permanent damage to the heat tiles, so the Science Center decided to foot the roughly $10 million bill to move the shuttle overland. The chilly late-night departure from the hangar at LAX was the start of the transport, but Endeavour’s journey to Exposition Park really started more than a year beforehand.

In April 2011, NASA awarded Endeavour to the California Science Center, one of only four sites to receive a retired shuttle. The application process leading up to the award and the planning and engineering that quickly followed were largely an exercise in logistics. You see, Endeavour is about 122 feet (37 meters) long with a 78-foot (24-meter) wingspan, and it stood 58 feet (18 meters) tall to the top of the vertical stabilizer during transport. It also weighs a lot, around 180,000 pounds (80,000 kg), about as much as a large aircraft. Transporting the shuttle through Los Angeles would not be a simple feat. So, the Science Center worked with a number of engineering firms in addition to their heavy transport contractor (many of whom offered their services pro bono) to carefully plan the operation.

The most critical decision to be made was what route Endeavour would take through the streets of LA. The Shuttle couldn’t fit through an underpass, which meant it would have to go over the 405, the only major freeway along its path. It would also face nearly countless obstacles on its journey, including trees, signs, traffic signals, and buildings. 78 feet is wider than most two-lane city streets, and there are a lot of paths in Los Angeles that a Space Shuttle could never traverse. And this isn’t a sleepy part of the city either. Exposition Park and the Science Center are just outside downtown Los Angeles. The engineering team looked at numerous routes to get the Shuttle to its destination, evaluating the obstacles along the way. They ultimately settled on a 12 mile (or 19 kilometer) path that would pass through Inglewood and Leimert Park.

On the NASA side, they had been stripping the Shuttle of the toxic and combustible fuel system used for the reaction control thrusters, along with explosive devices like hatch covers, to make the vehicle safe for display at a museum. With Endeavour attached to the top of a 747 jet, NASA made a series of low-altitude flyovers around California to celebrate the shuttle’s accomplishments and retirement before landing and offloading the vehicle at LAX, a short distance but a long journey away from its final destination. Three weeks later, the last leg of that journey began.

For its ride, the shuttle would sit on top of the Overland Transporter, a massive steel contraption built by NASA in the 1970s to move shuttles between Palmdale and Edwards Air Force Base. Even though it was designed for the shuttle program, this was Endeavour’s first ride on the platform. Before this move, the transporter had been parked in the desert for the last 30 years since the last shuttle was assembled in Palmdale in 1985. Just like the Shuttle Carrier Aircraft, a modified Boeing 747 that ferried the shuttles on its back, and the main fuel tank that attached to the orbiter during launch, the overland transporter used ball mounts that fit into sockets on the shuttle’s underside (two aft and one forward). The contractor used four Self-Propelled Modular Transporters (or SPMTs) to support and move the shuttle. These heavy haul platforms have a series of axles and wheels, all of which can be individually controlled to steer left or right, crab sideways, or even rotate in place (all of which were needed to get this enormous spaceship through the narrow city streets). The SPMTs used for the Endeavour transport also included a hydraulic suspension that could raise or lower the Shuttle to keep it balanced on uneven ground and help avoid obstacles. Each of the four SPMTs could be electronically linked to work together as a single vehicle. An operator with a joystick walked alongside the whole assembly, controlling the move with the help of a team of spotters all around the vehicle. And yes, it was slow enough to walk next to it the entire trip.
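
For a sense of how that rotate-in-place trick works geometrically, here’s a small Python sketch. The wheel layout and pivot point are invented coordinates, not the actual transporter’s geometry.

```python
import math

def steering_angles(wheel_positions, pivot):
    """To rotate the platform about `pivot`, each wheel must roll along a
    circle centered there, so it steers perpendicular to the line from the
    pivot to that wheel. Returns angles in degrees measured from the x-axis."""
    angles = []
    for x, y in wheel_positions:
        radial = math.atan2(y - pivot[1], x - pivot[0])
        angles.append(round(math.degrees(radial + math.pi / 2) % 360, 1))
    return angles

# Four corner axle groups of a made-up 10 m x 4 m platform, pivoting about its center.
wheels = [(-5, -2), (-5, 2), (5, 2), (5, -2)]
print(steering_angles(wheels, pivot=(0.0, 0.0)))
# To crab sideways instead, every axle simply steers to the same angle.
```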

About 6 hours into the move, the Shuttle pulled up to the shopping center at La Tijera and Sepulveda Eastway, the first of several stops to allow the public a chance to see the spectacle while also giving crews time to coordinate ahead of the move. Huge crowds gathered all along the route during the move, especially at these pre-planned stops. In fact, the transport project may be one of the most recorded events in LA history, a fact I’m sure gave a little bit of trepidation to the engineers and contractors involved in the project.

Even though this move was pretty unique, super heavy transport projects aren’t unusual. We move big stuff along public roadways pretty regularly when loads are quote-unquote “non-divisible” and other modes of transportation aren’t feasible. I won’t go into a full engineering lesson on roadway load limits here, but I’ll give you a flavor of what’s involved. Every area of pavement sustains a minute amount of damage every time a vehicle drives over it. Just like bending a paperclip over and over eventually causes it to break, even small deflections in asphalt and concrete pavements eventually cause them to deteriorate. Those tiny bits of damage add up over time, but some are tinier than others. As you might expect, the magnitude of that damage is proportional to the weight of the vehicle. But, it’s not a linear relationship. The most widely used road design methodology estimates that the damage caused to a pavement is roughly proportional to the axle load raised to the power of 4. That means it would take thousands of passenger vehicles to create the same amount of damage to the pavement as a single fully-loaded semi truck. And it’s not just pavement. Heavy vehicles can cause embankments to fail and underground utilities like sewer and water lines to collapse.
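
Here’s the back-of-the-envelope version of that rule of thumb in Python. The 18,000-pound reference axle is the standard one used in US pavement design; the car and truck axle loads are rough illustrative assumptions.

```python
STANDARD_AXLE_LB = 18_000  # the reference "equivalent single axle load"

def relative_damage(axle_load_lb: float) -> float:
    """Pavement damage from one axle pass, relative to one standard axle pass,
    using the rough fourth-power relationship."""
    return (axle_load_lb / STANDARD_AXLE_LB) ** 4

car_axle = relative_damage(2_000)      # assumed passenger car axle
truck_axle = relative_damage(18_000)   # fully loaded legal truck axle

print(f"car axle:   {car_axle:.6f} of a standard axle pass")
print(f"truck axle: {truck_axle:.1f} standard axle passes")
print(f"car passes equal to one truck pass: {truck_axle / car_axle:,.0f}")
# The ratio lands in the thousands, which is where the comparison between
# passenger cars and a single loaded semi comes from.
```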

Because heavy vehicles wear out roadways so quickly, states have load limits on trucks to try and maintain some balance between the benefits of the roadway to commerce and the cost of maintenance and replacement. If you want to exceed that limit, you have to get a permit, which can be a pretty straightforward process in some cases, or can require you to do detailed engineering analysis in others. Of course, nearly every state in the US has different rules, and even cities and counties within the states can have requirements for overweight vehicles. Most states also have exemptions to load limits for certain industries like agricultural products and construction equipment. But, curiously, no state has an exemption for space shuttles. So, in addition to picking a route through which the orbiter could fit, a big part of the Endeavour transport project involved making sure the weight of the shuttle plus the transporter plus the SPMTs wouldn’t seriously damage the infrastructure along the way. The engineering team prepared detailed maps of all the underground utilities that could be damaged or crushed by the weight of the orbiter, and roughly 2,700 steel plates borrowed from as far away as Nevada and Arizona were placed along the route to distribute the load.
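
And here’s a crude Python sketch of why those steel plates help. All of the numbers are assumptions I made up for illustration; the real analysis considered actual axle loads, plate sizes, and whatever was buried underneath.

```python
# The same wheel-group load spread over a bigger footprint means much less
# pressure reaching the soil and utilities below. Idealized: assumes the
# plate spreads the load evenly, which real plates only approximate.

wheel_group_load_lb = 10_000      # assumed load on one group of SPMT wheels
tire_patch_ft2 = 1.0              # assumed tire contact area without a plate
plate_area_ft2 = 8.0 * 4.0        # a typical trench-plate footprint

print(f"without plate: {wheel_group_load_lb / tire_patch_ft2:,.0f} psf")
print(f"with plate:    {wheel_group_load_lb / plate_area_ft2:,.0f} psf")
```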

Another place where Endeavour’s weight was a concern was the West Manchester Boulevard bridge over the 405. Around 6:30 PM, 19 hours into the move, Endeavour pulled up to the renowned Randy’s Donuts, its astronomically large donut a perfect prop for photos of such an enormous spacecraft. Photographers had a field day, and they had time to line up their shots perfectly because there was plenty of work to be done to prepare for the next leg. The shuttle’s permit wouldn’t allow it to be carried over the bridge using the four heavy SPMTs. Instead, they would have to lift it off the transporters and lower it onto a lightweight dolly to get over the 405. The SPMTs were sent over the bridge one at a time ahead of the shuttle. Then, longtime donor and Science Center partner Toyota got a chance to shine. The dolly was attached to a stock Toyota Tundra pickup truck that slowly pulled the shuttle across the bridge. Toyota got a nice commercial out of it, and that pickup still sits outside the Science Center as part of a demonstration about leverage (although sadly, it was broken when I was there). By midnight, the shuttle was over the bridge and crews were working to reconnect the SPMTs so that the journey could continue.

Through the night, Endeavour continued its trip eastward, passing the Inglewood City Hall. By 9:30 the next morning, the shuttle had reached its next stop, The Forum arena, where it was greeted by a marching band and speeches by former astronauts. But even though the shuttle was stopped, the crews supporting the move (both ahead of and behind the orbiter) continued working diligently. During preparation for the move, the engineers in charge had used a mobile laser-scanner along the route to create a 3D point cloud of everything that could be in the way. Rather than use crews of surveyors to walk the route and document potential collision points, which would have taken months, they used a digital model of the shuttle to perform clash detection on a computer. This effort allowed the engineering team to optimize the path of the shuttle and avoid as many traffic signals, light poles, street signs, and parking meters as possible. In some cases, the Shuttle would have to waggle down the street to clear impediments on either side, sometimes with inches to spare. The collision detection also helped engineers create a list of all the facilities that would need to be temporarily removed along the way by the Shuttle delivery team. Armies of workers ahead of the move used that list to dismantle and lay obstacles down, and armies of workers behind the move could immediately reassemble them to minimize disruptions, outages, and street closures.
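
Conceptually, the clash detection boils down to sweeping a simplified shuttle envelope along the route and flagging any scanned points that land inside it. Here’s a toy two-dimensional version in Python; the stations, point cloud, and clearance are all invented.

```python
def find_clashes(route_stations, scan_points, half_width, half_length, clearance=0.1):
    """Flag scanned points that fall inside the shuttle's swept envelope.
    Simplified to 2D, with the route running along the x-axis."""
    hits = []
    for ox, oy in scan_points:
        for station in route_stations:
            inside_long = abs(ox - station) <= half_length
            inside_lat = abs(oy) <= half_width + clearance
            if inside_long and inside_lat:
                hits.append((ox, oy))
                break
    return hits

stations = [float(x) for x in range(0, 100, 5)]          # points along the street, in meters
lidar = [(12.0, 10.5), (40.0, 30.0), (77.0, 11.9)]       # pretend laser-scan returns
print(find_clashes(stations, lidar, half_width=12.0, half_length=19.0))
# The real workflow did this in 3D against a full point cloud, which is how
# the team found the light poles and parking meters that had to come out.
```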

By around noon on Saturday (36 hours into the move), the Shuttle had reached one of the most challenging parts of the route: Crenshaw Drive. This narrow path has apartment buildings tight to the street, narrow straits for an overland space shuttle. Endeavour’s next stop was scheduled for 2PM at Baldwin Hills Crenshaw Plaza, only about 3 miles or 5 kilometers away. But, as the shuttle continued its northward crawl, it encountered several unexpected obstacles, mainly tree branches that had been assumed to be out of the way. By 5PM, the shuttle was still well south of the party as chainsaw crews worked to clear the path, but event organizers decided to go ahead with the performances. The Mayor took the stage to welcome Endeavour to Los Angeles, but the shuttle was still too far away to be seen.

Later that night, Endeavour finally made the difficult turn onto Martin Luther King Jr. Boulevard for its final eastward trek, dodging trees all along the way. The trees were probably the most controversial part of the entire shuttle move project, with around 400 needing to be cut down along the route (often in the median between travel lanes). Many in the affected communities felt that having a space shuttle in their science museum wasn’t worth the cost of those trees, several of which were decades old. To try and make up for the loss, the Science Center pledged to replace all the trees that were removed two-to-one and committed to maintain the new trees for at least two years, all at a cost of about $2 million. But the tall pines along MLK Boulevard were planted in honor of the famed civil rights leader and deemed too important to remove. Instead, the shuttle zigzagged its way between the trees on its way to the Science Center. 

Endeavour continued inching eastward toward Exposition Park on the last leg of its journey, facing a few delays from obstacles, plus a hydraulic leak on one SPMT. But, by noon that Sunday, the shuttle was making its turn into Exposition Park to a crowd of cheering spectators. It hadn’t hit a single object along the way. With an average speed of about 2 miles or 3 kilometers per hour, on par with the rest of LA’s traffic, the orbiter was nearing the end of its voyage and achieving the dream of any multi-million dollar engineering project: to come in only 15 hours behind schedule. By the end of the day on Sunday, the shuttle was safely inside its new home at the California Science Center.
It took only a few weeks for the center to open the space to the public, and 10 years later, you can still go visit Endeavour today (and you should!). Here’s a dimly lit picture of the channel’s editor (and my best friend) Wesley and me visiting in 2018. The shuttle sits on top of four seismic isolators on pipe support columns so that it can move freely during an earthquake. But the current building is only meant to be temporary. The Shuttle’s final resting place, the Samuel Oschin Air and Space Center, broke ground earlier this year. Eventually, Endeavour will be moved the short distance and placed vertically, poised for launch complete with boosters and main fuel tank in celebration of all 26 of its missions: 25 into space and 1 through the streets of Los Angeles.

October 18, 2022 /Wesley Crump

What's the Difference Between Paint and Coatings?

October 04, 2022 by Wesley Crump

[Note that this article is a transcript of the video embedded above.]

There’s a popular myth that I’ve heard about several bridges (including the Golden Gate Bridge in San Francisco and the Forth Bridge in eastern Scotland) that they paint the structure continuously from end to end. Once they finish at one end, they just start back up on the other. It’s not exactly true (at least for any structures I’m familiar with), but if you drive over any steel bridges regularly, it might seem like the painting never quite ends. That’s because, despite its ease of fabrication, relatively low cost, and incredible strength, steel has a limitation that we’re all familiar with: rust. Steel corrodes when exposed to the elements, especially when the elements include salty sea air.

I’m doing a deep dive series into corrosion engineering. We’ve talked about the tremendous cost of rust and how different materials exhibit corrosion, we’ve talked about protecting against rust using dissimilar metals like zinc and aluminum, and now I want to show you the other major weapon in the fight against rust. If you’ve ever thought, “This channel is so good, he could make it interesting to watch paint dry…” well, let’s test it out. I have the rustomatic 3000 set up for another corrosion protection shootout, plus a bunch of other cool demos as well. I’m Grady and this is Practical Engineering. On today’s episode we’re talking about high performance coatings systems for corrosion protection.

You might have noticed a word missing from that episode headline: “paint.” Of course, paint and coatings get used interchangeably, even within the industry, but there is a general distinction between the two. The former has the sole purpose of decoration. For example, nearly everyone has painted the walls of a bedroom to improve the way it looks. Coatings, on the other hand, are used for protection. They look like paint on the surface, but their real purpose is to provide a physical barrier between the metal and the environment, reducing the chance that it will come into contact with oxygen and moisture that lead to corrosion. Combined with cathodic protection (that I covered in a previous video), a coating system properly applied and well maintained can extend the lifespan of a steel structure pretty much indefinitely. Although paint and coatings often include similar ingredients, are applied in the same way, and usually look the same in the end, there are some huge differences as well, the biggest one being the consequences if things go wrong.

There are definitely right ways and wrong ways to paint a bedroom, but generally, the risk of messing it up is pretty small. Sometimes the color is not quite right or the coverage isn’t perfect, but those are pretty easy to fix. In the worst-case scenario, it’s only a few hundred dollars and a couple of days’ work to completely redo it. Not true with a coating system on a major steel structure. Corrosion is the biggest threat to many types of infrastructure, and if the protection system fails, the structure can fail too. It’s not just money on the line, either. It’s also the environment and public safety. Pipelines can leak or break, and bridges can collapse. Finally, it’s often no simple matter to reapply a coating system because many structures are difficult to access and disruptive to shut down. Applying protective coatings is something you only want to do once every so often (ideally every 25 to 50 years for most types of infrastructure). That’s why the materials and methods used to apply them are so far beyond what we normally associate with painting and why the systems are often called “high-performance” coatings.

Let me show you what I mean. These are the standard US federal government specifications used in Department of Defense projects. We’re in Division 9, which is finishes, and if I scroll down, you can see we have a totally different document for paints and general coatings than the one used for high-performance coatings. There’s even a more detailed spec used for critical steel structures. If you take a peek into this specification, you’ll see that a significant portion of the work isn’t the coating application itself, but the preparation of the steel surface beforehand. It’s estimated that surface prep makes up around 70% of the cost of a coating system and that 80% of coating failures can be attributed to inadequate surface preparation. That’s why most coating projects on major steel structures start with abrasive blasting.

The process of shooting abrasive media through a hose at high pressure, often known as sandblasting, is usually the quickest and most cost-efficient way to clean steel of surface rust, old coatings, dirt, and contaminants, and cleanliness is essential for good adhesion of the coating. But, abrasive blasting does more than just clean; it roughens. Most high performance coatings work best on steel that isn’t perfectly smooth. The roughness, also known as the surface profile, gives the coating additional surface area for stronger adhesion. In fact, let’s just take a look at a random product data sheet for a high-performance primer, and you can see right there that the manufacturer recommends blast cleaning with a profile of 1.5 mils. That means the difference between the major peaks and valleys along the surface should be around one and a half thousandths of an inch or about 40 microns. It also means we need a way to measure that tiny distance in the field (in other words, without the help of scanning electron microscopy) to make sure that the steel is in the right condition for the best performance of the coating, and there are a few ways to do that.

One method uses a stylus with a sharp point that is drawn across the surface of the steel. The trace can be stored by a computer, and the profile is the distance between the highest peak and the lowest valley. Another option is just to use a depth micrometer with a sharp point that will project into the valleys to get a measure of the profile. Finally, you can use replica tape that has a layer of compressible foam. I have an example of several grit blasted surfaces here, and I can apply a strip of the replica tape. When I burnish the tape against the steel surface, the foam compresses to form an impression of the peaks and valleys. Here’s what that looks like in a cross-section view. When the tape is removed, we can measure its new thickness, subtract the thickness of the plastic liner, and get a measure of the surface profile. Here’s how the foam looks after burnishing on a relatively smooth surface and a very rough one. I used my depth micrometer to measure a profile of about 1 mil or 25 microns for the smooth surface and about 2.5 mil or 63 microns on the rough one.
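
At its core, that measurement is just the difference between the highest peak and deepest valley the instrument records. Here’s a minimal Python sketch with made-up trace values, chosen only so the results land near the ones I measured.

```python
def peak_to_valley_mils(trace_mils):
    """Surface profile as the highest peak minus the deepest valley."""
    return max(trace_mils) - min(trace_mils)

# Invented stylus traces, in mils (thousandths of an inch).
smooth_trace = [0.4, 0.9, 0.5, 1.1, 0.3, 0.8, 0.6, 1.2, 0.4, 0.7]
rough_trace  = [0.2, 1.8, 0.5, 2.4, 0.3, 2.1, 0.6, 2.7, 0.4, 2.2]

for name, trace in [("smooth", smooth_trace), ("rough", rough_trace)]:
    mils = peak_to_valley_mils(trace)
    print(f"{name}: {mils:.1f} mil profile (about {mils * 25.4:.0f} microns)")

# Real standards average several peak-to-valley readings rather than taking
# a single maximum, but the idea is the same.
```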

Just to demonstrate the importance of surface preparation, I’m going to do a little coating of my own here in my garage. I’ve got four samples of steel here: two I’ve roughened up using a flap disc on a grinder (in lieu of sandblasting), and two I’ve sanded to a fairly smooth surface. They aren’t mirror surfaces, but the surface profile is much lower than that of the roughened samples. I also have some oil and I’ll spread a thin coat on one of the rough samples and one of the smooth ones. I wiped the oil off with a paper towel, but no soap. So now we have all four combinations here: smooth and clean, rough and clean, rough and oily, and smooth and oily. I’ll coat one side of all four samples using this epoxy product, leaving the other sides exposed. Notice how the wet paint doesn’t even want to stick to the dirty surfaces, but it eventually does lay down. I put two coats on each sample, and now it’s into the rustomatic 3000, the silliest machine I’ve ever built. I go into more detail on this in the cathodic protection video if you want to learn more, but essentially it’s going to dip these samples in saltwater, let them dry, take a photo, and do it all over again roughly every 5 minutes to stress test these steel samples. We’ll leave it running for a few weeks and come back to see how the samples hold up against corrosion.

There are countless types of coating systems in use around the world to protect steel against corrosion. The chemistry and availability of new and more effective coatings continue to evolve, but there is somewhat of an industry standard system used in infrastructure projects that consists of three coats. The first coat, called the primer, is used to adhere strongly to the steel and provide the first layer of protection. Sometimes the primer coat includes particles of zinc metal. Just like using a zinc anode to provide cathodic protection, a zinc-rich prime coat can sacrifice itself to protect steel from corrosion if any moisture gets through. Next the midcoat provides the primary barrier to moisture and air. Epoxy is a popular choice because it adheres well and lasts a long time. Epoxy often comes in two parts that you have to mix together, like the product I used on those steel samples. But, epoxy has a major weakness: UV rays. So, most coating systems use a topcoat of polyurethane whose main purpose is to protect the epoxy midcoat from being damaged by the rays of the sun. It’s often clear to visible light, but ultraviolet light is blocked so it can’t damage the lower coats.

The coating manufacturer provides detailed instructions on how to apply each coating and under what environmental conditions it can be done. They’ve tested their products diligently and they don’t want to pay out warranties if something goes wrong, so coating manufacturers go to a lot of trouble to make sure contractors use each product correctly. Contractors often have to wait for clear or cool days before coating to make sure each layer meets the specifications for humidity and temperature. Even the applied thickness of the product can affect a coating’s performance. A coating that is too thin may not provide enough of a barrier, and one that is too thick may shrink and crack. Manufacturers often give a minimum and maximum thickness of the coating, both before and after it dries. Wet film thickness can be measured using one of these little gauges. I just press it into the wet paint and I can see the highest thickness measurement that picked up some of the coating. Dry film thickness can also be measured in the field for quality control using a magnetic probe.
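
The wet-to-dry relationship is simple enough to sketch: once the solvents flash off, the dry film thickness is roughly the wet film thickness times the coating’s volume-solids fraction. The target thickness and solids content below are made-up numbers, not from any particular data sheet.

```python
def required_wet_film_mils(target_dft_mils: float, volume_solids_fraction: float) -> float:
    """Wet film thickness needed to dry down to the target dry film thickness."""
    return target_dft_mils / volume_solids_fraction

target_dft = 5.0   # assumed target dry film thickness, in mils
solids = 0.65      # assumed volume solids from a hypothetical data sheet

wft = required_wet_film_mils(target_dft, solids)
print(f"apply roughly {wft:.1f} mils wet to end up near {target_dft:.1f} mils dry")
# About 7.7 mils wet in this example -- the kind of number an applicator
# checks on the spot with a notched wet film gauge.
```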

Of course, once the coating is applied and dry, it has to be inspected for coverage. Coatings are particularly vulnerable to damage since they are so thin, and defects (called holidays) can be hard to spot by eye. Holiday detecting devices are used by coating inspectors to make sure there are no uncovered areas of steel. Most of them work just like the game Operation, but with higher voltage and fancier probes. If any part of the probe touches bare metal, an alarm will sound, notifying the inspector of even the tiniest pinhole or air bubble in the coating so it can be repaired. Once the system passes the quality control check, the structure can be put into service with the confidence that it will be protected from corrosion for decades to come.

Let’s check in on the rustomatic 3000 and see how the samples did. Surprisingly, you can’t see much difference in the time lapse view. I let these samples run for about 3 weeks, and the uncoated steel underwent much more corrosion than the coated area of each square. I also have dried salt deposits all over my shop now. But, the real difference was visible once the samples were cleaned up. I used a pressure washer to blast off some of the rust, and this was enough to remove the epoxy coating on all the samples except the rough and clean one. That sample took a little more effort to remove the coating. At first glance, the coating appears to have protected all the samples against this corrosion stress test, but if you look around the edges, the difference becomes obvious.

The rough and clean sample had the least intrusion of rust getting under the edges of the coating, and you can see that nearly the entire coated area is just as it was before the test. The smooth and clean sample had much more rust under the edges of the coating, which you can see in these semicircular areas protruding into the coated area. Similarly, the roughened yet oily sample had those semicircular intrusions of rust all around the perimeter of the coated area. The smooth and dirty sample was, as expected, the worst of them all. Lots of corrosion got under the coating on all sides, including a huge area along nearly the entire bottom of the coated area. It’s not a laboratory test, but it is a conspicuous example of the importance of surface preparation when applying a coating for corrosion protection.

Like those samples, I’m just scratching the surface of high performance coating systems in this video. Even within the field of corrosion engineering, coatings are a major discipline with a large body of knowledge and expertise spread across engineers, chemists, inspectors, and coatings contractors, all to extend the lifespan and safety of our infrastructure.

October 04, 2022 /Wesley Crump

What Really Happened at the New Harbor Bridge Project?

September 20, 2022 by Wesley Crump

[Note that this article is a transcript of the video embedded above.]

In July of 2022, the Texas Department of Transportation issued an emergency suspension of work on the half-finished Harbor Bridge project in Corpus Christi, citing serious design flaws that could cause the main span to collapse if construction continues. The bridge is a high-profile project and, when constructed, might briefly be the longest cable-stayed bridge in North America. It’s just down the road from me, and I’ve been looking forward to seeing it finished for years. But, it’s actually not the first time this billion dollar project has been put on hold. In a rare move, TxDOT released not only their letters to the bridge developer, publicly castigating the engineer and contractor, but also all the engineering reports with the details of the alleged design flaws. It’s a situation you never want to see, especially when it’s your tax dollars paying for the fight. But it is an intriguing look into the unique challenges in the design and construction of megaprojects. Let’s take a look at the fascinating engineering behind this colossal bridge and walk through the documents released by TxDOT to see whether the design flaws might kill the project altogether. I’m Grady and this is Practical Engineering. In today’s episode, we’re talking about the New Harbor Bridge project in Corpus Christi, Texas. 

By the way, my new book comes out November 1st. Stay tuned to the end for a sneak preview.

Corpus Christi is a medium-sized city located on the gulf coast of south Texas. But even though the city is well down the list of the largest metropolitan areas in the state, it has one of the fastest growing cargo ports in the entire United States. The Port of Corpus Christi is now the third largest in the country by tonnage, due primarily to the enormous exports of crude oil and liquefied natural gas. But there are a couple of limitations to the port that are constraining its continued growth. One is the depth and width of the ship channel which is currently in the process of being deepened and widened. Dredging soil from the bottom of a harbor is an engineering marvel in its own right, but we’ll save that for another video. The second major limitation on the port is the harbor bridge.

Built in 1959, this bridge carries US Highway 181 over the Corpus Christi ship channel, connecting downtown to the north shore area. When it was constructed, the Harbor Bridge was the largest project ever to be constructed by the Texas Highway Department, later known as TxDOT. It was the pinnacle of bridge engineering and construction for the time, allowing the Army Corps of Engineers to widen the channel below so that the newest supertanker ships of the time could enter the port. The Harbor Bridge fueled a new wave of economic growth in the city, and it’s still an impressive structure to behold… if you don’t look too closely. Now, more than 60 years later, the bridge is a relic of past engineering and past needs. The Harbor Bridge has endured a tough life above the salty gulf coast, and the cost to keep corrosion from the bay at bay has increased substantially year by year. The bridge also lacks pedestrian and bicycle access, meaning the only way across the ship channel is in a watercraft or a motor vehicle (which is not ideal). Finally, the bridge is a bottleneck on the size of ships that can access the port, keeping them from entering or exiting fully-loaded and creating an obstacle to commerce within Corpus Christi. So, in 2011 (over a decade ago, now), the planning process began for a taller and wider structure.

The New Harbor Bridge project includes six-and-a-half miles (or about ten kilometers) of new bridge and roadway that will replace the existing Harbor Bridge over the Corpus Christi ship channel. And here’s a look at how the two structures compare. The new bridge will allow larger ships into the port with its 205 feet (or 62 meters) of clearance above the water. The bridge is being built just a short distance inland from the existing Harbor Bridge, which is a good thing for us because the Port Authority wouldn’t give us permission to cross the old bridge with a drone. The old bridge will eventually be demolished at the end of construction. The project also requires lots of roadway reconfigurations in downtown Corpus Christi that will connect the new bridge to the existing highway. The crown jewel will be the cable-stayed main span, supported by two impressive pylons on either side of the ship channel, spanning 1,661 feet or 506 meters. The bridge will feature 3 lanes of traffic each way plus a bicycle and pedestrian shared use path with a belvedere midspan that will give intrepid ramblers an impressive view of Corpus Christi Bay.

The project was procured as a design-build contract awarded to a joint venture between Dragados USA and Flatiron Construction, two massive construction companies, with a huge group of subcontractors and engineers to support the project. Design-build (or DB for those in the industry) really just means that the folks who design it and the folks who build it are on the same team and work (hopefully) in collaboration to deliver the final product. That’s a good thing in a lot of ways, and design-build contracts on large projects often end up moving faster and being less expensive than similar jobs that follow the traditional design-bid-build model where the owner hires an engineer to develop designs and then bids the designs out to hire a separate qualified contractor. When an engineer and contractor work together to solve problems collaboratively, you often end up with innovative approaches and project efficiencies that wouldn’t be possible otherwise. You also don’t have to wait for all the engineering to be finished before starting construction on the parts that are ready, so the two phases can overlap somewhat. However, as we’ll see, DB contracts come with some challenges too. When the engineer and contractor are in cahoots (legally speaking), the owner of the project is no longer in the middle, and so has less control over some of the major decisions. Also, DB contracts force the engineer and contractor to make big decisions about the project very early in the design process, sometimes before they’ve even won the job, which reduces the flexibility for changes as the project matures.

Construction on the New Harbor Bridge project started in 2016 with an original completion date of 2020. But, another bridge halfway across the country would soon throw the project into disarray. In March of 2018, a pedestrian bridge at Florida International University in Miami collapsed during construction, killing six people and injuring ten more. After an extensive investigation, the National Transportation Safety Board put most of the blame for the bridge collapse on a miscalculation by the engineer, FIGG, the same engineer hired by Flatiron and Dragados to design the New Harbor Bridge project in Texas. I should note that FIGG disputes the NTSB’s assessment and has released their own independent analysis pinning the blame for the incident on improper construction. Nevertheless, the FIU collapse led TxDOT to consider whether FIGG was the right engineer for the job.

In November of 2019, they asked the DB contractor to suspend design of the bridge so they could review the NTSB findings and conduct a safety review. And only a few months later, TxDOT issued a statement that they had requested their contractor to remove and replace FIGG Bridge Engineers from the design of the main span bridge. That meant a new engineering firm would have to review the FIGG designs, recertify all the engineering and calculations, and take responsibility for the project as the engineer of record. Later that year, FIGG would be fired from another cable-stayed bridge project in Texas, and in 2021 they were debarred by the Federal Highway Administration from bidding on any projects until 2029. It took about six months for the New Harbor Bridge DB contractor to procure a new engineer for the main span. The contractor said it expected no major changes to the existing design.

Construction on the project forged ahead through most of this shakeup with steady progress on both of the approach bridges that lead to the main span. These are impressive structures themselves with huge columns supporting each span above. The bridge superstructure consists of two rows of segmental box girders, massive elements that are precast from concrete at a site not far from the bridge. For each approach, these segments are lifted and held in place between the columns using an enormous self-propelled gantry crane. Once all the segments within a span are in place, steel cables called tendons are run through sleeves cast into the concrete and stressed using powerful hydraulic jacks. When the post-tensioned tendons are locked off, the span is then self-supporting and the crane can be moved to the next set of columns. This segmental construction is an extremely efficient way to build bridges. It’s used all over the world today, but it actually got its start right here in Corpus Christi. The JFK Memorial Causeway bridge was replaced in 1973 to connect Corpus Christi to North Padre Island over the Laguna Madre. It was the first precast segmental bridge constructed in the US. And if you’re curious, yes qualified personnel can get inside the box girders. It’s a convenient way to inspect the structural members to make sure the bridge is performing well over the long term. The Harbor Bridge project will include locked entryways to the box girders and even lights and power outlets within.
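
One way to get a feel for why those tendons matter: the joints between precast segments can’t carry tension, so the post-tensioning force has to keep the whole cross section in compression under the span’s own weight. Here’s a heavily simplified Python check with invented dimensions and a concentric tendon; real designs use draped, eccentric tendons and far more detailed load cases.

```python
# Simplified "no tension at the joints" check for a simply supported
# segmental span. All section properties and loads are illustrative.

span_m = 30.0
self_weight_kn_per_m = 200.0       # assumed girder self-weight
area_m2 = 8.0                      # assumed cross-sectional area
inertia_m4 = 12.0                  # assumed moment of inertia
c_bottom_m = 1.5                   # centroid-to-bottom-fiber distance

midspan_moment = self_weight_kn_per_m * span_m**2 / 8.0          # kN*m
# Bottom-fiber stress: -P/A (compression) + M*c/I (tension from bending).
# Setting it to zero gives the minimum concentric prestress force.
min_prestress_kn = midspan_moment * c_bottom_m * area_m2 / inertia_m4

print(f"midspan moment: {midspan_moment:,.0f} kN*m")
print(f"minimum prestress to avoid joint tension: {min_prestress_kn:,.0f} kN")
```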

Work on the main span bridge didn’t resume until August of 2021, nearly 2 years after TxDOT first suspended the design of this part of the project. And by the end of 2021, both pylons were starting to take shape above the ground. Early this year, the contractor mobilized two colossal crawler cranes to join the tower cranes already set up at both the main span pylons. These crawlers were used to lift the table segments where the bridge superstructure connects to the approaches. The next step in construction is to begin lifting the precast box girder sections into place while crews continue building the pylons upward toward their final height. Rather than doing the entire span at once, these segments will be lifted into place using a balanced cantilever method, where each one is connected to the bridge from the pylon outward.

But it probably won’t happen anytime soon: TxDOT suspended construction on the main span in July and has continued a very public feud with the contractor since then that is far from resolved. During the shakeup with FIGG, TxDOT hired their own bridge engineer to review the designs and inform their decision that ultimately ended with FIGG fired from the project. When the DB contractor hired a new engineer to recertify the bridge designs, TxDOT kept their independent engineer to review the new designs. Unfortunately, many of the flaws identified in the FIGG design persisted into the current design of the bridge. In April of 2022, TxDOT issued the contractor a notice of nonconforming work. This is a legal document in a construction project used to let a contractor know that something they built doesn’t comply with the terms of the contract. And when that happens, it is the contractor’s job to fix the nonconforming work at their own cost. The notice included the entire independent review report and a summary table of 23 issues that TxDOT said reflected breaches of the contract, and it required their contractor to submit a schedule detailing the plan to correct the nonconforming work. But the contractor didn’t provide that schedule, or at least not to TxDOT’s standards. So, in July, TxDOT sent another letter enacting a clause in the contract that lets them immediately suspend work in an emergency situation that could cause danger to people or property, citing five serious issues with the design of the main span. So let’s take a look at them.

The first two of the alleged flaws are related to the capacity of the foundation system that supports each of the two pylons. Each tower sits on top of an enormous concrete slab or cap that covers the area of two basketball courts and is 18 feet or 5-and-a-half meters thick. Below that slab are drilled shaft piles, each one about 10 feet or 3 meters in diameter and 210 feet or 64 meters deep. The most critical loads on the pylons are high winds that push the bridge and towers horizontally. You might not think that wind is powerful enough to affect a structure of this size, but don’t forget that Corpus Christi is situated on the gulf coast and regularly subject to hurricane-force winds. The independent reviewer estimated that, under some loading conditions, many of the piles holding a single tower would be subject to demands more than 20% above their capacity. In other words, they would fail. The primary design error identified in the analysis was that the original engineer had assumed that the pile cap, that concrete slab between the tower and the piles, was perfectly rigid in the calculations.

All of engineering involves making simplifying assumptions during the design process. Structures are complicated, soils are variable, loading conditions are numerous. So, to make the process simpler, we neglect factors that aren’t essential to the design. And with a pile cap that’s thicker than most single-story buildings are tall, you might think it’s safe to assume that the concrete isn’t going to flex much. But, we’re talking about extreme loads. When you take into account the flexibility of the pile cap, you find out that the stresses from the pylon aren’t distributed to each pile evenly. Instead, some become overloaded, and you end up with a foundation that the design reviewer delicately labeled as “exceedingly deficient to resist design loadings.”
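
For reference, here’s what the rigid-cap simplification looks like in practice, as a small Python sketch: with a perfectly stiff cap, vertical load and overturning moment split among the piles by a simple formula. The pile layout and loads are invented and bear no resemblance to the actual foundation.

```python
def rigid_cap_pile_loads(pile_x_m, vertical_load_kn, moment_knm):
    """Classic rigid-cap distribution: P_i = V/n + M*x_i / sum(x^2)."""
    n = len(pile_x_m)
    sum_x2 = sum(x * x for x in pile_x_m)
    return [vertical_load_kn / n + moment_knm * x / sum_x2 for x in pile_x_m]

# A single row of five piles at 4 m spacing under a vertical load plus a
# wind-driven overturning moment (all numbers illustrative).
piles = [-8.0, -4.0, 0.0, 4.0, 8.0]
print([round(p) for p in rigid_cap_pile_loads(piles, 50_000.0, 200_000.0)])
# The rigid assumption smears the moment smoothly across the group. Account
# for cap flexibility and some piles pick up more than this formula predicts
# and can end up overloaded -- the effect the independent reviewer flagged.
```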

The next critical design problem identified is related to the delta frame structures that transfer the weight of the bridge’s superstructure into each cable stay. These delta frames connect to the box girders below the bridge deck using post-tensioned tendons. But, these tendons can’t be used to resist shear forces, those sliding forces between the girders and delta frames. For those forces, according to the code, you need conventional steel reinforcement through this interface. Without it, a crack could develop, and the interface could shear apart.

The fourth issue identified is related to the bearings that transfer the weight of the bridge deck near each pylon. The independent reviewer found that, under some load conditions, the superstructure could lift up rather than pushing down on the tower. That would not only cause issues with the bearings themselves, which need to be able to resist movement in some directions while allowing movement in others. It would also cause loads to redistribute, reducing the stiffness of the bridge that depends on a rigid connection to each tower.

The final issue identified, and the most urgent, is related to the loads during construction of the bridge. Construction is a vulnerable time for a bridge like this, especially before the deck is connected between the pylons and the first piers of the approaches. The contractor is planning to lift derrick cranes onto the bridge deck that will be used to hoist the girder segments into place and attach them to each cable stay. TxDOT and their independent reviewer allege that the bridge isn’t strong enough to withstand these forces during construction and will need additional support or more reinforcement.

For the contractor’s part, they have denied that there are design issues and issued a statement to the local paper saying that they were “confident in the safety and durability of the bridge as designed.” In their letter to TxDOT, they cite their disagreements with the conclusions of the independent design reviewer and accuse TxDOT of holding back the results of the review while allowing them to continue with construction and ignoring attempts to resolve the differences. Because of TxDOT’s directive to suspend the work, they have already started demobilizing at the main span, reassigning crews, and reallocating resources. In August, TxDOT sent another letter notifying the contractor of a default in the contract and giving them 15 days to respond.

It’s hard to overstate the disruption of suspending work in this way. Construction projects of this scale are among the most complicated and interdependent things that humans do. They don’t just start and stop on a dime, and these legal actions will have implications for thousands of people working on the New Harbor Bridge project. The rental fees for those two crawler cranes alone are probably in the tens of thousands of dollars per day. Add up all the equipment and labor on a job this size, and you can see that the stakes are incredibly high when interrupting an operation like this. It’s never a good sign when the insurance company is cc’ed on the letter.

If the bridge design is truly flawed (and clearly TxDOT thinks that it is since they are sharing the evidence publicly), it’s a good thing that they stopped the work so the issues can be addressed before they turn into a dangerous situation for the public. But it also raises the question of why these concerns were handled in a way that let the contractor keep working even when TxDOT knew there were issues. Megaprojects like this are immensely complex, and their design and construction rarely go off without at least a few complications. There just isn’t as much precedent for the engineering or construction. But, we have processes in place to account for bumps in the road (and even bumps in the bridge deck). Those processes include thorough quality control on designs before construction starts.

So who’s at fault here? Is it the DB contractor for designing a bridge with what appear to be a number of serious flaws, and then recertifying that design with a completely new engineering team? Or is it TxDOT for failing to catch the alleged errors (or at least failing to stop the work) until the very last minute after hundreds of millions of taxpayer dollars have already been spent on construction that may now have to be torn down and rebuilt? The simple answer is probably both, but it’s a question that is far from settled, and the battle is sure to be dramatic for those who follow infrastructure, if not discouraging for those who pay taxes. The design issues are serious, but they’re not insurmountable, and I think it’s highly unlikely that TxDOT won’t see the project to completion in one way or another. Some work may have to be replaced while other parts of the project may be fine after retrofits. The best-case scenario for everyone involved is for TxDOT to repair their relationship with their contractor and get the designs fixed instead of firing them and bringing on someone new. In the industry, they call that stepping into a dead man’s shoes, and there won’t be many companies jumping at the chance to take over this controversial job halfway through construction.
Two things are for sure (as they almost always are in projects of this magnitude): The bridge is going to cost more than we expected, and it’s going to take longer to build than the current estimated completion date in 2024. There’s actually another, much longer, cable-stayed bridge racing to finish construction in the US and Canada between Detroit, Michigan and Windsor, Ontario. Barring any major issues, it is currently scheduled to be complete by the end of 2024 and will probably now beat the Corpus Christi project. Every single person who crosses over either one of these bridges, once they’re complete, will do so as an act of trust in the engineers who designed them and the agencies who oversaw the projects. So, I’m thankful that TxDOT is at least being relatively transparent about what’s happening behind the scenes to make sure the New Harbor Bridge is safe when it’s finished. As someone who lives in south Texas, I’m proud to have this project in my backyard, and I’m hopeful that these issues can be resolved without too much impact to the project’s schedule or cost. The latest headlines make it seem like things are headed in that direction. Until then, if you’re in Corpus Christi crossing the ship channel, as you drive over the aging but still striking (and still standing) old Harbor Bridge, you’ll have a really nice view of an impressive construction site and what was almost the nation’s longest cable-stayed bridge.

September 20, 2022 /Wesley Crump

These Metals Destroy Themselves to Prevent Rust

September 06, 2022 by Wesley Crump

[Note that this article is a transcript of the video embedded above.]

This is the old Howard Frankland Bridge that carries roughly 180,000 vehicles per day across Old Tampa Bay between St. Petersburg and Tampa, Florida. A replacement for the bridge is currently under construction, but the Florida Department of Transportation almost had to replace it decades earlier. The bridge first opened for traffic in 1960, but by the mid-1980s it was already experiencing severe corrosion to the steel reinforcement within the concrete members. After less than 30 years of service, FDOT was preparing to replace the bridge, an extremely expensive and disruptive endeavor. But, before embarking on a replacement project, they decided to spend a little bit of money on a test, a provisional retrofit to try and slow down the corrosion of steel reinforcement within the bridge’s substructure. Over the next two decades, FDOT embarked on around 15 separate corrosion protection projects on the bridge. And it worked! The Howard Frankland Bridge lasted more than 60 years in the harsh coastal environment before needing to be replaced, kept in working condition for a tiny fraction of the cost of replacing it in the 1980s.

The way that bridge in Tampa was protected involves a curiously simple technique, and I’ve built a ridiculous machine in my garage so we can have a corrosion protection shootout and see how it measures up. I’m Grady and this is Practical Engineering. In today’s episode, we’re talking about cathodic protection for corrosion control.

Of all the structural metals in use today, mild steel (just plain old iron and carbon) accounts for most applications in infrastructure. There are so many applications where steel infrastructure comes into contact with moisture, including bridges, spillway gates, water tanks, and underground pipelines. That means there are so many opportunities for rust to deteriorate the constructed environment. We’re in the middle of a deep dive series on rust, and in the previous video about corrosion, I talked about its astronomical cost, which equates to roughly $1,400 per person per year in the United States alone. Of course, we could build everything out of stainless steel, but it’s about 5 times as expensive for the raw materials, and much more difficult to weld and fabricate than mild steel. Instead, it’s usually more cost-effective to protect that mild steel against corrosion, and there are a number of ways to do it. Paint is an excellent way to create a barrier so that moisture can’t reach the metal, and I’ll cover coatings in a future video. But, there are some limitations to paint, including that it’s susceptible to damage and it’s not always possible to apply (like for rebar inside concrete). That’s where cathodic protection comes in handy.

Let me introduce you to what I am calling the Rustomatic 3000, a machine you’re unlikely to ever need or want. It consists of a tank full of salt water, and a shaft on a geared servo. These plastic arms lower steel samples down into the saline water and then lift them back up so the fan can dry them off, hopefully creating some rust in the process. Corrosion is an electrochemical process. That just means that it’s a chemical reaction that works like an electrical circuit. The two individual steps required for corrosion (called reduction and oxidation) happen at separate locations. This is possible because electrons can flow through the conductive metal from areas of low electric potential (called anodes) to those of high potential (called cathodes). As the anode loses electrons, it corrodes. This reaction is even possible on the same piece of metal because different parts of the material may have slightly different charges that drive the corrosion cell.

However, you can create a much larger difference in electric potential by combining different metals. This table is called the galvanic series, and it shows the relative inertness or nobility (in other words, resistance to corrosion) of a wide variety of metals. When any two of these materials are joined together and immersed in an electrolyte, the metal with lesser nobility will act as the anode and undergo corrosion. The more noble metal becomes the cathode and is protected from corrosion.
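
Here’s a toy Python version of reading that table: couple two metals, and the one with the more negative potential becomes the sacrificial anode. The numbers are approximate standard electrode potentials, which only illustrate the relative ordering; a true galvanic series measured in seawater differs in the details.

```python
# Approximate standard electrode potentials in volts (illustrative only).
APPROX_POTENTIAL_V = {
    "magnesium": -2.37,
    "aluminum": -1.66,
    "zinc": -0.76,
    "mild steel": -0.44,
    "copper": 0.34,
}

def galvanic_couple(metal_a: str, metal_b: str):
    """Return (anode, cathode): the less noble metal corrodes preferentially."""
    anode, cathode = sorted((metal_a, metal_b), key=APPROX_POTENTIAL_V.get)
    return anode, cathode

for candidate in ("magnesium", "aluminum", "zinc"):
    anode, cathode = galvanic_couple(candidate, "mild steel")
    print(f"{candidate} coupled to steel -> {anode} corrodes, {cathode} is protected")
```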

You can see that steel sits near the bottom of the galvanic table, meaning it is less noble and more prone to corrosion. But, there are a few metals below it, including some commonly available ones like aluminum, zinc, and magnesium. And wouldn’t you know it, I have some pieces of aluminum, zinc, and magnesium here in my garage that I attached to samples of mild steel in this demo. We can test out the effects of cathodic protection in the rustomatic 3000. Each time the samples are lifted to dry, the Arduino controlling the whole operation triggers a couple of cameras to take a photo. One of the samples is a control with no anode, then the other three have anodes attached consisting of magnesium, aluminum, and zinc from left to right. I’ll set this going and come back to it in a few minutes your time, three weeks my time.

One application of cathodic protection you might be familiar with is galvanizing, which involves coating steel in a protective layer of zinc. The coating acts kind of like a paint to physically separate the steel from moisture, but it also acts as a sacrificial anode because it is electrically coupled to the metal. Galvanizing steel is relatively inexpensive and extremely effective at protecting against corrosion, so nearly all steel structures exposed to the environment have some kind of zinc coating, including framing for buildings, handrails, stairs, cables, sign support structures, and more. Most outdoor-rated nails and screws are galvanized. You can even get galvanized rebar for concrete structures, and there are applications where it is worth the premium to extend the lifespan of the project.

But because it’s normally a factory process that involves dipping assemblies into gigantic baths of molten zinc, you can’t really re-galvanize parts after the zinc has corroded to the point where it’s no longer protecting the steel. Also, in aggressive environments like the coast or cold places that use deicing salts, a thin zinc coating might not last very long. In many cases, it makes more sense to use an anode that can be removed and replaced, like I’ve done in my demonstration here. Cathodic protection anodes like this are used on all kinds of infrastructure projects, especially those that are underground or underwater.
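
As a back-of-the-envelope illustration of why a thin coating struggles in harsh exposure, you can estimate a zinc layer’s life by dividing its thickness by the corrosion rate of the environment. The figures below are rough ballpark numbers chosen for illustration, not values from any particular standard or site survey.

```cpp
#include <iostream>

int main() {
    // Rough ballpark numbers, chosen purely for illustration.
    const double coatingMicrons = 85.0;  // a common hot-dip galvanized thickness

    // Approximate zinc corrosion rates in microns per year.
    const double ruralRate   = 1.0;      // mild inland exposure
    const double coastalRate = 6.0;      // salt spray or heavy deicing salt

    std::cout << "Rural service life:   ~" << coatingMicrons / ruralRate
              << " years\n";
    std::cout << "Coastal service life: ~" << coatingMicrons / coastalRate
              << " years\n";  // the same coating lasts only a fraction as long
}
```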

I let this demonstration run for 3 weeks in my garage. Each cycle lasted about 5 minutes, and three weeks is roughly 30,000 minutes, so these samples were dipped in salt water just about 6,000 times. And here’s a time-lapse of the entire three weeks. Correct me if you find something better, but I think this might be the highest quality time-lapse video of corrosion that exists on the internet.

It’s actually really pretty, but if you’re the owner of a bridge or pipeline that looks like this sample on the left, you’re going to be feeling pretty nervous. You can see that the unprotected steel rusts far faster than the other three, and the rust attacks the sample much more deeply. The sample with the magnesium anode looks like it was the most protected from corrosion, but watch the anode itself. It’s nearly gone after just those three weeks, and that makes sense: it’s the least noble metal on the galvanic series by a long shot. The samples with aluminum and zinc anodes do experience some surface corrosion, but it’s significantly less than the control.

In fact, this is exactly how the lifespan of the Howard Frankland Bridge in Tampa was extended by so many years. Zinc was applied around the outside of the concrete girders and in jackets around the foundation piles, then electrically coupled to the reinforcing steel within the concrete so it would act as a sacrificial anode, significantly slowing down the corrosion of those vital structural components.

Here’s a closeup of each sample after I took them down from the Rustomatic 3000, and you can really see how dramatic the difference is. The pockets of rust on the unprotected steel are so thick compared to the minor surface corrosion experienced by the samples with magnesium, aluminum, and zinc anodes. The anodes went through some pretty drastic changes themselves. After scraping off the oxides, the zinc anode is nearly intact, and you can even see some of the original text cast into the metal. The aluminum anode corroded pretty significantly, but there is still a lot of metal left. On the other hand, there’s hardly anything left of the magnesium anode after only three weeks. And here’s a look at the metal after I wire-brushed all the rust off each sample. The difference in roughness is hard to show on camera, but it was very dramatic to the touch. There’s no question that the samples with cathodic protection lost much less material to corrosion over the duration of the experiment.

There’s actually one more trick to cathodic protection used on infrastructure projects. Rather than rely on the natural difference in potential between different materials, we can introduce our own electric current to force electrons to flow in the correct direction and ensure that the vulnerable steel acts as the cathode in the corrosion cell. This process is called impressed current cathodic protection. In many places, pipelines are legally required to be equipped with impressed current cathodic protection systems to reduce the chance of leaks, which can create huge environmental costs. The potential between the pipe and soil is usually only a few volts, not much more than a typical AA battery, but the current flow can be in the tens or hundreds of amps. If you look along the right-of-way for a buried pipeline, especially at road crossings, you can often see the equipment panels that hold the rectifiers and test stations for the underground cathodic protection system.
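
To get some intuition for how these systems are dialed in, here’s a toy sketch of a rectifier being turned up until the pipe-to-soil potential reaches the commonly used criterion of negative 850 millivolts measured against a copper-copper sulfate reference electrode. The criterion is real; the simple circuit model and all the numbers are made up purely for illustration.

```cpp
#include <iomanip>
#include <iostream>

// Commonly used protection criterion: hold the pipe-to-soil potential at or
// more negative than -0.850 V against a copper-copper sulfate reference
// electrode. The toy circuit model and all numbers below are assumptions.
const double kTargetV = -0.850;

// Toy model: the pipe rests at a "native" potential, and each amp of
// impressed current polarizes it a little further negative.
double pipeToSoilPotential(double rectifierAmps) {
    const double nativeV     = -0.55;   // unprotected steel, assumed
    const double voltsPerAmp = -0.004;  // polarization per amp, assumed
    return nativeV + voltsPerAmp * rectifierAmps;
}

int main() {
    double amps = 0.0;
    const double stepAmps = 5.0;  // raise the output in small increments

    // Turn up the rectifier until the protection criterion is satisfied.
    while (pipeToSoilPotential(amps) > kTargetV) {
        amps += stepAmps;
    }

    std::cout << std::fixed << std::setprecision(3)
              << "Rectifier output: " << amps << " A, pipe-to-soil: "
              << pipeToSoilPotential(amps) << " V\n";
}
```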


The Howard Frankland Bridge also had some impressed current systems in addition to the passive protection to further extend its life, reinforcing a valuable lesson we learn over and over again: the maintenance and rehabilitation of existing facilities is almost always less costly, uses fewer resources, and is less environmentally disruptive than replacing them. You don’t need a civil engineer to tell you that an ounce of prevention is worth a pound of cure (or whatever the metric equivalent of that is). It’s true for human health, and it’s true for infrastructure. Making a structure last as long as possible before it needs to be replaced isn’t just good stewardship of resources. It’s a way to keep the public safe and prevent environmental disasters too. Corrosion is one of the primary ways that infrastructure deteriorates over time, so cathodic protection systems are an essential tool for keeping the constructed environment safe and sound.

September 06, 2022 /Wesley Crump