Practical Engineering

Why Bridges Don't Sink

July 02, 2024 by Wesley Crump

[Note that this article is a transcript of the video embedded above.]

The essence of a bridge is not just that it goes over something, but that there’s clear space underneath for a river, railway, or road. Maybe this is already obvious to you, but bridges present a unique structural challenge. In a regular road, the forces are transferred directly into the ground. On a bridge, all those forces on the span get concentrated into the piers or abutments on either side. Because of that, bridge substructures are among the strongest engineered systems on the planet. And yet, bridge foundations are built in some of the least ideal places for heavy loading. Rivers and oceans have soft, mucky soils that can’t hold much weight. Plus, obviously, a lot of them are underwater.

What happens when you overload soil with a weight it can’t handle? In engineering-speak, it’s called a bearing failure, but it’s as simple as stepping in the mud. The foundation just sinks into the ground. But, what if you just keep loading it and causing it to sink deeper and deeper? Congratulations! You just invented one of the most widely used structural members on earth: the humble foundation pile. How do they work, and how can you install them underwater? I’m Grady, and this is Practical Engineering. Today we’re having piles of fun talking about deep foundations.

I did a video all about the different types of foundations used in engineering, but I didn’t go too deep into piles. A pile is a fairly simple structural member, just a long pole driven or drilled into the ground. But, behind that simplicity is a lot of terrifically complex engineering. Volume 1 of the Federal Highway Administration’s manual on the Design and Construction of Driven Pile Foundations is over 500 pages long. There are 11 pages of symbols, 2 pages of acronyms, and you don’t even get to the introduction until page 46. And just a little further than that, you get some history of driven piles. Namely that the history has been lost to time. Humans have been hammering sticks into the ground since way before we knew how to write about it. And that’s pretty much all a driven pile is.

The first piles were made from timber, and wood is still used all these years later around the world. Timber piles are cheap, resilient to driving forces, and easy to install. But wood rots, it has an upper limit on length from the size of the tree, and it’s not that strong compared to the alternatives. Concrete piles solve a lot of those problems. They come in a variety of sizes and shapes, and again, are widely used for deep foundations. One disadvantage of concrete piles is that they have to be pretty big to withstand the force required to drive them into the ground. Some concrete piles can be upwards of 30 inches or 75 centimeters wide. It is hard to hit something that big hard enough to drive it downward into soil, and a lot of ground has to either get out of the way or compress in place to make room. Steel piles solve that problem since they can be a lot more slender. Pipe piles are just what they sound like, and the other major alternative is an H-pile. Your guess is as good as mine as to why the same steel shape is called an I-beam but an H-pile. But, no matter the material, all driven piles are installed in basically the same way.

Newton’s third law applies to piles like everything else: pushing one deep into the ground creates an equal and opposite reaction. You would need either an enormous weight to take advantage of gravity or some other strong structure attached to the ground to react against and develop the pushing force required to drive it downward. Instead of those two options, we usually just use a hammer. By dropping a comparatively small weight from a height, we convert the potential energy of the weight at that height into kinetic energy. The force required to stop the hammer as it falls gets transferred into the pile. Hopefully this is intuitive. It’s pretty hard to push a nail into wood, but it’s pretty easy to hammer it in... well, it’s a little bit easier to hammer it in. There are quite a few types of pile drivers, but most of them use a large hammer or vibratory head to create the forces required.
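To put rough numbers on that idea, here’s a back-of-the-envelope sketch of the energy in a single blow. The hammer size and drop height are invented for illustration, not taken from any real driving rig:

    # Energy delivered by one hammer blow: potential energy at the top of
    # the stroke becomes kinetic energy at impact (ignoring losses)
    g = 9.81            # gravitational acceleration, m/s^2
    hammer_mass = 5000  # kg, a mid-size impact hammer (assumed)
    drop_height = 1.5   # m (assumed)

    energy_joules = hammer_mass * g * drop_height
    print(f"Energy per blow: {energy_joules / 1000:.0f} kJ")  # ~74 kJ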

Maybe it goes without saying, but the main goal of a foundation is to not move. When you apply a load, you want it to stay put. Luckily, piles have two ways to do that (at least for vertical loads). The first is end-bearing. The end, or toe, of a pile can be driven down to a layer of strong soil or hard rock, making it able to withstand greater loads. But there’s not always a firm stratum at a reasonable depth below the ground. Quote-unquote “bedrock” is a simple idea, but in practice, geology is more complicated than that. Luckily, piles have a second type of resistance: skin friction, also known as shaft resistance. When you drive a pile, it compacts and densifies the surrounding soil, not only adding strength to the soil itself, but creating friction along the walls of the pile that hold it in place. The deeper you go, the more friction you get. Let me show you what I mean.

I have my own pipe pile in the backyard that I’ve marked with an arbitrary scale. When I drop the hammer at a prescribed height, the pile is driven a certain distance into the ground. Do this enough times, and eventually, you reach a point where the pile kind of stops moving with each successive hammer blow. In technical terms, the pile has reached refusal. I can graph the blow count required to drive the pile to each depth, and you get a pretty nice curve. It’s easy to see how it got stronger against vertical loads the deeper I drove it in. Toward the end, it barely moved with each hit. This is a really nice aspect of driven piles: you install them in a similar way to how they’ll be loaded by the final design. Of course, bridges and buildings don’t hammer on their foundations, but they do impose vertical loads. The tagline of the Pile Driving Contractors Association is “A Driven Pile is a Tested Pile” because, just by installing them, you’ve verified that they can withstand a certain amount of force. After all, you had to overcome that force to get them in the ground. And if you’re not seeing enough resistance, in most cases, you can just keep driving downward until you do!
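If you wanted to keep a driving record like the one from my demo, the bookkeeping is simple. Here’s a minimal sketch; the blow counts are made up, but they mimic the shape of the curve (easy going at first, then refusal):

    # Driving log: blows needed for each increment of penetration
    increment_cm = 10                        # each record covers 10 cm of advance
    blows_per_increment = [3, 5, 9, 18, 42]  # invented counts that rise with depth

    depth = 0
    for blows in blows_per_increment:
        depth += increment_cm
        print(f"{depth:3d} cm: {blows} blows")

    # Field crews use formal refusal criteria (e.g., blows per unit of advance);
    # this threshold is just a placeholder for the idea.
    if blows_per_increment[-1] >= 30:
        print("Pile has effectively reached refusal.")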

But piles don’t just resist downward forces. Structures experience loads in other directions too. Buildings have horizontal, or lateral, loads from wind. Bridges see lateral loads from flowing water, and even ice or boats contacting the piers. Both can experience uplift forces that counteract gravity from floods due to buoyancy or strong winds. If you’ve ever hammered in a tent stake, you know that piles can withstand loading from all kinds of directions. And then there’s scour. The soil along a bridge might look like this right after the bridge is built, but after a few floods, it can look completely different. Engineers have to try and predict how the soil around a bridge will scour over time, from natural changes in the streambed and those created by the bridge itself. Then they make sure to design foundations that can accommodate those changes and stay strong over the long term. This is why bridge foundations sometimes look kind of funny. Loads transfer from the superstructure down into the piers. The piers sit on a pile cap that transfers and distributes loads into the piles themselves. Those piles can be vertical, but if the engineer is expecting serious lateral loads, some of the piles are often inclined, also called battered piles. Inclined piles take better advantage of the shaft resistance to make the foundation stronger against horizontal loads.

As important and beneficial as they are, driven piles have some limitations too. For one, they’re noisy and disruptive to install. Just last year, I had two friends on separate trips to Seattle who sent me a video of the exact same pile-driving operation. It’s good to have friends who know how much you like construction. But my point is, this type of construction is pretty much impossible to ignore. In dense urban areas, most people are just not willing to put up with the constant banging. Plus, the vibrations from installing them can disrupt surrounding infrastructure. Pile driving is crude; in many cases, the piles aren’t designed to withstand the forces of the structure they’ll support but rather the forces they’ll have to experience during installation, which are much higher. They can’t easily go through hard geological layers, cobbles, or boulders; they can wander off path, since you can’t really see where you’re going; and they can cause the ground to heave, because you’re not removing any soil while you force them into the subsurface. The second major category of piles solves a lot of these problems.

And, wouldn’t you know it? There’s an FHWA manual that has all the juicy details - Drilled Shafts: Construction Procedures and Design Methods. This one is a whopping 747 pages long. A drilled shaft is also exactly what it sounds like. The basic process is pretty simple. Drill a long hole into the ground. Place reinforcing steel in the hole. Then fill the whole thing with concrete. But, bridge piers are often, as you probably know, installed underwater. Pouring concrete underwater is a little tricky. Imagine trying to pour a smoothie at the bottom of a pool! Let me show you what I mean.

This is my garage-special bridge foundation simulator. It has transparent soil in the form of superabsorbent polymer beads… and you know we have to add some blue water too. You can probably imagine how easy it might be to drill a hole in this soil. It’s just going to collapse in on itself. We need a way to keep the hole open so the rebar and concrete can be installed. So, drilled shafts installed in soft soils or wet conditions usually rely on a casing to support the walls. Installing a casing usually happens while the hole is drilled, following the auger downward. I tried that myself, but I only have two hands, and it was pretty unwieldy. So, just for the sake of the demo, I’m advancing the casing into the soil ahead of time. Now I can drill out the soil to open the shaft. And now I’m realizing the limitations of my soil simulant. It was still pretty hard to do, even with the casing in place. It took a few tries, but I managed to get most of it out.

So now I have an open hole, but it’s still full of water. Even if your casing runs above the water surface, and you try to pump it out, you can still have water leaking in from the bottom. In ideal conditions, you can get a nice seal between the bottom of the casing and the soil, but even then, it’s pretty hard to keep water out of the hole, and luckily it doesn’t matter.

Instead of concrete, I’m using bentonite clay as a substitute. It’s got a similar density, and it’s perfect for this demo because you can push it through a small tube… if you get the proportions right. Ask me how I know. This is me pondering the life decisions that led up to me holding a gigantic syringe full of bentonite slurry in my garage. You can’t just drop this stuff through the water. It mixes and dilutes, just turning into a mess. Same is true for concrete. The ratio of water to cement in a concrete mix is essential to its strength and performance, so you can’t do anything that would add water to the mix. The trick is a little device called a tremie. Even though it has a funny name, it’s nothing more than a pipe that runs to the bottom of the hole. As long as you keep the end of the tremie below the surface of the concrete that you’re pumping in, or concrete simulant in my case, there’s no chance for it to mix with the water and dilute. I’m just pushing the clay into the casing with a big syringe, making sure to keep the end of the tube buried. Because concrete is a lot more dense than water, it just displaces it upward, out of the hole. 

In underwater installations, the casing is often left in place. One advantage is that you can build a floating pile cap. Instead of building a big cofferdam and drying out the work area to construct a big concrete structure, sometimes you can raise the pile cap into or above the water surface, reducing the complexity of its construction. These “high rise” pile caps are used a lot in offshore wind turbines. But, not all casings are permanent.

In some situations, it’s possible to pull the casing once the hole is full of concrete, saving the sometimes enormous cost of each gigantic steel tube. I tried to show this in my demo. It’s not beautiful, but it did work. Again, the concrete is dense, so the pressure it exerts on the walls of the hole is enough to keep the soil from collapsing. And because drilled shafts can be much larger than driven piles, sometimes you don’t even need a group of them. Lots of structures, including wind turbines, highway signs, and more, are built on mono-pile foundations. Just a single drilled shaft deep in the ground, eliminating the need for a pile cap altogether. Another interesting aspect of drilled shafts is that you can ream out the bottom, creating an enlarged base that increases the surface area at the toe. This helps reduce a pile’s tendency to sink, and it can help with uplift resistance too.

Driven piles and drilled shafts are far from the only types of deep foundation systems. There are tons of variations on the idea that have been developed over the years to solve specific challenges: Continuous flight auger piles do the drilling and concreting in essentially one step, using a hollow-stem auger to fill the hole as it’s removed. Then reinforcement is lowered into the wet concrete. You can fill a hole with compacted aggregate instead of concrete, called a stone column (or Geopier, by trade name), if you’re only worried about compressive loads. Helical or screw piles twist into the ground, instead of being hammered, reducing vibrations and disturbance. Micropiles are like tiny drilled shafts used when there are access restrictions or geologic constraints. And of course, there are sheet piles that aren’t really used for foundations but are driven piles meant to create a wall or barrier. Let me know if I forgot to mention your favorite flavor of pile.

Even though they’re usually much stronger than shallow foundations, piles can and do fail. We’ve talked about San Francisco’s famous Millennium Tower in a previous video. That’s a skyscraper on a pile foundation that sank into the ground, causing the building to tilt. It seems like they mostly have it fixed now, but it’s still in the news every so often, so only time will tell. In 2004, a bridge pier on the Lee Roy Selmon Expressway in Tampa, Florida sank 11 feet (more than 3 meters) while it was still under construction because of the complicated geology. It cost 90 million dollars to fix and delayed the project’s completion by a year. These case studies highlight the complexity of geotechnical engineering when we ask the ground to hold up heavier and heavier loads. The science and technology that goes into designing deep foundations are enough to spend an entire career studying, but hopefully, this video gives you a little insight into how they work.


This Bridge Should Have Been Closed Years Before It Collapsed

June 18, 2024 by Wesley Crump

[Note that this article is a transcript of the video embedded above.]

On January 28, 2022, about an hour before dawn, the four-lane Fern Hollow Bridge in Pittsburgh, Pennsylvania, collapsed without warning. Five vehicles, including an articulating bus, fell with the bridge, and another car drove off the abutment after the collapse, not realizing the bridge was gone. Although there were no fatalities, several people in the vehicles were seriously injured. And this bridge had been listed as being in ‘poor condition’ for over a decade. Anyone who walked by the supports in the park below would have had reason to question its safety, as seen in this sadly prophetic tweet from 2018, four years before the collapse. So, why was it left open in this state?

While some initial findings were released earlier, the official NTSB report was delivered to the public this year in February, more than two years after the collapse and over a year after the replacement bridge was built and open. The report included the use of some really cool technology for the forensic study of structures and revealed systemic flaws in how we inspect, analyze, and prioritize repairs for bridges. In fact, the NTSB issued recommendations to basically every organization involved in this bridge from the bottom to the top, and they referenced that tweet that got so much attention. This is a crazy case study of how common sense can fall through the cracks of strained budgets and rigid oversight from federal, state, and city staff. And the lessons that came out of it aren’t just relevant to people who work on bridges. It's a story of how numerous small mistakes by individuals can collectively lead to a tragedy. I’m Grady, and this is Practical Engineering.

The Fern Hollow Bridge was opened in 1973, replacing an aging arch bridge built at the beginning of the 20th century, crossing Frick Park and Fern Hollow Creek. The 1973 bridge used a K-frame design with continuous steel girders supporting the deck, each with two angled steel legs supported on concrete thrust blocks. At the time, the bridge’s design was celebrated because it blended well into its setting. It was featured on the cover of the 1974 edition of Prize Bridges by the American Institute of Steel Construction. And a big part of why it looked so harmonious with the park below was the type of steel used for the design.

The Fern Hollow bridge was fabricated from weathering steel, sometimes referred to by its genericized trademark, COR-TEN. And it was developed commercially right there in Pittsburgh by US Steel in the 1930s. Unlike most types of steel, whose rust can continually flake away, exposing more material to corrosion, the oxide layer on weathering steel (called its patina) is more stable and protective, shielding the underlying material against exposure to the elements. In that way, weathering steel acts kind of like aluminum, which protects itself from corrosion in a similar way. And, architecturally, it’s a nice material. You get this rustic look that can give structures a more comfortable and less obtrusive appearance. But weathering steel has a limitation: For a stable patina to form, the material has to stay mostly dry. If water pools or the steel is kept damp for extended periods, that patina of rust isn’t enough to protect the underlying steel, and it will continue to corrode. Corrosion of structural steel is called section loss by engineers, and it's easy to see why in these photos of the bridge.

What’s more alarming than what’s in those photos is where they came from: inspection reports of the bridge. It’s not that this deterioration somehow went unnoticed. The bridge supports were clearly visible from a popular walking trail. Between 2005 and 2021, this bridge was inspected a total of 14 times! In those reports, you get a clear and vivid story of its decline. First were the drainage problems. You can see in these images from multiple previous bridge inspections there were drainage grates on the roadway that were 100 percent clogged. Rainwater, and even worse, salty meltwater from the frequent snow that Pittsburgh sees each winter couldn’t follow the prescribed drainage paths off the deck and into the creek below. Instead, that water would leak through the bridge deck, dribbling over the structural steel, and pooling in portions of the legs where webbing and tie-plates could catch puddles of water, leaves, and debris.

The City was aware of the section loss due to these drainage problems for many years before the collapse. Nearly every inspection report noted problems with the drains and the accelerating corrosion that was resulting. In fact, in 2009, the cross-braces connecting each pair of legs were found to be failing, and steel cables were installed as a temporary retrofit until the framing could be replaced. These cables were lightly tensioned to add structural integrity to the bridge but were never meant to be a permanent solution.

You can see the ends of two of these cables in this now-infamous tweet from 2018. Of course, the more notable feature of this image is the fully separated steel cross-brace! That photo was taken about nine years after the temporary cables were installed. And they remained in place for the rest of the bridge's life, which ended up being only four more years. But the cross-bracing between the legs wasn’t the only place where corrosion was an issue. The legs themselves were also fabricated from weathering steel, and that steel was suffering, too. Since 2005, inspection reports marked them in fair to poor condition with areas where the steel had completely rusted through. By 2019, all four legs were given the worst assessment possible for an individual bridge element. According to the code, that should trigger a structural review to check whether the integrity is affected by the poor condition of a structural element, but it was never done. And that’s not all.

An important part of inspecting steel bridges is identifying members that are “fracture critical.” That’s engineering jargon, but the idea is actually pretty straightforward. It’s any piece of steel under tension that lacks redundancy. If it breaks, the bridge collapses. And these types of members get special attention because of their importance, so inspection teams identify them ahead of time to make sure they get a proper look. This drawing shows in green which elements of the bridge were considered to be fracture-critical. Notice that while the girders crossing the span are identified, the legs are not. And, at first glance, that might match your intuitions. Bridge piers, columns, and vertical supports usually don’t experience tension forces, right? They’re in compression. So if there’s a crack or break, the forces just squeeze it together, generally not that big a deal. But K-frame bridges are different. By splaying out the legs, there are loading conditions that can apply bending forces, putting part of each beam in tension. And, this particular bridge had another feature that was absolutely essential to its performance.

Each leg of the bridge was essentially an I-beam: a central web with a top and bottom flange. To simplify the foundation design, each leg had a “shoe”: a tapered end that would connect to the concrete block. It’s clear that the narrower section would have less strength, so larger stiffeners were added to each shoe to strengthen that portion of the leg. These are just steel plates welded to the web and flanges to increase the leg’s rigidity. And I built a little cardboard model to clarify this point. This particular stiffener, called the transverse tie plate, bridges the two flanges right where they taper down. And if I apply a compressive force on the leg, it’s easy to see what kind of force that tie plate experiences as a result. It’s tension. These tie plates were fracture-critical members of the bridge, but never identified as such, and so, even though it was clear they were deteriorating quickly over time, the inspectors never elevated the concerns to a priority level that might have spurred a more immediate response. But there was another opportunity to catch the problem.

In 2013, an inspector was concerned enough about the bridge's safety to recommend that it be reviewed for a load rating. Most bridges are designed to allow any legal load to pass over, but sometimes, either because of an old design or poor structural conditions, it's necessary to limit the weight of vehicles allowed. Engineers analyzed the bridge in 2014 and decided it could only handle 26 tons per vehicle, just over half of its previous rating. When NTSB reviewed that decision in hindsight, they found some pretty serious errors.

For one, the load rating for the bridge was based on a layer of about 3 inches of asphalt paving on top of the concrete road deck. In reality, the bridge had about double that amount. The City’s records of the removal and addition of pavement were poorly kept, so the engineering firm doing the load rating had no idea there was so much extra weight. For two, the engineers didn’t fully account for all the corrosion on the legs. This was partly because inspectors hadn’t cleaned off the rust to measure and report the actual thickness of the remaining steel. Even so, the engineers used a method that distributed the section loss from corrosion evenly along the entire leg, instead of applying it where it actually was, at the shoes and tie plates. That’s a pretty commonly-used simplification that usually generates conservative results (since the worst corrosion is rarely located at the most critical part of a structure), but again, that wasn’t the case for the Fern Hollow Bridge, and no one had recognized how important those tie plates really were.

And for three, those engineers made an incorrect assumption about how the bridge’s legs would buckle. A structural member under compression will buckle at different loads depending on how much restraint it has at the ends. This is something you learn in year one of engineering. If you keep a column from rotating at its ends, you substantially increase the amount of force it can withstand, and with the original cross-bracing between the legs, that would have been the case. But I’m sure I don’t have to tell you that steel cables don’t provide the same support as rigid members. Again, this is engineering 101: “You can’t push a rope.” The cables provided some restraint, but not in the same way that the original bracing could, so the load rating applied to the bridge ended up significantly overestimating its actual capacity. In fact, when NTSB updated the load rating with these errors fixed, they found that the bridge should have been limited to 3 tons, basically nothing for a bridge. In effect, this load rating exercise should have closed the bridge to traffic entirely. These structural issues were exactly what the process was meant to identify. But instead, the bridge stayed open to everyone except the largest of trucks, and here’s what happened, courtesy of NTSB’s animation team.
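To see how much end restraint matters, consider Euler’s buckling formula, where an effective length factor K captures the end conditions. This is a generic textbook sketch with assumed numbers, not an analysis of the Fern Hollow legs:

    import math

    # Euler buckling load for a column: P_cr = pi^2 * E * I / (K * L)^2
    # K ~ 0.5 when both ends are rigidly restrained against rotation,
    # K = 1.0 when both ends are free to rotate (pinned)
    def buckling_load(E, I, L, K):
        return math.pi**2 * E * I / (K * L)**2

    E = 200e9  # Pa, modulus of steel
    I = 2e-4   # m^4, moment of inertia of an assumed section
    L = 10.0   # m, assumed unbraced length

    braced = buckling_load(E, I, L, K=0.5)   # ends held by rigid bracing
    pinned = buckling_load(E, I, L, K=1.0)   # ends free to rotate
    print(f"Rigid end restraint carries {braced / pinned:.0f}x the load")  # 4x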

The transverse tie plate on the southwest leg, weakened by corrosion, failed first under tension, separating the flanges on the leg, and ultimately causing it to buckle. With no redundancy in the supports, the loads had nowhere to go, and so the rest of the bridge fell into the valley below. The articulated bus had both rear-facing and forward-facing cameras on it, which captured some truly harrowing footage of the event. Looking at the rear-facing camera, you can see the western portion of the bridge begin to fail. Keep an eye on the railing, and you can see the exact moment it starts. Once it started, there was no stopping it. Within two seconds, the front-facing camera shows that the collapse had propagated all the way to the eastern abutment, and the bridge fully failed.

Thankfully, the collapse happened during particularly inclement weather. School delays and generally poor conditions meant that traffic was lighter than normal, and the weather also likely kept folks away from the trail underneath. On a fair weather day during rush hour, it wouldn’t be uncommon for the eastbound lanes of the bridge to be fully occupied by heavy traffic, and the trail underneath to be populated with dog walkers, families, or even classes on field trips from nearby schools. The bridge also carried a large gas line, which was severed during the collapse, leading to a major leak and some evacuations, but they got it shut off in time. It really is remarkable that nobody was killed in a failure of this magnitude. But there were still multiple victims needlessly affected for the rest of their lives by the collapse, not to mention the overall erosion of trust in the organizations and engineering systems meant to keep the public safe.

By pure happenstance, President Biden was set to arrive in Pittsburgh on the very day of the collapse to speak in support of the Infrastructure Investment and Jobs Act, so he rearranged his trip to make some remarks at the site. One of the entities supported by the act is the National Highway Performance Program, which ultimately funded the replacement of the collapsed bridge. But at that time, no one understood the full scope of neglect and bad assumptions that led to the gradual, and then sudden, demise of the bridge. In those two years after Fern Hollow Bridge fell, the NTSB conducted numerous interviews with those involved, from the paving contractors to the inspectors to the bridge rating engineers. They performed 3D laser scanning of the bridge components to compare them to as-built conditions. They tested sections of steel for strength and durability. They reviewed all the previous records of the design, construction, and repairs. And they built a detailed finite-element model of the bridge to confirm that the gradual corrosion of one small structural element, the transverse tie plate on the southwest leg, initiated the collapse. And then they documented why it got to that point: the City of Pittsburgh just didn’t fix it.

This figure in the NTSB report tells the story as clearly as I think is possible. From 2005 onward, recommendations from inspection reports to repair parts of the bridge didn’t fall off the list. They just kept being repeated by each new inspection, year after year. Since 2007, every single inspection report that included recommendations said to repair the stiffener plates in the legs that were heavily corroded. These were Priority 2 recommendations, which means the timeframe to complete them is before the next inspection. But it was never done. They didn’t fix the drainage problems that were accelerating the corrosion, they didn’t apply the protective coatings that might have slowed it down, and they never analyzed the capacity of the legs after they were rated the worst possible condition a structural element can have. And, apparently, there was no mechanism to follow up on those recommendations by the state charged with overseeing the bridge inspection program. When there was finally a chance to recognize how deficient the structure really was through a new load rating, the engineers made a few bad assumptions, missed it by a mile, and left the bridge open, a ticking time bomb, for years. (Years, by the way, in which the City still didn’t address the recommendations from inspection reports.)

Due to the nature of the emergency, the site was cleaned up quickly, with a huge crane brought in to remove the bus, and building of the replacement bridge happened on a fast-track schedule. The new bridge uses a more conventional design: pre-stressed concrete girders on vertical piers. The formed stone texture on the columns certainly doesn’t blend into the park as well as the graceful and patinaed K-frame once did, but I doubt anyone involved in the project could stomach another structure built from weathering steel, given the circumstances. The new bridge might not win any awards for beauty, but it could win some for speed. After a colossal effort from the design and construction teams, it opened to limited traffic less than a year after the collapse in December of 2022, and by July the next year, it was fully operational. It would be almost a year later before the NTSB concluded why the previous bridge collapsed, not for the purpose of blame, but to issue recommendations to prevent something like this from recurring in the future. And unlike the recommendations from those inspection reports, many of the NTSB recommendations have already been put into practice.

They published a special report on weathering steel bridges to highlight the specific challenges of keeping them in good condition, and they identified several similar bridges that needed a closer look. The City of Pittsburgh quadrupled their spending on inspection, maintenance, and repairs. And they redid the load ratings on all the bridges they owned, resulting in one bridge being closed until it can be rehabilitated and two more having lane restrictions imposed. PennDOT released a technical bulletin to shore up their bridge inspection program. And even the federal government has implemented a process to identify, prioritize, and follow up on recommendations related to weathering steel bridges.

But as I read through those recommendations from the NTSB, one thing struck me: They all add up to more paperwork. And this is just my own personal opinion as someone who did this kind of work for nearly a decade (not on bridges, but other large infrastructure projects). We have these national inspection standards and procedures - huge documents that you could spend an entire career understanding. We have the Federal Highway Administration overseeing the program, state DOTs charged with carrying it out, individual bridge owners, like Pittsburgh, responsible for inspecting their own bridges, and then the private contractors who do most of the actual work. We have this huge machine with thousands of people, federal, state, and local involvement, and millions of dollars meant to keep the traveling public safe. And what did it do for us when a photo like this is all it would take for any reasonable person to say: “This bridge needs to be fixed”?

This big machine, in a lot of cases, has all the work sectionalized out. The inspectors see the bridge up close, but they have no autonomy to do anything but document and give recommendations. It’s not their bridge. But the owners who are charged with the safety of their bridges just see a piece of paper. Each recommendation is just another one on the list of sometimes hundreds of action items, to sort and prioritize and try to find the budget to cover. All the NTSB recommendations feel a little bit like band-aids if the real source of the problem was that no one person in this whole machine had both a full appreciation of the bridge's condition and the authority to do something about it. And if that’s the case, I’m not sure any of those recommendations really fixes that problem. I don’t know what the answer is, and I’m still wrestling with trying to understand how something like this can fall through the cracks of the enormous system we’ve built for the sole purpose of trying to prevent it.

If you take a walk on the Tranquil Trail through Frick Park beside Fern Hollow Creek and look carefully, you can still see remnants of the old bridge. And I’m glad the City left them, because they’re a good reminder that design and construction are two parts of a three-part system for keeping people safe. Maintaining infrastructure is thankless work. Don’t get me wrong, it can be a really rewarding career. Inspections involve a lot of time out in the field seeing cool structures up close. And repair projects are often interesting challenges for contractors. But they’re not rewarding in the same way that designing and building new stuff can be. No one holds a press conference and cuts a big ribbon at the end of a bridge inspection or structural retrofit. Building a new structure is not just an achievement in its own right; it’s a commitment to take good care of it for its entire design life, and then to rehabilitate, or replace, or even close it when it’s no longer safe for the public. And I think this is the perfect case study to show that there’s more we could do to encourage and celebrate that kind of work as well.


The Most Confusing Part of the Power Grid

June 04, 2024 by Wesley Crump

[Note that this article is a transcript of the video embedded above.]

In March of 1989, Earth experienced one of its strongest geomagnetic storms in modern history. It all started when scientists observed a cluster of sunspots—active, magnetic areas on the sun's surface—emerging on its horizon. Over the next few days, the sun slowly rotated until the region began to point directly at Earth. Just as it did, two solar flares erupted from the sunspots. Accompanying the flares were coronal mass ejections: huge bursts of solar wind, essentially charged particles from the sun. The coronal mass ejections eventually crashed into the earth’s magnetic field, causing it to squish and compress and ultimately induce electric currents at the surface.

In Quebec, Canada, the rapid changes in magnetic fields would have mostly gone unnoticed by people, but they didn’t go unnoticed by the power grid. The region’s unique geology, a shield of hard rock that is a poor conductor of electricity, kept these induced currents from dissipating into the ground. So they found another path: the electrical transmission lines. The geomagnetic storm ended up blacking out a large part of the Hydro-Quebec power grid for nine hours. And the first dominoes of the collapse (or rather, the first seven) were pieces of equipment known as static compensators. But to understand how static compensators work and why a solar flare could trip them offline, you kind of have to start with the basics.

You might know that most parts of all modern power grids use alternating current or AC. The voltage and current on the lines slosh back and forth, 50 or 60 cycles per second, depending on where you live. If you love power electronics, that low, dull, AC hum might be music to your ears. But if this is kind of new to you, alternating current can be a little bit mysterious. What’s even weirder is that, even though the current constantly alternates its polarity, electrical power only moves in one direction… under ideal conditions. And geomagnetic storms aren’t the only thing that can make the grid behave in funny ways. There are devices even in your own home that force the grid to produce power and move it through the system, even though they aren’t actually consuming it. Let’s go out to the garage, and I can show you what I mean. I’m Grady, and this is Practical Engineering. In today’s episode, we’re talking about how power actually flows on the grid.

I’ve built a model power grid here in the shop. (Not the first time I’ve said that, and it probably won’t be the last.) I’ll keep it simple at first and build up the complexity as I explain these concepts. And Zap McBodyslam is back in the shop to help out. My grid has one power source, right now just a battery, a transmission line to carry the power, and a load (in this case, an incandescent light bulb). It’s probably not the most interesting circuit you’ve ever seen. But like I said, understanding the basics of power flow is essential to understanding the more complicated, and I think, the more interesting aspects of how it works on a huge scale. So, here’s a one-minute refresher on electrical circuits:

There are really only four numbers that matter the most in a circuit. First is voltage, the difference in electric potential between two locations. In the classic pipe analogy, voltage is the pressure that drives water to flow from one side to the other. In my circuit, the battery is supplying about 10 volts across the bulb. Next is current, the flow of electric charge. In the pipe analogy, this is the flow rate of the water. In my circuit, I can measure the current as 1.2 amps. Third is resistance, the opposition to the flow of current. It’s the size of the pipe. Incandescent bulbs actually change their resistance depending on voltage, so it can’t be measured directly with a meter. That’s okay, though, because all three of these values are related to each other. That relationship, called Ohm’s law, is about as simple as it gets. Voltage is equal to current times resistance. If you know two of them, you can find the third with some basic math. For example, 10.1 volts driving 1.2 amps means the resistance of my lightbulb is around 8 ohms. The final number we care about is power, the transfer of energy. Power does the actual work, in this case, creating light and heat in the bulb. Calculating power is as simple as multiplying the voltage and the current together. 10 volts times 1.2 amps tells me that this bulb is dissipating 12 watts. That’s electrical engineering in a nutshell, and it’s relatively straightforward for a circuit like this that uses direct current or DC, because none of our important numbers change. But, as I mentioned, that’s not true on the grid.
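Here’s that arithmetic spelled out, using the measurements from my circuit:

    voltage = 10.1  # volts, measured across the bulb
    current = 1.2   # amps, measured through the circuit

    # Ohm's law: V = I * R, so R = V / I
    resistance = voltage / current
    print(f"Resistance: {resistance:.1f} ohms")  # ~8.4 ohms

    # Power: P = V * I
    power = voltage * current
    print(f"Power: {power:.1f} watts")  # ~12.1 watts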

Let me swap out the battery with a transformer plugged into an outlet and see what happens. At first glance, there’s no change. The bulb is still lighting, just like it did with the battery. I can measure the voltage by switching my meter to AC: 8.4 volts, not too far from the DC circuit. I can measure the current with this clamp-on meter: 1.2 amps, same as before. But those are just simplifications of what’s really happening on the lines. To see that, we need a different piece of equipment. This oscilloscope measures voltage over time and plots it as a graph on the screen. And I can insert a resistor into the circuit and use a second probe to plot the voltage across that resistor as a simple way of measuring current. “So the yellow will be the voltage, and the green will be the current.” You can see that neither the voltage nor the current is constant… unless you trip over all the cords. They’re switching directions over and over again. This might not be too surprising to you yet, but watch what happens if I switch out the lightbulb with a different kind of load. Let’s try a capacitor.
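One detail worth flagging: a meter set to AC reports the RMS (root-mean-square) value, the steady DC voltage that would deliver the same power, which is why the 8.4-volt reading is lower than the peaks on the oscilloscope trace. For a sine wave, the relationship is simple:

    import math

    # For a sinusoid, V_rms = V_peak / sqrt(2)
    v_rms = 8.4                    # volts, as read on the AC meter
    v_peak = v_rms * math.sqrt(2)  # the peak the oscilloscope trace reaches
    print(f"Peak voltage: {v_peak:.1f} V")  # ~11.9 V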

This is a device that stores energy in an electric field between two plates. You see them everywhere in electrical circuits, and they do a funny thing on the grid. When I insert the capacitor into my circuit, the graph of voltage and current looks different because they’re no longer in phase. “Hey… that worked perfectly.” The current waveform is leading the voltage; the current peaks happen before the voltage ones. That’s because the current has to flow into the capacitor before the voltage between the plates rises. It takes time for the capacitor to charge and discharge, which results in a delayed response in the voltage. Now, let’s try another type of load called an inductor.

An inductor is basically a coil of wire. Like a capacitor, an inductor stores energy, but instead of an electric field, it stores that energy in a magnetic field. This is just an electromagnet like you might see in a scrapyard. If I swap in an air-core inductor, you can hear the screwdriver rattle against the table as the magnetic field rapidly changes direction. And, we get the opposite effect of the capacitor when the inductor is inserted into the circuit. This time, the current waveform is lagging the voltage. That magnetic field resists changes in current, so it creates a delay, this time in the current waveform. I can even vary this inductance and thus the lag in the current by moving this ferrite rod in and out of the core. All this is interesting on its own, but these little shifts in a graph have serious implications on the grid, and have even resulted in numerous blackouts across the world. Here’s why:

Remember that the power consumed by an electrical load is just the voltage multiplied by the current. We can do that for any point in time across this graph. For a purely resistive load, like the lightbulb, the current and voltage are in sync. When one is positive, the other is positive. When one is negative, the other is too. So when you multiply them together at any point along the graph, you always get a positive number. The power fluctuates, but it’s always moving in one direction. For a reactive load (the term used for inductors and capacitors), that’s no longer the case. There are times in the cycle when the current and voltage are opposite polarity, meaning, instead of being consumed, power is actually flowing out of the load. In fact, for a purely capacitive or inductive load, there’s no power consumption at all - no work being done. It’s just stored in a magnetic or electric field and returned. But there’s still current flowing, and that matters.
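You can verify that claim numerically by multiplying the two waveforms point by point and averaging over a cycle. A minimal sketch with unit-amplitude sine waves:

    import math

    def average_power(phase_shift_deg, samples=1000):
        """Average of v(t) * i(t) over one cycle for unit-amplitude sines."""
        shift = math.radians(phase_shift_deg)
        total = 0.0
        for n in range(samples):
            t = 2 * math.pi * n / samples
            total += math.sin(t) * math.sin(t - shift)
        return total / samples

    print(f"In phase (resistor):        {average_power(0):.3f}")   # 0.500
    print(f"90 degrees out (reactance): {average_power(90):.3f}")  # ~0.000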

Of course, most things connected to the power grid aren’t purely reactive. But lots of devices that we plug in have some amount of inductance. Look around your home for any big motors. Air conditioners, refrigerators, washers, dryers, large power tools, and more primarily use induction motors because they’re cheap, simple, and last a long time. And inside an induction motor is a series of wire coils used to create magnetic fields that spin the rotor, just like the inductor I used in the demo. Part of the power that flows into those coils just gets sent back out onto the grid. You might be thinking, “So what? Nothing wrong with storing a little bit of energy, as long as I give it back in less than a sixtieth of a second afterwards.” But, the grid still had to produce that power, and more importantly, deliver that power to your home and carry it back away.

The electric meter at your home, in most cases, only tracks the power you actually consume. So, you don’t pay for the reactive power that flows into your devices and back out again. But that doesn’t mean it doesn’t come at a cost. It still has to flow through the power network, where some gets lost as heat from resistance in the lines. So, the generators have to make, and the transmission lines have to move, more power, in some cases a lot more power, than is actually being used in the system. Reactive power can make up a big part of the total load on the system, even though it’s not doing any work. Just having the infrastructure in place to handle it is also costly. The conductors, transformers, and generators on the grid have to be sized for the total current that needs to move through the system, not just the current that does work. And that stuff is expensive. It’s like if you were a photographer and bought a bunch of props for a shoot from a company with a generous return policy. After you take your photos, you return everything back to the store. Those props were useful, even necessary to you, but only for a period of time. And there was a real cost to warehousing, transporting, and restocking them, even if you didn’t bear it. Imagine if there were a hundred photographers that did the same thing. It wouldn’t be long before such a store wasn’t very profitable. But unlike at your home, where the utility is generous in their return policy, lots of industrial and commercial customers do get charged for reactive power that uses up capacity on the grid without doing any real work.

Even though the oscilloscope graphs just show a shift between the two waveforms, with some clever math, you can separate the power flowing on the grid into two parts, the real power actually being used and the reactive power that just oscillates back and forth, and treat them like they flow through the grid independently. I’m going to do my best to avoid that math here, partly because it involves imaginary numbers but mostly because it’s not needed to understand the practical impacts. (This is already a lot to wrap your head around.) But out of that math comes this visualization: the power triangle. This leg is the real power that actually gets consumed, measured in the watts or kilowatts that you’re probably used to. This leg is the reactive power that is returned instead of used, measured in volt-amps-reactive or VAR. By convention, we usually say that inductive devices “consume” reactive power and capacitive devices “supply” it. The hypotenuse of the triangle is the apparent power, the total amount of power that flows through the grid, measured in volt-amps. If you divide the real power by the apparent power, you get this ratio, called the power factor, a number that will be important in a minute.
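Since it’s a right triangle, the math is just the Pythagorean theorem. A quick sketch with invented numbers:

    import math

    real_power = 8.0      # kW, the leg that does actual work (assumed)
    reactive_power = 6.0  # kVAR, the leg that sloshes back and forth (assumed)

    # Hypotenuse: the total power the wires and transformers must carry
    apparent_power = math.hypot(real_power, reactive_power)  # kVA
    power_factor = real_power / apparent_power

    print(f"Apparent power: {apparent_power:.1f} kVA")  # 10.0 kVA
    print(f"Power factor:   {power_factor:.2f}")        # 0.80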

Take a look at the distribution transformer that connects your home to the grid, and you might see a rating on the side. That number is not in watts or kilowatts like what you might see on a toaster or microwave, but in kilovolt-amps because it includes the flow of real and reactive power. Large users of electricity, like factories and refineries, usually have a low power factor because they use lots of big induction motors. They need comparatively robust and high-capacity connections to the grid, even if they actually consume only a portion of the energy that flows through. So the electric utility installs a meter that can track power factor, or they just come out every once in a while to measure it, so they can put it on the bill. Instead of free returns on reactive power, like we usually get at our homes, those customers have to pay a rental charge on the power they store, even though it goes right back out. But, it’s not just a matter of keeping track of costs. The stability of the entire grid depends on managing the flow of reactive power.

If you’ve watched some of my other videos on the power grid, you know how important it is to closely match power generation with demands as they go up and down. If not managed carefully, the frequency of the grid, which needs to stay within a very tight tolerance, can deviate. And if it goes too far, the whole thing can collapse. That’s what almost happened to Texas during the winter storm in 2021. But, it’s possible for the grid to collapse even if there’s enough generation to meet the demand because you still have to move that power to where it’s needed over transmission lines. Engineers use a PV curve to keep an eye on this challenge. It relates the power flowing to a load on the system to the voltage it sees. As you would expect, the more power that flows, the more the voltage drops, since more power is lost on the transmission lines on the way to the load. It’s the same reason the lights dim in old houses when the air conditioner kicks on: current goes up, voltage goes down. If you combine Ohm’s law and the power equation, you can see that the power lost on a transmission line is related to the current squared. Double the amps; quadruple the power lost as heat. But the further along this curve that the system operates, the more dangerous things get. There is a point, the nose of the curve, beyond which greater demand on the system actually reduces the amount of power that can be delivered, all while the voltage continues dropping. The generators may have the capacity to supply more power, but it can’t reach the load because of the limitations of the system. Operating below the nose is unstable because generators lose control of their speed, like a rubber tire losing its grip on a road.
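That current-squared relationship is easy to check:

    # Power lost as heat in a line: P = I^2 * R
    def line_loss(current_amps, resistance_ohms):
        return current_amps**2 * resistance_ohms

    R = 0.5  # ohms of line resistance (assumed)
    print(line_loss(100, R))  # 5000.0 W
    print(line_loss(200, R))  # 20000.0 W: double the amps, quadruple the loss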

Infrastructure is expensive, and building new power plants and transmission lines always comes with legal and environmental challenges too, so we’re often forced to use the grid to the very limits of its capacity. But, grid managers need to make sure to operate with enough margin that any contingency, like a generator going offline or a transmission line faulting, doesn’t push the system over the electrical cliff. Here’s where power factor comes in. Loads with lower power factor shift the nose of the PV curve down and to the left. That reduces the margin and lowers the voltages in the system for a given power demand, making a voltage collapse more likely if some part of the system goes down. So we use several ways to supply reactive power to provide voltage support and shift the curve back up.

Power plants can adjust their operating parameters to supply reactive power, but transmission lines have their own inductance that consumes the reactive power as it travels through. So, it is usually more efficient to address the problem on the load side, and there are several types of infrastructure that make this possible. Synchronous condensers are big motors that aren’t attached to anything. Instead of converting electrical power to mechanical power, they basically spin freely, but with some clever circuitry, they can generate or absorb reactive power from the grid. They can also help stabilize fluctuations in the grid with the inertia of their heavy rotating mass, something that is becoming increasingly important as we transition more to renewable sources that use inverters to connect to the network.

Another option, and one you’re more likely to spot, are shunt capacitor banks connected across the lines. Sometimes you can see them in substations, but many capacitor banks are installed on poles out in the open for anyone to have a look. Like the capacitor in my demo, they increase the power factor and boost the PV curve up. That can actually become a problem during off-peak hours by boosting the voltage above where it should be, so many capacitor banks are switched on or off depending on system conditions. Looking back at the PV curve, you can see how leaving the capacitors off during periods of low demand keeps voltage within limits, and having them on when demand is high provides more margin and more voltage. Some run on timers to come on during the highest demands of the day, and many are operated at a utility’s discretion to accommodate the varying conditions on the grid. They’re usually either all the way on or all the way off, so deciding when to throw the switch is an important decision.
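For a sense of scale, the reactive power a shunt capacitor supplies follows from its capacitance and the line voltage: Q = V² · 2πf · C. A sketch with assumed values (real bank ratings depend on the connection details):

    import math

    voltage = 7200       # volts, a common line-to-neutral distribution voltage
    frequency = 60       # Hz
    capacitance = 20e-6  # farads, an assumed bank size

    # Reactive power supplied by a shunt capacitor: Q = V^2 * 2*pi*f * C
    q_var = voltage**2 * 2 * math.pi * frequency * capacitance
    print(f"Reactive power supplied: {q_var / 1000:.0f} kVAR")  # ~391 kVAR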

A third option for reactive power supply, called a static VAR compensator or SVC, addresses that challenge. These use electronics to rapidly switch inductors and capacitors on or off to constantly adjust to conditions in the system. That switching happens automatically and quickly, making them much better suited to the dynamic changes that happen on the grid.

That’s why Hydro-Quebec had them installed on their system in 1989. The long transmission lines between the hydroelectric power plants in the north and the load centers, like Montreal, in the south require careful control of the voltage to avoid instability. But the geomagnetic storm threw a wrench in the works. The induced currents in the transformers and along those transmission lines seriously increased the reactive power demand of the system. The resulting distortions in the voltage and current waveforms hadn’t been considered when the equipment was installed. The SVCs weren’t configured to handle the dynamic conditions affecting the system, so relays designed to protect them tripped, pulling the equipment out of service. Without the SVCs, the voltage on the grid dropped, the frequency increased, and chaos ensued. The grid operators couldn’t disconnect customers fast enough to keep things stable, and within seconds, the rest of the system collapsed. Lots of equipment was permanently damaged, and millions woke up that frigid morning with no real power, reactive power, or apparent power, shutting down basically the entire province for half a day and requiring extensive and expensive repairs. They learned a lot of lessons that day and have adjusted a lot of relay settings since then. It’s just one of many case studies on the importance of understanding and managing reactive power on the grid, an idea that I hope is now a little less perplexing.


Every Kind of Bridge Explained in 15 Minutes

May 21, 2024 by Wesley Crump

[Note that this article is a transcript of the video embedded above.]

The Earth is pretty cool and all, but many of its most magnificent features make it tough for us to get around. When the topography is too wet, steep, treacherous, or prone to disaster, sometimes the only way forward is up: our roadways and walkways and railways break free from the surface using bridges. A lot of the infrastructure we rely on day to day isn’t necessarily picturesque. It’s not that we can’t build exquisite electrical transmission lines or stunning sanitary sewers. It’s just that we rarely want to bear the cost. But bridges are different. To an enthusiast of constructed works, many are downright breathtaking. There are so many ways to cross a gap, all kindred in function but contrary in form. And the typical way that engineers classify and name them is in how each design manages the incredible forces involved. Like everything in engineering, terminology and categories vary. As Alfred Korzybski said, “The map is not the territory.” But, trying to list them all is at least a chance to learn some new words and see some cool bridges. And honestly, I can hardly think of anything more worthwhile than that. I’m Grady, and this is Practical Engineering.

One of the simplest structural crossings is the beam bridge: just a horizontal member across two supports. That member can take a variety of forms, including a rolled steel beam (sometimes called a stringer) or a larger steel member fabricated from plates (often called a plate girder). Most modern bridges built as overpasses for grade separation between traffic are beam bridges that use concrete girders. And instead of a group of individual beams, many bridges use box girders, which are essentially closed structural tubes that use material more efficiently (but can be more complicated to construct). Beam bridges usually can’t span great distances because the girders required would be too large. At a certain distance, the beams become so heavy that they can hardly support their own weight, let alone the roadway and traffic on top.
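The reason comes down to scaling. For a simply supported beam under a uniform load like its own weight, the maximum bending moment grows with the square of the span. A quick sketch with an assumed girder weight:

    # Max bending moment of a simply supported beam under uniform load:
    # M = w * L^2 / 8, so doubling the span quadruples the self-weight moment
    w = 15.0  # kN/m, assumed self-weight of a girder

    for span_m in [20, 40, 80]:
        moment = w * span_m**2 / 8
        print(f"{span_m} m span: {moment:,.0f} kN*m")

And it compounds: a longer span calls for a deeper girder, which weighs more per meter, which adds even more moment.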

One way around the challenge of the structural members’ self-weight is to use a truss instead of a girder. A truss is an assembly of smaller elements that creates a rigid and lightweight structure. Unlike a beam, the members of a truss don’t typically experience bending forces. The connections usually aren’t actual hinges that permit free rotation, but they are close enough. So, all the load is axial (along their length) in compression or tension. That simplifies the design process because it’s easier to predict the forces within each structural member. The weight reduction allows trusses to span greater distances than solid beams, and there are a wide variety of arrangements, many with their own specific names. In general, a through truss puts the deck on the bottom level, and a deck truss puts it on top, hiding the structural members below the road. A particularly photogenic type of truss is a lenticular truss bridge, named because they resemble lenses, which themselves are named because they resemble lentils! A Bailey bridge is a kind of temporary truss bridge that is designed to be portable and easy to assemble. They were designed during World War II, but Bailey bridges are still used today as temporary crossings when a bridge fails or gets closed for construction. Most covered bridges are timber truss bridges. Since wood is more susceptible to damage from exposure to the elements, the roof and siding are placed to keep the structural elements truss-worthy. A trestle bridge is superficially similar to a truss: a framework of smaller members. Trestle bridges don’t have long spans, but rather a continuous series of short spans with frequent supports, which are individually called trestles. Sometimes the whole bridge is just called a trestle, so, like so many other instances of structural terminology, it can be a little confusing.

This next bridge type uses a structural feature that’s been a favorite of builders for millennia: the arch. Instead of beams loaded perpendicularly or trusses that experience both compressive and tensile forces, arch bridges use a curved element to transfer the bridge’s weight to supports using compression forces alone. Many of the oldest bridges used arches because it was the only way to span a gap with materials available at the time (stone and mortar). Even now, with the convenience of modern steel and concrete, arches are a popular choice for bridges. They make efficient use of materials but can be challenging to construct because the arch can’t provide its support until it is complete. Temporary supports are often required during construction until the arch is connected at its apex from both sides. In stone arches, the topmost stone is key to keeping the whole thing standing, and, of course, it’s called the keystone. When the arch is below the roadway, we call it a deck arch bridge. Vertical supports transfer the load of the deck onto the arch. The area between the deck and arch has a great name: the spandrel. Open-spandrel bridges use columns to transfer loads, and closed-spandrel bridges use continuous walls. If part of the arch extends above the roadway with the deck suspended below, it’s called a through arch bridge. A moon bridge is kind of an exaggerated arch bridge, usually reserved for pedestrians over narrow canals where there’s not enough room for long approaches. They’re steep, so sometimes you have to use steps or ladders to get up to the top and back down.

One result of compressing an arch is that it creates horizontal forces called thrusts. Arch bridges usually need strong abutments at either side to push against that can withstand the extra horizontal loads. Alternatively, a tied arch bridge uses a chord to connect both sides of the arch like a bowstring, so it can resist the thrust forces. That means a tied arch is structurally more of a truss than an arch, and that provides a lot of opportunities for creativity. For just one example, a network arch bridge uses the tied arch design, plus criss-crossed suspension cables, to support the deck. To tell an arch from a tied arch by eye, it’s usually enough to look at the supports. If the end of each arch sits atop a spindly pier or some other structure that seems insubstantial against horizontal forces, you can probably bet that they are tied together and it’s not a true arch bridge. Similarly, a rigid-frame bridge integrates the superstructure and substructure (in other words, the deck, supports, and everything else) into a single unit. They don’t have to be arched, but many are.
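
If you want a feel for how big those thrust forces get, here’s a minimal sketch using the textbook result for a parabolic arch under uniform load: a horizontal thrust of wL²/(8f) for span L and rise f. The numbers are made up for illustration.

```python
# Minimal sketch of arch thrust with illustrative numbers (not any
# real bridge). A parabolic arch of span L and rise f carrying a
# uniform load w pushes outward with horizontal thrust H = w*L**2/(8*f).

def horizontal_thrust_kn(w_kn_per_m, span_m, rise_m):
    return w_kn_per_m * span_m**2 / (8 * rise_m)

# 100 m span carrying 100 kN/m; compare a tall arch to a shallow one:
for rise in (25.0, 12.5):
    h = horizontal_thrust_kn(100.0, 100.0, rise)
    print(f"rise {rise} m -> thrust {h:,.0f} kN")
# Halving the rise doubles the thrust. That thrust is exactly what the
# tie chord (in a tied arch) or the abutments (in a true arch) must carry.
```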

Another way to increase the span of a beam bridge is to move the supports so that sections of the deck balance on their center instead of being supported at each end. A cantilever bridge uses beams or trusses that project horizontally, balancing most of the structure’s weight above the supports rather than in the center of the span. This is such an effective technique that the Forth Bridge crossing the Firth of Forth in Scotland took the title of longest span in the world away from the Brooklyn Bridge in 1890 and held the record for decades. This famous photograph demonstrates the principle of that bridge perfectly: the two central piers bear the compression loads from the bridge, and the outermost supports are anchors to provide the balancing force for each arm. This way, you can suspend a load in the middle.
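
Here’s a little back-of-the-envelope statics showing how that balance works, with hypothetical numbers rather than the Forth Bridge’s actual dimensions: summing moments about one pier, the arms’ own weight cancels when they’re equal, and the anchorage only has to hold down the suspended span’s share.

```python
# Back-of-the-envelope statics for a balanced cantilever, with
# hypothetical numbers (not the Forth Bridge's actual dimensions).
# Each arm balances about its pier like a see-saw, and the outer
# anchorage supplies whatever moment the suspended center span adds.
anchor_arm = 150.0          # m, pier back to the anchorage
cantilever_arm = 150.0      # m, pier reaching toward mid-span
w = 200.0                   # kN/m, self-weight of the truss arms
suspended_load = 10_000.0   # kN, this arm's share of the suspended span

# Sum moments about the main pier:
m_toward_center = w * cantilever_arm**2 / 2 + suspended_load * cantilever_arm
m_toward_anchor = w * anchor_arm**2 / 2
anchor_force = (m_toward_center - m_toward_anchor) / anchor_arm
print(f"hold-down force at the anchorage: {anchor_force:,.0f} kN")
# With equal arms, the self-weight terms cancel and the anchorage only
# resists the suspended span: the same trick as that famous photograph.
```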

The longest bridges take advantage of steel’s ability to withstand incredible tension forces using cable supports. Cable-stayed bridges support the deck from above through cables attached to tall towers or spars. The cables (also called stays) form a fan pattern, giving this type of bridge its unique appearance. Depending on the span, cable-stayed bridges can have one central tower or more. Their simplicity allows for a wide variety of configurations, giving rise to some dramatic (and often asymmetric) shapes.

For shorter spans, you can combine the benefits of a cable-stayed structure with girders to get an extradosed bridge. Imagine a concrete girder bridge that uses internal tendons to keep the concrete in compression, then just pull those tendons out of the girder and attach them to a short tower. Rather than holding the deck up vertically like a cable-stayed bridge, they’re acting more horizontally to hold the girders in compression, giving them the stiffness needed to support the deck. It’s a relatively new idea compared to most of the other designs I’ve listed, but there are quite a few cool examples of extradosed bridges across the globe.

Where a cable-stayed bridge attaches the deck directly to each tower, a suspension bridge uses cables or chains to dangle the deck below. In a simple suspension bridge, the cables follow the curve of the deck. This is your classic rope bridge. They’re not very stiff or strong, so simple suspension bridges are usually only for pedestrians. A stressed ribbon bridge takes the concept a step further by integrating the cables into the deck. The cables pull the deck into compression, providing stiffness and stability so it doesn’t sway and bounce. This design is also primarily used for smaller pedestrian bridges because it can’t span long distances and the deck sags in the middle.

Then you have the suspended deck bridge, the design we most associate with the category, and the one with the longest spans in the world. Massive main cables or chains dangle the road deck below with vertical hangers. Suspension bridges are iconic structures because of their enormous spans and slender, graceful appearance. Towers on either side prop up the main cables like broomsticks in a blanket fort. Most of the bridge’s weight is transferred into the foundation through these towers. The rest is transferred into the bridge’s abutments through immense anchorages keeping the cables from pulling out of the ground. Alternatively, self-anchored suspension bridges connect the main cables to the deck on either side, compressing it to resist the tension forces. Because they are so slender and lightweight, most suspension bridges require stiffening with girders or trusses along the deck to reduce movement from wind and traffic loads. These bridges are expensive to build and maintain, so they’re really only used when no other structure will suffice. But you can hardly look at a suspended deck bridge without being impressed.

Bridges have to support the vehicles and people that cross over the deck, but they often have to accommodate boats and ships passing underneath as well. If it’s not feasible to build the bridge and its approaches high enough, another option is just to have it get out of the way when a ship needs to pass. Moveable bridges come in all shapes and sizes. A lot of people call them drawbridges after their medieval brethren over castle moats. A bascule bridge is hinged so the deck can rotate upward. A swing bridge rotates horizontally so a ship can pass on either side. A vertical lift bridge raises the entire deck upward, keeping it horizontal like a table. A transporter bridge just has a small length of deck that is shuttled back and forth across a river. That’s just a few, and in fact, every moveable bridge is unique and customized for a specific location, so there are some truly interesting structures if you keep an eye out.

On the other hand, sometimes there’s no need for ship passage or a lot of space below, and in that case, you can just float the bridge right on the water. Floating bridges use buoyant supports, eliminating the need for a foundation. These are used in military applications, but there are permanent examples too. Many use hollow concrete structures as pontoons, with pumps inside to make sure they don’t fill up with water and sink. And actually, a lot of bridges take advantage of buoyancy in their design, even if it’s not the main source of support. A design like this presents a lot of interesting engineering challenges, so there aren’t too many of them. Similarly, the pedestrian bridge at Fort de Roovere in the Netherlands (probably pronounced that wrong) has its deck below the water, giving it the nickname of the Moses Bridge.

If space or funding is really tight, one option to span a small stream is a low-water crossing. Unlike bridges built above the typical flood level, low-water crossings are designed to be submerged when water levels rise. They are most common in areas prone to flash floods, where runoff in streams rises and falls quickly. Ideally, a crossing would be inaccessible only a few times per year during heavy rainstorms. However, low-water crossings have some disadvantages. For one, they can block the passage of fish just like a dam. And then there’s safety. A significant proportion of flood-related fatalities occur when someone tries to drive a car or truck through water overtopping a roadway. Water is heavy. It takes only a small but swift flow to push a vehicle down into a river or creek, which means at least some of the resources saved by avoiding the cost of a higher bridge are often spent to erect barricades during storms, install automatic flood warning systems, and run advertisement campaigns encouraging motorists never to drive through water overtopping a roadway.

You may have heard the term viaduct before. It’s not so much a specific type of bridge; it’s really about the length. Bridges that span a wide valley need multiple intermediate supports. So, a viaduct is really just a long bridge with multiple spans that are mostly above land. There’s really not a lot of agreement on what is one and what isn’t. Some are singular and impressive structures. But many modern cities have viaducts that are, although equally amazing from an engineering standpoint, a little less beautiful. So, you’re more likely to hear them called elevated expressways. And that gets to the heart of a topic like this: without listing every bridge, there’s no true way to list every type of bridge. There’s too much nuance, creativity, and mixing and matching of designs. The Phyllis J. Tilley Bridge in Fort Worth, Texas, combines an arch and stressed ribbons. The Third Millennium Bridge in Spain uses a concrete tied arch with suspension cables holding up the deck, which is stiffened with box girders. The Yavuz Sultan Selim Bridge in Turkey combines a cable-stayed and suspension design. In some parts of India and Indonesia, living tree roots are used as simple suspension bridges over rivers. There are bridges for pipelines, bridges for water, bridges for animals, and I could go on. But that’s part of the joy of paying attention to bridges. Once you understand the basics, you can start to puzzle out the more interesting details. Eventually, you’ll see the Akashi Kaikyo Bridge on a calendar in your accountant’s office, and let him know it’s a twin-hinged, three-span continuous, stiffened truss girder suspension bridge with a double-tower system. Or maybe that’s just me.

May 21, 2024 /Wesley Crump

How Bridge Engineers Design Against Ship Collisions

May 07, 2024 by Wesley Crump

[Note that this article is a transcript of the video embedded above.]

On March 26, 2024 (just a few weeks ago, if you're watching this as it comes out), a large container ship struck one of the main support piers of the Francis Scott Key Bridge in Baltimore, Maryland, collapsing the bridge, killing six construction workers, injuring one more, and seriously disrupting both road and marine traffic in the area. There’s a good chance you saw this in the news, and hopefully you’ve seen some of the excellent content already released by independent creators providing additional context. I got a lot of requests to talk about the event, and I usually prefer to wait to discuss events like this until there are more details available from investigations, but I think it might be helpful to provide some context from an engineering perspective about how we consider vessel collisions in the design of bridges like this one, and why the Francis Scott Key bridge may have collapsed. I’m Grady, and this is Practical Engineering. Today we’re talking about vessel collision design for bridges.

The Francis Scott Key Bridge was a steel arch-shaped continuous through truss bridge. I’m working on a video that goes into a lot more detail about the different kinds of bridges and how they’re classified, but this bridge had kind of a medley of structural styles, so let me hand it off to our special guest correspondent, Road Guy Rob, to break that terminology down.

Well, Grady, I'm in Long Beach, California today, standing on top of this brand new bridge that replaced an old arch/truss bridge that used to be right there. It kind of looked like a baby Key Bridge, and the Port of Long Beach is happy that it's gone.

The Gerald Desmond Bridge was a truss bridge. Instead of one big beam, a truss has lots of smaller structural members all connected together.

This creates a rigid structure that's much lighter weight than a big heavy beam, and that makes trusses efficient and clever when they work. Both the Key bridge and the old bridge that used to be here were “through-truss” bridges. It's a sort of arch shape, and the driving deck is suspended below the truss, so you sort of drive through the arch, but it's not an actual arch with like a keystone and all the pieces pushing horizontally to hold each other together. No, this through-truss bridge has no hinges or joints at the main supports, nothing that breaks it up into sections. So that's why engineers called the Key bridge a continuous truss bridge. It's all one big piece, and it's all bolted and welded into a single rigid truss across its entire length. And then that load distributes across all three spans of the bridge.

Now, the approach roads on each side are entirely separate bridges, even though they link together. They just look like concrete roads sitting on top of simple girder spans.

Well, you ask, what happened to that baby Key bridge in Long Beach? Well, the only way you're going to see it now is to turn on Grand Theft Auto V and look at the fictionalized version of it immortalized in Los Santos for all time. Because when this bridge opened, the Port of Long Beach demolished the old bridge, and the last scraps of it were hauled away back in October.

In its place rose this new, fancier-looking cable-stayed bridge, the Long Beach International Gateway. And what the Port of Long Beach did in studying to build this bridge, and the list of improvements they came up with, might give us some clues about what Baltimore might end up doing when they replace the Key bridge down the line.

And we'll talk about that in just a moment.

When the Dali container ship lost power and drifted into the southwest pier, the support collapsed, and most of the truss and deck fell with it. Both the southwest and central spans fell roughly vertically with the loss of support from the damaged pier. Part of the truss on the northwest side separated from the unsupported section and rotated toward the northeast span, taking several of the approach spans with it. Thankfully, the ship had put out a mayday call before the impact, allowing police officers at either end of the bridge to close it to traffic. Tragically, it wasn’t enough time to get the crew of eight construction workers off the structure before it fell, six of whom lost their lives.

Just dealing with the salvage and removal of the steel and concrete debris left over from the collapse has been a massive undertaking. Within a week, engineering teams were on-site measuring, cutting, lifting, and floating away huge chunks of the wreckage in separate salvage operations for the main bridge, approaches, and the vessel. As of this writing, they’re still working hard on it. At least seven floating cranes were involved, including the famous Weeks 533 that pulled US Airways Flight 1549 from the Hudson River in 2009. This was essentially a massive Jenga tower: the order of operations and the precision of each cut and each lift mattered. With so much debris underwater, they had to map it out to understand how everything was stacked together. Access was a major challenge, and the stresses in the wreckage were hard to characterize, so it’s been a slow and deliberate process requiring careful planning and tons of skill to do safely. Fortunately for Baltimore, there are large industrial facilities in the port that can process the thousands of tons of material that will ultimately be removed. Of course, reopening the port to shipping traffic is a huge priority. A small channel was marked out under one of the approach bridges for smaller vessels like tugs and barges, and the Army Corps of Engineers is making good progress on opening up the main channel, but it isn’t clear when full-scale operations at the port will be able to resume. Shipping traffic isn’t the only traffic affected either; the bridge carried thousands of road vehicles per day that now have to be re-routed. There is a tunnel under the harbor that provides a decent alternate route, but trucks with hazardous materials aren’t allowed through, requiring an enormous detour around the city.

It’s been more than a month since the event, but it will likely be a year or more before we get an official report documenting the probable cause of the failure. In the US, events like this are investigated by the National Transportation Safety Board or NTSB. This independent government agency is extremely diligent. And often, diligent also means slow. But events like this are how the field of engineering evolves. Human imagination isn’t limited to past experiences, but in many senses, engineering is. We just don’t have the resources to answer the millions of “what ifs” that might coalesce into a tragedy, so we lean on the hard lessons learned from past failures. When something terrible happens, it’s really important that we collectively get to the bottom of why and then make whatever changes are appropriate to our engineering systems to prevent it from happening again.

But, at the risk of stating the obvious, the failure mode in this case is pretty clear. You probably don’t need an engineer to explain why a massive ship slamming into a bridge pier would cause that bridge to collapse. I think what’s less obvious is how engineers characterize situations like this so that bridges can be designed to withstand them. Collisions with bridges by barges and ships are not a modern problem. Technically they’re called “allisions” since a bridge isn’t moving, but that term is used more in the maritime industry than by bridge engineers. Between 1960 and 2014, there were 35 major bridge collapses resulting from vessel impacts. And, 18 of those were in the US. We just have such a big network of inland waterways, and that means we have a lot of bridges.

Two spans of the Queen Isabella Causeway Bridge in Texas collapsed in 2001 when barges collided with one of the piers. A year later, a bridge carrying I-40 over the Arkansas River in Oklahoma was hit by barges when the captain lost control, collapsing a major portion of the structure. In 2009, Popp’s Ferry Bridge in Mississippi collapsed after being struck by a group of barges. In 2012, the Eggner’s Ferry Bridge in Kentucky fell when a cargo ship tried to go through the wrong channel. Before any of those, though, the Sunshine Skyway Bridge in Florida put a major focus on the problem. In 1980, a bulk carrier ship lost control because of a storm, crashing into one of the piers and collapsing the entire main span of the southbound bridge, killing 35. The event brought a lot of new awareness and concern about the safety of bridges over navigable waterways. But piers aren’t the only parts of a bridge at risk from ships. I’ll let Rob explain.

The Key bridge got into trouble because of a horizontal allision. That's where a ship moves side to side in the wrong way and hits something it's not supposed to.

Here in Long Beach, that really wasn't their concern, primarily because the old bridge columns were way inland here, so there was no way for a ship to exit the waterway and hit the column because the column was in lots of dirt. And the new replacement bridge takes no chances at all. Look how much farther onshore those columns are now!

Now, the Port of Long Beach was far more worried about the old Gerald Desmond Bridge getting hit vertically. The old bridge was 155ft tall. That's like a 15 story building. And if that sounds pretty tall to you, it sounded pretty tall to them back in 1968 when they built the bridge. But as we now know, ships are getting bigger and fatter and taller, and 155ft wasn't cutting it for some of the modern ships that were trying to get into the back part of the port, where there's a lot of cranes and action happening. So the new bridge adds another 50ft, taking it over 200ft. That's like a 20 story building to get up from the waterline to that new bridge.

And this new, taller, Long Beach International Gateway helps the port scratch off one designation they didn't want - having the shortest bridge over a port in the United States. Well, that's gone now, and thankfully in a less tragic manner than what's happening on the East Coast.

In the aftermath of the Sunshine Skyway collapse, the federal government and the professional community, both from the engineering and maritime sides, invested a serious amount of time and investigation into the issue. One culmination was updated bridge codes that included requirements for consideration of vessel collisions. For highway bridges in the US, those specifications are put out by an organization called the American Association of State Highway and Transportation Officials (or AASHTO), but there are similar requirements worldwide, including in the Eurocode.

A lot of infrastructure is designed for worst-case scenarios, but at a certain point, it just isn’t feasible. This is something I’ve talked a lot about in previous videos: you have to draw a line somewhere that balances the benefits and costs. If the code required us to design bridges with Armageddon meteorite or Godzilla protection, we just wouldn’t build any bridges. It would be too expensive. And that’s kind of true for ship collisions too. The mass and kinetic energy of the cargo vessels today is tough to even wrap your head around. We just couldn’t afford to build bridges if they all had to be capable of withstanding a worst-case collision. Instead, for what engineers call “high consequence, low probability” events, codes often set the standard as some acceptable amount of risk. There’s always going to be some possibility of an event like this, but how much risk are we as a society willing to bear for the benefit of having easy access across navigable waterways? In the U.S., that answer, at least according to AASHTO for critical structures like the Key Bridge, is 0.01 percent probability in a given year. For some perspective, that’s roughly a tenth of the chance of rolling a Yahtzee (five-of-a-kind) in a single throw. But it’s an annual probability, so you have to roll the dice once every year. If you did it forever, it would average out to once every 10,000 years, but that doesn’t mean it couldn’t happen twice in a row. So an engineer’s job is to design the structure not to survive a worst-case scenario but to have a very low probability of collapsing from a vessel impact. And there’s a lot that goes into figuring that out.
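
Since tiny annual probabilities are hard to intuit, here’s a quick arithmetic check (my calculation, not AASHTO’s) of what that criterion adds up to over a bridge’s service life:

```python
# What a 0.01 percent annual collapse probability implies over a
# bridge's service life (my arithmetic, not AASHTO's).
annual_p = 1e-4  # acceptance criterion for critical bridges

for years in (1, 75, 100):
    lifetime_p = 1 - (1 - annual_p) ** years
    print(f"{years:>3} years: {lifetime_p:.2%} chance of at least one collapse")
# Over a 75- to 100-year design life, "1 in 10,000 per year" adds up to
# roughly a 1 percent chance of the event happening at some point.
```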

This is the general formula for the annual probability of bridge collapse due to a ship collision. You have all these factors that get multiplied together. The first one is just the number of ships that pass under the bridge in a year. And there’s a growth factor in there for how that number might change over time. Then there’s what’s called the probability of vessel aberrancy; basically, the chance that one of those ships loses control. AASHTO has some baseline numbers for this based on long-term accident statistics in the US, and the designer can apply some correction factors based on site-specific issues like water currents and navigation aids. Then, there’s the geometric probability of a collision if a ship does lose control. When a vessel is aberrant, you don’t know which way it’s going to head. This gets a little complicated, but if you’re familiar with normal distributions it will make perfect sense. You can plot a normal distribution curve centered on the transit path with one standard deviation equal to the length of the aberrant ship to give you an approximation of where it might end up. The area under that curve that intersects with the bridge piers is the probability that the ship will impact the bridge if it loses control. And this is really the first knob an engineer can turn to reduce the risk, because the farther the piers are from the transit path, the lower the geometric probability of a collision. And this factor can be modified if ships have tethered tugs to assist with staying in the channel, something that wasn’t required in Baltimore at the Key Bridge.
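
As a rough sketch of how those factors combine, here’s the calculation with entirely made-up inputs. In AASHTO’s notation, the annual frequency of collapse is AF = N × PA × PG × PC, and the geometric term is the slice of that normal curve that overlaps the pier:

```python
# Sketch of the risk calculation described above, with entirely made-up
# inputs. In AASHTO's notation, AF = N * PA * PG * PC. The geometric
# probability PG is the area of a normal curve, centered on the transit
# path with sigma equal to the vessel's length, that overlaps the pier.
from math import erf, sqrt

def normal_cdf(x, mu, sigma):
    return 0.5 * (1 + erf((x - mu) / (sigma * sqrt(2))))

loa = 300.0                          # vessel length overall, m (hypothetical)
pier_near, pier_far = 180.0, 230.0   # pier footprint offsets from path, m

N = 5000      # transits per year under the bridge (hypothetical)
PA = 1.2e-4   # probability of aberrancy (AASHTO base rates are this order)
PG = normal_cdf(pier_far, 0.0, loa) - normal_cdf(pier_near, 0.0, loa)
PC = 0.05     # probability of collapse given a collision (hypothetical)

AF = N * PA * PG * PC
print(f"PG = {PG:.4f}, annual frequency of collapse = {AF:.2e}")
# If AF comes out above the acceptance criterion (1e-4 per year for
# critical bridges), the designer moves piers farther from the channel,
# adds protection, or strengthens the structure until it doesn't.
```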

But, even if there is a collision, that doesn’t necessarily mean the bridge will collapse. This is where the structural engineering comes into play. The probability of collapse depends both on the lateral strength of the pier and the impact force from the collision. But, that force isn’t as simple as putting a weight on a scale. It’s time-dependent, and it varies according to the size and type of vessel, its speed, the amount of ballast, the angle of the collision, and a lot more. Usually, we boil that down to an equivalent static load. And based on some testing, this is the equation most engineers use. It’s just based on the deadweight tonnage (basically how much the ship can carry) and its velocity. It’s interesting that they settled on deadweight, which doesn’t include the weight of the ship itself. But again, this analysis is pretty complicated, especially because you have to do it for every discrete grouping of vessel size and bridge component, so some simplifications make sense, and since this one assumes every ship is fully loaded, it’s relatively conservative.
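
The equation itself appears on screen in the video rather than in this transcript, but the AASHTO expression that reproduces the figures in the next paragraph is Ps = 8.15·V·√DWT, with Ps in kips, V in feet per second, and DWT in metric tonnes. A quick check:

```python
# AASHTO's equivalent static ship-impact force: Ps = 8.15 * V * sqrt(DWT),
# with Ps in kips, V in ft/s, and DWT in metric tonnes. The two examples
# below reproduce the figures quoted in this article.
from math import sqrt

KNOT_TO_FTPS = 1.688
KIP_TO_MN = 0.004448

def impact_force_mn(dwt_tonnes, speed_knots):
    ps_kips = 8.15 * (speed_knots * KNOT_TO_FTPS) * sqrt(dwt_tonnes)
    return ps_kips * KIP_TO_MN

print(f"Sunshine Skyway vessel (34,000 DWT, 5 kn): {impact_force_mn(34_000, 5):.0f} MN")
print(f"Dali (117,000 DWT, 5 kn): {impact_force_mn(117_000, 5):.0f} MN")
# About 56 MN and 105 MN: the "56 meganewtons" and "more than 100
# meganewtons" figures in the surrounding paragraphs.
```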

Just for illustration, the ship that hit the Sunshine Skyway Bridge had a deadweight of 34,000 tonnes. The NTSB report doesn’t estimate the speed at which it hit the bridge, but let’s say around 5 knots. That would be equivalent to a static force of around 56 meganewtons or 13 million pounds if the ship was fully loaded, which it wasn’t (but there’s no way to account for that in this equation). The Dali has a deadweight of 117,000 tonnes and was traveling at roughly 5 knots on impact. That’s equivalent to more than 100 meganewtons or 24 million pounds, again, assuming the ship was fully loaded (which, again, it wasn’t). But you can validate this with some back-of-the-envelope physics. Force is equal to mass times acceleration. We know the mass of the ship and its cargo from records: about 112,000 metric tons. To decelerate that mass from 5 knots to a standstill over the course of, let’s say, 4 seconds requires, roughly, a force of 72 meganewtons or 16 million pounds. Even as a rough guess, that is a staggering number. It’s 5 SpaceX Starships pointed at a single bridge pier.

Designing a bridge to handle these forces is obviously complicated. It’s not just the pier itself that has to survive, but every element of the bridge along the load path, including the foundation, and (assuming the pier isn’t perfectly rigid) the superstructure too. Again, it’s not impossible to design, but it gets pricey fast, which is why designers have more knobs to turn to meet the code than just the strength of the bridge itself. One of those knobs is pier protection systems. Fenders can be installed to soften the blow of a ship impact, but for ships of this size, they would have to be enormous. Islands can be built around piers to force ships aground before they hit the bridge. But islands create environmental problems because of the fill placed on the river bottom, plus they get really big for deeper channels, so the bridge span has to be wider to keep the channel from being blocked. Islands can even affect currents in the water, and they can put additional load on the bridge foundation as they settle after construction. Another commonly used protection structure is called a dolphin. This is usually a circular construction of driven sheet piles, filled with sand or concrete. Dolphins can slow a ship, stop it altogether, or redirect it away from critical bridge elements like piers. The new Sunshine Skyway Bridge used islands and dolphins to protect the rebuilt span, and actually, the Key Bridge had four dolphins, one on either side of each main support. Unfortunately, because it came in at an angle, the Dali slipped past the protection when it lost control.

It’s important to point out that everything I’ve discussed is a modern look at how engineers consider vessel impacts to bridges. When the Francis Scott Key Bridge was finished in 1977, there were no requirements like this, and the bridge never had a major rehabilitation or repair that would have triggered adherence to the newer codes. Container ships the size of Dali didn’t even exist until around 2006. And we don’t know what the ships of the future will look like. It’s easy to say with hindsight that a bridge like this should have been better protected against errant ships, but if you say it for this one, you really have to say it for all the bridges that see similar maritime traffic. And that represents an enormous investment of resources for, potentially, not a lot of benefit to the public, given how rare these situations are. That’s not me saying it shouldn’t be done; it’s just me saying that a decision like that is a lot more complicated than it might seem. I don’t expect we’ll see bridge design code changes come out of this event, but vessel collisions will certainly be on the minds of the designers for the replacement in Baltimore. I’ll let Rob explain.

When you take a look at photos of the Key Bridge, it looks like Maryland was doing a good job taking care of their bridge. So if the NTSB report comes back and says the bridge was in good shape, it's 100% the ship that's at fault, well, I don't think any of us are going to be really that shocked.

But for the old Gerald Desmond Bridge here in Long Beach, that used to be right here, well, in the environmental impact report, where they studied how to build this new replacement bridge, the port staff really didn't seem too concerned about a maritime navigation failure. A structural failure, though? Let's just say engineers score bridges out of 100 points. So you have a brand new bridge, it gets 100 points. On that scale, the old Gerald Desmond Bridge that was right here scored a 43. I mean, anything below 80 points, you get federal money to work on the bridge to try to rehab it and get it back into good shape. And anything under 50 points, it's so bad the federal government starts throwing money in to try to help you replace the bridge.

That's how bad off the Gerald Desmond Bridge was. Salt from the air of the sea and decades of it sitting above sea water and all of that, just nice salt in the air, eating away at the paint. Well, that paint was rated very poor on the old Gerald Desmond Bridge. And, you know, paint protects all the bridge members, all the metal from rusting out. And as Grady points out, every single member of a truss is really important if you, you know, want the bridge to stay in good shape and not fall down, right?

Engineers also conducted a load analysis. They tested to see how the bridge was holding up as trucks drove over it. And they found members of the arch main span were overstressed for all design trucks. So it didn't matter if you drove a big truck or a little truck. They were all causing problems with the bridge. And the concrete that those trucks drove on? It was all cracking up. It was rated critical. The port had to install big nets to catch big chunks of pavement that were falling off the bridge and could hit somebody on the head down here.

So, Long Beach had four objectives that this new bridge needed to meet in order to build it. And those goals may mirror some of the ones Baltimore may want to have when building their replacement bridge. 1. This bridge had to have a design life of 100 years. That is, stay structurally sound for that long. 2. Long Beach wanted to reduce the approach grades on both sides. Even getting up to 155ft before, sometimes you were driving up a 6% grade, and now that this thing is over 200ft high, that would be way too steep. So they instead built these huge freeway viaducts that go on and on for like a quarter of a mile to lift people and trucks gently up to that new bridge height. Baltimore's bridge already has some very long approaches to it, so I don't know whether they're going to replace the ramps approaching the bridge or not. It'll be interesting to see what they end up deciding to do. 3. Provide sufficient roadway capacity to handle future car and truck traffic. The old bridge here was two lanes in each direction. Four lanes. This widens it to six. The Key bridge in Baltimore was also only four lanes. But this bridge handles twice the traffic every day, you know, compared to the Key bridge back when it was open, right?

And both the Key Bridge and the old Gerald Desmond Bridge had no shoulders for emergency vehicles and stalled cars to pull off to the side. And as you can see, the new bridge has these excellent shoulders on both the outside and the inside of travel lanes. So that makes the road a lot safer, because you're not going to run into the back of a stalled truck in the dark.

It's also a lot safer if you're not in your car, because this bridge has a way you can cross it without being in a car. They've added this 12ft wide pedestrian and bicycle pathway, which is about 12ft wider than what they had before. It used to be that on the old bridge, the only way across was inside a car. It's a good start, certainly not perfect. Right now, the path just hits this gate and stops. The city of LA owns the next harbor bridge down that way. It's called the Vincent Thomas Bridge. It's also old, so it doesn't have a pedestrian walkway, so this pathway sort of just ends at the city limit because there's nowhere for it to go. But adding a multi-use path like this one onto the new Key Bridge would be such a no-brainer. It could take a bike ride from, like, Hawkins Point to Sparrows Point down to four miles with the bridge path, from 22 miles right now without it.

And finally the fourth goal: providing vertical clearance for a new generation of larger vessels, which the new bridge certainly has. And that must make the port very happy. And I'm willing to bet that Baltimore will take that goal and maybe turn it on its side and talk about horizontal clearance, insisting on a design that keeps an allision like the one on March 26th from ever happening again, Grady.

Thanks Rob. If you love deep dives into transportation infrastructure, go check out his channel after this. But, it’s important to point out that this wasn’t just a bridge failure; it was a bridge failure precipitated by a maritime navigation failure. Obviously, engineers who design bridges don’t have a lot of say in the redundancies, safety standards, and navigation requirements of the vessels that pass underneath them. But if you look at the whole context of this tragedy and ask, “How can our resources best be used to prevent something like this from happening again?”, reducing the probability of a ship this size losing control has to be included with the structural solutions like pier protection systems. I don’t know a lot about that stuff, so I couldn’t tell you what that might include, but I’m sure NTSB will have some recommendations when their report eventually comes out. Having tugs accompany large ships while they traverse lightly protected bridges seems like a prudent risk reduction measure, but that’s just coming from a civil engineer.

And, speaking of risk reduction, I have to say that using risk analysis as a tool for design is really not that satisfying. We humans are notoriously bad at understanding probabilities and risks, and engineers are not that great at communicating what they mean to people who don’t speak that language. That’s how we get confusing terminology like the hundred-year flood. And it’s unsettling to come face-to-face with the idea that, even if our bridges are designed and built to code, there’s still a chance of something like this. Everything’s a tradeoff, but the people driving over the bridge (or working on it) had no direct say in where the line was drawn or whether it applied retroactively, even as ships got bigger and bigger. But I hope it’s clear why we do it this way. The question isn’t “Can we design bridges to be safer?” The answer to that is always “yes.” The real question is, “How much risk can we tolerate?” or, put a different way, “How much are we willing to spend on any incremental increase in safety?” Because the answers to those questions are much more complex and nuanced. And if all bridges were required to survive worst-case collisions with ships like Dali, we would just have a lot fewer bridges. But sometimes it takes an event like this to remind us that risks aren’t just small numbers on a piece of paper. They represent real consequences, and my heart goes out to the families of the victims affected by this event. I hope we can honor them by learning from it and making improvements, both to our infrastructure and our maritime systems, so that it doesn’t happen again.

May 07, 2024 /Wesley Crump

Connecting Solar to the Grid is Harder Than You Think

April 16, 2024 by Wesley Crump

[Note that this article is a transcript of the video embedded above.]

On June 4, 2022, a small piece of equipment (called a lightning arrestor) at a power plant in Odessa, Texas failed, causing part of the plant to trip offline. It was a fairly typical fault that happens from time to time on the grid. There’s a lot of equipment involved in producing and delivering electricity over vast distances, and every once in a while, things break. Breakers isolate the problem, and we have reserves that can pick up the slack. But this fault was a little bit different.

Within seconds of that one little short circuit at a power plant in Odessa, the entire Texas grid unexpectedly lost 2,500 megawatts of generation capacity (roughly 5% of the total demand), mainly from solar plants spread throughout the state. For some reason, a single 300-megawatt fault at a single power plant magnified into a loss of two-and-a-half gigawatts, dropping the system frequency to 59.7 hertz. The event nearly exceeded Texas’s “Resource Loss Protection Criteria,” the largest sudden loss of generation that the system is required to be able to withstand. Another fault in the system could have required disconnecting customers to reduce demand. In other words, it was almost an emergency.

If you lived in Texas at the time, you probably didn’t notice anything unusual, but this relatively innocuous event sent alarm bells ringing through the power industry. Solar plants, large-scale batteries, and wind turbines don’t produce power like conventional thermal power plants that make up such a big part of the grid. The investigation into the 2022 Odessa disturbance found that it wasn’t equipment failures that caused all the solar plants to drop so much production all at once, at least not in the traditional sense. Instead, a wide variety of algorithms and configuration settings in the power conversion equipment reacted in unexpected ways when they sensed that initial disturbance.

The failure happened just before noon on a sunny summer day, so solar plants around the state were at peak output, representing about 16% of the total power generation on the grid. That might seem high, but there have already been times when solar was powering more than a third of Texas’s grid, and that number is only going up. The portion of the grid made up of solar power is climbing rapidly every year, not just in Texas but worldwide. So the engineering challenges in getting these new sources of power to play nicely with a grid that wasn’t really built for them are only going to become more important. And, of course, I have some demos set up in the garage to help explain. I’m Grady and this is Practical Engineering. In today’s episode, we’re talking about inverter-based resources on the grid.

Solar panels and batteries work on direct current, DC. If you measure the voltage coming out, it’s a relatively constant number. This is actually kind of true for wind turbines as well. Of course, they are large spinning machines, similar to the generators in coal or natural gas plants. But unlike a thermal power plant, which gets a smooth and consistent supply of energy from a steam boiler, wind varies a lot. So, it’s usually more efficient to let the turbine speed vary to optimize the transfer of energy from the wind into the blades. There are quite a few ways to do this, but in most cases, you get a variable-speed alternating current from the turbine. Since this AC doesn’t match the grid, it’s easier to first convert it to DC. So you have this class of energy sources, mostly renewables, that output DC, but the grid doesn’t work on DC (at least not most of it).

Nearly all bulk power infrastructure, including the power that makes it into your house, uses an alternating current. I won’t go into the Tesla versus Edison debate here, but the biggest benefit of an AC grid is that we can use relatively simple and inexpensive equipment (transformers) to change the voltage along the way. That provides flexibility between insulation requirements and the efficiency of long-distance transmission. So we have to convert, or more specifically invert, the DC power from renewable sources onto the AC grid. In fact, batteries, solar panels, and most wind turbines are collectively known to power professionals as “inverter-based resources” because they are so different from their counterparts. Here’s why.

The oldest inverters were mechanical devices: a motor connected to a generator. This is pretty simple to show. I have a battery-powered drill coupled to a synchronous motor. When I pull the trigger, the drill motor spins the synchronous motor, generating a nice sine wave we can see on the oscilloscope. Maybe you can see the disadvantages here. For one, this is not very efficient. There are losses in each step of converting electricity to mechanical energy and then back into electrical energy on the other side. Also, the frequency depends on the speed of the motor, which is not always a simple matter to control. So these days, most inverters use solid-state electronic circuits, and look what I found in my garage.

These are practically ubiquitous these days, partly because cars use a DC system, and it’s convenient to power AC devices from them. I just hook it up to the battery, and get nice clean power from the other end… haha just kidding. These cheap inverters definitely output alternating current, but often in a way that barely resembles a sine wave. Connecting a load to the device smooths it out a bit, but not much. That’s because of what’s happening under the hood. In essence, switches in the inverter turn on and off, creating pulses of power. By controlling the timing of the pulses, you can adjust the average current flowing out of the inverter to swing up and down into an approximate sine wave. Cheaper inverters just use a few switches to create a roughly wave-like signal. More sophisticated inverters can flip the switches much more quickly, smoothing the curve into something closer to a sine wave. This is called pulse width modulation. Boost the voltage on the way in or the way out, add some filters to smooth out the choppiness of the pulses, and that’s how you get a battery to run an AC device… but it’s not quite how you get a solar panel to send power into the grid. There is a lot more to this equipment.
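
Here’s a toy numerical sketch of that idea (a teaching example, not inverter firmware): compare a 60-hertz reference sine wave to a fast triangle carrier, switch the output high or low based on which is bigger, and then low-pass filter the result.

```python
# Toy sketch of sine-wave synthesis by pulse width modulation (a
# teaching example, not inverter firmware). Compare a 60 Hz reference
# sine to a fast triangle carrier; switch high whenever the reference
# is above the carrier; then low-pass filter the switched output.
import numpy as np

f_ref, f_carrier, fs = 60.0, 3000.0, 1_000_000  # Hz, Hz, samples/s
t = np.arange(0.0, 1.0 / f_ref, 1.0 / fs)       # one 60 Hz cycle

reference = np.sin(2 * np.pi * f_ref * t)
carrier = 2 * np.abs(2 * ((f_carrier * t) % 1.0) - 1) - 1  # triangle, -1..1
pwm = np.where(reference > carrier, 1.0, -1.0)  # raw switched waveform

# Crude moving average standing in for the inverter's output filter:
window = int(fs / f_carrier)
filtered = np.convolve(pwm, np.ones(window) / window, mode="same")

print("worst error vs. ideal sine:", round(float(np.max(np.abs(filtered - reference))), 3))
# Raising f_carrier (faster switching) and filtering accordingly shrinks
# this error, which is the "more sophisticated inverter" behavior above.
```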

For one, look at the waveform of my inverter and the one from the grid. They’re similar enough, but they’re definitely not a match. Even the frequency is a little bit off. I will not be making an interconnection here, since I don’t have a permit from the utility, but even if I did, this inverter would let out the magic smoke. A grid-tie inverter has to both synchronize with the phase and frequency of the grid and vary the voltage of the waveform to control how much current is flowing into or out of the device. The synchronization part often involves a circuit called a phase-locked loop. The inverter senses the voltage of the grid and sets the timing of all those little switches accordingly to match what it sees. So, these are often called grid-following inverters. They synchronize to the grid frequency and phase and only vary the voltage to control the flow of power. And that hints at one of their challenges: they only work when the grid is up.
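
A bare-bones sketch of that synchronization loop might look like this (a toy, with hand-tuned gains I made up): multiply the grid voltage by the cosine of your phase estimate, filter out the ripple, and let a PI controller steer the estimated frequency until the product averages to zero, which happens exactly when the estimate locks onto the grid.

```python
# Toy single-phase phase-locked loop, the synchronization idea described
# above (a teaching sketch, not production inverter firmware).
import math

dt = 1e-4                       # 10 kHz control loop
f_grid = 60.2                   # actual grid frequency, slightly off-nominal
theta, lp, integ = 0.0, 0.0, 0.0
kp, ki, tau = 5.0, 100.0, 0.02  # hand-tuned loop gains for this toy

theta_2s = 0.0
for step in range(30000):       # simulate 3 seconds
    t = step * dt
    v_grid = math.sin(2 * math.pi * f_grid * t)
    error = v_grid * math.cos(theta)  # ~0.5*sin(phase error) + ripple
    lp += (error - lp) * dt / tau     # low-pass filters out the ripple
    integ += lp * dt
    f_est = 60.0 + kp * lp + ki * integ  # PI correction around nominal
    theta += 2 * math.pi * f_est * dt
    if step == 20000:
        theta_2s = theta

avg_f = (theta - theta_2s) / (2 * math.pi * 1.0)  # cycles in the last second
print(f"locked frequency: {avg_f:.2f} Hz (actual {f_grid} Hz)")
```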

I’ve done a video all about black starts, so check that out after this if you want to learn more, but (in general), inverter-based resources like solar, wind, and batteries can only follow what’s already on the grid. When the system’s down, they are too, regardless of whether the sun’s shining or the wind’s blowing. That’s why most grid-tied solar systems on houses can’t give you power during an outage.

There’s another interesting thing that inverters do for solar panels, and I can show you how it works in my driveway. I have a solar panel hooked up to a variable resistor, and I’m measuring the voltage and current produced by the panel. You can see as I lower the resistance, the output voltage of the panel goes down and the current it supplies goes up. But this isn’t a linear effect. I recorded the voltage and current over the full range, and multiplied them together to get the power output. If you graph the power as a function of voltage, you get this shape. And you can see there’s an optimum resistance that gets you the most power out of the panel. It’s called the maximum power point. If you deviate on either side of it, you get less power out. In other words, you’re leaving power on the table. You’re not taking full advantage of the panel’s capacity.
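
You can reproduce that curve with a simple single-diode panel model; the parameters below are invented for illustration, not taken from my panel’s datasheet:

```python
# Minimal single-diode model of a solar panel's current-voltage curve
# (made-up parameters, not a real datasheet).
import numpy as np

I_PH = 8.0        # photocurrent, A (scales with sunlight)
I_0 = 1e-9        # diode saturation current, A
VT_CELL = 0.026   # thermal voltage per cell, V
N_CELLS = 36      # cells in series

def panel_current(v):
    return I_PH - I_0 * (np.exp(v / (VT_CELL * N_CELLS)) - 1.0)

v = np.linspace(0.0, 22.0, 2000)
i = np.clip(panel_current(v), 0.0, None)
p = v * i

k = int(np.argmax(p))
print(f"max power {p[k]:.0f} W at {v[k]:.1f} V (open circuit ~{v[i > 0][-1]:.1f} V)")
# Power climbs almost linearly while the panel acts like a constant
# current source, peaks at the maximum power point, then collapses as
# the diode turns on: the same shape as the driveway measurement.
```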

What’s even more challenging is that point changes depending on the temperature of the panel and the amount of sun hitting it. I ran this test again with a few more clouds, and you can see how the graph changes. So nearly all large solar photovoltaic installations use what’s called a Maximum Power Point Tracker (or MPPT) that essentially adjusts the resistance to follow that point as it changes with sunniness and temperature. It’s really a separate device from the inverter, but often they’re located right next to each other or inside the same housing. Even this panel came with a charge controller that has this MPPT function, and you can see it adjusting the flow of current to constantly try and stay at the peak of the curve while it charges this battery. These can be used for an entire installation, but in many cases, each panel or group of panels gets its own MPPT because that curve is just a little bit different for each one. Tracking the peak power output individually can often squeeze a little more capacity out of the system.
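
One of the most common MPPT strategies is called perturb and observe, and it’s simple enough to sketch in a few lines (reusing the panel_current model from the previous snippet): nudge the voltage, keep going if power went up, turn around if it went down.

```python
# Perturb and observe, one common MPPT strategy, reusing panel_current
# from the previous snippet: nudge the operating voltage, keep going if
# power rose, reverse direction if it fell.
def mppt_perturb_observe(v_start=12.0, step=0.1, iterations=200):
    v = v_start
    p_last = v * float(panel_current(v))
    direction = 1.0
    for _ in range(iterations):
        v += direction * step
        p = v * float(panel_current(v))
        if p < p_last:               # that nudge made things worse,
            direction = -direction   # so turn around
        p_last = p
    return v, p_last

v_mp, p_mp = mppt_perturb_observe()
print(f"P&O settles near {v_mp:.1f} V, {p_mp:.0f} W")
# The tracker ends up dithering around the peak, and when clouds or
# temperature shift the curve, the same logic simply follows it.
```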

Squeezing out capacity is essential to address another challenge associated with inverter-based resources on the grid: frequency. The rate at which the voltage and current on the grid swing back and forth is an important measure of how well generation and demand are balanced. If demand outstrips the generation capacity, the frequency of the grid slows down. Lots of equipment, both on the generation side and the stuff we plug in, is designed to rely on a stable grid frequency, so if it deviates too far, stuff goes wrong: Devices malfunction, motors can overheat, generators get out of sync, and more. It’s so important that rather than let the frequency get too far out of whack, grid operators will disconnect customers to get electrical demands back in balance with the available supply of power, called an under-frequency load shed. Things go wrong on the grid all the time, so generators have to be able to make up for contingencies to keep the frequency stable. Here’s the quintessential example: an unexpected loss of generation.

Say a generator trips offline, maybe because of a failed lightning arrestor like in the Odessa example. The system frequency immediately starts dropping, since power demand now exceeds the generation. And the frequency will keep dropping unless we inject more power into the system. The first part of that, called Primary Frequency Response, usually comes from automatic governors in power plants. If we do it fast enough, the frequency will reach a low point, called the nadir (NAY-dur), and then recover to the nominal value. The nadir is a critical point, because if it gets too low, the grid will have to shed load in order to recover. The other important value is called the rate-of-change-of-frequency, basically the slope of this line. It determines how much time is available to get more power into the system before the frequency gets too low, and there are several factors that play into it: how much generation was lost in the first place, how quickly we can respond, and how much inertia there is on the grid. Thermal power plants that traditionally make up the bulk of generating capacity are gigantic spinning machines. They’re basically a bunch of synchronized flywheels. That kinetic energy helps keep them spinning during a disturbance, reducing the slope of the frequency during an unexpected loss.
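
To see how inertia and governor response shape that curve, here’s a toy swing-equation simulation with illustrative numbers (not any real grid’s): a sudden 5% generation loss, aggregate system inertia H, and a droop governor that ramps up replacement power.

```python
# Toy swing-equation simulation of the scenario described above, with
# illustrative numbers (not any real grid's).
f0 = 60.0   # nominal frequency, Hz
H = 4.0     # system inertia constant, s (drops as inverters displace turbines)
R = 0.05    # 5 percent governor droop
Tg = 5.0    # governor response time constant, s
dP = -0.05  # sudden loss of 5 percent of generation, per-unit

dt, t, f, p_gov = 0.001, 0.0, f0, 0.0
nadir = f0
while t < 30.0:
    target = -(f - f0) / f0 / R          # droop curve: more power as f sags
    p_gov += (target - p_gov) * dt / Tg  # governor ramps toward its target
    dfdt = (dP + p_gov) * f0 / (2 * H)   # swing equation: imbalance -> RoCoF
    f += dfdt * dt
    nadir = min(nadir, f)
    t += dt

print(f"nadir: {nadir:.2f} Hz, settling near {f:.2f} Hz")
# Cut H in half and the initial slope doubles, the nadir dips deeper,
# and there's less time to respond: the low-inertia problem in a nutshell.
```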

Maybe you can see the problem with a simple grid-following inverter. It’s locked in phase with the frequency, even if that frequency is wrong. And it has no physical inertia to help arrest a deviation in frequency. If we keep everything the same and just increase the share of inverter-based resources, any loss of generation will mean a steeper slope, reducing the time available to get backup supplies onto the grid before it’s forced to shed load. Larger renewable plants, like solar and wind farms, are increasingly required to participate in primary frequency response, injecting power into the grid immediately when the frequency drops. And some inverters can even create synthetic inertia that mimics a turbine’s physical response to changes in frequency. But there’s another challenge to this.

Dealing with an over-frequency event is relatively straightforward: just reduce the amount of energy you’re sending into the grid. But, response to an under-frequency event requires you to have more energy to inject. In other words, you have to run the plant below its maximum capacity, just in case it gets called on during an unexpected loss somewhere else in the system. For a power company, that means leaving money on the table, so in most places, the energy markets are set up to pay power plants to maintain a certain level of reserve capacity, either through operating below maximum output or including battery storage in the plant.

The last big thing that inverter-based resources have to manage is faults. Of course, you need protective systems that can de-energize solar or wind resources when conditions on the grid could lead to damage. These are expensive projects, and there’s almost no limit to the things that can go wrong, requiring costly repairs or replacement. But, for the stability of the grid, you can’t have those protective systems being so sensitive that they trip at the hint of something unusual, like what happened in Odessa. This concept is usually referred to as “ride-through.” Especially for under-frequency events, you need inverters to continue supplying power to the grid to provide support. If they trip offline, or even reduce power, in response to a disturbance, it can lead to a cascading outage. This is kind of a tug of war between owners trying to protect their equipment and grid operators saying, “Hey, faults happen, and we need you not to shut the whole system down when they do.” And reliability requirements are getting more specific as the equipment evolves, because every manufacturer has their own flavor of protective settings and algorithms.

As inverter-based resources continue to grow rapidly in proportion to the overall generation portfolio, their engineering challenges are only becoming more important. We talked about a few of the big ones: lack of black start ability, low inertia, and performance during disturbances. And there are a lot more. But inverters also provide a lot of opportunities. They’re really powerful devices, and the technology is improving quickly. They can chop up power basically however you want, and they aren’t constrained by the physical limitations of large generating plants. So they can respond more quickly, and, unlike physical inertia that will eventually peter out, inverters can provide a sustained response. There are even grid-forming inverters that, unlike their grid-following brethren, can black start or support an isolated island without the need for a functioning grid to rely on. We’re in the growing pains stage right now, working out the bugs that these new types of energy generation create, but if you pay attention to what’s happening in the industry, it’s mostly good news. A lot of people from all sides of the industry are working really hard on these engineering challenges so that we’ll soon come out with a more reliable, sustainable, and resilient grid on the other end.

April 16, 2024 /Wesley Crump

How Do Fish Ladders Work?

April 02, 2024 by Wesley Crump

[Note that this article is a transcript of the video embedded above.]

Building a dam imparts a stupendous change to the environment, and as with any change, there are winners and losers. The winners are usually us, people, through hydropower generation, protection from flooding, irrigation for farming, and a stable water supply for populated areas. But, we've known for a long time, probably since we started building dams in the first place, that many of the losers are fish (especially migratory fish) through fragmentation of their habitat. Even in 1890, the state of Washington in the US had laws on the books requiring consideration of fish when building dams. And, not just consideration, but specific infrastructure that would allow fish around a dam if they were quote-unquote “wont to ascend.” I recently took a tour of McNary Dam on the Columbia River in Washington (operated and maintained by the U.S. Army Corps of Engineers, Walla Walla District) and an aquatic research laboratory at the Pacific Northwest National Lab to learn more about the ways we balance our own needs with those of the aquatic wildlife impacted by the infrastructure we build. You should check out that video after this one if you want to see the whole tour. But one of the biggest pieces to that puzzle was the enormous fish ladders that allowed salmon and other migratory fish to swim up and over the dam. And it got me wondering: how do engineers design a structure like this? So this video is a follow-up, a chance to dive a little bit deeper into the intersection between engineering and wildlife. I'm Grady, and this is Practical Engineering. Today we're talking about fishways.

You've probably seen a fish ladder before, but if you haven't, this one on the Oregon side of McNary Dam is just one of many designs. The way it works is simple in principle: adult fish swim upstream toward the dam. The goal is that they don't even realize the dam is there. They simply continue upstream through the fishway and out into the forebay on the other side. Water flows in one direction—fish flow in the other. But, designing a fishway isn't simple at all. In a way, it's like engineering life support systems for manned space missions: all the design criteria are biological. How fast can fish swim, and for how long? How are they motivated? How high can they jump? What temperature, dissolved oxygen, pH, and salinity can they handle? And how do all these factors vary across seasons and species? These are difficult questions to answer, and in fact, a big part of my tour at the Pacific Northwest National Laboratory was all about how scientists pin down exactly those limits and preferences of migratory fish. I saw the wide variety of tracking systems they use to observe the behaviors of fish in the wild and lots of different ways they study fish in a lab as well. From research like that and decades of trial and error with fish passage systems in the real world, engineers and biologists have started to zero in on a few designs that work best.

All fish are different, which means every fishway needs to be specially designed for the particular species it handles. At McNary, that mainly means salmonids, a group of fish species that spend most of their adult lives in the ocean but return to shallow freshwater headstreams to reproduce. Fortunately, NOAA Fisheries has a detailed Anadromous Salmonid Design Manual that boils a lot of this knowledge down. And I’ve built a scale model of a few fish ladder designs in the garage to show you how they work.

Salmon encounter all kinds of obstacles in natural streams and rivers, even ignoring the human-made ones. They're quite capable of moving upstream in a wide variety of conditions like rapids, small waterfalls and even the presence of hungry bears. Their species literally depends on it. So, the goal of a salmon fish ladder is to mimic natural conditions, to trick the salmon into thinking they're simply making their way up a section of the river, if a somewhat steep and concrete one, without delay, stress, or injury. Part of that trick is in the flow rate. In fact, the flow of water through a fishway is one of the most essential parts of the design. And like every engineering decision, it’s a balance. Every drop of water that flows through a fish ladder is a drop that isn't stored behind the dam, so it can't be used for hydropower or water supply. But the flow of water is obviously important to the fish, too. If the flow velocity is too high, the fish struggle to swim against it. And if it’s too low, there might not be enough water to swim through. But, fish not only need specific flow to swim through; they also need it to navigate. If flows through a ladder are too low, fish can become disoriented trying to find which way is upstream. And that’s especially true at the entrance. A dam stretches the entire width of a channel, but the entrance to a fish ladder usually doesn’t, so there has to be some way to draw them to the entrance. That’s called attraction flow. Salmon use the sound and turbulence of flowing water to know which direction to swim, so a big part of fish ladder design is simply encouraging the fish inside. In fact, the flow of water through the ladder itself is often not enough for attraction, so many fish ladders have auxiliary water systems. At McNary, two enormous pumps draw water from the tailrace of the dam up and into the entrance channel just so it can fall back down, creating the sound and hydraulic conditions required for salmon to find their way in. In addition, a huge valve and conduit system under the fish ladder pulls additional water from the forebay and releases it at an intermediate point down the ladder. Both of these systems provide some redundancy (since no piece of infrastructure can operate 24/7) and give operators some control over the conditions along the entire length of the fishway, ensuring it can always mimic ideal conditions for the fish. But once they’re in, another challenge begins.

Dams are tall. At least, a lot of them are. And most fish can't climb actual ladders. They can't walk upstairs, and although there are some fish elevators, they’re a lot more complicated and usually less efficient than a system that allows fish to swim in a somewhat natural channel. So, the overall hydraulic design of most fishways is to break up that elevation into manageable “chunks” that fish can navigate at their own pace. They need to go kind of horizontal, but still make their way upward over the dam. A steeper channel is shorter, but it can make the water flow too quickly. A shallower channel has slower flow, but it’s a longer distance, increasing the cost and complexity of getting up to the top. So, for salmon, at least, the engineers and biologists have generally settled on something in between (usually a 10-15% slope) that breaks up the total height into passable increments with some kind of baffle.
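
To put rough numbers on that tradeoff, here’s a quick back-of-the-envelope sketch. The head and the drop per pool are my own illustrative assumptions, not design values for any particular dam.

```python
# Back-of-the-envelope fishway sizing with assumed numbers.

head = 22.0          # meters of elevation the fish must climb (assumed)
slope = 0.10         # 10% channel slope, the low end of the typical range
drop_per_pool = 0.3  # meters of head dissipated at each baffle (assumed)

channel_length = head / slope     # horizontal run of the ladder
num_pools = head / drop_per_pool  # how many steps that height breaks into

print(f"Channel length: {channel_length:.0f} m")  # 220 m
print(f"Pools needed:   {num_pools:.0f}")         # ~73
```

Halve the slope and the channel doubles in length. That’s the cost-versus-velocity balance in a single line of arithmetic.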

The simplest and oldest fishways are called “pool and weir” designs. The idea here is that fish can use a burst of energy to swim up the fast moving flow over the weir and then rest in the pool above. When they’re ready, they swim up the next one, and so on. Nothing like a grown man playing with fish in his garage. Lots of fish can handle this no problem, but not all species can manage the challenge of swimming up a high velocity jet of water over and over again. Pool and weir designs are generally considered among the less effective options because they can limit the species and fitness of the fish that ultimately make it through. Many of the newer fishways use more sophisticated geometry to try and address that shortcoming.

The fish ladder at McNary modifies the concept a little bit by breaking the weir into two parts with a non-overflow section in the center and including submerged holes through each baffle, called orifices. This design provides a wider variety of flow conditions, allowing more types of fish to find their way to the top. McNary even sees a good number of lamprey, a jawless fish species with similar migratory behavior to salmon, pass through the ladder each year. Most of the salmon prefer to use the submerged orifices rather than jump over the top. My model isn’t quite scaled to my toy fish, so I’ll demonstrate that here with some movie magic.

This particular configuration is sometimes called an Ice Harbor design because it was first implemented at Ice Harbor Dam on the Snake River, just upstream from McNary. Both the pool-and-weir and Ice Harbor designs have a major limitation in that they’re sensitive to the water level above the dam. Small changes in the forebay can significantly alter the amount of flow passing through the ladder just due to the hydraulics of weirs and orifices. So the designs only work when the reservoir or forebay above a dam is regulated to a tight margin. McNary has several large crest gates that can be used to control this, but that’s not always feasible. One type of fish ladder solves it in an interesting way.

Vertical slot fishways are exactly what they sound like. Instead of a weir or orifice, they use a slot along the entire height of the baffle. That makes it possible for fish to move upstream under a wide variety of flow conditions. When I remove some of the stoplogs in my model to lower the level, the vertical slot baffles continue working in essentially the same manner. Plus, the velocity is fairly consistent from the top to the bottom of the slot, giving fish ample opportunity to pass through each one. The protrusion on the upstream face creates a gentle area for fish to rest if they need it.
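
You can see why with the standard hydraulic relationships: flow over a weir crest scales with head to the 3/2 power, while flow through a full-depth slot scales roughly linearly with depth. Here’s a sketch comparing the two; the coefficients and dimensions are illustrative assumptions, not real design values.

```python
import math

# Compare how the same 6 cm rise in water level changes the flow through
# a weir versus a vertical slot. Coefficients and dimensions are assumed.

g = 9.81

def weir_flow(head):              # Q ~ C * L * head**1.5 over a crest
    return 1.8 * 3.0 * head ** 1.5

def slot_flow(depth, drop=0.25):  # Q ~ C * width * depth * sqrt(2*g*drop)
    return 0.8 * 0.4 * depth * math.sqrt(2 * g * drop)

rise = 0.06
for name, flow, base in [("weir", weir_flow, 0.30), ("slot", slot_flow, 2.0)]:
    q1, q2 = flow(base), flow(base + rise)
    print(f"{name}: +{100 * (q2 / q1 - 1):.0f}% flow")

# weir: +31%  (6 cm is a 20% change in a 30 cm head)
# slot: +3%   (6 cm is a 3% change in a 2 m depth)
```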

The big question with these designs and all artificial fishways is this: how well do they work? Again, a difficult question to answer, especially considering that many are designed with a specific group of species in mind. The fish ladders along the Columbia and Snake rivers are surprisingly good for salmon. One study found that 97% of chinook, sockeye, and steelhead that entered a dam tailrace made it up the ladder and into the forebay. But, millions of dollars of engineering, research, and testing were put into those structures because of the huge cultural and economic value of these fish in the region. The results vary wildly for other species or other dams.

The primary way we know this is through tagging fish, introducing them downstream of a fishway, and measuring how many make it to the upstream side. That type of study comes with all kinds of complications, and of course, it’s impossible to compare those numbers to how many fish might make it upstream if there were no dam in the first place. The Pacific Northwest National Lab showed me some of the mind-bogglingly tiny tags that can be implanted into fish and the fascinating tools they use to track them, so again, check out that other video if you want to learn more about the process. One of the simplest ways to measure the effectiveness of fishways is to count the number and type of fish that pass through them. Many of the largest of these structures are equipped with counting stations, often a simple window into one side of the ladder where someone watches and marks the fish as they pass by. This is extremely useful data, not just for measuring effectiveness but for tracking overall fish migration year over year, and a lot of it is available online. But even this requires a little ingenuity.

Most fishways are too wide to see from one side to the other in the sometimes murky water, so it’s necessary to funnel fish toward the window. Sloped grates called pickets allow water to flow through while the fish are corralled to the counting station. Even these pickets can discourage fish from continuing upstream, so fish ladder designs have to consider whether they’re really necessary. A trash rack at the upstream exit is usually installed to keep debris from getting into the ladder and clogging up the baffles. Fish swim through the rack and into the forebay to continue their upstream journey.

Of course, this is a drop in the bucket of all that it takes to manage fish passage. You won’t be surprised that adult salmon migrating upstream is a small subset of the vast array of challenges in getting fish around the barriers we build. Even McNary has juvenile passage facilities for younger fish traveling downstream and another fish ladder on the Washington side with a totally different design. Dams worldwide, particularly those installed where migratory fish species live, often have significantly different systems custom-designed for the species that need to get through. This is important. It’s not just for biodiversity’s sake, although that’s pretty important on its own. We depend on these fish for food, for recreation, for cultural identity, and more. So, we’re constantly innovating. I mentioned fish elevators and locks. In some cases, we do the migrating for the fish by barging them up- or downstream. You may have seen viral videos of the Whooshh fish cannon that aims to make fish passage possible where traditional ladders aren’t feasible. We’ve even tried dropping fish from airplanes. It didn’t work very well, by the way. All this to say, it’s something we care deeply about. For better or worse, a big part of engineering is fixing the problems we created through engineering of the past when we either didn’t know or didn’t care about the impacts our projects could cause. Everyone has a different perspective about what it means for humanity to live harmoniously with all the other life we share the planet with. I think it’s fascinating how those ideas and endeavors trickle down through engineering into the real world.

April 02, 2024 /Wesley Crump

How the Hawaiian Power Grid Works

March 19, 2024 by Wesley Crump

[Note that this article is a transcript of the video embedded above.]

In January of 2024, right on the heels of a serious drought across the state, a major storm slammed into the Hawaiian islands of Oahu and Kauai. Severe winds caused damage to buildings, and heavy rain flooded roadways. At the Waiau Steam Turbine plant, the rain reached some of the generator unit controls, tripping two units and knocking 100 megawatts of power off the tiny grid (roughly 10% of demand). The overcast weather also meant solar panels weren’t producing much electricity, and the colossal battery systems at Kapolei and Waiawa were running out of juice. Other generating units were out of service due to maintenance scheduled during the cool winter months when power demands were lowest. Then, the H-POWER trash-to-energy plant tripped offline as well. By the evening of January 8th, all of Hawaiian Electric’s power reserves on Oahu were depleted, and it was clear that they weren’t going to have enough generation to meet all the needs. And if you can’t increase supply, the only other option is to force a reduction in demand.

At around 8:30 PM, the utility implemented rolling outages across the island of Oahu to bring power demands down to a manageable level. For about 2 hours, the utility blacked out different sections of the island for 30 minutes each to minimize the inconvenience. Twice since then, as of this writing, rolling outages have been forced on Hawaii Island when unexpected generator trips coincided with scheduled maintenance at backup facilities, leaving those backups unavailable to pick up the slack.

When we say “power grid” we’re used to imagining interconnections that cover huge areas and serve tens to hundreds of millions of people. But populated islands need a stable supply of electricity too. Those recent power disturbances highlight some really interesting challenges that come from building and operating a small power grid, so I thought it would be fun to use the 50th state as a case study to dive into those difficulties. I’m Grady, and this is Practical Engineering. Today we’re talking about the Hawaiian power grid.

Really, I should say Hawaiian power grids, because each populated island in the state has its own separate electrical system. Around 95% of customers are served by a single utility, Hawaiian Electric, which maintains grids on Oahu, Maui, Hawaii Island, Lanai, and Molokai. Kauai is the only island with its own electric cooperative. There have been a few proposals and false starts to connect the islands through undersea transmission cables and form a single grid. It is an enormous challenge to install and maintain cables at those depths and over those distances. Add in the volcanic and seismic hazards of the area and the sensitive ecology of the surrounding ocean, and, so far, no one has figured out how to make it feasible. So, each island has its own power plants, high-voltage transmission lines, substations, and distribution system entirely disconnected from the others. And that makes for some interesting challenges.

“Reliability” is the name of the game when it comes to running an electrical grid. It’s not that complicated to build generators, transmission lines, transformers, et cetera. What’s hard is to keep them all running 99.9% of the time, day and night, rain or snow. Yeah, some parts of Hawaii occasionally get snow. This is a graph of a typical reliability curve that helps explain why it’s a challenge. At the left end of the curve, you can get big increases with a small investment. But the closer you get to 100 percent uptime, the more expensive each increment gets. It really boils down to the fact that, in many ways, reliability comes from redundancy. When something goes wrong, you need flexibility to keep the grid up. But, in practice, that means you have to pay for and maintain equipment and infrastructure that rarely gets used, or at least not to its full capacity.
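
If you want a feel for the shape of that curve, here’s a toy model where each additional “nine” of uptime costs a fixed multiple of the one before it. The numbers are invented to show the shape, not real utility economics.

```python
# Toy reliability-cost curve: assume each extra "nine" of uptime costs a
# fixed multiple of the previous increment. All numbers are invented.

base_cost = 1.0   # arbitrary units to reach 90% uptime
multiplier = 4.0  # assumed cost multiple for each additional nine

total = 0.0
for nines, uptime in enumerate(["90%", "99%", "99.9%", "99.99%"]):
    increment = base_cost * multiplier ** nines
    total += increment
    print(f"{uptime:>6} uptime: increment {increment:5.1f}, cumulative {total:6.1f}")
```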

Hopefully, it’s clear that the graph I showed is idealized. It’s much harder to put concrete numbers to the question. The random nature of problems that arise, our inability to predict the future, and the fact that everything in a bulk power system is interconnected all make it practically impossible to know how much investment is required to achieve any incremental improvement in reliability. But it’s useful anyway because the graph helps clarify the benefits of a large power grid, also known as a “wide area interconnection.”

For one, it smooths out demand. One part of a region may have storms while another has good weather. From east to west, the peak power demand comes at different times. Some areas get sun, some get shade. But overall, demands average out and become less volatile as the grid gets bigger geographically. Larger interconnections also have more redundant paths for energy to flow, reducing the impacts of major equipment problems like transmission line outages. They have more power plants, again creating redundancy and making it easier to schedule offline time to maintain those facilities. And, the power plants themselves can be bigger, taking advantage of the economies of scale to make energy less expensive and more environmentally beneficial. Finally, larger areas have more resources. Maybe it’s windy over here, so you can take advantage and build wind turbines. Maybe this area has lots of natural gas production, so you can produce power efficiently without having to pay for expensive fuel transportation. In general, a wide area interconnection allows the costs of equipment, infrastructure, resources, and operations to be shared, making it easier to keep things running reliably. Hawaii has none of that.

Roughly 75% of the electric power in the state currently comes from power plants that run on petroleum. There are no oil or natural gas reserves in Hawaii, which means the vast majority of power on the islands comes from fuel imported from foreign countries. That makes the state very susceptible to factors outside of its control, including international issues that affect the price of oil. Each island has only a handful of major power plants and transmission lines. And when storms happen, they often hit the entire place at once. It’s easy to see why retail energy costs in Hawaii are around 3 times the average price paid across the US. Every increment of reliability costs more than the one before it, and each island has no one else to share those costs with. So, they get passed down to consumers. But, it’s not just that the grids are small.

The bulk of the remaining roughly 25% of Hawaii’s electric power not produced in oil-fired power plants comes from renewable sources: wind, solar, and a single geothermal plant. This has the obvious benefit of reducing CO2 emissions, but it also reduces the state’s exposure to the complexities of the fuel supply chain and price volatility, taking advantage of resources that are actually available on the islands. But, renewable sources come with their own set of engineering challenges, particularly when they represent such a large percentage of the energy portfolio.

Of course, renewable sources are intermittent. You don’t get power when the wind doesn’t blow or the sun doesn’t shine. That sporadic nature necessitates options for storage or firm baseload to make up the difference between supply and demand. It also makes it more complicated to forecast the availability of power to plan ahead for maintenance, fuel needs, and so on. And, it requires those storage facilities or baseload plants to ramp down and up very quickly as the sun and wind come and go. But that’s not all. Solar and wind sources are also considered “low-inertia”. Thermal and hydroelectric power plants generally use enormous turbines to generate electricity. Those big machines have a lot of rotational inertia that stabilizes the AC frequency. The frequency of the alternating current on the grid is basically its heartbeat. It’s a measure of health, indicating whether supply and demand are properly balanced. If frequency starts to deviate too much, equipment on the grid will sense that something’s wrong and disconnect itself to prevent damage. The same is true for lots of industrial equipment and even consumer devices. When conditions on the grid fluctuate - say a transmission line or generator suddenly trips offline - the rotational inertia in those big spinning turbines can absorb the changes and help the grid ride through with a stable frequency. Solar panels and most wind turbines connect to the grid through inverters. Instead of heavy spinning machines creating the alternating current, they’re basically just a bunch of little switches. That means disturbances can create a faster and more significant effect on the grid, reducing the quality of power and making it more difficult to keep things stable. I’m planning a deep dive into how inverter-based energy sources work, so stay tuned for that in a future video.
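
The inertia effect can be roughed out with the classic swing-equation estimate: right after losing generation, frequency initially falls at a rate of about (ΔP / S) × f₀ / (2H), where ΔP is the lost power, S is the system rating, f₀ is the nominal frequency, and H is the inertia constant in seconds. The grid sizes below are assumptions chosen to resemble a small island system, not actual Hawaiian Electric figures.

```python
# Rough rate-of-change-of-frequency (RoCoF) after a generation trip:
#   df/dt ≈ (delta_P / S_system) * f0 / (2 * H)
# The grid size and inertia constants are assumed for illustration.

f0 = 60.0  # nominal frequency, Hz

def rocof(delta_p_mw, system_mva, h_seconds):
    """Initial frequency decline (Hz/s) after losing delta_p_mw of supply."""
    return (delta_p_mw / system_mva) * f0 / (2 * h_seconds)

# The same 100 MW loss on a 1,200 MVA island grid at two inertia levels:
print(rocof(100, 1200, h_seconds=5.0))  # 0.5 Hz/s with heavy spinning machines
print(rocof(100, 1200, h_seconds=2.0))  # 1.25 Hz/s with mostly inverter-based supply
```

The lower the inertia, the less time everything on the grid has to react before frequency reaches a trip threshold. But, it gets even more complicated than that.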

Of all the renewable energy on the Hawaiian islands, about half currently comes from small-scale solar installations, like those on residential and commercial rooftops. They’re collectively known as “distributed energy resources.” This has the obvious benefit of bringing resources closer to the loads, reducing strain on transmission lines. It also takes advantage of space that is already developed and builds capacity on the grid without requiring the utility to invest in new facilities. But, distributed sources come with tradeoffs. Most parts of the grid are built for power to flow in one direction, so injecting electricity at the downstream end can create unexpected loads on circuits and equipment not designed to handle it. Distributed sources also affect voltage and frequency, since something as simple as a cloud passing over a neighborhood can dramatically swing the flow of power on the network. The inverters on small solar installations are generally dumb. And I’m using that as a technical term. They can’t communicate with the rest of the grid; they only respond based on what they can measure at the point of connection. The grid operator doesn’t get good data on how much power the distributed sources are putting into the grid, and they have little control over those inverters. They can’t tell them to reduce power if there’s too much on the grid already or increase power to provide support. And inverters, especially consumer-grade equipment, can behave in unexpected and unintended ways during faults and disturbances, magnifying small problems into larger ones.

Those inverters can also make the grid more vulnerable to cyberattacks since their security depends on individual owners. It’s not hard to imagine how someone nefarious could take advantage of a large number of distributed sources to sabotage parts of the grid. And finally, distributed resources affect the revenue that flows into the utility, and this can get pretty contentious. The rates a customer pays for electricity cover a lot of different costs, many of which don’t really evaporate kilowatt-hour for kilowatt-hour when you remove that demand from the grid. Fixed costs like maintenance of infrastructure still come due, even if that infrastructure is being used at a lower capacity on sunny days. With net metering, it gets even more complicated to figure out how much that power injected into the grid is really saving, not to mention how those savings should be distributed across the customer base.
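
A toy example makes the tension visible. Every rate and usage figure here is invented for illustration; real rate structures are far more involved.

```python
# Toy net-metering example. All rates and usage figures are invented.

fixed_cost = 40.0   # $/month of per-customer grid costs (wires, crews, billing)
energy_cost = 0.12  # $/kWh the utility pays to generate or buy power
retail_rate = 0.30  # $/kWh flat retail rate

def monthly_margin(kwh_bought, kwh_exported=0.0):
    """Utility revenue minus costs for one customer, with net metering
    crediting exports at the full retail rate."""
    net_kwh = kwh_bought - kwh_exported
    revenue = net_kwh * retail_rate  # the net-metered bill
    energy = net_kwh * energy_cost   # exports offset energy purchases
    return revenue - fixed_cost - energy

print(monthly_margin(600))       # +68: a typical customer covers the fixed costs
print(monthly_margin(200, 150))  # -31: same wires and crews, far less revenue
```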

And, these challenges are only becoming more immediate. Hawaii’s Clean Energy Initiative, launched in 2008, set a goal of meeting 70 percent of its energy needs through renewables and increased efficiencies by 2030. In 2014, they doubled down on the commitment, setting a goal of completely eliminating fossil fuel use by 2045. That would take them from one of the most fossil-fuel-dependent states in the US to the most energy-independent. And, they’ve taken some big steps toward that goal. Renewable generation has gone from less than 10% to about 25% of the total already, and a host of policies have been changed to create more opportunities for renewables on the grid. Solar water heaters are now required for most new homes. Rebates are available for solar installations. The only coal-fired plant in the state was controversially shut down in 2022. And, there is a big list of solar, battery storage, and biofuel turbine projects expected to come online in the near future.

For better or worse, Hawaii has become a full-scale test bed for renewables and the challenges involved as they become a larger and larger part of the grid. Many consider natural gas to be a bridge fuel to renewables, a firm resource that is generally cheaper, cleaner, and often more stable in price than other fossil fuels. But Hawaii is hoping to leapfrog the bridge. For the climate and their own energy security, they’ve gone all in on renewables, making them a leader in the world, but also forcing them to work out some of the bugs that inevitably arise when there’s no one ahead of you to work them out first. There are some really cool innovations on the horizon as Hawaii grows closer to its goal. Smart grid technologies will add sensors and communications tools to automate fault detection, recovery, and restoration, and enable power to flow more efficiently across distributed resources. Hawaiian Electric is also testing out time-of-use rates to encourage customers to shift their power use to off-peak hours, hopefully smoothing out demands and reducing the need for expensive generators that only get used for a few hours per day.

That idea really underscores the significant challenge Hawaii faces in keeping its grids operating. Improvements and capacity upgrades help everyone, but they cost everyone too, and they cost more for every additional increment of uptime. There’s no reliability menu, and kilowatt-hours don’t come a la carte. If you’re a self-sufficient minimalist or frequent nomad who isn’t bothered by the idea of intermittent power, you can’t pay a cheaper rate for less dependable service. And if you use a powered medical device or work a high-powered, always-connected job at home, you can’t pay extra for more reliability. In many ways, Hawaiians are all in it together. Drawing that line between what’s worth the investment and what’s just gilding the electric lily is tough already with such a diverse array of needs and opinions. Doing it on such a small scale, multiplied by several islands, and with such a quickly growing portfolio of renewable energy sources only magnifies the challenge. But it also creates opportunities for some really cool engineering to pave the way for a more resilient, secure, and flexible energy future, not just for Hawaii, but hopefully all the rest of us too.

March 19, 2024 /Wesley Crump

How Fish Survive Hydro Turbines

March 05, 2024 by Wesley Crump

[Note that this article is a transcript of the video embedded above.]

Most of the largest dams in the US were built before we really understood the impacts they would have on river ecosystems. Or at least they were built before we were conscientious enough to weigh those impacts against the benefits of a dam. And, to be fair, it’s hard to overstate those benefits: flood control, agriculture, water supply for cities, and hydroelectric power. All of our lives benefit in some way from this enormous control over Earth’s freshwater resources.

But those benefits come at a cost, and the price isn’t just the dollars we’ve spent on the infrastructure but also the impacts dams have on the environment. So you have these two vastly important resources: the control of water to the benefit of humanity and aquatic ecosystems that we rely on, and in many ways these two are in direct competition with each other. But even though most of these big dams were built decades ago, the ways we manage that struggle are constantly evolving as the science and engineering improve. This is a controversial issue with perspectives that run the gamut. And I don’t think there’s one right answer, but I do know that an informed opinion is better than an oblivious one. So, I wanted to see for myself how we strike a balance between a dam’s benefits and environmental impacts, and how that’s changing over time. So, I partnered up with the folks at the Pacific Northwest National Laboratory (or PNNL) in Washington state to learn more. Just to be clear, they didn’t sponsor this video and had no control over its contents. They showed me so much, not just the incredible technology and research that goes on in their lab, but also how it is put into practice in real infrastructure in the field, all so I could share it with you. I’m Grady, and this is Practical Engineering. On today’s episode, we’re talking about hydropower!

This is McNary Dam, a nearly 1.5-mile-long hydroelectric dam across the Columbia River between Oregon and Washington state, just shy of 300 miles (or 470 km) upriver from the Pacific Ocean. And this is Tim Roberts, the dam’s Operations Project Manager and the best dam tour guide I’ve ever met.

“These are 1x4 hand-nailed forms that got built for the entire facility.”

But this was not just a little walkthrough. We went deep into every part of this facility to really understand how it works. McNary is one of the hydropower workhorses in the Columbia River system, a network of dams that provide electricity, irrigation water, flood control, and navigation to the region. It’s equipped with fourteen power-generating turbines, and these behemoths can generate nearly a gigawatt of power combined! That means this single facility can, very generally, power more than half-a-million homes. The powerhouse where those turbines live is nearly a quarter mile long (more than 350 meters)! It’s pretty hard to convey the scale of these units in a video, but Tim was gracious enough to take us down inside one to see and hear the enormous steel shaft spinning as it generates megawatts of electrical power. All that electricity flows out to the grid on these transmission lines to power the surrounding area.

McNary is a run-of-the-river dam, meaning it doesn’t maintain a large reservoir. It stores some water in the forebay to create the height needed to run the turbines, but water flows more or less at the rate it would without the dam. So, any extra water flowing into the forebay that can’t be used for hydro generation has to be passed downstream through one or more of these 22 enormous lift gates in the spillway beside the powerhouse.

As you can imagine, all this infrastructure is a lot to operate and maintain. But it’s not just hydrologic conditions like floods and droughts or human needs like hydropower demands and irrigation dictating how and when those gates open or when those turbines run; it’s biological criteria too. The Columbia and its tributaries are home to a huge, diverse population of migratory fish, including chinook, coho, sockeye, pink salmon, and lampreys, and over the years, through research, legislation, lawsuits, advocacy, and just plain good sense by the powers that be, we’ve steadily been improving the balance between impacts to that wildlife and the benefits of the infrastructure. In fact, just about every aspect of the operation of McNary Dam is driven by the Fish Passage Plan. This 500-page document, prepared each year in collaboration with a litany of partners, governs the operation of McNary and several other dams in the Columbia River system to improve the survival of fish along the river.

“It’s kind of a bible. It tells us how we operate. It tells us what turbine we can run, what order to run them in, what megawatts to run them at, what to do when a fish ladder or a fish pump goes out of service. So it’s a pretty good overall operating procedure for us.”

“So it’s the fish plan driving how you operate the dam?”

“Yeah, It dictates a lot of how we operate the powerhouse.”

This fish bible includes prescriptive details and schedules for just about every aspect of the dam, including the fish passage structures too. Usually, when we build infrastructure, the people who are going to use it are actual people. But in a very real sense, huge aspects of McNary and other similar dams are infrastructure for non-humans. On top of the hydropower plant and the spillway, McNary is equipped with a host of facilities meant to help wildlife get from one side to the other with as little stress or injury as possible. Let’s look at the fish ladders first. McNary has two of them, one on each side.

A big contingent of the fish that need to get past McNary Dam are adult salmon and other species from the ocean trying to get upstream to reproduce in freshwater streams. They are biologically motivated to swim against the current, so a fish ladder is designed to encourage and allow them to do just that, and it starts with attraction water. Dams often slow down the flow of water, both upstream and downstream, which can be disorienting to fish trying to swim against a current. Also, dams are large, and fish generally don’t read signs, so we need an alternative way to show them how to get around. Luckily, in addition to a strong current, salmon are sensitive to the sound and motion of splashing water, so that’s just what we give them. At McNary, huge electric pumps lift water from the tailrace below the dam and discharge it into a channel that runs along the powerhouse. As the water splashes back down, it draws fish toward the entrances so they can orient with the flow through the ladder. Some of this was a little tough to understand even seeing it in person, so I had a couple of the engineers at the dam explain it to me.

“So there’s water coming in the actual ladder and in the parallel conduit?”

“Right, right. So, it’s very complicated, huh? They’re going to approach the dam and enter from one of three spots on the Oregon side. There’s a north fish entrance on the north end of the powerhouse, south fish entrance on the south side of the powerhouse and there’s an adult collection channel that runs across the face.”

All these entrances provide options for the fish to come in, increasing the opportunity and likelihood that they will find their way.

“Between the regulating weirs on the north end, the regulating weirs on the south end and those floating orifices here, you back up that water. You need a massive amount of water to keep that step, that whole corridor.”

“I see.”

Once they’re in, they make their way upstream into the ladder itself. Concrete baffles break up the insurmountable height of the dam into manageable sections that fish can swim up at their own pace. Most of the fish go through holes in the baffles, but some jump over the weirs. There’s even a window near the top of the ladder where an expert counts the fish and identifies their species. This data is important to a wide variety of organizations, and it’s even posted online if you want to have a look. Once at the top, the fish pass through a trash rack that keeps debris out of the ladder and continue their journey to their spawning grounds. The goal is that they never even know they left the river at all, and it works. Every year hundreds of thousands of chinook, coho, steelhead, and sockeye make their way past McNary Dam. If you include the non-native shad, that number is in the millions.

“These pictures helps tremendously.”

And it’s not just bony fish that find their way through. Some of the latest updates are to help lamprey passage. These are really interesting creatures!

“I mean, in some parts of the country, they’re like, invasive. People want to get rid of them. Here, we’re trying to nurture them along because they’re a native uh, species, so there are some small changes we’ve been doing um, to try and make those make passage for lamprey more successful.”

I’m working on another video that will take a much deeper look at how this and other fish ladders work, so stay tuned for that one, but it’s not the only fish passage facility here. Because what goes up, must come down, or at least their offspring do (most adult salmon die after reproducing). So, McNary Dam needs a way to get those juvenile fish through as well. That might sound simple; thanks to gravity, it’s much simpler to go down than up. But at a dam, it’s anything but.

“And the way I explain to them is the adults are mission oriented. They’re coming back to spawn. The juveniles are just kinda dumb kids riding the wave of the ocean. I mean honestly, that’s what they’re doing. The main focus has been centered around the juveniles migrating out, right? How do we get the majority of them out? And so, when they’re coming down and they’re approaching the structure, uh, they got two basic paths to take, either the spillway or the powerhouse.”

I definitely wouldn’t want to pass through one of these, but juvenile fish can make it through the spillway mostly just fine. In fact, specialized structures are often installed during peak migration times to encourage fish to swim through the spillway. McNary Dam has lift gates that release water from lower in the water column. But salmon like to stay relatively close to the surface, and they’re sensitive to the currents in the flow. Many dams on the Columbia system have some way to spill water over the top, called a weir, that is more conducive to getting the juveniles through the dam.

The other path for juveniles to take is to be drawn toward the turbines. But McNary and a lot of other dams are equipped with a sophisticated bypass system to divert the fish before they make it that far, and that all starts with the submersible screens. These enormous structures are specially designed with lots of narrow slots to let as much water as possible through to the turbines while excluding juvenile fish. They are lowered into place with the huge gantry crane that rides along the top of the powerhouse. Each submersible screen is installed in front of a turbine to redirect fish upwards while the water flow continues on. Brushes keep them clean of debris to make sure the fish don’t get trapped against the screen. They might look simple, but even a basic screen like this requires a huge investment of resources and maintenance, because they are absolutely critical to the operation of the dam.

“...incredibly labor intensive screens, we spend a lot of time cause, you know, you saw those brushes running up and down them. They’ve got submerged gearboxes, submerged motors, submerged electrical.”

“Oh my gosh.”

“Yeah, every December we pull them out for four months, we, we work on fish screens. Not to mention, so like, and if there’s a problem, these are a critical piece of equipment here, um, during fish passage season if that, if something goes wrong with that screen, this turbine has to shut down. You can’t run them without it.”

Once the fish have been diverted by the screens, they flow with some of the water upward into a massive collection channel. This was originally designed as a way to divert ice and debris, but now it’s basically a fish cathedral along the upstream face of the dam.

“Pretty cool huh?”

“That’s amazing!”

The juveniles come out in these conduits from below. Then they flow along the channel, while grates along the bottom concentrate them upward. Next they flow into a huge pipe that pops out on the downstream face of the dam. Along the way, the juveniles pass through electronic readers that scan any of the fish that have been equipped with tags and then into this maze of pipes and valves and pumps and flumes. In the past, this facility was used to store juveniles so they could be loaded up in barges and transported downstream. But over time, the science showed it was better to just release them downstream from the dam. Every once in a while, some of the juveniles are separated for counting so scientists can track them just like the adults in the ladder. Then the juveniles continue their journey in the pipe out to the middle of the river downstream.

Avian predation is a serious problem for juveniles. Pelicans, seagulls, and cormorants love salmon just like the rest of us. In many cases, most of the fish mortality caused by dams isn’t from the stress of getting them through the various structures, but from birds taking advantage of the fact that dams can slow down and concentrate migrating fish. This juvenile bypass pipe runs right out into the center of the downstream channel where flows are fastest to give the fish a fighting chance, and McNary is equipped with a lot of deterrents to try and keep the birds away.

All this infrastructure at McNary Dam to help fish get upstream and downstream has changed and evolved over time, and in fact, a lot of it wasn’t even conceived of when the dam was first built. And that’s one of the most important things I learned touring McNary Dam and the Pacific Northwest National Lab: the science is constantly improving. A ton of that science happens here at the PNNL Aquatics Research Laboratory. I spent an entire day just chatting with all the scientists and researchers here who are advancing the state of the art.

For example, not all the juvenile salmon get diverted away from those turbines. Some inevitably end up going right through. You might think that being hit by a spinning turbine is the worst thing that could happen to a fish, but actually the change in pressure is the main concern. A hydropower turbine’s job is to extract as much energy as possible from the flowing water. In practice, that means the pressure coming into each unit is much higher than going out, and that pressure drop happens rapidly. It doesn’t bother the lamprey at all, but that sudden change in pressure can affect the swim bladder that most fish use for buoyancy. So how do we know what that does to a fish and how newer designs can be safer? PNNL has developed sensor fish, electronic analogs to the real thing that they can send through turbines and get data out on the other side. Compare that data to what we already know about the limits fish can withstand (another area of research at PNNL), and you can quickly and safely evaluate the impacts a turbine can have.
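
The swim bladder problem is mostly Boyle’s law: gas volume scales inversely with absolute pressure. Here’s a quick sketch with assumed pressures; the real pressure histories inside a turbine are exactly what instruments like the sensor fish are built to measure.

```python
# Boyle's law sketch of swim bladder expansion: P1 * V1 = P2 * V2.
# The pressures are illustrative assumptions.

ATM = 101.3  # kPa, absolute pressure at the water surface

# A fish acclimated 10 m deep feels roughly two atmospheres absolute.
p_acclimated = ATM + 9.81 * 10.0  # ~199 kPa (water adds ~9.81 kPa per meter)
p_low_zone = 70.0                 # kPa, an assumed momentary low past the runner

expansion = p_acclimated / p_low_zone
print(f"Swim bladder expands to about {expansion:.1f}x its volume")  # ~2.8x
```

And that expansion happens in a fraction of a second, which is exactly the kind of injury mechanism that fish-friendly turbine designs try to limit.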

What’s awesome is seeing how that research translates into actual investments in infrastructure that have a huge effect on survivability. New turbines recently installed at Ice Harbor Dam upstream were designed with fish passage in mind to reduce injury for any juveniles that find their way in. One study found that more than 98% of fish survived passing through the new turbines, and nearly all the large hydropower dams in the Columbia River system are slated to have them installed in the future. And it’s not just the turbines that are seeing improvements. I talked to researchers who study live fish, how they navigate different kinds of structures, and what they can withstand. Just the engineering in the water system to keep these fish happy is a feat in itself. I talked to a coatings expert about innovative ways to reduce biological buildup on nets and screens. I talked to an energy researcher about new ways to operate turbines to decrease impacts to fish from ramping them up and down in response to fluctuating grid demands.

“It doesn’t have to be that, you know, what’s good for the grid is necessarily bad for the fish.”

“Exactly.”

And I spent a lot of time learning about how we track and study the movement of fish as they interact with human-made structures. Researchers at PNNL have developed a suite of sensors that can be implanted into fish for a variety of purposes. Some use acoustic signals picked up by nearby receivers that can precisely locate each fish like underwater GPS. Of course, if you want to study fish behavior accurately, you need the fish to behave like they would naturally, so those sensors have to be tiny. PNNL has developed minuscule devices, so small I could barely make out the details. You also want to make sure that inserting the tags doesn’t injure the fish, so researchers showed me how the tags are implanted and how they make sure the fish heal quickly. And of course, those acoustic tags require power, and tiny batteries (while extremely impressive in their own right) sometimes aren’t enough for long-term studies. So they’ve even come up with fish-powered generators that can keep the tags running for much longer periods of time. A piezoelectric device creates power as the fish swims… and they had some fun ways to test them out too.

Of course, migratory fish aren’t the only part of the environment impacted by hydropower, and with all the competing interests, I don’t think we’ll ever feel like the issue is fully solved. These are messy, muddy questions that take time, energy, and big investments in resources to get even the simplest answers.

“It’s really, it’s a complicated question. If you want to look at overall survivability from point A to point B, you can do that. But you’ve got to start talking about species. Is it a spring? Is it a fall? Is it a chinook? Is it a steelhead? Cause we have different models and studies that have been done. So it varies from species to species. People ask that question. I get really hesitant to respond, because I’m like, you don’t know how complicated a question you’re asking. You want to simplify it into one little number, and it’s not that simple.”

The salmon-pink and blue paint in the powerhouse at McNary really sums it up well, with the blue symbolizing the water that drives the station, and the pink symbolizing the life within the water, and its environmental, economic, and cultural significance. This kind of balancing act is really at the heart of what a lot of engineering is all about. I’m so grateful for the opportunity to see and learn more about how energy researchers, biologists, ecologists, policy experts, regulators, activists, and engineers collaborate to make sure we’re being good stewards of the resources we depend on. I think Alison Colotelo, the Hydropower Program Lead at PNNL, put it best:

“When you think about salmon and why we need to protect them, why we need to put all this money into understanding how do we, how do we coexist with our energy needs. It's because they're important from an ecological perspective, right? For the nutrients that they're bringing back, from an economic perspective, from a cultural perspective and if the salmon go away then so do a lot of other things.”

March 05, 2024 /Wesley Crump

How To Install a Pipeline Under a Railroad

February 20, 2024 by Wesley Crump

[Note that this article is a transcript of the video embedded above.]

This is the Union Pacific Railroad’s Austin Subdivision in central Texas. It’s a busy corridor that moves both freight and passengers north and south between Austin and San Antonio… But it’s mostly freight. Trains run twenty-four-seven here, carrying goods like rock from nearby quarries, cement, vehicles, intermodal freight, and more. So, when Crystal Clear Special Utility District was planning a new water transmission main that would connect a booster pumping station to a new water tower to meet the growing demand along I-35, the biggest question was this: how do you get the line across the tracks without shutting them down and trenching across? It’s only about 250 feet or 76 meters from one side to the other, but this small part of a large water transmission project takes more planning, coordination, engineering, and innovative construction than the rest of the project combined. Maybe you’ve never even wondered what it takes to move fresh water across the distances from where it’s stored to where it’s used. But, I really think you’re going to find this fascinating.

Crystal Clear and their general contractor, ACP, invited me on-site to see it happen in real-time and document the process for you! Most of the water lines are already installed, but getting this one across these tracks is going to be a different challenge. I’m your host, Grady Hillhouse, and this is Practical Construction.

There are actually a lot of ways to install underground utilities without disrupting things at the surface, collectively known as trenchless technologies. This project is using a method called horizontal earth boring, which might sound mundane, but really, it’s pretty exciting. Before any dirt gets bored, there’s a lot that has to happen first. So much can go wrong if an operation like this isn’t carried out thoughtfully and carefully. One of those risks is hitting something that’s already buried at the site, and just about every subsurface utility contractor can tell horror stories about what happens if a water, sewer, gas, fiber optic, or telephone line is severed during construction. The right-of-way along a railroad track is a common place to install linear utilities, because they can just run parallel to the tracks, avoiding the complexity of dealing with multiple property owners and obstacles. The owners of all the utilities that run along these tracks have already been out to mark their location using spray paint on the ground and flags. But, that’s not enough to make sure they are avoided. Before the drill can get started, a vacuum excavation crew comes to the site to confirm their location not just along the ground, but how far each one is below it.

This truck has an enormous vacuum that sucks up soil as it’s blasted loose by a pressure washer. The benefit of a vacuum excavator is that, although the water is strong enough to dislodge and excavate soil, it’s not strong enough to damage the utility lines below. Compare that to using a hydraulic excavator with a bucket where one wrong move could rip a pipe or cable out like a wet noodle. It also disturbs a lot less of the area at the surface, so this process is often called potholing. It’s a crucial step if the margins are tight when avoiding existing utilities, like they are on this site. For each utility, the vacuum excavator locates the exact position and depth of the line so that it can be marked by a surveyor and compared to the proposed alignment of the bore. And there’s hardly any mess once the process is done. On this site, there are lines both above and below the proposed bore, so the drilling contractor will be threading a needle.

Safety is also critical, especially when working around railroads and trains. Since this job requires people on the tracks and construction below them, there’s a specialized crew on site who coordinates between the Union Pacific dispatchers, train engineers, and crews on site to make sure no one gets hurt. They’ve established a specific zone along the tracks, which requires the train engineers to check in with them first before any train gets near the work. When a train is on the way, the safety crew sounds a horn, and everyone on site stops working and gets clear of the tracks. Once the train is past, work starts right back up.

The process of horizontal earth boring, also known as jack-and-bore, starts with an entrance pit. Unlike some trenchless methods that can curve down and back up again from the surface, this waterline needs to be as straight and precise as possible. So you have to start underground. This enormous excavation is where almost all the work will happen. And, because it’s so close both to a roadway and the railroad tracks, there’s no room to slope the sides to avoid the risk of a collapse. Instead, huge steel trench boxes are installed in the pit to shore it up and keep it from collapsing or affecting the adjacent structures. Once the trench boxes are installed, the boring machine can be lowered into place. And before long, it’s up and running, or I guess you could say it’s down and running.

In practice, horizontal earth boring is relatively straightforward. The boring machine really only has two jobs: excavating the soil and advancing the casing pipe. For the first job, it uses a string of augers that connect to a boring head. It’s just an oversized drill bit. As the auger turns, the boring head breaks up the soil ahead of the casing pipe, and the flights draw the cuttings back toward the pit. The cutting head has wings that open when rotated in one direction. Those wings extend just slightly beyond the edges of the casing pipe, over-excavating the bore hole to minimize the friction of pushing the casing pipe forward. The soil cuttings from the boring are discharged from the side of the machine into a pile in the pit. Every so often, they have to be removed. The excavator at the surface uses a clamshell bucket to scoop the cuttings out of the pit and stockpile them nearby. They’ll eventually be disposed of off-site or used as backfill.

The machine’s second job is to advance the casing pipe into the bore. This pipe provides support to the hole to keep it from collapsing and prevent the overlying soil from shifting or settling over time. The boring machine sits on tracks. The back of the machine uses a hydraulic ram attached to a locking system that affixes to the rails. The ram provides thrust, pushing both the machine and the casing pipe forward with the tremendous force required to advance it through the ground. Newton’s third law is in play here. To provide that thrust to the casing, the machine needs something to react against. So, those tracks have been firmly concreted into the bottom of the entrance pit to make sure it’s the machine that moves and not the tracks.

Of course, every contractor knows as soon as you start making good progress, it’s going to rain. Water flows downhill, and this pit is the lowest spot of ground on site. But the crew doesn’t let it slow them down too much. The concrete bottom in the pit helps keep things from turning into a muddy mess, and an electric pump makes pretty quick work of the water that gets in. Tarps over the top of the pit also help keep it dry, if also making it a little tough to film the work inside.

Railroad operators are rightly strict about the what, where, when, and why when it comes to construction on their rights-of-way. Disrupting the movement of freight and passengers is simply not an option. So an essential part of this operation is continuous monitoring to make sure the boring is not affecting the tracks above. A surveying crew comes to the site every six hours to carefully measure for any changes in elevation along the tracks. They’ve installed these reflective markers and use a piece of equipment called a total station that can precisely pinpoint the position of each length of rail. They process the data as it comes in and compare it to the baseline measurements. If they notice any settling or movement, everything would have to stop (but, spoiler alert, they never did).
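
The check itself is simple arithmetic: compare each survey round against the baseline and flag anything beyond a tolerance. Here’s a minimal sketch; the 6-millimeter threshold and the elevations are made up, since the railroad specifies the real limits.

```python
# Minimal settlement check against baseline rail elevations. The 6 mm
# tolerance and all elevations are made up for illustration.

baseline = {"marker_1": 101.352, "marker_2": 101.348, "marker_3": 101.355}  # meters
survey   = {"marker_1": 101.351, "marker_2": 101.341, "marker_3": 101.355}

TOLERANCE = 0.006  # meters of allowable movement (assumed)

for marker, elevation in survey.items():
    delta = elevation - baseline[marker]
    status = "STOP WORK" if abs(delta) > TOLERANCE else "ok"
    print(f"{marker}: {1000 * delta:+.1f} mm  {status}")
```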

Another requirement from the railroad is that this work happens nonstop. They don’t want an open excavation sitting idle below the tracks, so they require that the boring happen continuously night and day. The longer it takes to get this casing pipe to the other side, the more opportunity for something to go wrong. The boring contractor works in double shifts. When one crew leaves, there’s already another one to take their place, so the site is never unattended.

Once one segment of casing pipe is pushed as far as it can go, the boring machine is pulled to the back of the pit. A new segment of pipe is collected from the stack. And, it’s lowered in. The next length of the auger is already inside. The auger is attached to the string. And then the casing segment is welded to the end of the previous one.

Segments go in faster at first, but each one takes a little bit longer than the last. That’s because, every two or three segments, they have to check and make sure the bore is following the right path. There are utilities to avoid, dimensional tolerances from the railroad, and location requirements from the engineer and property easements. So, having the alignment wander is not an option. Every so often, the crew has to remove the entire auger string from the bore to make sure it’s headed in the right direction. The way they do it might unnerve you, especially if you’re claustrophobic: they just send a worker on a skateboard to the end of the casing pipe. There are more sophisticated tools, but some contractors prefer the old-school, reliable method, and they have a slew of safety measures in place as required by OSHA, including ventilation, communication, and safety spotters. The person inside the pipe uses a rule to check for any deviations in grade from the precision laser installed in the bore pit. But, what happens if the bore gets off alignment?

Horizontal earth boring is not a very “steerable” operation, but there is some opportunity to make corrections if they’re needed. Take a look back at the first length of the casing pipe. Notice the shoes cut from each quadrant of the pipe. If the bore starts to deviate, a hydraulic jack can be used to bend one or more of the shoes outward and deflect the operation back into alignment. You’re not going to turn a corner this way, but it gives some control over alignment and grade. It’s why it’s so critical that the first length of casing pipe be installed perfectly; all the rest of the casing will follow right behind it.

The operation runs night and day. The machine bores and pushes each length of casing pipe. Soil is removed from the bore and then the pit. Alignment is checked. The auger string is re-inserted. A new length of casing is welded on. Rinse and repeat. All the while, trains are running constantly back and forth along this busy corridor. When the drilling crew starts getting toward the end of the line, an excavator arrives to dig the receiving pit. And, after just about a week of boring 24/7, the cutter breaks through on the other side. Even the guys who do this every day gathered around to watch it happen. It’s a perfect sight, especially given that they broke through in the exact spot they were aiming for.

Only a few days later, it was time to push the water pipe through. The casing’s job is just to hold the bore open, but the water will run in rated plastic pressure pipe. These pipes connect using a bell-and-spigot design; they literally push together. A fiberglass rod is hammered into a groove around the inside of the spigot to lock each segment together. Spacers are installed to hold the line up off the casing to keep it from rubbing during installation or being damaged over time. Just like the boring, the pipes are lowered into the entrance pit, attached, and pushed through to the other side (although, this operation goes quite a bit faster). In some projects, the annular space between the casing and pipe is grouted in, but in this job they opted to keep the space open. It was a ton of work and coordination to get this line under the railroad, so if it ever breaks or leaks, Crystal Clear will be able to pull it out and repair or replace it. This line will be tied into the pipes already installed on either side of the bore, leak-tested, and backfilled, but the hard part is over. It won’t be long before it’s pressurized and put into service, moving fresh water to this quickly growing area in central Texas, quietly and invisibly meeting a crucial need. And not a single train was delayed while it went in.

Huge thanks to Crystal Clear Special Utility District, ACP, and their subcontractors for having me on their site.

February 20, 2024 /Wesley Crump

Why Locomotives Don't Have Tires

February 06, 2024 by Wesley Crump

[Note that this article is a transcript of the video embedded above.]

Formula 1 is, by many accounts, the pinnacle of car racing. F1 cars are among the fastest in the world, particularly around the tight corners of the various paved tracks across the globe. Drivers can experience accelerations of 4 to 5 lateral gs around each lap. That’s tough on a human body, but think about the car! 5 times gravity is about 50 meters per second… per second, and an F1 car weighs 800 kilograms (or 1800 pounds). If you do a little quick recreational math, that comes out to a force between the car and the track of more than 4 tons. And all that force is transferred through four little contact patches below the tires.
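If you want to check that recreational math yourself, it’s just Newton’s second law. Here’s the arithmetic in a few lines of Python, using the round numbers from above (not official F1 specs):

```python
# Rough lateral force on an F1 car in a 5 g corner.
# The 800 kg and 5 g figures are the round numbers quoted above,
# not official specifications.
g = 9.81           # gravitational acceleration, m/s^2
mass_kg = 800      # car (plus driver), kg
lateral_g = 5      # peak cornering acceleration, in multiples of g

force_n = mass_kg * lateral_g * g        # F = m * a
force_tonnes = force_n / (1000 * g)      # convert newtons to tonnes-force

print(f"{force_n:,.0f} N, or about {force_tonnes:.1f} tonnes-force")
# -> 39,240 N, or about 4.0 tonnes-force
```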

Traction is one of the most important parts of F1 racing and the biggest limitation of how fast the cars can go. Cornering and braking at such extreme speeds requires a lot of force, and all of it has to come from the friction where the rubber meets the road. Pirelli put thousands of hours of testing and simulations into the current design. Nearly a hundred prototypes were whittled down to 8 compounds: two wet tires and six slicks of various levels of hardness that offer teams a balance between grip and durability during a race.

And yet, when you look at another of the most extreme vehicles on earth, you see something completely different. A single modern diesel freight locomotive can deliver upwards of 50 tons of forward force (called tractive effort) into the rails, but it’s somehow able to do that through the tiny contact patches between two smooth and rigid surfaces. It’s just slick on slick. It seems impossible, but it turns out there’s a lot of engineering between those steel wheels and steel rails. And I’ve set up a couple of demonstrations in the garage to show how this works. I’m Grady, and this is Practical Engineering. In today’s episode, we’re talking about why locomotives don’t need tires.

In a previous episode of this series on railway engineering, I talked about how hard it is to pull a train based on the various aspects of grade, speed, and curves. I even tried to pull a train car myself. The whole point of locomotives is to overcome that resistance, to take all the force required to pull the train and deliver it to the tracks to keep the whole thing rolling. Most modern freight locomotives use a diesel-electric drive. The engine powers a generator, which powers electric traction motors that drive the wheels. There are a lot of benefits to this arrangement, including not needing a super complicated gearbox to couple the engine and wheels. But, even with electric traction motors, locomotives are still limited by the power rating of those motors, and power is the product of force and velocity. So if you graph the speed of a locomotive against the force it can exert on a train, you get this inverse relationship. But, this isn’t quite right. Of course, there are physical and mechanical limits on how fast a train can go, so the graph gets cut off there, but there’s another limitation that governs tractive effort on the slow side. Even if the motors could generate more force at slow speeds (and they usually can), the friction between the rails and wheels limits how much of that force can be mobilized (called the adhesion limit).
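To make that relationship concrete, here’s a minimal sketch in Python of a tractive effort curve with both limits applied. Every number is a placeholder I chose for illustration, not a spec for any real locomotive:

```python
# Tractive effort vs. speed: power-limited at high speed,
# adhesion-limited at low speed. All figures are illustrative
# placeholders, not real locomotive specifications.
power_w = 3.0e6        # assumed traction power delivered to the rail, ~3 MW
weight_n = 1.9e6       # assumed locomotive weight, ~190 tonnes
adhesion = 0.30        # assumed wheel/rail friction coefficient, dry rail

adhesion_limit_n = adhesion * weight_n   # max force before the wheels slip

def tractive_effort(speed_ms):
    """Force available at the wheels at a given speed, in newtons."""
    if speed_ms <= 0:
        return adhesion_limit_n              # flat at the limit near standstill
    return min(power_w / speed_ms,           # power-limited region: F = P / v
               adhesion_limit_n)             # adhesion-limited region

for v in [1, 2, 5, 10, 20, 30]:              # speeds in m/s
    print(f"{v:>3} m/s -> {tractive_effort(v) / 1000:>5.0f} kN")
```

Below about 5 m/s in this made-up example, the curve flattens at the adhesion limit; above it, the force falls off as power divided by speed.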

The graph makes it clear why this is such a major challenge for a railroad: you can’t even use the full power of the engine because you’re limited by the friction at the wheels. It’s why dragsters do a burnout before the race: to warm up the tires for more friction. I was reading this Federal Railroad Administration report, and I love that it called friction the “last frontier” of vehicle/track interaction; it’s just so important to nearly every aspect of railway engineering. The lack of friction is really the reason railways work in the first place: it means the rolling resistance of enormous loads can be overcome by relatively tiny locomotives. But, of course, some friction is necessary so that trains can accelerate and brake without slipping and sliding on the rails. There are alternatives, like the cog railways that carry trains up steep mountains, but most freight and passenger trains use simple “adhesion” for traction; just the steel-on-steel friction and nothing else. The area that’s physically touching between a wheel and rail, called the contact patch, is roughly the size of a US dime: maybe 2 to 3 square centimeters or half of a square inch. Imagine gluing a dime to the wall and then hanging two average-sized cars from it. That’s a loose approximation of the traction force below each wheel of a locomotive; it’s a lot of friction!

Incredibly, friction really boils down to two numbers, one that’s simple (weight, or more generally, the normal force between the two surfaces), and a coefficient that’s a little more complicated. Let me show you what I mean. I have a little demonstration set up here in the garage. It’s just a sled attached to a spring scale. I can add a weight to the sled, and then slide different materials underneath. The reading on the scale is the kinetic friction between the materials. Even if the weight stays the same, the force changes because every material interacts differently with the steel sled, and this can get super complicated: asperity interlocking, cold welding, modified adhesion theory, interfacial layers, et cetera. I’m not going to get into all that, but it’s important to engineers who think about these problems. All that complexity gets boiled down into a single, empirical value called the coefficient of friction. Double the coefficient; double the friction. And the same is true of the normal force. If I double the weight on the sled, I get roughly double the reading on the scale for each of the materials I pulled underneath it.

In some ways, it really is that straightforward. You have two knobs to manage tractive effort: the weight of the locomotive and the friction coefficient. But you don’t always have a lot of control over that second knob. Environmental contaminants like oil, grease, rust, rain, and leaves lower the coefficient of friction, making it harder to keep the wheels stuck to the track. So you kind of just have the one knob to turn. Very generally, the math looks like this: You look at the steepest section of track where the highest tractive effort is required and divide that force by the “dispatchable adhesion,” a complicated-sounding term which is really just the friction coefficient that you can count on for the specific locomotive and operating conditions. Maybe it’s 30% for a modern locomotive on dry rail or 18% for an older model on a frosty winter morning. Now you have the total weight needed to develop that tractive effort. For longer and heavier trains, you can’t just use a single massive locomotive, because there are limits to the weight you can put on a single wheel before the tracks fail or you damage a bridge. That’s why many large freight trains use two, three, four, or more locomotives together.
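As a hedged sketch of that sizing math (every value here is invented for illustration, not pulled from any real railroad’s standards):

```python
import math

# Back-of-the-envelope locomotive count from required tractive effort.
# All numbers are illustrative assumptions.
required_effort_kn = 1200      # effort needed on the steepest grade, kN
dispatchable_adhesion = 0.30   # friction coefficient you can count on
axle_load_limit_t = 32.5       # max tonnes per axle the track allows
axles_per_loco = 6             # a common freight arrangement

g = 9.81
# Weight on driven wheels needed to mobilize the required friction force:
required_weight_t = required_effort_kn / (dispatchable_adhesion * g)

# Track and bridge limits cap how heavy one locomotive can be:
max_loco_weight_t = axle_load_limit_t * axles_per_loco

locos = math.ceil(required_weight_t / max_loco_weight_t)
print(f"~{required_weight_t:.0f} t needed on drivers -> {locos} locomotives")
# -> ~408 t needed on drivers -> 3 locomotives
```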

But, that friction coefficient isn’t set in stone. You do have some control there. Ever since the days of steam locomotives, sandboxes have been used to drop sand on the tracks to increase the friction between wheels and rails. If you look closely, you can sometimes see the pipes that deliver sand in front of the wheels. Some railways use air, water jets, chemical mixtures, and even lasers to clean the rails, carry away moisture, or just generally increase control over wheel/rail friction. And there’s another way to turn that knob that’s a little tricky to understand, because there’s really not a hard line between a wheel sticking to a rail through friction and a wheel sliding on it from not enough. Actually, all locomotive wheels under traction exist somewhere in between the two! Let me show you what I mean.

Even though both locomotive wheels and rails are made from hardened steel, that doesn’t mean they’re infinitely stiff. Everything deforms to some extent. But, it would be pretty tough to show the deformation of a steel-on-steel surface under hundreds of thousands of pounds in a garage demonstration, so I have the next best thing: a rug and a circular brush that spins on a shaft. This brush simulates a locomotive wheel, and right now, it can spin freely. So, when I pull the rug underneath it, nothing unexpected happens. There’s essentially no traction here. The force between the brush and the rug (representing a wheel on a rail) is negligible, and there’s no slip. The brush turns at the same rate as the rug moves. But I can change that.

I have a little homemade shaft brake made from a camera clamp, and I can tighten the clamp to essentially lock up the rotation of the brush. Now when I pull the rug under the wheel, it’s noticeably more difficult. The brush is applying a strong traction force to the surface, and also, it’s completely slipping. The relative movement between the wheel and the rail is basically infinite, since the wheel isn’t moving at all. Again, maybe this isn’t too surprising of a result. What’s interesting, I think, is what happens in between these two conditions. If I loosen the clamp so that the brush can rotate with some resistance and pull the rug through again, watch what happens.

The bristles deform as the brush rolls along. They’re applying a traction force, even as the brush rolls. If you look closely, the bristles stick to the rug at the front, but at a point within the contact area, they lose that connection to the rug and slip backwards. And this is exactly what happens to locomotive wheels as well. The surface layer of the wheel is stretched forward by the rail, but toward the back of the contact area, there’s not enough adhesion, and they separate as the elastic stress is released. The stick and the slip happen simultaneously. What’s fascinating about this behavior is that the locomotive wheels actually spin faster than the locomotive is moving along the rails, an effect called creep. And the brush makes it obvious why. The bristles in contact with the rug are flexing, making that part of the wheel rim essentially longer. So the wheel has to turn faster to make up for the difference, or in this demo (since the brush is static), the rug has to travel a greater distance for the same amount of rotation. I can make this clearer with a bit of tape.

With the brake off and no traction, I can pull the rug through and mark the length the rug traveled for half a rotation of the brush. Now, with the brake on, I can pull the rug through again. And you see that the rug traveled a longer distance, even though the brush rotated the same amount as before. If we graph the behavior of a wheel across these various conditions, you get something like this. With no traction, there’s no slip, and so there’s also no creep. But as traction goes up, a bigger part of the contact patch is slipping, and so its relative motion to the track, its creep, goes up. Eventually you reach a point where the entire contact patch slips, and the traction force levels off. You can spin and spin, but you’ll never develop more force.
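By the way, that “greater distance for the same rotation” has a tidy definition. Creep (or creepage) is usually expressed as the normalized difference between the wheel’s surface speed and the vehicle’s actual speed. A minimal sketch, with made-up numbers:

```python
# Longitudinal creepage: how much faster the wheel rim moves than the
# train itself, as a fraction of train speed. Numbers are made up
# purely to illustrate the definition.
def creepage(wheel_surface_speed_ms, train_speed_ms):
    """Creep as a fraction of the train's speed."""
    return (wheel_surface_speed_ms - train_speed_ms) / train_speed_ms

# A wheel rim moving at 20.2 m/s on a locomotive traveling at 20.0 m/s:
print(f"{creepage(20.2, 20.0):.1%}")   # -> 1.0%
```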

Of course, that graph is a theoretical situation under ideal conditions. Your intuition might be telling you that a wheel that’s fully sliding on the rail has less traction than one that has at least some stick, and you’d mostly be right. For lots of materials, the “dynamic” friction coefficient when something is sliding, like my little sled demo, is less than the coefficient of friction when there’s no relative movement. That gives rise to this effect called stick-slip, where you get oscillation between sliding and sticking. A violin bow is a great example: the hairs of the bow stick, then slide, along the string, causing it to vibrate and create beautiful music.

On a locomotive, it’s less desirable. Stick-slip can lead to corrugation of the rail and unwanted noise. It was a notorious problem for steam locomotives because the traction force at the wheel rim was always fluctuating. But the other effect this difference in static versus dynamic friction creates is that the traction versus creep curve in the real world often looks more like this. There’s a maximum in there, and if you go past it toward greater slip, you get a lot less traction.

And that’s the trick many modern locomotives take advantage of. Sophisticated creep control systems can monitor each wheel individually and vary the tractive force to try and stay at the peak of that curve. Eking out a few more percentage points on the friction coefficient means you can take better advantage of your power, and sometimes even use fewer locomotives than would otherwise be required, saving fuel, cost, and wear and tear.
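I don’t know the details of any manufacturer’s creep control software, but the core idea can be sketched as a simple hill-climbing loop: nudge the creep setpoint, and keep nudging in whichever direction increases the measured tractive force. A toy version in Python, with an invented traction curve standing in for the real wheel/rail behavior:

```python
# A toy "peak-seeking" creep controller. Real creep control systems are
# far more sophisticated; this only illustrates the hill-climbing idea.

def traction(creep):
    """Invented stand-in for the traction-vs-creep curve; peaks at 0.2."""
    return 2.0 * creep - 5.0 * creep ** 2

setpoint = 0.05                  # initial creep setpoint
step = 0.01                      # how far to nudge each iteration
last_force = traction(setpoint)

for _ in range(50):
    setpoint += step
    force = traction(setpoint)
    if force < last_force:       # overshot the peak: reverse direction
        step = -step
    last_force = force

print(f"settled near creep = {setpoint:.2f}")   # hovers around 0.20
```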

All that complexity, and you still might be wondering, why all the trouble when you could just use a different material with a higher friction coefficient, like the rubber tires on cars? And the answer is just that everything comes with a tradeoff. Some passenger rail vehicles do use rubber tires, and some locomotives have steel “tires” that can be removed and replaced. But I think those F1 tires are a perfect analogy. You generally use the soft sticky ones when you want to gain track position and switch to the harder, more durable tires to maintain position without losing too much time in the pits. But pit stops for freight trains are pretty expensive. If you keep following that logic to more and more durable tires that can carry multiple tons of weight across hundreds of thousands of miles, you just end up with a steel wheel on a steel rail, and you find other ways to get the traction that you need.

February 06, 2024 /Wesley Crump

How The Channel Tunnel Works

January 16, 2024 by Wesley Crump

[Note that this article is a transcript of the video embedded above.]

2024 marks thirty years since the opening of the channel tunnel, or chunnel, or as they say in Calais, Le tunnel sous la Manche. This underground/undersea railroad tunnel connects England with France, crossing the narrowest, but still not that narrow, section of the English Channel. The tunnel allows passengers (and, in many cases, their cars, too) to cross the channel in just over half an hour at speeds as high as 99 mph! While there are longer tunnels out there, this is the longest underwater tunnel in the world.

When it was proposed in the mid-1980s, it was set to be the most expensive construction project ever, and like so many mega projects, it went way over budget and opened a year late. But unlike many megaprojects, this one was funded entirely by private investors. That’s a good thing, too, because it hasn’t exactly been a mega-financial success. The BBC once said that, "Depending on your viewpoint, the Channel Tunnel is one of the greatest engineering feats of the 20th Century or one of the most expensive white elephants in history.”

Elephant or not, the tunnel is legendary among engineers, and in light of the 30th anniversary, I thought it was about time I dug into it. It is a challenging endeavor to put any tunnel below the sea, and this monumental project faced some monumental hurdles. From complex Cretaceous geology to managing air pressure, water pressure, and even financial pressure, there are so many technical details about this project that I think are fascinating. I’m Grady, and this is Practical Engineering; today, we’re talking about the channel tunnel.

[musical transition]

The idea of building a permanent connection between England and France across the English Channel isn’t a new one. An engineer way back in 1802 came up with a plan for a horse-and-buggy tunnel intended to be lit with oil lamps, featuring an artificial island midway for horse changes, and some pretty scary ventilation chimneys. Needless to say, that idea didn’t go anywhere. In 1882, another proposed tunnel got a bit further, a few kilometers further, in fact. Several thousand meters of tunnel were actually dug before political pressures regarding the fear of future potential invasions killed the project. By the 1970s, another attempt to build a tunnel broke ground, but that project fell through, too. It wasn’t until the mid-1980s that the proposal for the tunnel as we know it was accepted and work began in earnest. A handful of other proposals were also considered at the time, including an even more ambitious project featuring an absolutely enormous suspension bridge 70 meters above the sea using an exotic fiber called parafil and carrying traffic within huge concrete tubes.

Unsurprisingly, this monster bridge did not get selected. The plan for an underground electric railroad connection won, and work began on the channel tunnel. But it’s not so much a tunnel as three separate tunnels with a variety of connections between them. There are two main railway tunnels, each with one-way service across the channel, and a third service tunnel that runs between the two. All three tunnels began on either side of the channel and, pretty impressively, met in the middle, deep under the sea bed, with an offset of less than two feet. The builders were even able to incorporate some of the work of those previous failed attempts.

The accuracy of this dig is even more impressive when you consider that the tunnels aren’t level or straight. The geology of the English channel is, putting it mildly, a bit complicated. There are layers of different kinds of sedimentary formations, and the project was designed to follow the path of a layer known as chalk marl, although some geologists call it marly chalk. This layer was less permeable and had fewer cracks and fissures than the overlying material. But that doesn’t mean there were NO fissures. The marly chalk was the best option for tunneling under the channel, but it was still far from simple.

In some ways, those past proposals and attempts to build the channel tunnel failed because the technology just hadn’t reached the level to make a project like this feasible. But by the 1980s, one piece of equipment had made huge strides in efficiency and safety. With the creative flair you’d expect from any civil engineer, they are aptly named: Tunnel Boring Machines, or TBMs. Drilling is just one of the multitude of jobs that happen in a tunneling operation, and TBMs manage to combine and accomplish them all in one massive and incredibly complicated machine.

There are lots of different styles and sizes of TBM, and the channel tunnel used a total of eleven separate machines to finish the job. Most of us are familiar with the process of drilling a hole, but doing it through soil and rock, underwater, across a vast distance, as you can imagine, adds some nuance to the process. For one, there are no drill bits that extend for miles, so the whole machine has to fit inside the tunnel it’s creating. For two, there are no big hands to push at the back of the drill. Instead, tunnel boring machines grip onto the tunnel walls and use hydraulic cylinders to provide the thrust forces needed to advance forward. For three, except in the most ideal circumstances, the hole of a tunnel is always trying to collapse. TBMs use a cylindrical shield at the front to support the walls of the tunnel until they can be permanently lined with cast iron or concrete and sealed with grout for strength and water resistance.

Also, there’s pressure. The soil, rock, and water deep below the ground are under immense pressure. When you try to excavate, especially in softer soils like those encountered on the French side of the project, the ground has the potential to collapse or flood the operation. Many of the TBMs used in the channel tunnel project were called earth pressure balance machines. Here’s how they work: The rotating cutter head chews through rock and soil, allowing it to pass through openings into a chamber behind where it is mixed into a pliable paste. As the machine moves forward, the pressure in the excavation chamber builds to match the earth and water pressure on the tunnel face, supporting it against collapse and preventing uncontrolled inflow of water. A screw conveyor creates a controllable plug. Its speed is carefully adjusted to remove only enough of the cuttings to maintain this balance.
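As a loose analogy (and emphatically not any real TBM’s control software), you can think of the screw conveyor as the output of a feedback loop that holds chamber pressure at a target. A toy sketch with invented numbers:

```python
# Toy feedback loop for an earth pressure balance machine: run the screw
# conveyor faster when chamber pressure climbs above the target, slower
# when it falls below. Purely illustrative; all values are invented.
target_bar = 3.0          # assumed earth + water pressure at the face
pressure_bar = 2.5        # current chamber pressure
balance_speed = 2.0       # screw speed that exactly matches excavation
                          # (assumed known here, for simplicity)

for step in range(8):
    # Proportional control: correct the screw speed based on the error.
    screw_speed = max(0.0, balance_speed + 5.0 * (pressure_bar - target_bar))
    # Advancing adds pressure; the screw conveyor relieves it.
    pressure_bar += 0.2 - 0.1 * screw_speed
    print(f"step {step}: screw {screw_speed:.2f}, pressure {pressure_bar:.2f} bar")
```

The pressure converges toward the target as the conveyor settles at the speed that exactly matches the excavation rate.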

Even that wasn’t enough in some cases. Water flowing into the excavated tunnel was a constant problem, making it difficult to work and damaging equipment. In many cases, the crews would inject grout into the rock ahead of the machine, effectively making it stronger before drilling through it. Imagine trying to drill a hole through a big bag of rocks and water. The drill bit would be easier to push, but it sure would make a mess. Grouting the rock ahead of the operation made it more physically challenging to drill through, but it simplified the process considerably. There are so many examples like that, where tunneling knowledge and experience improved drastically, just from running into problems and using trial-and-error to solve each one.

Most TBMs come with a train of equipment to support and power the operation behind the cutter head and lining systems. Each machine is basically its own factory with a workshop, cranes, transportation facilities, and more. And like any factory, you need a way to get materials and people in and out. Workers, lining segments, equipment, and materials travel to the machine from the entrance of the tunnel, often over miles on a temporary railway. And all the excavated spoils have to travel the same distance, often on conveyor belts, in the opposite direction. On the French side of the Channel Tunnel, the spoils were pumped as a wet slurry to a nearby area known as Fond Pignon. On the British side, the spoil was used to construct an extra 111 acres of new England. Well, not New England, but a portion of England that was new. This is now the site of the UK side’s cooling plant, but also a new nature reserve called Samphire Hoe.

Keeping the tunnel headed in the right direction was another challenge. For one, they needed to stay in the right geological layer to reduce the challenges of drilling through unstable ground. Of course, engineers had mapped the geology ahead of time but only using core samples from the surface. Those cores only provide a thin, tiny snapshot of what lies below, like trying to navigate a car by looking through a paper towel tube. And for two, they were drilling from both directions with the goal of meeting in the middle. The TBMs were guided with a sophisticated laser system to keep them on track as they tunneled through the marly chalk. Without a direct line of sight to the surface, surveyors had to set benchmarks along the tunnels with extreme accuracy. Any error in the measurements would propagate, since there was no way to “close the loop.” Crews also regularly took core samples, horizontally and vertically, along the way to keep the tunnel within the target geologic layer.

One of the ingenious parts of the channel tunnel design was for the service tunnel to lead the rest of construction. In a way, this tunnel was the pilot. It was a way to explore the geology with less risk, encountering the challenges on a smaller scale before making progress on the main tunnels. It was also a way to confirm the guidance and ensure that the tunnels were aligned properly when they met in the middle, which, to the relief of many, they famously did in 1990. For the first time since the ice age, there was a dry-land route from mainland Europe to Great Britain. Several of the TBMs were left and buried underground after they finished, since the cost of getting them out was too high. Now they serve as an electrical earth connection.

Connecting a hole in the ground all the way across the channel is only part of the story, though. Many more engineering challenges lay ahead. As I mentioned, there are three tunnels: two large, one-way rail tunnels with diameters of 7.6 meters (nearly 25 feet) and a 4.8-meter (16-foot) diameter service tunnel running between them. But those aren’t the only tunnels. There are two enormous crossover caverns where the two rail tunnels merge. During normal operation, gigantic steel doors keep the two sides separated, but they can be opened, allowing trains to cross over from one tunnel to the other. This means the tunnel can shut down large sections without the need to fully suspend train service.

The service tunnel connects both rail tunnels every 375 meters with cross passages. These allow for emergency escape from the rail tunnels should an accident or fire occur. And they’ve been used for evacuation in several cases in the past 30 years, including fires in 1996 and 2008. The air pressure in the service tunnel is higher than that in the rail tunnels so smoke can’t travel in. There are special, rubber-tired vehicles that are kind of like miniature trains, called the Service Tunnel Transport System or STTS. Of course, passenger egress is possible with these vehicles, but they are primarily, and ideally, used for shuttling staff to various locations along the tunnel.

Another engineering problem is created by the nature of trains passing through very long tunnels. On ordinary outdoor tracks, the air in front of a train gets pushed aside fairly effortlessly by the leading face of the locomotive. In a tunnel, the train acts kind of like a big piston, driving a pressurized slug of air in front of it the whole way down the tube. The rapid fluctuations in air pressure create drag on the trains, affect passenger comfort, and mess with ventilation systems. To solve this piston effect problem, a series of 2-meter-wide connections called piston relief ducts allow for controlled passage of air from one tunnel into the other, giving that chunk of air a place to go instead of just riding in front of the locomotive the whole way. A funny part of the engineering of the tunnel was investigating whether this long tube with regularly spaced holes would function like a big flute. Thankfully, it didn't end up being an issue.

Getting fresh air along the tunnels is another concern. And here again, the service tunnel shows its value. In addition to providing access to maintenance vehicles and an evacuation route, it also acts as a duct, delivering fresh air along the length of the main tunnels, allowing the stale air to discharge at the tunnel entrances. There is also a supplementary ventilation system that can pump air directly into the rail tunnels in the event a passenger train becomes immobilized.

Along with ventilation, the tunnel also has to manage heat. The trains use electricity for traction, but some of that energy is lost as heat through inefficiencies and friction. In ordinary railroad situations, this would be no big deal since the atmosphere can easily dissipate this heat. But engineers estimated that the trains would raise the temperature in the tunnel to 122 F or 50 Celsius. So, the project also required Europe’s largest cooling system. Enormous chilling plants were built on either side of the tunnel, and miles and miles of pipes carry chilled water throughout the tunnel, keeping it at a cool 95 F, 35 C. Air conditioners on the trains bring this down to something more bearable for passenger comfort, rejecting more heat that has to be managed by the tunnel cooling system.

Of course, being a rail link between the two countries, the Channel tunnel is flanked by enormous rail terminals on either side, one in Folkestone, UK, and an even larger terminal located near Calais. There’s a shuttle that allows passengers to bring their vehicles along with them, effectively connecting the highways of France and the UK at the terminals. There’s also a passenger train service that crosses through the tunnel, and with the addition of High Speed 1, or HS1, in 2007, it is now possible to take the train from London to Paris and beyond.

The ordinary shuttle trains run on a loop, meaning that at each terminal, there is a track that goes from the exit of one tunnel, loops around, and then enters the other tunnel. In order to avoid uneven wheel wear from always turning in one direction like a NASCAR race, the French side features a crossover, which makes the whole tunnel loop into a huge figure 8. People aren’t the only cargo that passes through the channel tunnel, though. Freight makes its way as well. There are services for heavy trucks that get placed on trains, and there’s even a club car for the drivers to hang out in during passage under the channel. Full-on freight trains also pass through the tunnel, with service continuing past the terminals on either side.

Clearly, the channel tunnel is a triumph of modern civil engineering, and engineers around the world study its design and construction today. It wasn’t all something to celebrate, though. Like so many mega projects, there was a human cost to building the tunnel. More than ten workers perished in the construction of the project. Of course, it is absolutely unacceptable to trade safety for construction speed, even on the biggest construction project in the world, and after multiple lawsuits and investigations, things improved, and the remainder of the project saw far fewer safety incidents. The tunnel has also played a complicated role in illegal immigration and asylum-seeking in the UK, including some tragic incidents involving migrants.

The project also went significantly over budget, which is saying something since it was already slated to be the MOST EXPENSIVE construction project in history. I have a whole video that talks about some of the reasons projects like this end up costing more than we expect, so I won’t go into all those details here. The Channel Tunnel is unique in that it was privately funded, unlike most large infrastructure projects of its kind. The vast majority of the financial burden and risk was taken by banks and individual investors, and there was even a public offering. There aren’t many infrastructure projects that you can buy a share of. Over time, the tunnel has slowly turned a profit, but it’s been less lucrative than predicted. While it may be the most epic way to cross the English channel, it certainly isn’t the ONLY way. Discount airlines in Europe are far more prevalent than they were in the 1980s, and in many cases, it is more desirable and economical for travelers to just fly, especially if their ultimate destination is not the south coast of England or the north coast of France. Plus, for thousands of years, people have crossed the channel by sea. Ferries are still a totally viable and economically competitive way to cross. It might seem a little crazy to choose a ferry over the sense of wonder and delight that comes with passage through one of the most incredible tunnels in history, but maybe some people just like boat rides.

A lot has changed over the 30 years since the Channel Tunnel was completed. Construction technologies, of course, but transportation infrastructure as a whole has evolved as well. There’s probably a lot we would change about the channel tunnel if we could go back to those days when the project was first conceived, but actually, many would argue that perhaps it shouldn’t have been built at all. Knowing what we know now about the complexity of the job in a world of cheap flights, ferries, dynamic international relations, and 21st-century financial markets, it might be a bit harder to show that the costs would be outweighed by the benefits. But that’s part of the rub with megaprojects: it’s nearly impossible to untangle their wide-ranging impacts on the world, or to weigh the benefits they provide against an alternative where they were never built. Just last year, construction finished on a high-voltage electric interconnection between the UK and France through the tunnel, a project that may not have even been considered if the tunnel wasn’t already there. It’s easy to criticize the optimism required to justify huge, expensive projects in the face of an uncertain future, but projects like the Channel Tunnel create opportunities and benefits that permeate society in unique and often intangible ways.

I’m an engineer, so I see the achievement through a technical lens. It is, without a doubt, one of the most spectacular engineering feats of history. For me, that’s worth celebrating in its own right, from the intensive geological research leading up to the project, to the massive TBMs eating through so many miles of marl, from the creative ventilation and piston relief systems, to the unsung hero of the service tunnel. Whether or not it was a strictly practical idea, I’m glad it’s there. I haven’t had the opportunity to travel from Folkestone to Calais just yet, but if and when I do, I know how I’m getting there, and it’s not a ferry.

January 16, 2024 /Wesley Crump

How Railroad Crossings Work

January 16, 2024 by Wesley Crump

[Note that this article is a transcript of the video embedded above.]

If you’ve ever ridden a bike, driven a car, or operated pretty much any other vehicle on earth, there’s a fact you’ve probably taken for granted: you can see farther than it takes to stop. Within the span between seeing a stationary hazard and colliding with it, you have enough time to recognize it, apply the brakes, and come to a stop to avoid a collision. Your sight distance is greater than your stopping distance; it sounds almost silly, but this is a critical requirement for nearly all human-operated machines. But it’s not true for trains.

Engineers can see just as far as the rest of us, but the stopping distance of a fully laden freight train can be upwards of a mile. That means if an engineer can see something on the tracks ahead, it’s often already too late. So, trains need a lot of safety infrastructure to make up for that deficiency. For one: trains almost always have the right-of-way when they cross a road or highway at the same level, or at grade. The cars have to wait. And we use a litany of warning devices at grade crossings to enforce that right-of-way and try to prevent collisions. In most cases, these devices have to detect the impending arrival of a train and give motorists enough time to clear the tracks or come to a stop. It sounds simple, but the engineering that makes that possible is, I think, really interesting, and of course, I built some demonstrations to help explain. This video is part of my series on railroads, so check out the rest after this if you want to learn more! I’m Grady, and this is Practical Engineering. Today, we’re exploring how grade crossings work.

It’s inevitable that roads cross railroad tracks, and it’s just not feasible to build a bridge in every case. In the US alone, there are over 200,000 grade crossings where cars and trains must share the same space. A car is to a freight train what an aluminum can is to a car: in other words, there’s a pretty big disparity in weight. So we’ve put a lot of thought into how to keep motorists, cyclists, and pedestrians safe from the trains that can’t swerve or stop for a hazard. You’ve probably stopped for a train at a crossing, but you may not have consciously added up all the safety features.

Of course, the locomotives at the front of trains themselves have warning devices, including bells, bright headlights, smaller flashing ditch lights, and most noticeably, the blaring horn. The standard pattern at a crossing is two long blasts, one short blast, and one final long blast. But the crossing has warnings too. Passive warning devices don’t change with an approaching train. They include stop or yield signs, the crossbuck, which is the international symbol for a railroad crossing, and sometimes a plate saying how many tracks there are so you know whether to look for one train or many. Another crossbuck is usually included as a pavement marking to make sure you know what’s coming up. Many low-traffic crossings have only passive safety features, leaving it up to the driver to look out for trains and proceed when it’s safe. But, many crossings demand a little less margin for error. That’s when the active warning devices are installed.

A typical grade crossing features both visual and audible warning signals that a train is coming: red lights flash, a mechanical or electronic bell sounds, and usually a gate drops across oncoming lanes. That seems pretty simple, but there’s quite a bit of complexity in the task, and the consequences if anything goes wrong are deadly. And the first part is just knowing if a train is coming.

Detecting a train is important for grade signals (it's also important for signaling trains about OTHER trains, but that's a topic for another video). It can be handled in a bunch of ways, but the simplest take advantage of the electrical conductivity of the steel rails and wheels themselves. A basic track circuit runs current up one rail, through a device called a relay I’ll explain in a minute, and back down the other rail. When a train comes along with its heavy steel wheels and axles, it creates a short circuit, a preferential path for the current in the track circuit. That deenergizes the relay, triggering all the connected warning devices or signals. But why use an ordinary old diagram when you have a model tank car, and an old railroad relay you got off eBay? Let me show you how this works in a real demonstration.

On the left, I’ve hooked up a power supply to the tracks, putting a voltage between the two rails. On the right side, I’ve attached a relay. Let’s take a look inside it to see what it does. I love playing with stuff like this. At its simplest, a relay is just an electromechanical switch: a way to turn something on or off with an electrical signal. When I energize the coil (at the bottom), it acts as an electromagnet, pulling a lever towards it. On the other side of the lever, you can see the movement interacting with several electrical contacts. It’s a little tough to see here, but these contacts are like switches that can control secondary circuits. Some will be switched on when the relay is energized, and others are switched off. When the relay is energized or de-energized, it basically flips the switch on these circuits, allowing various devices, like lights, bells, and gate arms, to be activated or deactivated. In my case, I have a simple battery and LED to indicate whether or not a train is being detected on the rails.

When there’s no train, current passes through the relay from one rail to the other, energizing the coil and holding the switch open so the LED stays dark. When I put a railcar on the tracks, the circuit changes. The wheels and axles create a short circuit (or shunt), a low-resistance path for current to flow, essentially bypassing the relay. The coils in the relay de-energize, closing the switch and lighting the LED to warn any nearby tiny drivers that a train is present on the tracks. It all depends on the train giving a preferential current path, which can be a problem if there are leaves or rust on the rails. You can see how shiny and clean tracks look when they’re in frequent use. Tracks that haven’t seen a train in a day or more often impose a speed restriction on the first train just in case there is rust that could affect the track circuits along the way.

If all this circuitry seems a little convoluted to simply detect the presence of a train, it’s because of how this simple track circuit handles things going wrong. Let’s say the track circuit loses power; what happens? The relay deenergizes and falls back to the safest condition: assuming a train is occupying the tracks. Same thing if a rail cracks or breaks: the relay deenergizes and the light comes on. This is called failsafe operation, or as the engineers prefer to call it: fail to a known condition. If anything goes wrong, we want the default assumption to be that there’s a train coming because it might be true. Failsafe operation isn’t just in the track circuit but in the warning devices too. Gates are actively held up with a powered brake. If power is lost, they fall by gravity alone. And the bells and lights are usually powered by banks of batteries that can last for hours or days. Most modern train detection systems have moved to more sophisticated equipment, but relays are still used around the world because of their reliability. In fact, this is called a “vital” relay because of all the features that make it extremely unlikely to fail. You can see it acts slowly so that the inevitably noisy signal of a train shunting the tracks can’t cycle it on and off over and over; The armature assumes the de-energized position even if the spring breaks; The contacts use special materials to keep from welding together; And they’re just really robust and beefy to make sure they last for decades.
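If you wrote that track circuit behavior in software, the key detail is which state is the default. A minimal sketch of the fail-to-a-known-condition idea (my own illustration, not any railroad’s actual code):

```python
# Fail-safe track circuit logic: current flowing through the relay is the
# positive proof of an EMPTY block. A train shunt, a broken rail, a dead
# battery, and a cut wire all look identical, and all are treated as
# "occupied," which is the safe default.
def crossing_state(relay_energized: bool) -> str:
    if relay_energized:
        return "clear"       # current made it through both rails: no train
    return "occupied"        # train, broken rail, or power failure: warn!

print(crossing_state(True))   # -> clear
print(crossing_state(False))  # -> occupied (the safe assumption)
```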

But even though assuming a train is coming is the safest way to manage problems, it’s not without its own challenges. Warning devices depend on trust, and that’s an extremely tenuous confidence to ask of a motorist. We are naturally dubious of automated equipment. Every time a grade crossing activates and no train comes, that trust is eroded, making motorists more likely to drive around the gates. So failing safe isn’t enough; we also need to make sure that failure is rare. Current leaking between the tracks through water, plant growth, or debris can falsely trigger warning devices. So railroads put a lot of time into keeping tracks clean and the coarse gravel below the tracks (called ballast) freely draining to prevent water from pooling up. In addition, even though maintenance workers can manually trigger devices by shunting current across the tracks, this is done rarely to avoid impacts to road traffic.

But maybe you’ve spotted a flaw in this simple track circuit. If not, let me point it out. It’s all to do with where you put the boundaries. If the circuit is close to the crossing on either side, there’s no warning time. By the time the train is detected, the motorists wouldn’t be able to clear the intersection or come to a stop. But if the circuit extends far enough beyond the crossing to give adequate warning time, motorists will have to sit and wait well after the train is past before it comes off the track circuit and the warning devices turn off. So, instead of a single track circuit, most crossings use three: two approaches and an island. Let me show you how this works with another demo.

Now I have three track circuits set up with power going to each one. The rails are separated by a small gap to avoid an inadvertent connection across the circuits. On actual railroads, you can often identify insulated joints used to isolate the track circuits. They can be hard to distinguish if the insulating material matches the profile of the rail itself, but they’re often painted to be easy to spot. A three-circuit configuration requires a little bit of logic to decide when to turn on the warning devices and when to turn them off. So, despite the fact that I have the coding skills of a civil engineer, I put this demo together using an Arduino microcontroller. The model railroad folks are surely shaking their heads at this. You can see my LEDs as I roll the train along the tracks indicating which of the circuits is detecting the presence of a train; from approach to island to other approach. And here’s how the logic works.

When a train is detected on either approach circuit, it immediately activates the warning devices. The lights flash, bell sounds, and gates drop. As the train keeps moving toward the crossing, it’s detected on the island circuit too. The island circuit effectively takes over control of the warning devices. They’ll stay on for as long as a train is occupying the island circuit. But as soon as the island is unoccupied, the warning devices turn off (even though one of the approach circuits is still detecting a train). You can see how just a little bit of logic makes it possible to give some warning time for motorists before the train arrives at the intersection without keeping them stuck behind gates after the train has passed. But, how much warning time is enough?
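Here’s roughly that logic, rewritten in Python (the structure and names are mine, and a real crossing controller is considerably more involved):

```python
# Three-circuit crossing logic: an approach arms the warning devices,
# the island holds them on, and clearing the island releases them.
# A simplified sketch of my demo's logic, not production signal code.
def devices_on(approach_a, island, approach_b, island_was_occupied):
    if island:
        return True                   # a train is in the crossing itself
    if island_was_occupied:
        return False                  # train just cleared the island: open up
    return approach_a or approach_b   # a train is coming: start warning

# Simulate a train rolling through as (approach_a, island, approach_b):
states = [(1, 0, 0), (1, 1, 0), (0, 1, 1), (0, 0, 1), (0, 0, 0)]
was_occupied = False
for a, i, b in states:
    on = devices_on(a, i, b, was_occupied)
    print(a, i, b, "->", "ON" if on else "off")
    # Latch that the island has been occupied. (A real controller would
    # reset this once all three circuits clear, ready for the next train.)
    was_occupied = was_occupied or bool(i)
```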

In the US, the minimum requirement is 20 seconds between activation of the warning devices and the arrival of a train, but it’s typical to see 30 or 45 seconds. You might think that the more warning time the better, but it’s a balance. Too much warning time, and motorists might become impatient and drive around the gates, so more time can actually make crossings less safe. For the three-circuit example in the demonstration, the only control you have over warning time is where to start the approach circuit. The farther away from the crossing it begins, the more warning time you get. But the exact time depends on the speed of a train. Since the approach is fixed in place, a slow train will provide lots of warning time, and a fast train will provide less. And a train stopped on an approach circuit before it even reaches the crossing will hold the gates down indefinitely. So the next step in grade crossing complexity takes speed into account.
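The geometry behind that tradeoff fits in a few lines. Pick the approach length for the fastest train you expect, and every slower train gets more warning, and a longer wait, than it needs. The numbers below are illustrative, not from any design standard:

```python
# Fixed approach circuits: the warning time depends on train speed.
# All values are illustrative, not from any design standard.
design_speed_ms = 35.0    # fastest train expected, m/s (~125 km/h)
warning_time_s = 30.0     # desired warning before the train arrives

approach_length_m = design_speed_ms * warning_time_s
print(f"approach circuit starts {approach_length_m:.0f} m from the crossing")

for speed in [35.0, 20.0, 10.0]:   # actual train speeds, m/s
    print(f"{speed:>4} m/s train -> {approach_length_m / speed:.1f} s of warning")
# the slower the train, the longer motorists wait at the gates
```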

I put a little acoustic distance sensor on my Arduino so I can try to estimate the speed of an oncoming train. The large cardstock cutout just helps my sensor to ‘see’ the train a little better. The Arduino measures the distance over time, converts that to an approximate speed, and guesses how long it will take the train to arrive at the crossing. If the expected arrival time is longer than the warning time I programmed in, nothing happens. But if an arrival is expected within the warning time, the devices are activated.

You can see if I approach the intersection slowly, the gates don’t drop until I’m relatively close to the crossing. And if I speed things up, the gates drop when I’m farther away, anticipating the faster arrival of the train. In theory, this type of sophistication means that the warning time at a crossing will always be the same, no matter the speed of the train. But it doesn’t just solve that problem. If you have ever sat at a railroad crossing while a train is stopped on the approach circuit, you know the frustration it causes. A grade crossing predictor avoids the issue. You can see as I move my train toward the crossing, the devices activate assuming the train will cross. But when I stop short, the predicted arrival time goes effectively to infinity, and the controller opens the gates back up.
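Stripped of the hardware, the prediction logic in my demo boils down to something like this (my own simplified sketch, not real crossing firmware):

```python
# Grade crossing predictor logic: estimate speed from two successive
# distance readings, predict the arrival time, and only activate the
# devices if the train will arrive within the warning time.
WARNING_TIME_S = 30.0

def should_activate(dist_now_m, dist_before_m, dt_s):
    """Decide whether to drop the gates, given two distance readings."""
    speed = (dist_before_m - dist_now_m) / dt_s   # m/s toward the crossing
    if speed <= 0:
        return False       # stopped or backing away: arrival time is infinite
    eta_s = dist_now_m / speed
    return eta_s <= WARNING_TIME_S

print(should_activate(900, 910, 1.0))   # 10 m/s, 90 s away -> False
print(should_activate(250, 260, 1.0))   # 10 m/s, 25 s away -> True
print(should_activate(250, 250, 1.0))   # stopped on the approach -> False
```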

Of course, actual crossings don’t use sonar to predict the speed of a train. In most cases, they use track circuits with an alternating current. A train interacts with the frequencies of the circuit as it travels along the rails, giving the sensors enough information to detect the presence and speed. Sometimes you can even hear these frequencies since they’re often in the audible range. AC track circuits are also used for electric train systems because they are less susceptible to interference from the traction currents in the rails used to drive the trains.

Another challenge with grade crossings happens in urban areas where signalized intersections are present near the railway. Red lights form a line of vehicles that can back up across the tracks. You should never drive over a railway until you know it’s clear on the other side. But, if you’re not paying attention, it can be easy to misjudge the available space and find yourself inadvertently stopped right on top of the tracks. Traffic signals near grade crossings are usually coordinated with automatic warning devices. When a train is approaching, the signal goes green to clear the queue blocking the tracks.

Equipment for everything from the most basic track circuits to the most sophisticated systems, including relays, microcontrollers, backup batteries, and more, is usually housed in a nearby bungalow or cabin that is easy to spot. In the US, every grade crossing has its own unique identifier, and they all have a phone number to call if something isn’t working correctly. Railroads take reports seriously, so give them a call if you ever see something that doesn’t look right. If you want to see a lot of these grade crossing systems in action, check out my friend Danny’s channel, Distant Signal, for some of the best railfan videos out there. We depend on trains for a lot of things, and in the opinion of many, we could use a few more of them in our lives. Despite the hazard they pose, trains have to coexist with our other forms of transportation. Next time you pull up to a crossbuck, take a moment to appreciate the sometimes simple, sometimes high tech, but always quite reliable ways that grade crossings keep us safe.

January 16, 2024 /Wesley Crump

How Engineers Straightened the Leaning Tower of Pisa

December 19, 2023 by Wesley Crump

[Note that this article is a transcript of the video embedded above.]

Long ago, maybe upwards of 1-2 million years ago, a river in the central part of what’s now Italy emptied into what’s now the Ligurian Sea. It still does, by the way, but it did back then too. As the sea rose and fell from the tides and the river moved sediment downstream, silt and soil were deposited across the landscape. In one little spot, in what is now the city of Pisa, that sea and that river deposited a little bit more sand to the north and a little bit more clay to the south. And no one knew or cared until around the year 1173, when construction of a bell tower, or campanile (camp-uh-NEE-lee), for the nearby cathedral began. You know the rest of this story. For whatever reason, we humans love stuff like the Leaning Tower. There’s just something special about a massive structure that looks like it’s about to fall over. But you might not know that it almost did. Over the roughly six centuries from when it was built to modern times, that iconic tilt continued to increase to a point in 1990 when the tower was closed to the public for fear that it was near collapse. The Italian Government appointed a committee of engineers, architects, and experts in historical restoration to decide how to fix the structure once and for all (or at least for the next several centuries, we hope). And the way they did it is really pretty cool, if you’re into recreational geology and heavy construction. And, who isn’t!? I’m Grady, and this is Practical Engineering. Today we’re talking about the Leaning Tower of Pisa.

Five-and-a-half degrees. That was the average tilt of the tower in 1990 when all this got started. I have to say average because the tilt isn’t the same all the way up. And actually, that fact makes it possible to track the history of the lean back before it was being monitored. Construction of the tower started in 1173, and it reached about a third of its total height by 1178, when work was interrupted by medieval battles with neighboring states. When work started back up nearly a century later, the tower was already tilting. But the masons didn’t tear it down and start over; they just made one side taller than the other to bring the structure back into plumb. By 1278, the tower had reached the seventh cornice, the top of the main structure minus the belfry, when work was interrupted again. One short century later, the belfry was finally built, and again with a relative tilt to the rest of the structure to correct for the continued lean. On the south side of the belfry, there are six stairs down to the main tower; on the north side, only four. The result of all this compensation by the builders is that the Leaning Tower of Pisa is actually curved. Knowing the timeline of construction and how the tilt varies over the height of the structure allowed historians to estimate how much sinking and settling the foundation underwent over time. By 1817, when the first recorded measurement was taken, the inclination of the tower was about 4.9 degrees, and it just kept going.

The new committee charged with investigating the issue first spent a lot of their time simply characterizing the situation. They drilled boreholes and tested the soil. They estimated stability using simple hand calculations. They built a scale model of the tower and tested how far it could lean before it toppled. They developed computer models of the tower and its foundation to see how different soil characteristics would affect its stability. All of the analysis and various engineering investigations pointed toward the same result: the tower was very near to collapse. In 1993, one researcher estimated the factor of safety to be 1.07, meaning (generally) that the underlying soil could withstand a mere 7 percent more weight than the tower was imposing on it. There was basically no margin left to let the tower continue its lean. A similar tower in Pavia had collapsed in 1989, and the committee knew they needed to act quickly.

To start, they installed a modern monitoring system that could better track any movement over time, including surveying benchmarks and inclinometers. I have a video all about this type of instrumentation if you want to learn more after this. The committee also opted to take immediate temporary measures to stabilize the tower with something that could eventually be removed before developing a permanent fix. They built a concrete ring around the base of the tower and gradually placed lead ingots, about 600 tons in total, on the north side to act as a counterweight to the overhanging structure. As they added each layer of counterweights, they monitored the tilt of the tower. It was ugly, but it worked. For the first time in history, the tower was moving in the right direction. A few months after they finished the project, the tower settled into a tilt that was about 48 arcseconds or a hundredth of a degree less than before.

In fact, it worked so well, the committee decided to take it one step further. To reduce the visual impact of all those lead weights, they proposed to replace them with ten deep anchors that would pull the northern side of the tower downward to the ground like huge rubber bands. This fix didn’t go quite so smoothly. The engineers had assumed that the walkway around the base of the tower, called the Catino, was structurally separate from the tower. But what they found during construction of the anchor solution was that some of the tower was resting on the Catino. The project required removal of part of the Catino to make room for a concrete block, and when they did, the tower started tilting again, this time in the wrong direction, and fast (about 4 arcseconds per day, enough for serious concern that the tower might collapse). They quickly abandoned the anchoring plan and added 350 more tonnes of lead weights to stop the movement and focus on a permanent solution.

Engineering ANY solution to a structure of this scale with such a severe tilt is a challenge in the best circumstances. But adding on the fact that the solution had to maintain the historical appearance of the building (including leaving the right amount of lean!) made it even tougher. And after the near disaster of the temporary fix, the committee knew they would have to be extremely diligent. They ultimately came up with three ideas to save the tower. The first one was to pump out groundwater from the sand below the north side of the tower, but they didn’t feel confident that they could predict how the structure would respond over the long term. Another idea was electroosmosis.

If you’ve seen some of my other videos about settlement, you know that it’s hard to get water out of clay, and there are quite a few clever ways engineers use to make it happen faster. One of those ways involves inserting electrodes into the soil and passing electric current through it. Clay particles have a negative surface charge, so the majority of the ions in the water between the particles are positively charged. Electro-osmotic consolidation takes advantage of this by applying a voltage across the soil, causing the water to migrate toward the cathode where it can be pumped to the surface. The idea seemed promising because, by carefully choosing the location of electrodes, engineers hoped they could selectively consolidate the clay below the north side of the tower, reducing its overall tilt. They even performed a large-scale field test near the tower to shake out some of the kinks and gather data on the effectiveness of the technique. But, it didn’t work at all. Turns out the soil was too conductive, so things like electrolysis, corrosion, heat, and all the other effects of mixing electricity and saturated soil made the process pretty much useless for this particular case.

So, the committee was down to one last idea: underexcavation. If they couldn’t get the soil below the tower to consolidate, they could just take some out. And again, they would need to test it out first. So, in 1995, they built a large concrete footing on the Piazza grounds not far from the Tower. Then, they used inclined drills to bore underneath the footing and gradually remove some of the underlying soil. Guide tubes kept the boring in the right direction, and a hollow stem auger inside two casings was advanced below the footing. The outer casing stayed in place while the inner casing moved with the auger. The auger and the inner casing were advanced past the outer casing to create a void, and when they were retracted, the cavity would gently close. At first, it wasn’t looking good. After an initial tilt in the right direction, the test footing started leaning the wrong way. But the crew continued refining the process and eventually got it to work, even finding it was possible to steer the movements by changing the sequence of underexcavation. It was finally time to try it on the real thing.

Knowing the risks and uncertainties involved, the engineers first designed a safeguard system for the tower if things started to go awry. Cable stays were attached between the tower and anchoring frames. The cables could each be tightened individually, giving the engineers the opportunity to stop movement in any undesirable direction if the drilling didn’t go as planned. In 1999, they started a preliminary trial with 12 holes. And the plan went perfectly. Over the course of 5 months, the underexcavation reduced the tilt by 90 arcseconds, and after a few more months, the total correction settled in at 130 arcseconds, about four hundredths of a degree. This gave the committee confidence to move on to the final plan.

Starting in 2000, 41 holes were drilled to slowly tilt the tower upright. Over the course of a year, 38 cubic meters of soil were removed from below the tower, roughly 70 tonnes. The lead counterweights were removed. A drainage system was installed to control the fluctuating groundwater levels that exacerbated the tilt. And, the tower was structurally attached to the Catino, increasing the effective area of the foundation. In the end, the project had reduced the tilt of the tower by about half a degree, in effect reversing time to the early 1800s when its likelihood of toppling was much lower. Of course, they didn’t straighten it all the way. The lean isn’t just a fascinating oddity; it is integral to the historical character of the tower. It’s a big part of why we care. Tilting is in the Campanile’s DNA, and in that way, the stabilization project was just a continuation of an 850-year-old process. Unlike the millions of photos with tourists pretending to hold the tower up, the contractors, restoration experts, and engineers actually did it (for the next few centuries, at least).

December 19, 2023 /Wesley Crump

Why Railroads Don't Need Expansion Joints

December 05, 2023 by Wesley Crump

[Note that this article is a transcript of the video embedded above.]

One of the most common attributes folks imagine when they think of trains is the clickety-clack sound they make as they roll down the tracks. The thing is, most trains don’t make that sound anymore. Or really, I should say, most rails don’t make that sound anymore. Trains are still pretty clickety-clacky, but they’re far less so than they used to be. And here’s why: those rhythmic clicks and clacks came from joints in the tracks. Those joints were a solution to a transportation problem: you can only roll out a length of rail so long before it gets difficult to move around. It’s easier to have short segments of rail that can be bolted together in place. But, they were also a solution to a thermal problem.

You might be familiar with the idea of an expansion joint: a gap in a sidewalk or handrail or bridge deck or building meant to give a structure room to expand or contract from changes in temperature. I actually made a video on that topic a few years back. The joints on railroads were bridged by fish plates, but with a gap, so on hot days, the rail would have room to grow. But look for a joint on a modern railway, and you might have a hard time finding one.

We’re in the middle of a deep dive series on railway engineering, so don’t forget to check out the other videos after this one. A lot of new track these days uses continuous welded rail or CWR that eliminates most joints. Large structures subjected to swings in temperature that don’t account for thermal expansion and contraction can run into serious problems or even fail. So how do modern railways get away with it? I have a bunch of demonstrations to show you. I’m Grady, and this is Practical Engineering. Today we’re talking about continuous welded rail.

As much as I enjoy a good conspiracy, the railroad companies don’t have access to some kind of special steel that doesn’t expand or contract. Rails really do experience thermal contraction and expansion. In the US, they would be installed in roughly 39-foot sections. In general, tracks would be laid out so that on the hottest days, the gap between sections would just barely close. But, this style of jointed rail (although it solved some of the practical problems of railroad construction) had some serious drawbacks, too. First, it was noisy! The famous clickety-clack of railroads was caused by each wheel passing over each joint on the track. I’m a simple man. I grew up listening to that clickety clack, or as they say in Korean, “chikchikpokpok”. It brings a certain nostalgia. But when you consider how long a train is, and the fact that most cars have at least eight wheels, and that train journeys can be hundreds of miles long, that’s a lot of clicks and clacks.

The railroad companies might say too many, because noise is just a symptom. Each time a wheel clacks over a joint in the rail, that impact batters the steel, eventually wearing it down at each location. Try as they might, railroads could never make these joints quite as rigid as the rest of the rail, meaning that (in addition to the extra wear) they would create additional load on the ballast below, and the flexing would cause freight cars to rock side-to-side in a phenomenon called rock and roll. All this creates a maintenance headache, increasing the cost of keeping railroads in service. And it’s why most modern railroads use continuous welded rail: it’s a huge reduction in the maintenance costs associated with the wear and tear from joints. In CWR, rail segments are welded together using electric flash butt welding, arc welding, or in some cases, thermite welding. These welds have much higher stiffness than the old joints and, of course, are ground smooth, so there’s no more clickety-clack. But they still expand and contract with changes in temperature like most materials do. Let me show you how this works.

I’ve set up an aluminum rod on the workbench with one end clamped down and the other free to move. I put a dial indicator at the end so we can observe even tiny changes in the length of the rod. You can see on the thermal camera that we’re already starting at a fairly warm temperature; that’s Texas for you. But, rather than wait for the weather to get even warmer, I’ll speed things up with my sunny day simulator. Notice the dial on the indicator climbing steadily as the heat is applied.

This is an example of unrestricted thermal expansion. That just means nothing is keeping the rod from growing under the increase in temperature. And, engineers can predict the change in length for most materials with a pretty simple formula. Multiply the difference in starting and ending temperatures by the original length and by a coefficient of thermal expansion that’s easy to look up in a table. This aluminum rod expands by about 0.002% for every degree Celsius it increases in temperature. Steel is about half that. Structures like bridges with expansion joints and jointed rail are designed to allow unrestricted thermal expansion. When the hot day comes, the materials expand into the gap. That’s usually a good thing. The structure doesn’t build up stress, and stress is what breaks things. But, part of the reason CWR can get away from expansion joints is that changes in temperature aren’t the only way to change the dimensions of a material.
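
If you want to put numbers to that formula, here’s a quick sketch in Python. The coefficients are the rounded values I just mentioned (real alloys vary a bit), and the rail example assumes a 40-degree-Celsius swing on the 39-foot jointed rail from earlier:

```python
def thermal_expansion(length, alpha, delta_t):
    """Unrestricted thermal expansion: change in length = alpha * length * delta_T."""
    return length * alpha * delta_t

ALPHA_ALUMINUM = 23e-6  # per deg C, about 0.002% per degree like the demo rod
ALPHA_STEEL = 12e-6     # per deg C, roughly half of aluminum

# A 1-meter aluminum rod heated by 50 C grows a bit over a millimeter:
print(thermal_expansion(1.0, ALPHA_ALUMINUM, 50))     # ~0.00115 m

# A 39-foot jointed rail over an assumed 40 C swing needs about a quarter inch of gap:
print(thermal_expansion(39.0, ALPHA_STEEL, 40) * 12)  # ~0.22 inches
```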

I’ve set up another demo using that same aluminum rod. This time I put it inside this length of pipe and put a nut and washer on both sides. I put the dial indicator on the end, just like before. Now, watch what happens when I turn one of the nuts. Well, if you’re not careful, the whole rod twists. But if you can keep the rod centered in the pipe, and the nut on the other end from twisting, you can see the dial indicator registering the rod getting longer. There’s no change in temperature here; this is a totally different phenomenon: elastic deformation. Turning this nut applies a tension force to the rod, and it stretches out in response.

Just as all materials have a mostly linear relationship between temperature change and length change, all materials also have a similar relationship between stress and relative change in length (called strain). If you stress a metal too far, it will undergo a permanent (or plastic) deformation. But within a certain range, the behavior is elastic. It will return to its original length if the stress is removed. And just like the slope of the line for thermal expansion is the thermal coefficient, the slope of the elastic part of a stress/strain curve is called the elastic modulus. And this is part of the secret to continuous welded rail: restrained thermal expansion. You can overcome one with the other. Let me show you a demonstration.

Here you can see me using a hydraulic press in a way that’s not exactly how it was designed. First, I get this iron pipe set up in the press with enough pressure to hold it tight between the cylinder and table, about 3 tons. Then I heat up the pipe with the sunny day simulator. What do you think will happen? Will the hydraulic press break as the steel expands, or something else? Well, it wasn’t quite as dramatic as I was hoping, but that little movement in the gauge still corresponds to about a quarter of a ton of additional force in the hydraulic cylinder. You can kind of think of this in two separate steps: the steel expanded from the heat, but then the additional force from the hydraulic press unexpanded it back to its original size. The thermal and elastic deformations canceled each other out and the pipe stayed the same size. In reality, the force required to counteract thermal expansion should have been more than that, so I think the frame of my hydraulic press wasn’t quite stiff enough to hold the ends perfectly rigid. But you still get the point: you can trade temperature changes for stress and keep the material from changing in size. With a little recreational math, we can combine the two equations to get a single one that gives you the stress in a restricted material from a change in temperature.

So that’s just what railroads with CWR do: they connect the rail at each tie to hold it tight and restrict its movement, allowing it to build up tensile or compressive stress as its temperature changes. Of course, too much stress can fail a material, but steel can handle quite a bit before it gets close to that. Railways here in Texas can range in temperature from below freezing to over 100 degrees F or 40 C. That means every mile of steel wants to be more than 2 feet longer on the hot days than the cold ones. In metric, every kilometer of rail would expand by roughly half a meter, if it wasn’t restrained. Using the formula we developed here, we can see that fully restraining the rail across that temperature range results in a stress of about 15,000 psi or 100 megapascals, way below the tensile or compressive strength of any modern steel, especially the fancy alloys they use these days. But it’s not quite that simple, particularly for compression. Just because a material has a high compressive strength (and steel does), that doesn’t mean it won’t fail under compressive loading. Let me show you another demo.
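
Here’s that recreational math written out, with the rail numbers plugged in. I’m using typical textbook values for rail steel and a rough 40-degree-Celsius swing; real rails are never perfectly restrained, so treat these as upper bounds:

```python
E_STEEL = 200e9       # elastic modulus of steel, Pa (typical value)
ALPHA_STEEL = 12e-6   # thermal expansion coefficient, per deg C
DELTA_T = 40          # below freezing to a hot Texas day, deg C (rough)

# Fully restrained rail: the elastic stress exactly cancels the thermal strain.
stress_pa = E_STEEL * ALPHA_STEEL * DELTA_T
print(stress_pa / 1e6)    # ~96 MPa
print(stress_pa / 6895)   # ~14,000 psi, the ballpark of the figure above

# The same rail, left unrestrained, would instead grow by:
print(1000 * ALPHA_STEEL * DELTA_T)   # ~0.48 m per kilometer
print(5280 * ALPHA_STEEL * DELTA_T)   # ~2.5 ft per mile
```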

We’re back to the aluminum rod, but this time I clamped both ends to create a restricted condition. Now watch what happens when I apply the blowtorch. Our equation says the rod should build up stress so that the elastic strain is equal to the thermal expansion. But that’s not what happens. Instead, the rod just deflects sideways, an effect known as buckling. Even though aluminum is relatively strong under compression, the long skinny shape of the rod (just like the rails on tracks) is particularly prone to buckling. Obviously, if a rail buckles on a hot day, it’s a pretty serious problem. The material itself doesn’t fail, but the track does fail at being a railway, since trains need rails to be precisely spaced without crazy curves. Many train derailments have happened because a continuous welded rail got too hot and buckled, an effect colloquially known as sun kink. So railroad owners have to be really careful about compressive stress in a rail, and in the US, safety regulations require them to follow detailed procedures for installing, adjusting, inspecting, and maintaining continuous welded rail.

One of the tricks they use to manage buckling is adding restraint. I’ve got one more formula and one more demo for you. The formula for the critical force required to buckle a structural member like this is pretty simple. Notice that the critical force is inversely proportional to the square of the member’s length. This is much clearer in a demonstration. I have a length of welding wire, and I can apply a force with my finger that is measured by the scale. You can see it takes about 375 grams to buckle the rod. But watch what happens when I restrain the rod at the centerpoint, effectively halving its length. I can still buckle it, but it takes a lot more force from my finger. It happens right around 1500 grams, exactly what is predicted by the formula. Halve the length, quadruple the critical force for buckling. The spacing of railroad ties is really important because it affects whether or not a rail will buckle under thermal stress. And one of the most important jobs of all that crushed rock, called ballast, is to hold the ties in place and keep them from sliding horizontally and allowing the rail to buckle.
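
And if you want to see that length-squared relationship in numbers, here’s a sketch using the classic Euler buckling formula. The wire diameter and the pinned-end assumption are mine, not measured from the demo, so the absolute forces are illustrative; the 4-to-1 ratio is the point:

```python
import math

def euler_critical_force(e_mod, i_area, length):
    """Euler buckling load for a pin-ended column: P_cr = pi^2 * E * I / L^2."""
    return math.pi**2 * e_mod * i_area / length**2

E = 200e9                 # elastic modulus of steel, Pa
d = 1.6e-3                # assumed wire diameter, m
I = math.pi * d**4 / 64   # second moment of area for a round cross-section

P_full = euler_critical_force(E, I, 0.50)   # full wire length (assumed 0.5 m)
P_half = euler_critical_force(E, I, 0.25)   # braced at the midpoint
print(P_half / P_full)                      # exactly 4.0: halve the length, quadruple the force
```

Ties held firmly in ballast play the same role as that midpoint brace, over and over again down the track.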

The other way railroads manage buckling is, I think, the most clever: just keeping rails from undergoing compression at all. Any continuous welded rail has a neutral temperature, which is essentially the temperature it was the day it was installed. It’s the temperature at which the rail experiences no stress at all. If it’s colder than the neutral temperature, the rail experiences tensile stress, and if it’s hotter than the neutral temperature, the rail experiences compressive stress. The secret is that railroads use a really high neutral temperature to ensure the rail almost never undergoes compression. The Central Florida Rail Corridor has a neutral temperature of 105 F or just over 40 C. They only install rail on hot days, and if they can’t do that, they use heaters to bring the temperature up. And if they can’t do that, they use massive hydraulic jacks to induce enormous tensile forces in the rails before they’re welded together. On cold days when stresses are highest, they have to go out and inspect the rails to make sure they haven’t pulled apart, but a small break in a rail is nothing compared to a buckled track when it comes to the risk of derailment, so it just makes sense to use as high a neutral temperature as you can get away with.
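
You can bolt the neutral temperature idea onto the restrained-stress math from earlier. A minimal sketch, using the same typical steel properties and that 105-degree-Fahrenheit Florida figure:

```python
E_STEEL = 200e9   # Pa, typical for rail steel
ALPHA = 12e-6     # per deg C

def rail_stress_mpa(rail_temp_c, neutral_temp_c):
    """Stress in a fully restrained rail, measured from its neutral temperature.
    Positive means tension (rail colder than neutral), negative means compression."""
    return E_STEEL * ALPHA * (neutral_temp_c - rail_temp_c) / 1e6

NEUTRAL = 40.5  # deg C, roughly the 105 F Central Florida figure
for temp in (-5, 20, 40.5, 45):
    stress = rail_stress_mpa(temp, NEUTRAL)
    state = "tension" if stress > 0 else "compression" if stress < 0 else "stress-free"
    print(f"{temp} C: {stress:+.1f} MPa ({state})")
```

Run it and you’ll see big tensile numbers on cold days but only a sliver of compression on even the hottest ones, which is exactly the point of the high neutral temperature.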

Of course, you always get to the end of a continuously welded section at a bridge or an older length of jointed rail. To keep the CWR from buckling at these locations, you need something more than a small gap. Instead, expansion joints on rails (sometimes called breathers) use diagonal tapers. This oblique joint allows train wheels to transition smoothly from one section of rail to another while still leaving enough room for thermal movement. And joints are also needed to break up the electrical circuits used for grade crossings and signals. So railroads often use stiff plates surrounded by insulation material to electrically isolate two sections of rail while keeping the joint stable in the field. We’ll cover track circuits in a future video of this series on railway engineering.

Even with its challenges, continuous welded rail extends the life of rails and wheels and makes for a much smoother and quieter ride. Even if you’re nostalgic for the soothing clickety-clack of jointed rail, it’s comforting to know that railways are continuously innovating with continuous welded rail.

December 05, 2023 /Wesley Crump

Engineering The Largest Nuclear Fusion Reactor

November 21, 2023 by Wesley Crump

[Note that this article is a transcript of the video embedded above.]

This is my friend Jade, creator of the Up and Atom channel. She makes these incredible math and physics explainers that I absolutely love, and she recently got the opportunity to visit ITER (pronounced “eater”) in France. You may have seen this place in the news: 35 nations working together to build an enormous, industrial-scale nuclear fusion reactor. The size of the project is mind-boggling. It’s been under construction since 2013, and… I like construction. So, when Jade and I were chatting about her tour, she said, “Why don’t you just make a video about it too?!”

If everything goes to plan, ITER’s tokamak reactor will house plasma at temperatures in the hundreds of millions of degrees, ten to twenty times hotter than the center of the sun, hopefully paving the way for an entirely new form of electricity generation. I don’t know much about superconducting coils or cyclotron resonance heating or breeder blankets, but I do know it takes a lot of earthwork and steel and concrete to build the biggest nuclear fusion reactor on earth. So let me give you the civil engineer’s tour of what might be the most complicated science experiment in human history. I’m Grady,

Jade: And I’m Jade, and this is Practical Engineering. Today we’re exploring the ITER megaproject.

Jade: I was fairly new to fusion when I went to visit, and although I'm still no expert, I felt I should explain it rather than leave it to a civil engineer. Before we dive into the mechanics, here's a question: why is the world so interested in nuclear fusion? Basically, it comes down to the huge potential payoff if we could harness the power of nuclear fusion here on Earth.

It would be a way more powerful energy source than fossil fuels, without the environmental baggage. This water bottle full of seawater plus one gram of lithium could provide electricity to a family of four for a whole year. Unlike nuclear fission, there's no long-lived waste and no chance of nuclear meltdowns. It's a clean, sustainable and powerful energy source.

Some scientists go so far as to say that commercial nuclear fusion is the next step for humanity. That's exactly what ITER, which translates to "the way" in Latin, aims to do: nail down the technologies needed for a fully functioning commercial fusion reactor. To give you an idea of how ambitious their goal is, they plan to input 50 megawatts of thermal power and get out 500 megawatts of fusion power, a gain of ten in fusion talk.

Nothing close to this has ever been achieved or even attempted in fusion history. So how are they going to do it? Right in here. "So this is where the Tokamak is going to be built?" This is the Tokamak pit where ITER is assembling the largest nuclear fusion device in the world, a giant tokamak. Here's a man for comparison. It's going to be huge.

A tokamak is a nuclear fusion machine that works by magnetic confinement. It will hold about 840 cubic meters of piping hot plasma. Why plasma? Plasma is what the sun is primarily made of. And it has the perfect conditions for fusion. To get fusion started in the ITER tokamak, two isotopes of hydrogen, deuterium and tritium are pumped into a large donut shaped chamber.

This is just one of the six vessels that will make up the chamber. The fuel is heated to temperatures of up to 150 million degrees Celsius. When they fuse, the energy they unleash is of epic proportions. But here's a question for you engineers. How is it possible to contain so much plasma? No regular material can withstand those kinds of insane temperatures.

Imagine trying to hold onto a piece of the sun. These giant magnets produce magnetic fields of almost 12 tesla, over 200,000 times stronger than Earth's magnetic field. Plasma is electrically charged. And just like iron filings align with magnetic fields, so does plasma. How cool is that? But how does this fusion stuff actually lead to electricity? ITER itself will not actually produce any electricity.

It's our learning ground, an experimental arena to fine tune how a real reactor might operate. But in a real reactor, the walls of the tokamak will be filled with cooling fluid. When the deuterium and tritium atoms fuse, they release a neutron and a helium atom. About 80% of the energy released is carried by the neutrons and being electrically neutral, they pass straight through the magnetic field.

When these high energy neutrons strike the tokamak walls, they heat up the fluid, turning it into steam. Then, just like a regular power plant, the steam will spin turbines, which will generate electricity. But how will ITER heat the plasma to such insane temperatures? And when can we expect commercial nuclear fusion to get off the ground? Check out my video after you've finished watching Grady's and find out.

Grady: Jade’s video goes into a lot more of the groundbreaking science at ITER, but all that science requires a lot of actual breaking ground. This is a bird’s eye view of the whole facility, and this is where the Tokamak lives. So if all the nuclear fusion is going to happen in there, what are all these other buildings and structures for? Fortunately, there’s a civil engineer there in France amongst all the technicians and scientists who knows the answer, and I was lucky enough to chat with him. This is Laurent Patisson, the civil engineering and interface section leader at ITER, and he’s been there almost since the very beginning, including taking delivery of some truly massive pieces of equipment.

Laurent: “So the largest one is the vacuum vessel sector which is more or less 600 tons. And which is 600 tons, okay, 600-tons yes, on a multi-wheel truck. Very impressive. And with the protection around, it's like transporting an house, two-story house. It's very large. So all the roads are closed. They are dismantling some traffic light just for the passage, some specific display, you know...”

Laurent walked me through the whole campus, and gave me an overview of how construction is progressing across the facility. Many of those big deliveries get stored in one of the many tents scattered around the site until they’re ready to be installed, and then they move on to one of the various buildings. For example, the poloidal field coils that form superconducting magnets to help shape and contain the plasma in the reactor are just too big to be completed offsite and shipped to ITER, so instead, they built a manufacturing facility right on campus in this long building on the south side. Similarly, the cryostat workshop was built to assemble the massive, vacuum-tight structure that will surround the reactor and magnets. The cryostat parts, the poloidal field coils, and lots of other truly large pieces of equipment destined for the Tokamak itself are then moved to the adjacent assembly hall as needed. Pretty much every part of the Tokamak reactor is not only huge but sensitive to environmental conditions too, so this building makes it possible to protect, stage, assemble and install each one without having to worry about temperature or weather.

Laurent: “It’s one of the highest building and longest buildings, 120 meter long, 70 meter high, very large, 80 meter wide, and actually very large place dedicated really for assembly purpose.”

That’s about 21 stories tall and longer (and wider) than an American football field, end zones included! And maybe the most critical part of the whole building is what runs along the top of it.

Laurent: “We have two 700-tonne overhead cranes. I didn’t mention that. But those are coupled to transfer the modules, the central solenoid. So those are very impressive cranes.”

These two bridge cranes combine to become one of the largest cranes in the world with a combined capacity of 1500 tonnes needed to assemble all the parts of the tokamak. And each critical lift operation is tested and tested again with dummy loads before they do the real thing. But material and equipment aren’t the only things flowing through this project site. There’s also a lot of electricity. Imagine what your utility bill would be if your toaster got as hot as the sun!

ITER connects to the European power grid through a 400-kilovolt transmission line. During peak periods of plasma production, the facility may need upwards of 600 megawatts! That’s the capacity of a small nuclear power plant. Obviously you can’t just turn the reactor on with a flip of a switch. ITER has to coordinate with the power grid manager to carefully time the huge power draws with surrounding power plants to make sure it doesn’t cause brownouts or surges on the grid. The 400 kV line feeds a large switchyard and substation on the ITER campus. Electricity is stepped down to a lower voltage using transformers. Then it flows through busbars, cables, and breakers to feed all the various buildings and equipment.

Like many electronic devices, the superconducting magnets that surround the tokamak run on direct current, DC. So the AC power from the grid has to be rectified. For a phone or a flashlight, an AC to DC converter looks like this. But at ITER, it takes up two full buildings. The magnet power converter buildings have enormous rectifiers dedicated to each one of the magnet systems. Once energized, though, those magnets can collectively store upwards of 50 gigajoules of energy in their fields, so you also need a way to quickly get rid of that energy if the magnets lose superconductivity (called a quench). Fast discharge units, located in this building, allow ITER to dissipate that stored energy as heat in a matter of seconds. There are also a lot of critical safety systems, and maintaining the expensive and delicate equipment at ITER requires power 24/7/365. So, there are two huge diesel generators that can provide backup power in case the grid goes down.

The flow of electricity is closely tied to the flow of heat through all the parts of ITER. Really the whole thing is an experiment in heat, and there are so many ways things are being warmed or cooled throughout the campus. Of course, you have heating, ventilation, and air conditioning in all the buildings, and it’s not just for the comfort of the people working in them. Even tiny temperature swings can affect the size of these huge components, complicating the assembly.

Laurent: “What we are facing for civil is to merge, at the end, tolerances of equipment which are at the level of millimeter with tolerance of construction building which is at centimeter. And the main challenge that we face in the past and we are continuing to face is that, not to merge but to make compliant, to make compliant the tolerance scales.”

And it’s not just temperature, but humidity and cleanliness as well. So, ITER has a robust ventilation and chilling system located in the site services building along with a lot of the other industrial support systems like air compressors, water treatment, pipes, pumps, and more.

Heat matters for the electromagnets too, which have to be cooled to cryogenic temperatures so they act as superconductors. That’s made possible by the Cryoplant, a soccer-field-sized installation of helium refrigerators, liquid nitrogen compressors, cold boxes, and tanks that keep the various parts of the tokamak supercool during operation. But, although some parts of the machine have to be cryogenically cooled, to create nuclear fusion, you need to heat the plasma to incredible temperatures, and there are three external heating systems at ITER. One, called neutral beam injection, fires particles into the plasma where they collide and transfer energy. The other two, ion and electron cyclotron heating (say that three times fast), use radio waves, like huge microwave ovens. Those systems are located in the RF Heating building near the Tokamak complex.

And then there’s the matter of the heat output. The whole point of exploring nuclear fusion is to use it as an energy source, to convert tiny amounts of tritium and deuterium into copious amounts of heat. ITER’s goal is to produce a Q of ten, to get ten times as much thermal energy out as it puts into the reactor. But there’s no electrical generator on site. In a commercial fusion facility, you would need to convert that output heat to electricity, probably using steam generators like typical nuclear fission plants. That part of the process is pretty well understood, so it’s not part of this research facility. Instead, ITER needs a way to dissipate all that heat energy they hope the fusion will create. That’s the job of the water cooling system and the enormous cooling tower nearby. Water is circulated around the tokamak and then to the tower where it can reject all that heat into the atmosphere.

That brings us back to where we started, the Tokamak complex itself. That machine, once it’s finished, will weigh an astounding 23,000 tonnes, more than most freight trains. And with all the heating and cooling going on, there are some serious challenges in just holding the thing up. As the tokamak is cooled cryogenically, it shrinks, but the building stays the same size.

Laurent: “And actually, we had to find out some solution to decouple, physically, the movement of the machine and the building. And for that purpose, we designed some specific bearings allowing displacement, but keeping always the capacity to support and to restrain the machine. So it's one important thing, I could speak about that hours, because it was maybe one of the most challenging parts we had in the design of the building. The support of the machine, which is quite simple when now it is built, but to reach this robust supporting system, it took years.”

And, because, you know, this is an actual nuclear reactor, it has to follow all the safety regulations of any nuclear power plant. No one will be inside the Tokamak complex when it’s running. They’ll be nearby in a separate control building, physically distant from the reactor. And the complex itself has been engineered to withstand a host of disastrous conditions, from floods to plane crashes to explosions on the nearby highway. Like all nuclear power plants, it has a containment structure to confine any fusion products that might be released into the atmosphere in the event of an accident. And that’s made using a special concrete formula developed over two years just for this application that contains extra heavy aggregate and boron to provide radioactive shielding.

Laurent: “So you can see the dark, those are the, the aggregate with content of iron inside, okay. And the white inclusions are colemanite, okay?”

And, it’s not just thermal movement that the designers planned for, but seismic movement too. An earthquake could ruin the entire structure in an instant if the Tokamak was violently shaken, so engineers had to get creative.

Laurent: “One thing I need to mention as well, that the Tokamak complex building is built on elastomeric bearings. For seismic reason, allowing to decouple as much as possible horizontal movement of the soil with the building. And we have 493 anti-seismic bearings. The same type of bearing that you can see underneath bridges. So not large, 90 by 90 centimeters, 18 cm high, but we have a forest of plinths supporting those anti-seismic bearings, and then all the buildings are located on the anti-seismic bearings. It's incredible, incredible.”

Big thanks to the folks at ITER for taking the time to help me understand all this. I only had time to scratch the surface of all the incredible engineering involved. And, go check out Jade’s video to learn more about this awesome project; she actually got to be inside some of the buildings we showed. The civil engineering at the Tokamak building just wrapped up, but there’s a long way to go before fusion experiments start. Like all ambitious projects, this one has struggled through its share of setbacks and iterations. But with 10 times the plasma volume of any fusion reactor operating today, they’re hoping to eventually demonstrate the potential for fusion as a viable source of energy. And that might eventually change the world. Only time will tell if it happens, but it’s exciting right now to see countries across the world collaborating on such a grand scale to invest in the long-term future of energy infrastructure.

November 21, 2023 /Wesley Crump

Which Is Easier To Pull? (Railcars vs. Road Cars)

November 07, 2023 by Wesley Crump

[Note that this article is a transcript of the video embedded above.]

Imagine the room you’re in right now was filled to the top with gravel. (I promise I’m headed somewhere with this.) I don’t know the size of the room you’re in, but if it’s anywhere near an average-sized bedroom, that’s roughly 70 tons of material. Fill every room in an average-sized apartment, and now we’re up to 400 tons. Fill up an average-sized house. That’s 900 tons. Fill up 30 of those houses, that’s roughly 25,000 tons of gravel. A city block of just pure gravel. Imagine it with me… gravel… chicken soup for the civil engineer’s soul. And now imagine you needed to move that material somewhere else several hundred miles away. How would you do it? Would you put it in 25,000 one-ton pickup trucks? Or 625 semi-trucks? Imagine the size of those engines added together and the enormous volume of fuel required to move all that material. You know what I’m getting at here. That 25,000 tons is around the upper limit of the heaviest freight trains that carry raw materials across the globe. There are heavier trains, but not many.

I’m not trying to patronize you about freight trains. It’s not that hard to imagine how much they can move. But it is harder to imagine the energy it takes. Compare those 625 semi trucks to a handful of diesel locomotives, and the difference starts to become clear just by looking at engines and the fuel required to move that mountain of material. We’re in the middle of a deep dive series on railway engineering, and it turns out that a lot of the engineering decisions that get made in railroading have to do with energy. When you’re talking about thousands of tons per trip, even the tiny details can add up to enormous differences in efficiency, so let’s talk about some of the tricks that railroads use to minimize energy use by trains. And I even tried to pull a railcar myself. I’m Grady, and this is Practical Engineering. In today’s episode we’re running our own hypothetical railway to move apartments full of gravel (and other stuff too, I guess).

By energy, I’m not just talking about fuel efficiency either. If it was that simple, do you think there would be a 160-page report from the 1970s called “Resistance of a Freight Train to Forward Motion”? I’ll link it below for some lightweight bedtime reading. Management of the energy required to pull a train affects nearly every part of a railroad. Resistances add up as forces within the train, meaning they affect how long a train can be and where the locomotives have to be placed. Resistances vary with speed, so they affect how fast a train can move. Of course they affect the size and number of locomotives required to move a train from one point to another and how much fuel they burn. And they even affect the routes on which railroads are built. Let me show you what I mean. Here’s a hypothetical railroad with a few routes from A to B. Put yourself in the engineer’s seat and see which one you think is best. Maybe you’ll pick the straightest path, but did you notice it goes straight over a mountain range?

If you've ever read about the little engine that could, you’re familiar with one of the most significant obstacles railways face: grade. A train moving up a hill has to overcome the force of gravity on its load, which can be enormous. Grade is measured in rise over run, so a 1% grade rises 1 unit across a horizontal distance of a hundred units. There’s a common rule of thumb that you need 20 pounds or 9 kilograms of tractive effort (that’s pull from a locomotive) for every ton of weight times every percent of grade. By the way, I know kilograms are a unit of mass, not weight, but the metric world uses them for weight so I’m going to too in this video. And metric tonnes are close enough to US tons that we can just assume they’re equal for the purposes of this video.

A wheelchair ramp is allowed to have a grade of up to 8.3 percent in the US. Pulling our theoretical gravel train up a slope that steep would require a force of more than 5 million pounds or 2 million kilograms, way beyond what any railcar drawbar could handle. That’s why heavy trains have locomotives in the middle, called distributed power, to divide up those in-train forces. But it’s also why railway grades have to be so gentle, often less than half a percent. Next time you’re driving parallel to a railway, watch the tracks as you travel. The road will often follow the natural ground closely, but the tracks will keep a much more consistent elevation with only gradual changes in slope.
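
Here’s that rule of thumb in code. The gross train weight is my own assumption, since the 25,000 tons of gravel doesn’t include the railcars and locomotives themselves:

```python
def grade_tractive_effort_lb(train_weight_tons, grade_percent):
    """Rule of thumb from above: ~20 lb of pull per ton of train per percent of grade."""
    return 20 * train_weight_tons * grade_percent

# 25,000 tons of gravel plus an assumed ~5,000 tons of railcars and locomotives:
gross_tons = 30_000
print(grade_tractive_effort_lb(gross_tons, 0.5))   # gentle railroad grade: 300,000 lb
print(grade_tractive_effort_lb(gross_tons, 8.3))   # wheelchair-ramp grade: ~5 million lb
```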

You might think, “So what?” We’ll spend the energy on the way up the mountain, but get it back on the other side. Once the train crests the top, we can just shut off the engines and coast back down. And that’s true for gentle grades, but on steeper slopes, a train has to use its brakes on the way down to keep from getting over the speed limit. So all that energy that went into getting the train up the hill, instead of being converted to kinetic energy on the way down, gets wasted as heat in the brakes. That’s why direct routes over steep terrain are rarely the best choice for railroads. So let’s choose an alternative route.

How about the winding path that avoids the steep terrain by curving around it? Of course, the path is longer, and that’s an important consideration we’ll discuss in a moment, but those curves also matter. Straight sections of track are often called tangent track. That’s because they connect tangentially between curved sections of rail that are usually shaped like circular arcs. Outside the US, curves are measured by their radius, the distance between the center of curvature and the centerline of the track. Of course, in the US, our systems of measurement are a little more old-fashioned. We measure the degrees of curvature swept by a 100-foot chord. A 1-degree curve is super gentle, appropriate for the highest speeds. Once you get above 5 degrees, the speed limit starts coming down, with a practical limit at slow-speed facilities of around 12 degrees. In an ideal world, you only have to accelerate a train up to speed once, but on a winding path with speed restrictions, slowing and accelerating back up to speed takes extra energy.
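
For reference, here’s how the US convention maps to a radius under the chord definition (a D-degree curve turns D degrees of central angle over a 100-foot chord):

```python
import math

def radius_ft_from_degree_of_curve(d_degrees):
    """US chord definition: a 100-ft chord subtends d degrees at the curve's center,
    so the radius is R = 50 / sin(d/2) feet."""
    return 50 / math.sin(math.radians(d_degrees / 2))

for d in (1, 5, 12):
    print(f"{d:>2}-degree curve: radius of about {radius_ft_from_degree_of_curve(d):,.0f} ft")
# 1-degree: ~5,730 ft (gentle, good for high speeds)
# 12-degree: ~478 ft (about the practical limit for slow-speed facilities)
```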

But those curves don’t just affect the speed of a train, they also affect the tractive effort required to pull a train around them. Put simply, curves add drag. As you might have seen in the previous video of this series, the wheels of most trains are conical in shape. This allows the inside and outside wheels to travel different distances on the same rigid axle. But it’s not a perfect system. Train wheels do slip and slide on curves somewhat, and there’s flange contact too. Listen closely to a train rounding a sharp curve and you’ll hear the flanges of each wheel squealing as they slide on the rail. A 1-degree curve might add an extra pound (or half a kilogram) of resistance for every ton of train weight (not much at all). A 5-degree curve quadruples that resistance and a 10-degree curve doubles it again. When you’re talking about a train that might weigh several thousand tons, that extra resistance means several thousand more pounds pulling back on the locomotives. It adds up fast. So, depending on the number of curves along the route, and more importantly, their degree of curvature, the winding path might be just as expensive as the one straight up the mountain and back down.
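
Putting those ballpark drag figures into code (they’re rough numbers from this article, and published values vary between railroads and references):

```python
# Rough per-ton curve drag based on the figures above: ~1 lb/ton at 1 degree,
# about four times that at 5 degrees, and double again at 10 degrees.
CURVE_DRAG_LB_PER_TON = {1: 1.0, 5: 4.0, 10: 8.0}

def curve_resistance_lb(train_weight_tons, degree_of_curve):
    return train_weight_tons * CURVE_DRAG_LB_PER_TON[degree_of_curve]

print(curve_resistance_lb(5_000, 1))    # 5,000 lb of extra pull on a gentle curve
print(curve_resistance_lb(5_000, 10))   # 40,000 lb pulling back on the locomotives
```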

Sometimes terrain is just too extreme to conquer using just grades and curves. There comes a point in the design of a railroad where the cost of going around an obstacle like a mountain or a gorge is so great that it makes good sense and actually saves money to just build a bridge or a tunnel! Many of the techniques pioneered for railroad bridges influenced the engineering of the massive road bridges that stir the hearts of civil engineers around the world. And then there’s tunnels. You know how much I like tunnels. There are even spiral tunnels that allow trains to climb or descend on a gentle grade in a small area of land. I could spend hours talking about bridges and tunnels, but they’re not really the point of this video, so I’ll try to stay on track here. Hopefully you can see how major infrastructure projects might change the math when developing efficient railroad routes.

Of course, I’ve talked about grades, curves, and acceleration, but even pulling a train on a perfectly straight and level track without changing speed at all requires energy. In a perfect world, a wheel is a frictionless device and an object in motion would tend to stay in motion. But our world is far from perfect. I doubt you need that reminder. And there are several sources of regular old rolling resistance. Let me give you something to compare to.

I put a crane scale on a sling and hooked it to my grocery hauler in the driveway to demonstrate. This car just keeps showing up in demos on the channel. Doing my best to pull the car at a constant speed, I could measure the rolling resistance. With no friction, my car would just keep rolling once I got it up to speed, but those squishy tires and friction in the bearings mean I have to constantly pull to keep the car moving. It was pretty hard to keep this consistent, so the scale jumps around quite a bit, but it averages around 30 pounds or 14 kilograms. Very roughly, it’s about a percent of the car’s weight. I put half the car on the gravel road to compare the resistance, and it took about twice the force to keep it rolling. 60 pounds (around 2% of the car’s weight) is a little much for a civil engineer, so I had to get some help pulling. We tried it with a lighter car, but the scale must not have been working right.

At slow speeds like in the demo, drag mostly comes from the pneumatic rubber tires we use on cars and trucks. They’re great at gripping the road and handling uneven surfaces or defects, but they also squish and deform as they roll. Deforming rubber takes energy, and that’s energy that DOESN’T go into moving the load down the road. It’s wasted as heat. At faster speeds, a different drag force starts to become important: fluid drag from the air. I didn’t demo that in my driveway, but it’s just as important for trains as it is for cars. Let’s take a look back at that 1970s report to see what I mean.

One of the most commonly used methods for estimating train resistance is the Davis Formula, originally published in 1926 and modified in the 70s after roller bearings became standard on railcars. It says there are three main types of resistance in a train for a given weight. The first is mechanical resistance that only depends on the weight of the train. This comes from friction in the bearings and deflections of the wheels and track. Steel is a stiff material, but not infinitely so. As a steel wheel rolls over a steel track, they squish against each other creating a contact patch, usually around the size of a small coin. The pressure between the wheel and track in this contact patch can be upwards of 100,000 psi or 7,000 bar, higher than the pressure at the deepest places in the ocean. There is an entire branch of engineering about contact mechanics, so we’ll save that for a future video, but it’s enough to say that, just like the deformation of a rubber tire down a road, this deformation of steel wheels on steel rails creates some resistance.

The second component of resistance in the Davis formula is velocity dependent. The faster the train goes, the more resistance it experiences. This is mainly a result of the ride quality of the trucks. As the train goes faster, the cars sway and jostle more, creating extra drag. The final term of the Davis formula is air resistance. Drag affects the front, the back, and the sides of the train as it travels through the air. This is velocity dependent too, but it varies with velocity squared. Double the speed, quadruple the drag. Add all three factors together and you get the total resistance of the train, the force required to keep it moving at a constant speed.
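
The Davis formula comes in several published variants with tabulated coefficients, so here’s just the shape of it, with made-up coefficients to show how the three terms behave as speed climbs:

```python
def train_resistance_lb(speed_mph, a, b, c):
    """Davis-style resistance: a constant mechanical term, a velocity term for
    sway and jostle, and a velocity-squared term for air drag."""
    return a + b * speed_mph + c * speed_mph**2

# Made-up coefficients for a whole train, in pounds; real ones come from
# published tables and depend on car type, axle loads, and bearing design.
A, B, C = 1_500.0, 30.0, 0.8
for v in (20, 40, 80):
    print(f"{v} mph: {train_resistance_lb(v, A, B, C):,.0f} lb")
# Doubling speed doubles the middle term but quadruples the air-drag term.
```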

But why use an equation when you can just measure the real thing. I took a little trip out to the Texas Transportation Museum in San Antonio to show you how this works in practice. Take a look at these classic Pullman passenger cars. You can see the square doors on the bearings where lubrication would have been added to the journal boxes by crews. This facility has a running diesel locomotive, a flat car outfitted with seats for passengers, and a caboose. This little train’s main job these days is to give rides to museum patrons, but today it’s going to help us do a little demonstration.

First [choo choo] we had to decouple the car from the caboose. Then we used the locomotive to move the flat car down the track. This car was built in 1937 and used on the Missouri Pacific railroad until it was acquired by the museum in the early 1980s. The painted labels have faded, but it weighs in the neighborhood of 20 tons empty (about 15 times the weight of my car). So I set up a small winch with the force gauge and attached it to the car. The locomotive provides an ideal anchor point for the setup. But on the first try, the scale maxed out before the car started to move. It turns out the rolling resistance of a rail car is pretty high if you don’t fully disengage the brakes first. Who would’ve thought?

Now that the wheels are allowed to turn, it’s immediately clear that the tracks aren’t perfectly level. Even without the car rolling at all, it’s pulling on the scale with around 100 pounds or 45 kilograms. Once I start the winch to pull the car, the force starts jumping around just like the car, but it averages around 150 pounds or 68 kilograms. If I subtract the force from the grade, the rolling resistance of the car, the force required just to keep it moving at a constant speed, is about 50 pounds or 23 kilograms. That’s about the same force required to move my car on the gravel road even though this car is 15 times its weight. And it’s not far off from what the Davis Formula would predict either.

We tried this a few times, and the results were pretty much the same each time. This is an old rail car on an old railway, so there’s quite a bit of variation to try and average out of the results. Little imperfections in the wheels and rail make a huge difference when the rolling resistance is so low. A joint in the track can double or triple the force required to keep the car moving, if only for a brief moment. Kind of like getting a pebble under the wheel of a shopping cart: It seems insignificant, but if it’s happened to you, you know it’s not.

Watching the forces involved, I couldn’t help but wonder if I could move the car myself. But there was no safe way for me to start pulling the car once it was already moving. I would have to try and overcome the static friction first… aaaaand that turned out to be a little beyond my capabilities. If you look close, you can see the car budging, but I couldn’t quite get it started. On a different part of the track with the wheels at a different position, maybe I could have moved it, but considering most of the working out I do is on a calculator, this result might not be that surprising. Those joints between rails don’t only add drag, but maintenance costs too, but that’s the topic of the next episode in this series, so stay tuned if you want to learn more. It’s still remarkable that the rolling resistance between a 20-ton freight railcar and my little hatchback is in the same ballpark. And that’s a big part of why railways exist in the first place. Those steel wheels on steel rails get the friction and drag low enough that just a handful of locomotives can move the same load as hundreds of trucks with a lot less energy and thus a lot less cost.

November 07, 2023 /Wesley Crump

HEAVY CONSTRUCTION of a Sewage Pump Station - Ep 5

October 24, 2023 by Wesley Crump

This is the fifth and final episode of a five-part pilot series to gauge your interest in "How It's Made"-esque heavy construction videos. Drop a comment or send me an email to let me know what you think! Watch on YouTube above or ad-free on Nebula here.

October 24, 2023 /Wesley Crump

Why There's a Legal Price for a Human Life

October 17, 2023 by Wesley Crump

[Note that this article is a transcript of the video embedded above.]

One of the very first documented engineering disasters happened in 27 AD in the early days of the Roman Empire. A freed slave named Atilius built a wooden amphitheater in a town called Fidenae outside of Rome. Gladiator shows in Rome were banned at the time, so people flocked from all over to the new amphitheater to attend the games. But the wooden structure wasn’t strong enough. One historian put it this way: “[Atilius] failed to lay a solid foundation and to frame the wooden superstructure with beams of sufficient strength; for he had neither an abundance of wealth, nor zeal for public popularity, but he had simply sought the work for sordid gain.” When the amphitheater fell, thousands of people were killed or injured. That historian put the number at 50,000, but it’s probably an exaggeration. Still, the collapse of the amphitheater at Fidenae is one of the most deadly engineering disasters in history.

Engineering didn’t really even exist at the time. Even with the foremost training in construction, Atilius would have had almost no ability beyond rules of thumb to predict the performance of materials, joints, or underlying soils before his arena was built. But there’s one thing about this story that was just as true then as it is today: The people in the amphitheater share none of the blame. They shouldn’t have had to consider (let alone verify) whether the structure they occupied was safe and sound. This idea is enshrined in practically every code of ethics you can find in engineering today: protection of the public is paramount. An engineer is not just someone who designs a structure; they are the person who takes the sole responsibility for its safety.

But if it were strictly true that safety is paramount, we would never engineer anything, because every part of the built environment comes with inherent risks. It’s clear that Atilius’s design was inadequate, and history is full of disasters that were avoidable in hindsight. But, it’s not always so obvious. The act of designing and building anything is necessarily an act of choosing a balance between cost and risks. So, how do engineers decide where to draw the line? I’m Grady, and this is Practical Engineering. Today, we’re exploring how safe is safe enough.

You might be familiar with the trolley problem or one of its variations. It’s a hypothetical scenario of an ethical dilemma. A runaway trolley is headed toward an unsuspecting group of five workers on the tracks. A siding only has a single worker. You, a bystander, can intervene and throw the switch to divert the trolley, killing only one person instead of five. But, if you do, that one person lost their life solely by your hand. There’s no right answer to the question, of course, but if you think harder about this ethical dilemma, you can find a way to blame an engineer. After all, someone engineered the safety plan for the track maintenance without an officer or lookout who could have warned the workers. And someone designed the brakes on that trolley that failed.

Hopefully, you never find yourself in such a philosophically ambiguous situation, but a large part of engineering involves making decisions that can be boiled down to a tug-of-war between cost and safety, and comparing those two can be an enormous challenge. On one side, you have dollars, and on the other, you have people. And you probably see where I’m going with this: sometimes you need a conversion factor. It sounds morbid, but it’s necessary for good decision-making to put a dollar price on the value of a human life. More technically, it’s the cost we’re willing to bear to reduce risks such that the expected number of fatalities goes down by one. But that’s not quite as easy to say.

Of course, no one is replaceable. You might say your life is priceless, but there are countless ways people signal how much value they put on their own safety. How much are people willing to pay for vehicles with higher safety ratings versus those that rank lower? How much life insurance do people purchase, and for what terms? What’s the difference in wages between people who do risky jobs and those who aren’t willing to? Economists much smarter than me can look at this type of data, aggregate it, and estimate what we call the Value of a Statistical Life or VSL. The US Department of Transportation, among many other organizations, actually does this estimation each year to help determine what safety measures are appropriate for projects like highways. The 2022 VSL is 12.5 million dollars.

Whether that number seems high or low, you can imagine how this makes safety decisions possible. Say you’re designing a new highway. There are countless measures that can be taken to make highways safer for motorists: add a median, add a barrier, add rumble strips to warn drivers of lane departures, increase the size of the clear zones, add guardrails, increase the radius of curves, cover the whole thing in bubble wrap, and so on. Each of these increases the cost of the highway, reducing the feasibility of building it in the first place. In other words, you don’t have the budget to make sure no one ever dies on this road. So, you have to decide which safety measures are appropriate and which ones may not be justified for the reduction in risk they provide. If you have a dollar amount for each fatality that a safety measure will prevent, it makes it much simpler to draw that line. You just have to compare the cost of the measure with the cost of the lives it saves.
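
Here’s that comparison in its simplest possible form. The barrier cost and the lives-saved estimate are invented for illustration, and a real benefit-cost analysis would also discount future benefits and count injuries and property damage:

```python
VSL = 12.5e6  # US DOT's 2022 value of a statistical life, in dollars

def measure_justified(cost_dollars, expected_lives_saved):
    """Toy decision rule: a safety measure pencils out if the statistical value
    of the lives it saves exceeds what it costs."""
    benefit = expected_lives_saved * VSL
    return benefit >= cost_dollars, benefit

# Hypothetical median barrier: $30 million, expected to prevent 4 deaths over its life.
justified, benefit = measure_justified(30e6, 4)
print(f"${benefit / 1e6:.0f}M benefit vs $30M cost: {'justified' if justified else 'not justified'}")
```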

But, really, it’s almost never quite so unequivocal. During the construction of the Golden Gate Bridge, the chief engineer required the contractor to put up an expensive safety net, not because it was the law, but just because it seemed prudent to protect workers against falls. The net eventually saved 19 people from plunging into the water below. That small group, who called themselves the Halfway to Hell Club, easily made up for the cost of that net, and that little example points to a dirty truth about the whole idea of weighing benefits and costs in terms of dollars: it’s predicated on the idea that we can actually know with certainty how much any one change to a structure will affect its safety over the long term (not to mention that we’ll know how much it actually costs, but I’ve covered that in a separate video). The truth is that we can only make educated guesses. Real life just comes with too many uncertainties and complexities. For example, in some northern places, the divots that form rumble strips on highways collect melted snow and de-icing salt, effectively creating a salt lick for moose and elk. What should be a safety measure, in some cases, can have the exact opposite effect, inviting hooved hazards onto the roadway. Humanity and the engineering profession have learned a lot of lessons like that the hard way because there was no other way to learn them. Sometimes, we have opportunities to be proactive, but it’s rare. As they say, most codes and regulations are written in blood. It’s a grim way to think about progress, but it’s true.

Look at how fires are considered in modern building design. Insulated stairwells, sprinkler systems, emergency lights and signs, fire-resistant materials, and rated walls and doors - none of that stuff is free. It increases the cost of a building. But after years of studying fire risks through the tragedies of yesteryear, the powers that be decided that the costs of these measures to society (which we all pay in various ways) were worth the benefits to society through the lives they would save. And, by the way, there are countless safety measures that aren’t required in the building code or other regulations for the same reason.

Here’s an example: Earlier this year, a fuel tanker truck crashed into a bridge in Philadelphia, starting a fire and causing it to collapse. I made a video about it if you want more details. Even though there have been quite a few similar events in the recent past, bridge safety regulations don’t have much to say about fires. That’s because this kind of collapse is pretty well understood to take some time to develop. In almost every case, the timespan between when a fire starts and when it threatens the structural integrity of the bridge is long enough for emergency responders to arrive and close the road. Bridge fires, even if they end in a collapse, rarely result in fatalities. We could require bridges to be designed with fire-resistant materials, but (so far, at least), we don’t do it because the benefits through lives saved just wouldn’t make up for the enormous costs.

You can look at practically any part of the built world and find similar examples: flood infrastructure, railroads, water and wastewater utilities, and more. You know I have to talk about dams, and in the US, the federal agencies that own the big dams, mainly the Corps of Engineers and the Bureau of Reclamation, have put a great deal of thought and energy into how safe is safe enough. A dam failure is often a low-probability but high-consequence event, and those types of risks (like plane crashes and supervolcano eruptions) are the hardest for us to wrap our heads around. And dams can be enormous structures. They provide significant benefits to society, but the costs to upgrade them can be sky-high, so it’s prudent to investigate which upgrades are worth it and which ones aren’t.

There’s an entire field of engineering that just looks at risk analysis, and federal agencies have developed a framework around dam safety decision-making by trying to put actual numbers to the probability of any part of a dam failing and the resulting consequences. Organizations around the world often use a chart like this, called an F-N chart, to put failure risks in context. Very roughly, the more people who might die as a result of a failure, the lower the probability of that failure society is willing to tolerate. Hopefully, that’s intuitive. So, a specific risk of failure can be plotted on this graph based on its probability and consequences. If the risks plot too high, it’s justified to spend public money to reduce them. Below the line, spending more money to increase safety is just gold plating.
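
As a sketch of how that screening works, here’s a small Python check of a risk point against an F-N limit line. The limit used here (an annual failure probability of one in a thousand, divided by the number of lives at risk) is just an illustrative form; it isn’t the actual criterion of the Corps, Reclamation, or any other agency.

```python
# Illustrative F-N screening: the tolerable annual failure probability
# shrinks in proportion to the number of potential fatalities N.

def risk_is_tolerable(annual_failure_prob: float, lives_at_risk: float) -> bool:
    """Return True if the risk plots below an illustrative F-N limit line."""
    limit = 1e-3 / lives_at_risk  # hypothetical limit, not an agency criterion
    return annual_failure_prob <= limit

# A hypothetical dam: 1-in-50,000 annual failure probability, 40 lives at risk.
print(risk_is_tolerable(2e-5, 40))  # True: 2e-5 is just under the 2.5e-5 limit
```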

But above a certain number of deaths and below a certain probability, we kind of just throw up our hands. This box is really an acknowledgment that we aren’t brazen enough to suggest that society could tolerate any event where more than 1,000 people would die. The reality is that we’ve designed plenty of structures whose failure could result in so many deaths, but those structures’ benefits may outweigh the risks. Either way, such serious consequences demand more scrutiny than just plotting a point on a simple graph.

All this is, of course, not just true for civil structures, but every aspect of public safety in society. Workplace safety rules, labeling of chemicals, seatbelt rules, and public health measures around the world use this idea of the Value of a Statistical Life to justify the cost of reducing risks (or the savings of not reducing them). A road, bridge, dam, pipeline, antenna tower, or public arena for gladiatorial fights can always be made safer by spending more resources on design and construction. Likewise, resources can be saved by decreasing a structure’s strength, durability, and redundancy. Someone has to make a decision about how safe is safe enough. There’s a popular quote (unattributable, as far as I can tell) that gets the point across pretty well: “Any idiot can build a bridge that stands, but it takes an engineer to build a bridge that barely stands.” But there’s a huge difference between a bridge that barely stands and one that barely doesn’t. When it’s done correctly, people will consider you a good steward of the available resources. And, when it’s done poorly, your name gets put in the intro of online videos about structural failures. Thank you for watching, and let me know what you think.

October 17, 2023 /Wesley Crump

HEAVY CONSTRUCTION of a Sewage Pump Station - Ep 4

October 10, 2023 by Wesley Crump

This is the fourth episode of a five-part pilot series to gauge your interest in "How It's Made"-esque heavy construction videos. Drop a comment or send me an email to let me know what you think! Watch on YouTube above or ad-free on Nebula here.

October 10, 2023 /Wesley Crump