Practical Engineering

How Fish Survive Hydro Turbines

March 05, 2024 by Wesley Crump

[Note that this article is a transcript of the video embedded above.]

Most of the largest dams in the US were built before we really understood the impacts they would have on river ecosystems. Or at least they were built before we were conscientious enough to weigh those impacts against the benefits of a dam. And, to be fair, it’s hard to overstate those benefits: flood control, agriculture, water supply for cities, and hydroelectric power. All of our lives benefit in some way from this enormous control over Earth’s freshwater resources.

But those benefits come at a cost, and the price isn’t just the dollars we’ve spent on the infrastructure but also the impacts dams have on the environment. So you have these two vastly important resources: the control of water to the benefit of humanity and aquatic ecosystems that we rely on, and in many ways these two are in direct competition with each other. But even though most of these big dams were built decades ago, the ways we manage that struggle are constantly evolving as the science and engineering improve. This is a controversial issue with perspectives that run the gamut. And I don’t think there’s one right answer, but I do know that an informed opinion is better than an oblivious one. So, I wanted to see for myself how we strike a balance between a dam’s benefits and environmental impacts, and how that’s changing over time. So, I partnered up with the folks at the Pacific Northwest National Laboratory (or PNNL) in Washington state to learn more. Just to be clear, they didn’t sponsor this video and had no control over its contents. They showed me so much, not just the incredible technology and research that goes on in their lab, but also how it is put into practice in real infrastructure in the field, all so I could share it with you. I’m Grady, and this is Practical Engineering. On today’s episode, we’re talking about hydropower!

This is McNary Dam, a nearly 1.5-mile-long hydroelectric dam across the Columbia River between Oregon and Washington state, just shy of 300 miles (or 470 km) upriver from the Pacific Ocean. And this is Tim Roberts, the dam’s Operations Project Manager and the best dam tour guide I’ve ever met.

“These are 1x4 hand-nailed forms that got built for the entire facility.”

But this was not just a little walkthrough. We went deep into every part of this facility to really understand how it works. McNary is one of the hydropower workhorses in the Columbia River system, a network of dams that provide electricity, irrigation water, flood control, and navigation to the region. It’s equipped with fourteen power-generating turbines, and these behemoths can generate nearly a gigawatt of power combined! That means this single facility can, very generally, power more than half-a-million homes. The powerhouse where those turbines live is nearly a quarter mile long (more than 350 meters)! It’s pretty hard to convey the scale of these units in a video, but Tim was gracious enough to take us down inside one to see and hear the enormous steel shaft spinning as it generates megawatts of electrical power. All that electricity flows out to the grid on these transmission lines to power the surrounding area.

McNary is a run-of-the-river dam, meaning it doesn’t maintain a large reservoir. It stores some water in the forebay to create the height needed to run the turbines, but water flows more or less at the rate it would without the dam. So, any extra water flowing into the forebay that can’t be used for hydro generation has to be passed downstream through one or more of these 22 enormous lift gates in the spillway beside the powerhouse.

As you can imagine, all this infrastructure is a lot to operate and maintain. But it’s not just hydrologic conditions like floods and droughts or human needs like hydropower demands and irrigation dictating how and when those gates open or when those turbines run; it’s biological criteria too. The Columbia and its tributaries are home to a huge, diverse population of migratory fish, including chinook, coho, sockeye, pink salmon, and lampreys, and over the years, through research, legislation, lawsuits, advocacy, and just plain good sense by the powers that be, we’ve steadily been improving the balance between impacts to that wildlife and the benefits of the infrastructure. In fact, just about every aspect of the operation of McNary Dam is driven by the Fish Passage Plan. This 500-page document, prepared each year in collaboration with a litany of partners, governs the operation of McNary and several other dams in the Columbia River system to improve the survival of fish along the river.

“It’s kind of a bible. It tells us how we operate. It tells us what turbine we can run, what order to run them in, what megawatts to run them at, what to do when a fish ladder or a fish pump goes out of service. So it’s a pretty good overall operating procedure for us.”

“So it’s the fish plan driving how you operate the dam?”

“Yeah, It dictates a lot of how we operate the powerhouse.”

This fish bible includes prescriptive details and schedules for just about every aspect of the dam, including the fish passage structures too. Usually, when we build infrastructure, the people who are going to use it are actual people. But in a very real sense, huge aspects of McNary and other similar dams are infrastructure for non-humans. On top of the hydropower plant and the spillway, McNary is equipped with a host of facilities meant to help wildlife get from one side to the other with as little stress or injury as possible. Let’s look at the fish ladders first. McNary has two of them, one on each side.

A big contingent of the fish needing to get past McNary Dam are adult salmon and other species from the ocean trying to get upstream to reproduce in freshwater streams. They are biologically motivated to swim against the current, so a fish ladder is designed to encourage and allow them to do just that, and it starts with attraction water. Dams often slow down the flow of water, both upstream and downstream, which can be disorienting to fish trying to swim against a current. Also, dams are large, and fish generally don’t read signs, so we need an alternative way to show them how to get around. Luckily, in addition to a strong current, salmon are sensitive to the sound and motion of splashing water, so that’s just what we do. At McNary, huge electric pumps lift water from the tailrace below the dam and discharge it into a channel that runs along the powerhouse. As the water splashes back down, it draws fish toward the entrances so they can orient with the flow through the ladder. Some of this was a little tough to understand even seeing it in person, so I had a couple of the engineers at the dam explain it to me.

“So there’s water coming in the actual ladder and in the parallel conduit?”

“Right, right. So, it’s very complicated, huh? They’re going to approach the dam and enter from one of three spots on the Oregon side. There’s a north fish entrance on the north end of the powerhouse, south fish entrance on the south side of the powerhouse and there’s an adult collection channel that runs across the face.”

All these entrances provide options for the fish to come in, increasing the opportunity and likelihood that they will find their way.

“Between the regulating weirs on the north end, the regulating weirs on the south end and those floating orifices here, you back up that water. You need a massive amount of water to keep that step, that whole corridor.”

“I see.”

Once they’re in, they make their way upstream into the ladder itself. Concrete baffles break up the insurmountable height of the dam into manageable sections that fish can swim up at their own pace. Most of the fish go through holes in the baffles, but some jump over the weirs. There’s even a window near the top of the ladder where an expert counts the fish and identifies their species. This data is important to a wide variety of organizations, and it’s even posted online if you want to have a look. Once at the top, the fish pass through a trash rack that keeps debris out of the ladder and continue their journey to their spawning grounds. The goal is that they never even know they left the river at all, and it works. Every year hundreds of thousands of chinook, coho, steelhead, and sockeye make their way past McNary Dam. If you include the non-native shad, that number is in the millions.

“These pictures helps tremendously.”

And it’s not just bony fish that find their way through. Some of the latest updates are to help lamprey passage. These are really interesting creatures!

“I mean, in some parts of the country, they’re like, invasive. People want to get rid of them. Here, we’re trying to nurture them along because they’re a native uh, species, so there are some small changes we’ve been doing um, to try and make those make passage for lamprey more successful.”

I’m working on another video that will take a much deeper look at how this and other fish ladders work, so stay tuned for that one, but it’s not the only fish passage facility here. Because what goes up, must come down, or at least their offspring do (most adult salmon die after reproducing). So, McNary Dam needs a way to get those juvenile fish through as well. That might sound simple; thanks to gravity, it’s much simpler to go down than up. But at a dam, it’s anything but.

“And the way I explain to them is the adults are mission oriented. They’re coming back to spawn. The juveniles are just kinda dumb kids riding the wave of the ocean. I mean honestly, that’s what they’re doing. The main focus has been centered around the juveniles migrating out, right? How do we get the majority of them out? And so, when they’re coming down and they’re approaching the structure, uh, they got two basic paths to take, either the spillway or the powerhouse.”

I definitely wouldn’t want to pass through one of these, but juvenile fish can make it through the spillway mostly just fine. In fact, specialized structures are often installed during peak migration times to encourage fish to swim through the spillway. McNary Dam has lift gates where the water flows from lower in the water column. But salmon like to stay relatively close to the surface and they’re sensitive to the currents in the flow. Many dams on the Columbia system have some way to spill water over the top, called a weir, that is more conducive to getting the juveniles through the dam.

The other path for juveniles to take is to be drawn toward the turbines. But McNary and a lot of other dams are equipped with a sophisticated bypass system to divert the fish before they make it that far, and that all starts with the submersible screens. These enormous structures are specially designed with lots of narrow slots to let as much water as possible through to the turbines while excluding juvenile fish. They are lowered into place with the huge gantry crane that rides along the top of the powerhouse. Each submersible screen is installed in front of a turbine to redirect fish upwards while the water flow continues on. Brushes keep them clean of debris to make sure the fish don’t get trapped against the screen. They might look simple, but even a basic screen like this requires a huge investment of resources and maintenance, because they are absolutely critical to the operation of the dam.

“...incredibly labor intensive screens, we spend a lot of time cause, you know, you saw those brushes running up and down them. They’ve got submerged gearboxes, submerged motors, submerged electrical.”

“Oh my gosh.”

“Yeah, every December we pull them out for four months, we, we work on fish screens. Not to mention, so like, and if there’s a problem, these are a critical piece of equipment here, um, during fish passage season if that, if something goes wrong with that screen, this turbine has to shut down. You can’t run them without it.”

Once the fish have been diverted by the screens, they flow with some of the water upward into a massive collection channel. This was originally designed as a way to divert ice and debris, but now it’s basically a fish cathedral along the upstream face of the dam.

“Pretty cool huh?”

“That’s amazing!”

The juveniles come out in these conduits from below. Then they flow along the channel, while grates along the bottom concentrate them upward. Next they flow into a huge pipe that pops out on the downstream face of the dam. Along the way, the juveniles pass through electronic readers that scan any of the fish that have been equipped with tags and then into this maze of pipes and valves and pumps and flumes. In the past, this facility was used to store juveniles so they could be loaded up in barges and transported downstream. But over time, the science showed it was better to just release them downstream from the dam. Every once in a while, some of the juveniles are separated for counting so scientists can track them just like the adults in the ladder. Then the juveniles continue their journey in the pipe out to the middle of the river downstream.

Avian predation is a serious problem for juveniles. Pelicans, seagulls, and cormorants love salmon just like the rest of us. In many cases, most of the fish mortality caused by dams isn’t the stress of getting them through the various structures, but simply that birds take advantage of the fact that dams can slow down and concentrate migrating fish. This juvenile bypass pipe runs right out into the center of the downstream channel where flows are fastest to give the fish a fighting chance, and McNary is equipped with a lot of deterrents to try and keep the birds away.

All this infrastructure at McNary Dam to help fish get upstream and downstream has changed and evolved over time, and in fact, a lot of it wasn’t even conceived of when the dam was first built. And that’s one of the most important things I learned touring McNary Dam and the Pacific Northwest National Lab: the science is constantly improving. A ton of that science happens here at the PNNL Aquatics Research Laboratory. I spent an entire day just chatting with all the scientists and researchers here who are advancing the state of the art.

For example, not all the juvenile salmon get diverted away from those turbines. Some inevitably end up going right through. You might think that being hit by a spinning turbine is the worst thing that could happen to a fish, but actually the change in pressure is the main concern. A hydropower turbine’s job is to extract as much energy as possible from the flowing water. In practice, that means the pressure coming into each unit is much higher than going out, and that pressure drop happens rapidly. It doesn’t bother the lamprey at all, but that sudden change in pressure can affect the swim bladder that most fish use for buoyancy. So how do we know what that does to a fish and how newer designs can be safer? PNNL has developed sensor fish, electronic analogs to the real thing that they can send through turbines and get data out on the other side. Compare that data to what we already know about the limits fish can withstand (another area of research at PNNL), and you can quickly and safely evaluate the impacts a turbine can have.
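
To put a rough number on why that rapid pressure drop matters, here’s a minimal sketch using the ideal-gas (Boyle’s law) relationship. The acclimation depth and downstream pressure are assumed values for illustration only, not measurements from PNNL or any real turbine:

```python
# Why a rapid pressure drop matters: gas in a swim bladder expands as pressure falls.
# A rough ideal-gas (Boyle's law) sketch; the depth and pressures are assumed, not measured.
surface_pressure_kpa = 101.3
acclimation_depth_m = 10                  # assumed depth where the fish last equalized
pressure_before_kpa = surface_pressure_kpa + 9.81 * acclimation_depth_m  # ~9.81 kPa per m of fresh water
pressure_after_kpa = 70                   # assumed brief low pressure just past the runner

expansion_ratio = pressure_before_kpa / pressure_after_kpa   # Boyle's law: V2/V1 = P1/P2
print(f"The swim bladder would try to expand about {expansion_ratio:.1f}x, almost instantly")
```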

What’s awesome is seeing how that research translates into actual investments in infrastructure that have a huge effect on survivability. New turbines recently installed at Ice Harbor Dam upstream were designed with fish passage in mind to reduce injury for any juveniles that find their way in. One study found that more than 98% of fish survived passing through the new turbines, and nearly all the large hydropower dams in the Columbia River system are slated to have them installed in the future. And it’s not just the turbines that are seeing improvements. I talked to researchers who study live fish, how they navigate different kinds of structures, and what they can withstand. Just the engineering in the water system to keep these fish happy is a feat in itself. I talked to a coatings expert about innovative ways to reduce biological buildup on nets and screens. I talked to an energy researcher about new ways to operate turbines to decrease impacts to fish from ramping them up and down in response to fluctuating grid demands.

“It doesn’t have to be that, you know, what’s good for the grid is necessarily bad for the fish.”

“Exactly.”

And I spent a lot of time learning about how we track and study the movement of fish as they interact with human-made structures. Researchers at PNNL have developed a suite of sensors that can be implanted into fish for a variety of purposes. Some use acoustic signals picked up by nearby receivers that can precisely locate each fish like underwater GPS. Of course, if you want to study fish behavior accurately, you need the fish to behave like they would naturally, so those sensors have to be tiny. PNNL has developed minuscule devices, so small I could barely make out the details. You also want to make sure that inserting the tags doesn’t injure the fish, so researchers showed me how you do that and make sure they heal quickly. And of course, those acoustic tags require power, and tiny batteries (while extremely impressive in their own right) sometimes aren’t enough for long-term studies. So they’ve even come up with fish-powered generators that can keep the tags running for much longer periods of time. A piezoelectric device creates power as the fish swims… and they had some fun ways to test them out too.

Of course, migratory fish aren’t the only part of the environment impacted by hydropower, and with all the competing interests, I don’t think we’ll ever feel like the issue is fully solved. These are messy, muddy questions that take time, energy, and big investments in resources to get even the simplest answers.

“It’s really, it’s a complicated question. If you want to look at overall survivability from point A to point B, you can do that. But you’ve got to start talking about species. Is it a spring? Is it a fall? Is it a chinook? Is it a steelhead? Cause we have different models and studies that have been done. So it varies from species to species. People ask that question. I get really hesitant to respond, because I’m like, you don’t know how complicated a question you’re asking. You want to simplify it into one little number, and it’s not that simple.”

The salmon pink and blue paint in the powerhouse at McNary really sums it up well, with the blue symbolizing the water that drives the station, and the pink symbolizing the life within the water, and its environmental, economic, and cultural significance. This kind of balancing act is really at the heart of what a lot of engineering is all about. I’m so grateful for the opportunity to see and learn more about how energy researchers, biologists, ecologists, policy experts, regulators, activists, and engineers collaborate to make sure we’re being good stewards of the resources we depend on. I think Alison Colotelo, the Hydropower Program Lead at PNNL, put it best:

“When you think about salmon and why we need to protect them, why we need to put all this money into understanding how do we, how do we coexist with our energy needs. It's because they're important from an ecological perspective, right? For the nutrients that they're bringing back, from an economic perspective, from a cultural perspective and if the salmon go away then so do a lot of other things.”


How To Install a Pipeline Under a Railroad

February 20, 2024 by Wesley Crump

[Note that this article is a transcript of the video embedded above.]

This is the Union Pacific Railroad’s Austin Subdivision in central Texas. It’s a busy corridor that moves both freight and passengers north and south between Austin and San Antonio… But it’s mostly freight. Trains run twenty-four-seven here, carrying goods like rock from nearby quarries, cement, vehicles, intermodal freight, and more. So, when Crystal Clear Special Utility District was planning a new water transmission main that would connect a booster pumping station to a new water tower to meet the growing demand along I-35, the biggest question was this: how do you get the line across the tracks without shutting them down and trenching across? It’s only about 250 feet or 76 meters from one side to the other, but this small part of a large water transmission project takes more planning, coordination, engineering, and innovative construction than the rest of the project combined. Maybe you’ve never even wondered what it takes to move fresh water across the distances from where it’s stored to where it’s used. But, I really think you’re going to find this fascinating.

Crystal Clear and their general contractor, ACP, invited me on-site to see it happen in real-time and document the process for you! Most of the water lines are already installed, but getting this one across these tracks is going to be a different challenge. I’m your host, Grady Hillhouse, and this is Practical Construction.

There are actually a lot of ways to install underground utilities without disrupting things at the surface, collectively known as trenchless technologies. This project is using a method called horizontal earth boring, but really, it’s pretty exciting. Before any dirt gets bored, there’s a lot that has to happen first. So much can go wrong if an operation like this isn’t carried out thoughtfully and carefully. One of those risks is hitting something that’s already buried at the site, and just about every subsurface utility contractor can tell horror stories about what happens if a water, sewer, gas, fiber optic, or telephone line is severed during construction. The right-of-way along a railroad track is a common place to install linear utilities, because they can just run parallel to the tracks, avoiding the complexity of dealing with multiple property owners and obstacles. The owners of all the utilities that run along these tracks have already been out to mark their location using spray paint on the ground and flags. But, that’s not enough to make sure they are avoided. Before the drill can get started, a vacuum excavation crew comes to the site to confirm their location not just along the ground, but how far each one is below it.

This truck has an enormous vacuum that sucks up soil as it’s blasted loose by a pressure washer. The benefit of a vacuum excavator is that, although the water is strong enough to dislodge and excavate soil, it’s not strong enough to damage the utility lines below. Compare that to using a hydraulic excavator with a bucket where one wrong move could rip a pipe or cable out like a wet noodle. It also disturbs a lot less of the area at the surface, so this process is often called potholing. It’s a crucial step if the margins are tight when avoiding existing utilities, like they are on this site. For each utility, the vacuum excavator locates the exact position and depth of the line so that it can be marked by a surveyor and compared to the proposed alignment of the bore. And there’s hardly any mess once the process is done. On this site, there are lines both above and below the proposed bore, so the drilling contractor will be threading a needle.

Safety is also critical, especially when working around railroads and trains. Since this job requires people on the tracks and construction below them, there’s a specialized crew on site who coordinates between the Union Pacific dispatchers, train engineers, and crews on site to make sure no one gets hurt. They’ve established a specific zone along the tracks, which requires the train engineers to check in with them first before any train gets near the work. When a train is on the way, the safety crew sounds a horn, and everyone on site stops working and gets clear of the tracks. Once the train is past, work starts right back up.

The process of horizontal earth boring, also known as jack-and-bore, starts with an entrance pit. Unlike some trenchless methods that can curve down and back up again from the surface, this waterline needs to be as straight and precise as possible. So you have to start underground. This enormous excavation is where almost all the work will happen. And, because it’s so close both to a roadway and the railroad tracks, there’s no room to slope the sides to avoid the risk of a collapse. Instead, huge steel trench boxes are installed in the pit to shore it up and keep it from collapsing or affecting the adjacent structures. Once the trench boxes are installed, the boring machine can be lowered into place. And before long, it’s up and running, or I guess you could say it’s down and running.

In practice, horizontal earth boring is relatively straightforward. The boring machine really only has two jobs: excavating the soil and advancing the casing pipe. For the first job, it uses a string of augers that connect to a boring head. It’s just an oversized drill bit. As the auger turns, the boring head breaks up the soil ahead of the casing pipe, and the flights draw the cuttings back toward the pit. The cutting head has wings that open when rotated in one direction. Those wings extend just slightly beyond the edges of the casing pipe, over-excavating the bore hole to minimize the friction of pushing the casing pipe forward. The soil cuttings from the boring are discharged from the side of the machine into a pile in the pit. Every so often, they have to be removed. The excavator at the surface uses a clamshell bucket to scoop the cuttings out of the pit and stockpile them nearby. They’ll eventually be disposed of off-site or used as backfill.

The machine’s second job is to advance the casing pipe into the bore. This pipe provides support to the hole to keep it from collapsing and prevent the overlying soil from shifting or settling over time. The boring machine sits on tracks. The back of the machine uses a hydraulic ram attached to a locking system that affixes to the rails. The ram provides thrust, pushing both the machine and the casing pipe forward with the tremendous force required to advance it through the ground. Newton’s third law is in play here. To provide that thrust to the casing, the machine needs something to react against. So, those tracks have been firmly concreted into the bottom of the entrance pit to make sure it’s the machine that moves and not the tracks.

Of course, every contractor knows as soon as you start making good progress, it’s going to rain. Water flows downhill, and this pit is the lowest spot of ground on site. But the crew doesn’t let it slow them down too much. The concrete bottom in the pit helps keep things from turning into a muddy mess, and an electric pump makes pretty quick work of the water that gets in. Tarps over the top of the pit also help keep it dry, if also making it a little tough to film the work inside.

Railroad operators are rightly strict about the what, where, when, and why when it comes to construction on their rights-of-way. Disrupting the movement of freight and passengers is simply not an option. So an essential part of this operation is continuous monitoring to make sure the boring is not affecting the tracks above. A surveying crew comes to the site every six hours to carefully measure for any changes in elevation along the tracks. They’ve installed these reflective markers and use a piece of equipment called a total station that can precisely pinpoint each length of the rail. They process the data as it comes in and compare it to the baseline measurements. If they notice any settling or movement, everything would have to stop (but, spoiler alert, they never did).
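
As a rough sketch of what that comparison looks like, here’s a minimal check of survey elevations against the baseline. The marker names, elevations, and alert threshold are made up for illustration, not values from this project:

```python
# Sketch of the settlement check: compare each survey pass to the baseline elevations.
# Marker names, elevations, and the alert threshold are invented for illustration.
baseline_mm = {"RM-01": 1000.0, "RM-02": 1002.5, "RM-03": 998.7}   # baseline rail elevations
latest_mm   = {"RM-01": 1000.1, "RM-02": 1001.9, "RM-03": 998.6}   # this pass, six hours later
threshold_mm = 3.0                                                 # assumed allowable movement

for marker, base in baseline_mm.items():
    delta = latest_mm[marker] - base
    status = "STOP WORK" if abs(delta) > threshold_mm else "ok"
    print(f"{marker}: {delta:+.1f} mm  {status}")
```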

Another requirement from the railroad is that this work happens nonstop. They don’t want an open excavation sitting idle below the tracks, so they require that the boring happen continuously night and day. The longer it takes to get this casing pipe to the other side, the more opportunity for something to go wrong. The boring contractor works in double shifts. When one crew leaves, there’s already another one to take their place, so the site is never unattended.

Once one segment of casing pipe is pushed as far as it can go, the boring machine is pulled to the back of the pit. A new segment of pipe is collected from the stack. And, it’s lowered in. The next length of the auger is already inside. The auger is attached to the string. And then the casing segment is welded to the end of the previous one.

Segments go in faster at first, but each one takes a little bit longer than the last. That’s because, every two or three segments, they have to check and make sure the bore is following the right path. There are utilities to avoid, dimensional tolerances from the railroad, and location requirements from the engineer and property easements. So, having the alignment wander is not an option. Every so often, the crew has to remove the entire auger string from the bore to make sure it’s headed in the right direction. The way they do it might unnerve you, especially if you’re claustrophobic: they just send a worker on a skateboard to the end of the casing pipe. There are more sophisticated tools, but some contractors prefer the old-school, reliable method, and they have a slew of safety measures in place as required by OSHA, including ventilation, communication, and safety spotters. The person inside the pipe uses a rule to check for any deviations in grade from the precision laser installed in the bore pit. But, what happens if the bore gets off alignment?

Horizontal earth boring is not a very “steerable” operation, but there is some opportunity to make corrections if they’re needed. Take a look back at the first length of the casing pipe. Notice the shoes cut from each quadrant of the pipe. If the bore starts to deviate, a hydraulic jack can be used to bend one or more of the shoes outward and deflect the operation back into alignment. You’re not going to turn a corner this way, but it gives some control over alignment and grade. It’s why it’s so critical that the first length of casing pipe be installed perfectly; all the rest of the casing will follow right behind it.

The operation runs night and day. The machine bores and pushes each length of casing pipe. Soil is removed from the bore and then the pit. Alignment is checked. The auger string is re-inserted. A new length of casing is welded on. Rinse and repeat. All the while, trains are running constantly back and forth along this busy corridor. When the drilling crew starts getting toward the end of the line, an excavator arrives to dig the receiving pit. And, after just about a week of boring 24/7, the cutter breaks through on the other side. Even the guys who do this every day gathered around to watch it happen. It’s a perfect sight, especially for the fact that they broke through in the exact spot they were aiming for.

Only a few days later, it was time to push the water pipe through. The casing’s job is just to hold the bore open, but the water will run in rated plastic pressure pipe. These pipes connect using a bell-and-spigot design; they literally push together. A fiberglass rod is hammered into a groove around the inside of the spigot to lock each segment together. Spacers are installed to hold the line up off the casing to keep it from rubbing during installation or being damaged over time. Just like the boring, the pipes are lowered into the entrance pit, attached, and pushed through to the other side (although, this operation goes quite a bit faster). In some projects, the annular space between the casing and pipe is grouted in, but in this job they opted to keep the space open. It was a ton of work and coordination to get this line under the railroad, so if it ever breaks or leaks, Crystal Clear will be able to pull it out and repair or replace it. This line will be tied into the pipes already installed on either side of the bore, leak-tested, and backfilled, but the hard part is over. It won’t be long before it’s pressurized and put into service, moving fresh water to this quickly growing area in central Texas, quietly and invisibly meeting a crucial need. And not a single train was delayed while it went in.

Huge thanks to Crystal Clear Special Utility District, ACP, and their subcontractors for having me on their site.


Why Locomotives Don't Have Tires

February 06, 2024 by Wesley Crump

[Note that this article is a transcript of the video embedded above.]

Formula 1 is, by many accounts, the pinnacle of car racing. F1 cars are among the fastest in the world, particularly around the tight corners of the various paved tracks across the globe. Drivers can experience accelerations of 4 to 5 lateral gs around each lap. That’s tough on a human body, but think about the car! 5 times gravity is about 50 meters per second… per second, and an F1 car weighs 800 kilograms (or 1800 pounds). If you do a little quick recreational math, that comes out to a force between the car and the track of more than 4 tons. And all that force is transferred through four little contact patches below the tires.
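
If you want to follow along with that recreational math, here’s the back-of-the-envelope version in code. The 5 g and 800 kg figures come from the paragraph above; the rest is just unit conversion:

```python
# Back-of-the-envelope cornering force on an F1 car, using the figures in the text above.
g = 9.81            # m/s^2, standard gravity
lateral_g = 5       # peak lateral acceleration, in multiples of g
mass_kg = 800       # approximate F1 car mass, kg

force_newtons = mass_kg * lateral_g * g       # F = m * a
force_tonnes = force_newtons / (1000 * g)     # convert to tonnes-force

print(f"Lateral force of roughly {force_newtons:,.0f} N, about {force_tonnes:.1f} tonnes-force")
# Roughly 39,000 N, or about 4 tonnes, shared across four small contact patches
```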

Traction is one of the most important parts of F1 racing and the biggest limitation of how fast the cars can go. Cornering and braking at such extreme speeds requires a lot of force, and all of it has to come from the friction where the rubber meets the road. Pirelli put thousands of hours of testing and simulations into the current design. Nearly a hundred prototypes were whittled down to 8 compounds: two wet tires and six slicks of various levels of hardness that offer teams a balance between grip and durability during a race.

And yet, when you look at another of the most extreme vehicles on earth you see something completely different. A single modern diesel freight locomotive can deliver upwards of 50 tons of forward force (called tractive effort) into the rails, but it’s somehow able to do that through the tiny contact patches between two smooth and rigid surfaces. It’s just slick on slick. It seems impossible, but it turns out there’s a lot of engineering between those steel wheels and steel rails. And I’ve set up a couple of demonstrations in the garage to show how this works. I’m Grady, and this is Practical Engineering. In today’s episode, we’re talking about why locomotives don’t need tires.

In a previous episode of this series on railway engineering, I talked about how hard it is to pull a train based on the various aspects of grade, speed, and curves. I even tried to pull a train car myself. The whole point of locomotives is to overcome that resistance, to take all the force required to pull the train and deliver it to the tracks to keep the whole thing rolling. Most modern freight locomotives use a diesel-electric drive. The engine powers a generator, which powers electric traction motors that drive the wheels. There are a lot of benefits to this arrangement, including not needing a super complicated gearbox to couple the engine and wheels. But, even with electric traction motors, locomotives are still limited by the power rating of those motors, and power is the product of force and velocity. So if you graph the speed of a locomotive against the force it can exert on a train, you get this inverse relationship. But, this isn’t quite right. Of course, there are physical and mechanical limits on how fast a train can go, so the graph gets cut off there, but there’s another limitation that governs tractive effort on the slow side. Even if the motors could generate more force at slow speeds (and they usually can), the friction between the rails and wheels limits how much of that force can be mobilized (called the adhesion limit).
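
Here’s a minimal sketch of that force-versus-speed relationship, with the tractive effort capped by the adhesion limit at low speed. The power, weight, and friction numbers below are assumptions for illustration, not specs for any real locomotive:

```python
# Illustrative tractive-effort curve: adhesion-limited when slow, power-limited at speed.
# The numbers below are assumptions for illustration, not data for any real locomotive.
power_watts = 3.2e6                  # assumed traction power delivered at the rail, W
weight_on_drivers_newtons = 1.9e6    # assumed locomotive weight on its driven wheels, N
adhesion_coefficient = 0.30          # assumed wheel/rail friction that can be counted on

adhesion_limit = adhesion_coefficient * weight_on_drivers_newtons   # slip threshold

def tractive_effort(speed_m_per_s):
    """Force the locomotive can actually put into the train at a given speed."""
    if speed_m_per_s <= 0:
        return adhesion_limit
    power_limit = power_watts / speed_m_per_s     # F = P / v
    return min(power_limit, adhesion_limit)       # can't exceed wheel/rail friction

for v_kmh in (5, 10, 20, 40, 80):
    force_kn = tractive_effort(v_kmh / 3.6) / 1000
    print(f"{v_kmh:3d} km/h -> {force_kn:6.0f} kN")
```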

The graph makes it clear why this is such a major challenge for a railroad: you can’t even use the full power of the engine because you’re limited by the friction at the wheels. It’s why dragsters do a burnout before the race: to warm up the tires for more friction. I was reading this Federal Railroad Administration report, and I love that it called friction the “last frontier” of vehicle/track interaction; it’s just so important to nearly every aspect of railway engineering. The lack of friction is really the reason railways work in the first place: it means the rolling resistance of enormous loads can be overcome by relatively tiny locomotives. But, of course, some friction is necessary so that trains can accelerate and brake without slipping and sliding on the rails. There are alternatives, like the cog railways that carry trains up steep mountains, but most freight and passenger trains use simple “adhesion” for traction; just the steel-on-steel friction and nothing else. The area that’s physically touching between a wheel and rail, called the contact patch, is roughly the size of a US dime: maybe 2 to 3 square centimeters or half of a square inch. Imagine gluing a dime to the wall and then hanging two average sized cars from it. That’s a loose approximation of the traction force below each wheel of a locomotive; it’s a lot of friction!

Incredibly, friction really boils down to two numbers, one that’s simple (weight, or more generally, the normal force between the two surfaces), and a coefficient that’s a little more complicated. Let me show you what I mean. I have a little demonstration set up here in the garage. It’s just a sled attached to a spring scale. I can add a weight to the sled, and then slide different materials underneath. The reading on the scale is the kinetic friction between the materials. Even if the weight stays the same, the force changes because every material interacts differently with the steel sled, and this can get super complicated: asperity interlocking, cold welding, modified adhesion theory, interfacial layers, et cetera. I’m not going to get into all that, but it’s important to engineers who think about these problems. All that complexity gets boiled down into a single, empirical value called the coefficient of friction. Double the coefficient; double the friction. And the same is true of the normal force. If I double the weight on the sled, I get roughly double the reading on the scale for each of the materials I pulled underneath it.
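
The sled demo boils down to one equation: friction equals the coefficient times the normal force. Here’s a tiny sketch of that relationship, with placeholder coefficients rather than anything measured in the video:

```python
# The sled demo in one equation: kinetic friction F = mu * N.
# The coefficients below are placeholder assumptions, not measurements.
materials = {
    "steel on dry steel": 0.5,
    "steel on wet leaves": 0.1,
    "steel on rubber": 0.8,
}

for normal_force_newtons in (10, 20):   # doubling the weight on the sled
    for name, mu in materials.items():
        friction_newtons = mu * normal_force_newtons   # doubles when either knob doubles
        print(f"{normal_force_newtons:2d} N load, {name}: {friction_newtons:4.1f} N of drag")
```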

In some ways, it really is that straightforward. You have two knobs to manage tractive effort: the weight of the locomotive and the friction coefficient. But you don’t always have a lot of control over that second knob. Environmental contaminants like oil, grease, rust, rain, and leaves lower the coefficient of friction, making it harder to keep the wheels stuck to the track. So you kind of just have the one knob to turn. Very generally, the math looks like this: You look at the steepest section of track where the highest tractive effort is required and divide that force by the “dispatchable adhesion,” a complicated-sounding term which is really just the friction coefficient that you can count on for the specific locomotive and operating conditions. Maybe it’s 30% for a modern locomotive on dry rail or 18% for an older model on a frosty winter morning. Now you have the total weight needed to develop that tractive effort. For longer and heavier trains, you can’t just use a single massive locomotive, because there are limits to the weight you can put on a single wheel before the tracks fail or you damage a bridge. That’s why many large freight trains use two, three, four, or more locomotives together.
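
Here’s that sizing math as a short sketch, with assumed numbers for the required force, the dispatchable adhesion, and the weight of each locomotive:

```python
# The sizing logic described above, with assumed numbers for illustration only.
g = 9.81
required_tractive_effort_newtons = 1.2e6   # assumed force needed on the steepest section of track
dispatchable_adhesion = 0.30               # friction coefficient you can count on (dry rail)
weight_per_locomotive_tonnes = 195         # assumed adhesive weight of one unit

needed_weight_tonnes = required_tractive_effort_newtons / (dispatchable_adhesion * g) / 1000
locomotives = -(-needed_weight_tonnes // weight_per_locomotive_tonnes)   # ceiling division

print(f"Need about {needed_weight_tonnes:.0f} t on driven wheels -> {int(locomotives)} locomotives")
# On a frosty morning at 0.18 adhesion, the same force needs roughly two-thirds more weight.
```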

But, that friction coefficient isn’t set in stone. You do have some control there. Even since the days of steam locomotives, sandboxes have been used to drop sand on the tracks to increase the friction between wheels and rails. If you look closely, you can sometimes see the pipes that deliver sand in front of the wheels. Some railways use air, water jets, chemical mixtures, and even lasers to clean the rails, carry away moisture, or just generally increase control over wheel/rail friction. And there’s another way to turn that knob that’s a little tricky to understand, because there’s really not a hard line between a wheel sticking to a rail through friction and a wheel sliding on it from not enough. Actually, all locomotive wheels under traction exist somewhere in between the two! Let me show you what I mean.

Even though both locomotive wheels and rails are made from hardened steel, that doesn’t mean they’re infinitely stiff. Everything deforms to some extent. But, it would be pretty tough to show the deformation of a steel-on-steel surface under hundreds of thousands of pounds in a garage demonstration, so I have the next best thing: a rug and a circular brush that spins on a shaft. This brush simulates a locomotive wheel, and right now, it can spin freely. So, when I pull the rug underneath it, nothing unexpected happens. There’s essentially no traction here. The force between the brush and the rug (representing a wheel on a rail) is negligible, and there’s no slip. The brush turns at the same rate as the rug moves. But I can change that.

I have a little homemade shaft brake made from a camera clamp, and I can tighten the clamp to essentially lock up the rotation of the brush. Now when I pull the rug under the wheel, it’s noticeably more difficult. The brush is applying a strong traction force to the surface, and also, it’s completely slipping. The relative movement between the wheel and the rail is basically infinite, since the wheel isn’t moving at all. Again, maybe this isn’t too surprising of a result. What’s interesting, I think, is what happens in between these two conditions. If I loosen the clamp so that the brush can rotate with some resistance and pull the rug through again, watch what happens.

The bristles deform as the brush rolls along. They’re applying a traction force, even as the brush rolls. If you look closely, the bristles stick to the rug at the front, but at a point within the contact area, they lose that connection to the rug and slip backwards. And this is exactly what happens to locomotive wheels as well. The surface layer of the wheel is stretched forward by the rail, but toward the back of the contact area, there’s not enough adhesion, and they separate as the elastic stress is released. The stick and the slip happen simultaneously. What’s fascinating about this behavior is that the locomotive wheels actually spin faster than the locomotive is moving along the rails, an effect called creep. And the brush makes it obvious why. The bristles in contact with the rug are flexing, making that part of the wheel rim essentially longer. So the wheel has to turn faster to make up for the difference, or in this demo (since the brush is static), the rug has to travel a greater distance for the same amount of rotation. I can make this clearer with a bit of tape.

With the brake off and no traction, I can pull the rug through and mark the length the rug traveled for half a rotation of the brush. Now, with the brake on, I can pull the rug through again. And you see that the rug traveled a longer distance, even though the brush rotated the same amount as before. If we graph the behavior of a wheel across these various conditions, you get something like this. With no traction, there’s no slip, and so there’s also no creep. But as traction goes up, a bigger part of the contact patch is slipping, and so its relative motion to the track, its creep, goes up. Eventually you reach a point where the entire contact patch slips, and the traction force levels off. You can spin and spin, but you’ll never develop more force.

Of course, that graph is a theoretical situation under ideal conditions. Your intuitions might be saying that a wheel that’s fully sliding on the rail has less traction than one that has at least some stick, and you’d mostly be right. For lots of materials, the “dynamic” friction coefficient when something is sliding, like my little sled demo, is less than the coefficient of friction when there’s no relative movement. That gives rise to this effect called stick-slip, where you get oscillation between sliding and sticking. A violin bow is a great example: the friction from the hairs in the bow stick, then slide, along the string, causing it to vibrate and create beautiful music.

On a locomotive, it’s less desirable. Stick-slip can lead to corrugation of the rail and unwanted noise. It was a notorious problem for steam locomotives because the traction force at the wheel rim was always fluctuating. But the other effect this difference in static versus dynamic friction creates is that the traction versus creep curve in the real world often looks more like this. There’s a maximum in there, and if you go past it toward greater slip, you get a lot less traction.

And that’s the trick many modern locomotives take advantage of. Sophisticated creep control systems can monitor each wheel individually and vary the tractive force to try and stay at the peak of that curve. Eking out a few more percentage points on the friction coefficient means you can take better advantage of your power, and sometimes even use fewer locomotives than would otherwise be required, saving fuel, cost, and wear and tear.
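
Just to illustrate the idea (and only the idea; this is not any manufacturer’s actual algorithm), here’s a toy hill-climbing loop that nudges wheel creep up and down to stay near the peak of an invented traction-versus-creep curve:

```python
# Toy "creep control" loop: nudge wheel creep up or down to stay near the peak of an
# assumed traction-versus-creep curve. The curve shape and step sizes are invented.
def traction(creep):
    # Rises steeply, peaks, then falls off as more of the contact patch slides (shape only)
    return creep * 2.718 ** (-creep / 0.08)

creep = 0.01        # starting relative slip between wheel rim speed and track speed
step = 0.005
last = traction(creep)

for _ in range(40):
    creep += step
    now = traction(creep)
    if now < last:            # past the peak: reverse direction and search more finely
        step = -step / 2
    last = now

print(f"Settled near creep of about {creep:.3f} with traction {last:.4f} (arbitrary units)")
```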

All that complexity, and you still might be wondering, why all the trouble when you could just use a different material with a higher friction coefficient, like the rubber tires on cars? And the answer is just that everything comes with a tradeoff. Some passenger rail vehicles do use rubber tires, and some locomotives have steel “tires” that can be removed and replaced. But I think those F1 tires are a perfect analogy. You generally use the soft sticky ones when you want to gain track position and switch to the harder, more durable tires to maintain position without losing too much time in the pits. But pit stops for freight trains are pretty expensive. If you keep following that logic to more and more durable tires that can carry multiple tons of weight across hundreds of thousands of miles, you just end up with a steel wheel on a steel rail, and you find other ways to get the traction that you need.


How The Channel Tunnel Works

January 16, 2024 by Wesley Crump

[Note that this article is a transcript of the video embedded above.]

2024 marks thirty years since the opening of the channel tunnel, or chunnel, or as they say in Calais, Le tunnel sous la Manche. This underground/undersea railroad tunnel connects England with France, crossing the narrowest, but still not that narrow, section of the English Channel. The tunnel allows passengers (and, in many cases, their cars, too) to cross the channel in just over half an hour at speeds as high as 99 mph! While there are longer tunnels out there, this is the longest underwater tunnel in the world.

When it was proposed in the mid-1980s, it was set to be the most expensive construction project ever, and like so many mega projects, it went way over budget and opened a year late. But unlike many megaprojects, this one was funded entirely by private investors. That’s a good thing, too, because it hasn’t exactly been a mega-financial success. The BBC once said that, "Depending on your viewpoint, the Channel Tunnel is one of the greatest engineering feats of the 20th Century or one of the most expensive white elephants in history.”

Elephant or not, the tunnel is legendary among engineers, and in light of the 30th anniversary, I thought it was about time I dug into it. It is a challenging endeavor to put any tunnel below the sea, and this monumental project faced some monumental hurdles. From complex Cretaceous geology, to managing air pressure, water pressure, and even financial pressure, there are so many technical details I think are so interesting about this project. I’m Grady, and this is Practical Engineering; today, we’re talking about the channel tunnel.

[musical transition]

The idea of building a permanent connection between England and France across the English Channel isn’t a new one. An engineer way back in 1802 came up with a plan for a horse-and-buggy tunnel intended to be lit with oil lamps, featuring an artificial island midway for horse changes, and some pretty scary ventilation chimneys. Needless to say, that idea didn’t go anywhere. In 1882, another proposed tunnel got a bit further, a few kilometers further, in fact. Several thousand meters of tunnel were actually dug before political pressures regarding the fear of future potential invasions killed the project. By the 1970s, another attempt to build a tunnel broke ground, but that project fell through, too. It wasn’t until the mid-1980s that the proposal for the tunnel as we know it was accepted and work began in earnest. A handful of other proposals were also considered at the time, including an even more ambitious project featuring an absolutely enormous suspension bridge 70 meters above the sea using an exotic fiber called parafil and carrying traffic within huge concrete tubes.

Unsurprisingly, this monster bridge did not get selected. The plan for an underground electric railroad connection won, and work began on the channel tunnel. But it’s not so much a tunnel as three separate tunnels with a variety of connections between them. There are two main railway tunnels, each with one-way service across the channel, and a third service tunnel that runs between the two. The three tunnels each began on either side of the channel and, pretty impressively, met in the middle, deep under the sea bed, with an offset of less than two feet. They were even able to incorporate some of the work of those previous failed attempts.

The accuracy of this dig is even more impressive when you consider that the tunnels aren’t level or straight. The geology of the English channel is, putting it mildly, a bit complicated. There are layers of different kinds of sedimentary formations, and the project was designed to follow the path of a layer known as chalk marl, although some geologists call it marly chalk. This layer was less permeable and had fewer cracks and fissures than the overlying material. But that doesn’t mean there were NO fissures. The marly chalk was the best option for tunneling under the channel, but it was still far from simple.

In some ways, those past proposals and attempts to build the channel tunnel failed because the technology just hadn’t reached the level to make a project like this feasible. But by the 1980s, one piece of equipment had made huge strides in efficiency and safety. With the creative flair you’d expect from any civil engineer, they are aptly named: Tunnel Boring Machines, or TBMs. Drilling is just one of the multitude of jobs that happen in a tunneling operation, and TBMs manage to combine and accomplish them all in one massive and incredibly complicated machine.

There are lots of different styles and sizes of TBM, and the channel tunnel used a total of eleven separate machines to finish the job. Most of us are familiar with the process of drilling a hole, but doing it through soil and rock, underwater, across a vast distance, as you can imagine, adds some nuance to the process. For one, there are no drill bits that extend for miles, so the whole machine has to fit inside the tunnel it’s creating. For two, there are no big hands to push at the back of the drill. Instead, tunnel boring machines grip onto the tunnel walls and use hydraulic cylinders to provide the thrust forces needed to advance forward. For three, except in the most ideal circumstances, the hole of a tunnel is always trying to collapse. TBMs use a cylindrical shield at the front to support the walls of the tunnel until they can be permanently lined with cast iron or concrete and sealed with grout for strength and water resistance.

Also, there’s pressure. The soil, rock, and water deep below the ground are under immense pressure. When you try to excavate, especially in softer soils like those encountered on the French side of the project, they have the potential to collapse or flood the operation. Many of the TBMs used in the channel tunnel project were called earth pressure balance machines. Here’s how they work: The rotating cutter head chews through rock and soil, allowing it to pass through openings into a chamber behind where it is mixed into a pliable paste. As the machine moves forward, the pressure in the excavation chamber builds to match the earth and water pressure on the tunnel face, supporting it against collapse and preventing uncontrolled inflow of water. A screw conveyor creates a controllable plug. Its speed is carefully adjusted to remove only enough of the cuttings to maintain this balance.
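
To illustrate that balancing act (and only to illustrate it; real machines are far more sophisticated), here’s a toy control loop where the screw conveyor speed is adjusted so the chamber pressure tracks an assumed face pressure. Every number and gain here is invented for illustration:

```python
# Toy earth-pressure-balance loop: adjust the screw conveyor speed so the pressure of the
# muck in the chamber tracks the earth-and-water pressure at the face.
face_pressure_bar = 3.0        # assumed ground + groundwater pressure at the tunnel face
chamber_pressure_bar = 2.4     # current pressure of the excavated paste in the chamber
advance_inflow_bar = 0.30      # pressure added per step as the machine advances and ingests soil
relief_per_speed_bar = 0.05    # pressure relieved per step, per unit of screw conveyor speed

for step in range(8):
    error = chamber_pressure_bar - face_pressure_bar
    screw_speed = max(6.0 + 8.0 * error, 0.0)   # chamber too low -> slow the screw, keep spoil in
    chamber_pressure_bar += advance_inflow_bar - relief_per_speed_bar * screw_speed
    print(f"step {step}: screw speed {screw_speed:4.1f}, chamber {chamber_pressure_bar:4.2f} bar")
```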

Even that wasn’t enough in some cases. Water flowing into the excavated tunnel was a constant problem, making it difficult to work and damaging equipment. In many cases, the crews would inject grout into the rock ahead of the machine, effectively making it stronger before drilling through it. Imagine trying to drill a hole through a big bag of rocks and water. The drill bit would be easier to push, but it sure would make a mess. Grouting the rock ahead of the operation made it more physically challenging to drill through, but it simplified the process considerably. There are so many examples like that, where tunneling knowledge and experience improved drastically, just from running into problems and using trial-and-error to solve each one.

Most TBMs come with a train of equipment to support and power the operation behind the cutter head and lining systems. Each machine is basically its own factory with a workshop, cranes, transportation facilities, and more. And like any factory, you need a way to get materials and people in and out. Workers, lining segments, equipment, and materials travel to the machine from the entrance of the tunnel, often over miles on a temporary railway. And all the excavated spoils have to travel the same distance, often on conveyor belts, in the opposite direction. On the French side of the Channel Tunnel, the spoils were pumped as a wet slurry to a nearby area known as Fond Pignon. On the British side, the spoil was used to construct an extra 111 acres of new England. Well, not New England, but a portion of England that was new. This is now the site of the UK side’s cooling plant, but also a new nature reserve called Samphire Hoe.

Keeping the tunnel headed in the right direction was another challenge. For one, they needed to stay in the right geological layer to reduce the challenges of drilling through unstable ground. Of course, engineers had mapped the geology ahead of time but only using core samples from the surface. Those cores only provide a thin, tiny snapshot of what lies below, like trying to navigate a car by looking through a paper towel tube. And for two, they were drilling from both directions with the goal of meeting in the middle. The TBMs were guided with a sophisticated laser system to keep them on track as they tunneled through the marly chalk. Without a direct line of sight to the surface, surveyors had to set benchmarks along the tunnels with extreme accuracy. Any error in the measurements would propagate, since there was no way to “close the loop.” Crews also regularly took core samples, horizontally and vertically, along the way to keep the tunnel within the target geologic layer.

One of the ingenious parts of the channel tunnel design was for the service tunnel to lead the rest of construction. In a way, this tunnel was the pilot. It was a way to explore the geology with less risk, encountering the challenges on a smaller scale before making progress on the main tunnels. It was also a way to confirm the guidance and ensure that the tunnels were aligned properly when they met in the middle, which, to the relief of many, they famously did in 1990. For the first time since the ice age, there was a dry-land route from mainland Europe to Great Britain. Several of the TBMs were left and buried underground after they finished, since the cost of getting them out was too high. Now they serve as an electrical earth connection.

Connecting a hole in the ground all the way across the channel is only part of the story, though. Many more engineering challenges lay ahead. As I mentioned, there are three tunnels: two large, one-way rail tunnels with diameters of 7.6 meters (nearly 25 feet) with a 4.8 meter (16 ft) diameter service tunnel running between them. But that’s not all the tunnels. There are two enormous crossover caverns where the two rail tunnels merge. During normal operation, gigantic steel doors keep the two sides separated, but they can be opened, allowing trains to cross over from one tunnel to the other. This means the tunnel can shut down large sections without the need to fully suspend train service.

The service tunnel connects both rail tunnels every 375 meters with cross passages. These allow for emergency escape from the rail tunnels should an accident or fire occur. And they’ve been used for evacuation in several cases in the past 30 years, including fires in 1996 and 2008. The air pressure in the service tunnel is higher than that in the rail tunnels so smoke can’t travel in. There are special, rubber-tired vehicles that are kind of like miniature trains, called the Service Tunnel Transport System or STTS. Of course, passenger egress is possible with these vehicles, but they are primarily, and ideally, used for shuttling staff to various locations along the tunnel.

Another engineering problem is created by the nature of trains passing through very long tunnels. On ordinary outdoor tracks, the air in front of a train gets pushed aside fairly effortlessly by the leading face of the locomotive. In a tunnel, the train acts kind of like a big piston, driving a pressurized slug of air in front of it the whole way down the tube. The rapid fluctuations in air pressure create drag on the trains, affect passenger comfort, and mess with ventilation systems. To solve this piston effect problem, a series of 2-meter-wide connections called piston relief ducts allow for controlled passage of air from one tunnel into the other, giving that chunk of air a place to go instead of just riding in front of the locomotive the whole way. A funny part of the engineering of the tunnel was investigating whether this long tube with regularly spaced holes would function like a big flute. Thankfully, it didn't end up being an issue.

Getting fresh air along the tunnels is another concern. And here again, the service tunnel shows its value. In addition to providing access to maintenance vehicles and an evacuation route, it also acts as a duct, delivering fresh air along the length of the main tunnels, allowing the stale air to discharge at the tunnel entrances. There is also a supplementary ventilation system that can pump air directly into the rail tunnels in the event a passenger train becomes immobilized.

Along with ventilation, the tunnel also has to manage heat. The trains use electricity for traction, but some of that energy is lost as heat through inefficiencies and friction. In ordinary railroad situations, this would be no big deal since the atmosphere can easily dissipate this heat. But engineers estimated that the trains would raise the temperature in the tunnel to 122 F (50 C). So, the project also required Europe’s largest cooling system. Enormous chilling plants were built on either side of the tunnel, and miles and miles of pipes carry chilled water throughout the tunnel, keeping it at a relatively cool 95 F (35 C). Air conditioners on the trains bring this down to something more bearable for passenger comfort, rejecting more heat that has to be managed by the tunnel cooling system.

Of course, being a rail link between the two countries, the Channel tunnel is flanked by enormous rail terminals on either side, one in Folkestone, UK, and an even larger terminal located near Calais. There’s a shuttle that allows passengers to bring their vehicles along with them, effectively connecting the highways of France and the UK at the terminals. There’s also a passenger train service that crosses through the tunnel, and with the addition of High Speed 1, or HS1, in 2007, it is now possible to take the train from London to Paris and beyond.

The ordinary shuttle trains run on a loop, meaning that at each terminal, there is a track that goes from the exit of one tunnel, loops around, and then enters the other tunnel. In order to avoid uneven wheel wear from always turning in one direction like a NASCAR race, the French side features a crossover, which makes the whole tunnel loop into a huge figure 8. People aren’t the only cargo that passes through the channel tunnel, though. Freight makes its way as well. There are services for heavy trucks that get placed on trains, and there’s even a club car for the drivers to hang out in during passage under the channel. Full-on freight trains also pass through the tunnel, with service continuing past the terminals on either side.

Clearly, the channel tunnel is a triumph of modern civil engineering, and engineers around the world study its design and construction today. It wasn’t all something to celebrate, though. Like so many mega projects, there was a human cost to building the tunnel. More than ten workers perished in the construction of the project. Of course, it is absolutely unacceptable to trade safety for construction speed, even on the biggest construction project in the world, and after multiple lawsuits and investigations, things improved, and the remainder of the project saw far fewer safety incidents. The tunnel has also played a complicated role in illegal immigration and asylum-seeking in the UK, including some tragic incidents involving migrants.

The project also went significantly over budget, which is saying something since it was already slated to be the MOST EXPENSIVE construction project in history. I have a whole video that talks about some of the reasons projects like this end up costing more than we expect, so I won’t go into all those details here. The Channel Tunnel is unique in that it was privately funded, unlike most large infrastructure projects of its kind. The vast majority of the financial burden and risk was taken by banks and individual investors, and there was even a public offering. There aren’t many infrastructure projects that you can buy a share of. Over time, the tunnel has slowly turned a profit, but it’s been less lucrative than predicted. While it may be the most epic way to cross the English channel, it certainly isn’t the ONLY way. Discount airlines in Europe are far more prevalent than they were in the 1980s, and in many cases, it is more desirable and economical for travelers to just fly, especially if their ultimate destination is not the south coast of England or the north coast of France. Plus, for thousands of years, people have crossed the channel by sea. Ferries are still a totally viable and economically competitive way to cross. It might seem a little crazy to choose a ferry over the sense of wonder and delight that comes with passage through one of the most incredible tunnels in history, but maybe some people just like boat rides.

A lot has changed over the 30 years since the Channel Tunnel was completed. Construction technologies, of course, but transportation infrastructure as a whole has evolved as well. There’s probably a lot we would change about the channel tunnel if we could go back to those days when the project was first conceived, but actually, many would argue that perhaps it shouldn’t have been built at all. Knowing what we know now about the complexity of the job in a world of cheap flights, ferries, dynamic international relations, and 21st-century financial markets, it might be a bit harder to show that the costs would be outweighed by the benefits. But that’s part of the rub with megaprojects: it’s impossible to cleanly weigh their wide-ranging impacts on the world, and the benefits they provide, against an alternative where they don’t exist. Just last year, construction finished on a high-voltage electric interconnection between the UK and France through the tunnel, a project that may not have even been considered if the tunnel wasn’t already there. It’s easy to criticize the optimism required to justify huge, expensive projects in the face of an uncertain future, but projects like the Channel Tunnel create opportunities and benefits that permeate society in unique and often intangible ways.

I’m an engineer, so I see the achievement through a technical lens. It is, without a doubt, one of the most spectacular engineering feats of history. For me, that’s worth celebrating in its own right, from the intensive geological research leading up to the project, to the massive TBMs eating through so many miles of marl, from the creative ventilation and piston relief systems, to the unsung hero of the service tunnel. Whether or not it was a strictly practical idea, I’m glad it’s there. I haven’t had the opportunity to travel from Folkestone to Calais just yet, but if and when I do, I know how I’m getting there, and it’s not a ferry.

January 16, 2024 /Wesley Crump

How Railroad Crossings Work

January 16, 2024 by Wesley Crump

[Note that this article is a transcript of the video embedded above.]

If you’ve ever ridden a bike, driven a car, or operated pretty much any other vehicle on earth, there’s a fact you’ve probably taken for granted: you can see farther than it takes to stop. Within the span between seeing a stationary hazard and colliding with it, you have enough time to recognize it, apply the brakes, and come to a stop to avoid a collision. Your sight distance is greater than your stopping distance; it sounds almost silly, but this is a critical requirement for nearly all human-operated machines. But it’s not true for trains.

Engineers can see just as far as the rest of us, but the stopping distance of a fully laden freight train can be upwards of a mile. That means if an engineer can see something on the tracks ahead, it’s often already too late. So, trains need a lot of safety infrastructure to make up for that deficiency. For one: trains almost always have the right-of-way when they cross a road or highway at the same level, or at grade. The cars have to wait. And we use a litany of warning devices at grade crossings to enforce that right-of-way and try to prevent collisions. In most cases, these devices have to detect the impending arrival of a train and give motorists enough time to clear the tracks or come to a stop. It sounds simple, but the engineering that makes that possible is, I think, really interesting, and of course, I built some demonstrations to help explain. This video is part of my series on railroads, so check out the rest after this if you want to learn more! I’m Grady, and this is Practical Engineering. Today, we’re exploring how grade crossings work.

It’s inevitable that roads cross railroad tracks, and it’s just not feasible to build a bridge in every case. In the US alone, there are over 200,000 grade crossings where cars and trains must share the same space. A car to a freight train is an aluminum can to a car: in other words, there’s a pretty big disparity in weight. So we’ve put a lot of thought into how to keep motorists, cyclists, and pedestrians safe from the trains that can’t swerve or stop for a hazard. You’ve probably stopped for a train at a crossing, but you may not have consciously taken stock of all the safety features.

Of course, the locomotives at the front of trains themselves have warning devices, including bells, bright headlights, smaller flashing ditch lights, and most noticeably, the blaring horn. The standard pattern at a crossing is two long blasts, one short blast, and one final long blast. But the crossing has warnings too. Passive warning devices don’t change with an approaching train. They include stop or yield signs, the crossbuck, which is the international symbol for a railroad crossing, and sometimes a plate saying how many tracks there are so you know whether to look for one train or many. Another crossbuck is usually included as a pavement marking to make sure you know what’s coming up. Many low-traffic crossings have only passive safety features, leaving it up to the driver to look out for trains and proceed when it’s safe. But many crossings leave a little less margin for error. That’s when the active warning devices are installed.

A typical grade crossing features both visual and audible warning signals that a train is coming: red lights flash, a mechanical or electronic bell sounds, and usually a gate drops across oncoming lanes. That seems pretty simple, but there’s quite a bit of complexity in the task, and the consequences if anything goes wrong are deadly. And the first part is just knowing if a train is coming.

Detecting a train is important for grade signals (it's also important for signaling trains about OTHER trains, but that's a topic for another video). It can be handled in a bunch of ways, but the simplest take advantage of the electrical conductivity of the steel rails and wheels themselves. A basic track circuit runs current up one rail, through a device called a relay I’ll explain in a minute, and back down the other rail. When a train comes along with its heavy steel wheels and axles, it creates a short circuit, a preferential path for the current in the track circuit. That deenergizes the relay, triggering all the connected warning devices or signals. But why use an ordinary old diagram when you have a model tank car, and an old railroad relay you got off eBay? Let me show you how this works in a real demonstration.

On the left, I’ve hooked up a power supply to the tracks, putting a voltage between the two rails. On the right side, I’ve attached a relay. Let’s take a look inside it to see what it does. I love playing with stuff like this. At its simplest, a relay is just an electromechanical switch: a way to turn something on or off with an electrical signal. When I energize the coil (at the bottom), it acts as an electromagnet, pulling a lever towards it. On the other side of the lever, you can see the movement interacting with several electrical contacts. It’s a little tough to see here, but these contacts are like switches that can control secondary circuits. Some will be switched on when the relay is energized, and others are switched off. When the relay is energized or de-energized, it basically flips the switch on these circuits, allowing various devices, like lights, bells, and gate arms, to be activated or deactivated. In my case, I have a simple battery and LED to indicate whether or not a train is being detected on the rails.

When there’s no train, current passes through the relay from one rail to the other, energizing the coil and holding the switch open so the LED stays dark. When I put a railcar on the tracks, the circuit changes. The wheels and axles create a short circuit (or shunt), a low-resistance path for current to flow, essentially bypassing the relay. The coils in the relay de-energize, closing the switch and lighting the LED to warn any nearby tiny drivers that a train is present on the tracks. It all depends on the train giving a preferential current path, which can be a problem if there are leaves or rust on the rails. You can see how shiny and clean tracks look when they’re in frequent use. Tracks that haven’t seen a train in a day or more often impose a speed restriction on the first train just in case there is rust that could affect the track circuits along the way.

If all this circuitry seems a little convoluted to simply detect the presence of a train, it’s because of how this simple track circuit behaves when things go wrong. Let’s say the track circuit loses power; what happens? The relay deenergizes and falls back to the safest condition: assuming a train is occupying the tracks. Same thing if a rail cracks or breaks: the relay deenergizes and the light comes on. This is called failsafe operation, or as the engineers prefer to call it: fail to a known condition. If anything goes wrong, we want the default assumption to be that there’s a train coming because it might be true. Failsafe operation isn’t just in the track circuit but in the warning devices too. Gates are actively held up with a powered brake. If power is lost, they fall just by gravity alone. And the bells and lights are usually powered by banks of batteries that can last for hours or days. Most modern train detection systems have moved to more sophisticated equipment, but relays are still used around the world because of their reliability. In fact, this is called a “vital” relay because of all the features that make it extremely unlikely to fail. You can see it acts slowly so that the inevitably noisy signal of a train shunting the tracks can’t cycle it on and off over and over; the armature assumes the de-energized position even if the spring breaks; the contacts use special materials to keep from welding together; and they’re just really robust and beefy to make sure they last for decades.
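
If it helps to see the fail-safe idea written out, here’s a minimal sketch of the logic in Python. It’s just an illustration of the concept from my demo, not anything a real signaling system runs: the warning comes on whenever the relay drops out, for any reason.

```python
# A minimal sketch of fail-safe track-circuit logic, just to illustrate the
# concept. The relay coil only stays energized when the circuit is healthy AND
# nothing is shunting the rails; any failure drops the relay and triggers the
# warning devices.

def relay_energized(power_ok: bool, rail_intact: bool, train_shunting: bool) -> bool:
    """The coil sees current only if the circuit is complete and unshunted."""
    return power_ok and rail_intact and not train_shunting

def warning_active(power_ok: bool, rail_intact: bool, train_shunting: bool) -> bool:
    """Warning devices run whenever the relay is de-energized (fail-safe)."""
    return not relay_energized(power_ok, rail_intact, train_shunting)

# A train, a broken rail, or a power failure all produce the same safe result:
assert warning_active(power_ok=True, rail_intact=True, train_shunting=True)
assert warning_active(power_ok=True, rail_intact=False, train_shunting=False)
assert warning_active(power_ok=False, rail_intact=True, train_shunting=False)
# Only a healthy, unoccupied circuit keeps the warning off:
assert not warning_active(power_ok=True, rail_intact=True, train_shunting=False)
```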

But even though assuming a train is coming is the safest way to manage problems, it’s not without its own challenges. Warning devices depend on trust, and that’s an extremely tenuous confidence to ask of a motorist. We are naturally dubious of automated equipment. Every time a grade crossing activates and no train comes, that trust is eroded, making motorists more likely to drive around the gates. So failing safe isn’t enough; we also need to make sure that failure is rare. Current leaking between the tracks through water, plant growth, or debris can falsely trigger warning devices. So railroads put a lot of time into keeping tracks clean and the coarse gravel below the tracks (called ballast) freely draining to prevent water from pooling up. And although maintenance workers can manually trigger the devices by shunting current across the tracks for testing, they do it sparingly to avoid holding up road traffic.

But maybe you’ve spotted a flaw in this simple track circuit. If not, let me point it out. It’s all to do with where you put the boundaries. If the circuit only extends a short distance from the crossing on either side, there’s no warning time: by the time the train is detected, motorists wouldn’t be able to clear the intersection or come to a stop. But if the circuit extends far enough beyond the crossing to give adequate warning time, motorists will have to sit and wait well after the train is past before it comes off the track circuit and the warning devices turn off. So, instead of a single track circuit, most crossings use three: two approaches and an island. Let me show you how this works with another demo.

Now I have three track circuits set up with power going to each one. The rails are separated by a small gap to avoid an inadvertent connection across the circuits. On actual railroads, you can often identify insulated joints used to isolate the track circuits. They can be hard to distinguish if the insulating material matches the profile of the rail itself, but they’re often painted to be easy to spot. A three-circuit configuration requires a little bit of logic to decide when to turn on the warning devices and when to turn them off. So, despite the fact that I have the coding skills of a civil engineer, I put this demo together using an Arduino microcontroller. The model railroad folks are surely shaking their heads at this. You can see my LEDs as I roll the train along the tracks, indicating which of the circuits is detecting the presence of a train: from approach to island to the other approach. And here’s how the logic works.

When a train is detected on either approach circuit, it immediately activates the warning devices. The lights flash, bell sounds, and gates drop. As the train keeps moving toward the crossing, it’s detected on the island circuit too. The circuit effectively takes over control of the warning devices. They’ll stay on for as long as a train is occupying the island circuit. But as soon as the island is unoccupied, the warning devices turn off (even though one of the approach circuits is still detecting a train). You can see how just a little bit of logic makes it possible to give some warning time for motorists before the train arrives at the intersection without keeping them stuck behind gates after the train has passed. But, how much warning time is enough?
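
Before we get to that, the decision logic from my demo is simple enough to sketch out. Here it is in Python rather than the Arduino code I actually used, and with none of the extra edge cases (like a second train arriving while the first departs) that a real crossing controller has to handle:

```python
# Toy version of the three-circuit logic: two approach circuits and an island.
# This is a sketch of my demo's behavior, not real crossing controller code.

class CrossingController:
    IDLE, APPROACHING, ON_ISLAND, DEPARTING = range(4)

    def __init__(self):
        self.state = self.IDLE

    def update(self, approach_a: bool, island: bool, approach_b: bool) -> bool:
        """Feed in circuit occupancy each cycle; returns True when the
        lights, bell, and gates should be active."""
        if island:
            self.state = self.ON_ISLAND       # train in the crossing itself
        elif self.state == self.ON_ISLAND:
            self.state = self.DEPARTING       # island just cleared: release the gates
        elif self.state == self.DEPARTING:
            if not (approach_a or approach_b):
                self.state = self.IDLE        # departing train is fully clear
        elif approach_a or approach_b:
            self.state = self.APPROACHING     # train on an approach: warn immediately
        else:
            self.state = self.IDLE
        return self.state in (self.APPROACHING, self.ON_ISLAND)

ctrl = CrossingController()
# A train rolling through from one side: approach -> island -> other approach -> gone.
for a, i, b in [(1, 0, 0), (1, 1, 0), (0, 1, 1), (0, 0, 1), (0, 0, 0)]:
    print(ctrl.update(bool(a), bool(i), bool(b)))
# True, True, True, False, False: the gates rise as soon as the island clears.
```

The key design choice is that the island circuit, not the approaches, decides when the gates can come back up.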

In the US, the minimum requirement is 20 seconds between activation of the warning devices and the arrival of a train, but it’s typical to see 30 or 45 seconds. You might think that the more warning time the better, but it’s a balance. Too much warning time, and motorists might become impatient and drive around the gates, so more time can actually make crossings less safe. For the three-circuit example in the demonstration, the only control you have over warning time is where to start the approach circuit. The farther away from the crossing it begins, the more warning time you get. But the exact time depends on the speed of a train. Since the approach is fixed in place, a slow train will provide lots of warning time, and a fast train will provide less. And a train stopped on an approach circuit before it even reaches the crossing will hold the gates down indefinitely. So the next step in grade crossing complexity takes speed into account.
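
To put some rough numbers on that fixed-approach tradeoff, here’s a quick back-of-the-envelope calculation. The speeds and warning time are just examples I picked, not any railroad’s actual standards:

```python
# Back-of-the-envelope numbers for a fixed approach circuit. The speeds and
# warning time below are illustrative values, not real design standards.

MPH_TO_FPS = 5280 / 3600  # miles per hour to feet per second

def approach_length_ft(max_speed_mph: float, warning_time_s: float) -> float:
    """Approach length needed to give the full warning time at the fastest train speed."""
    return max_speed_mph * MPH_TO_FPS * warning_time_s

def warning_time_s(approach_ft: float, train_speed_mph: float) -> float:
    """Warning time that same approach actually gives a train at another speed."""
    return approach_ft / (train_speed_mph * MPH_TO_FPS)

approach = approach_length_ft(max_speed_mph=60, warning_time_s=30)
print(f"Approach length: {approach:.0f} ft")                             # 2640 ft, half a mile
print(f"60 mph train: {warning_time_s(approach, 60):.0f} s of warning")  # 30 s
print(f"20 mph train: {warning_time_s(approach, 20):.0f} s of warning")  # 90 s of gate-down time
```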

I put a little acoustic distance sensor on my Arduino so I can try to estimate the speed of an oncoming train. The large cardstock cutout just helps my sensor to ‘see’ the train a little better. The Arduino measures the distance over time, converts that to an approximate speed, and guesses how long it will take the train to arrive at the crossing. If the expected arrival time is longer than the warning time I programmed in, nothing happens. But if an arrival is expected within the warning time, the devices are activated.

You can see if I approach the intersection slowly, the gates don’t drop until I’m relatively close to the crossing. And if I speed things up, the gates drop when I’m farther away, anticipating the faster arrival of the train. In theory, this type of sophistication means that the warning time at a crossing will always be the same, no matter the speed of the train. But it doesn’t just solve that problem. If you have ever sat at a railroad crossing while a train is stopped on the approach circuit, you know the frustration it causes. A grade crossing predictor avoids the issue. You can see as I move my train toward the crossing, the devices activate assuming the train will cross. But when I stop short, the predicted arrival time goes effectively to infinity, and the controller opens the gates back up.
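
The gist of that predictor is simple enough to sketch too. Here’s a toy version in Python of what my Arduino demo does, estimating speed from successive distance readings; real grade crossing predictors are, of course, much more sophisticated than this:

```python
# Toy grade-crossing predictor: estimate speed from two distance readings and
# only activate the devices when the predicted arrival falls inside the
# programmed warning time. A sketch of my demo, not real signaling code.

WARNING_TIME_S = 30.0  # warning time I programmed in (an example value)

def should_activate(dist_prev_m: float, dist_now_m: float, dt_s: float) -> bool:
    speed = (dist_prev_m - dist_now_m) / dt_s   # positive if the train is approaching
    if speed <= 0:
        return False                            # stopped or backing away: arrival is "never"
    time_to_crossing_s = dist_now_m / speed
    return time_to_crossing_s <= WARNING_TIME_S

print(should_activate(1000.0, 990.0, 1.0))  # 10 m/s but 99 s away -> False, too early
print(should_activate(300.0, 290.0, 1.0))   # 10 m/s and 29 s away -> True, start the warning
print(should_activate(200.0, 200.0, 1.0))   # stopped short of the crossing -> False, gates stay up
```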

Of course, actual crossings don’t use sonar to predict the speed of a train. In most cases, they use track circuits with an alternating current. A train interacts with the frequencies of the circuit as it travels along the rails, giving the sensors enough information to detect the presence and speed. Sometimes you can even hear these frequencies since they’re often in the audible range. AC track circuits are also used for electric train systems because they are less susceptible to interference from the traction currents in the rails used to drive the trains.

Another challenge with grade crossings happens in urban areas where signalized intersections are present near the railway. Red lights can cause a line of vehicles to back up across the tracks. You should never drive over a railway until you know it’s clear on the other side. But, if you’re not paying attention, it can be easy to misjudge the available space and find yourself inadvertently stopped right on top of the tracks. Traffic signals near grade crossings are usually coordinated with automatic warning devices. When a train is approaching, the signal goes green to clear any queue blocking the tracks.

Equipment for everything from the most basic track circuits to the most sophisticated systems, including relays, microcontrollers, backup batteries, and more, is usually housed in a nearby bungalow or cabin that is easy to spot. In the US, every grade crossing has its own unique identifier, and they all have a phone number to call if something isn’t working correctly. Railroads take reports seriously, so give them a call if you ever see something that doesn’t look right. If you want to see a lot of these grade crossing systems in action, check out my friend Danny’s channel, Distant Signal, for some of the best railfan videos out there. We depend on trains for a lot of things, and in the opinion of many, we could use a few more of them in our lives. Despite the hazard they pose, trains have to coexist with our other forms of transportation. Next time you pull up to a crossbuck, take a moment to appreciate the sometimes simple, sometimes high tech, but always quite reliable ways that grade crossings keep us safe.

January 16, 2024 /Wesley Crump

How Engineers Straightened the Leaning Tower of Pisa

December 19, 2023 by Wesley Crump

[Note that this article is a transcript of the video embedded above.]

Long ago, maybe upwards of 1-2 million years ago, a river in the central part of what’s now Italy emptied into what’s now the Ligurian Sea. It still does, by the way, but it did back then too. As the sea rose and fell from the tides and the river moved sediment downstream, silt and soil were deposited across the landscape. In one little spot, in what is now the city of Pisa, that sea and that river deposited a little bit more sand to the north and a little bit more clay to the south. And no one knew or cared until around the year 1173, when construction of a bell tower, or campanile (camp-uh-NEE-lee), for the nearby cathedral began. You know the rest of this story. For whatever reason, we humans love stuff like the Leaning Tower. There’s just something special about a massive structure that looks like it’s about to fall over. But you might not know that it almost did. Over the roughly six centuries from when it was built to modern times, that iconic tilt continued to increase to a point in 1990 when the tower was closed to the public for fear that it was near collapse. The Italian Government appointed a committee of engineers, architects, and experts in historical restoration to decide how to fix the structure once and for all (or at least for the next several centuries, we hope). And the way they did it is really pretty cool, if you’re into recreational geology and heavy construction. And, who isn’t!? I’m Grady, and this is Practical Engineering. Today we’re talking about the Leaning Tower of Pisa.

Five-and-a-half degrees. That was the average tilt of the tower in 1990 when all this got started. I have to say average because the tilt isn’t the same all the way up. And actually, that fact makes it possible to track the history of the lean back before it was being monitored. Construction of the tower started in 1173, and it had reached about a third of its total height by 1178 when work was interrupted by medieval battles with neighboring states. When work started back up nearly a century later, the tower was already tilting. But the masons didn’t tear it down and start over; they just made one side taller than the other to bring the structure back into plumb. By 1278, the tower had reached the seventh cornice, the top of the main structure minus the belfry, when work was interrupted again. One short century later, the belfry was finally built, and again with a relative tilt to the rest of the structure to correct for the continued lean. On the south side of the belfry, there are six stairs down to the main tower; on the north side, only four. The result of all this compensation by the builders is that the Leaning Tower of Pisa is actually curved. Knowing the timeline of construction and how the tilt varies over the height of the structure allowed historians to estimate how much sinking and settling the foundation underwent over time. By 1817, when the first recorded measurement was taken, the inclination of the tower was about 4.9 degrees, and it just kept going.

The new committee charged with investigating the issue first spent a lot of their time simply characterizing the situation. They drilled boreholes and tested the soil. They estimated stability using simple hand calculations. They built a scale model of the tower and tested how far it could lean before it toppled. They developed computer models of the tower and its foundation to see how different soil characteristics would affect its stability. All of the analysis and the various engineering investigations pointed toward the same result: the tower was very near to collapse. In 1993, one researcher estimated the factor of safety to be 1.07, meaning (generally) that the underlying soil could withstand a mere 7 percent more weight than the tower was imposing on it. There was basically no margin left to let the tower continue its lean. A similar tower in Pavia had collapsed in 1989, and the committee knew they needed to act quickly.

To start, they installed a modern monitoring system that could better track any movement over time, including surveying benchmarks and inclinometers. I have a video all about this type of instrumentation if you want to learn more after this. The committee also opted to take immediate temporary measures to stabilize the tower with something that could eventually be removed before developing a permanent fix. They built a concrete ring around the base of the tower and gradually placed lead ingots, about 600 tons in total, on the north side to act as a counterweight to the overhanging structure. As they added each layer of counterweights, they monitored the tilt of the tower. It was ugly, but it worked. For the first time in history, the tower was moving in the right direction. A few months after they finished the project, the tower settled into a tilt that was about 48 arcseconds or a hundredth of a degree less than before.

In fact, it worked so well, the committee decided to take it one step further. To reduce the visual impact of all those lead weights, they proposed to replace them with ten deep anchors that would pull the northern side of the tower downward to the ground like huge rubber bands. This fix didn’t go quite so smoothly. The engineers had assumed that the walkway around the base of the tower, called the Catino, was structurally separate from the tower. But what they found during construction of the anchor solution was that some of the tower was resting on the Catino. The project required removal of part of the Catino to make room for a concrete block, and when they did, the tower started tilting again, this time in the wrong direction, and fast (about 4 arcseconds per day, enough for serious concern that the tower might collapse). They quickly abandoned the anchoring plan and added 350 more tonnes of lead weights to stop the movement while they focused on a permanent solution.

Engineering ANY solution to a structure of this scale with such a severe tilt is a challenge in the best circumstances. But adding on the fact that the solution had to maintain the historical appearance of the building (including leaving the right amount of lean!) made it even tougher. And after the near disaster of the temporary fix, the committee knew they would have to be extremely diligent. They ultimately came up with three ideas to save the tower. The first one was to pump out groundwater from the sand below the north side of the tower, but they didn’t feel confident that they could predict how the structure would respond over the long term. Another idea was electroosmosis.

If you’ve seen some of my other videos about settlement, you know that it’s hard to get water out of clay, and there are quite a few clever methods engineers use to make it happen faster. One of those methods involves inserting electrodes into the soil and passing electric current through it. Clay particles have a negative surface charge, so the majority of the ions in the water between the particles are positively charged. Electro-osmotic consolidation takes advantage of this by applying a voltage across the soil, causing the water to migrate toward the cathode where it can be pumped to the surface. The idea seemed promising because, by carefully choosing the location of electrodes, engineers hoped they could selectively consolidate the clay below the north side of the tower, reducing its overall tilt. They even performed a large-scale field test near the tower to shake out some of the kinks and gather data on the effectiveness of the technique. But, it didn’t work at all. Turns out the soil was too conductive, so things like electrolysis, corrosion, heat, and all the other effects of mixing electricity and saturated soil made the process pretty much useless for this particular case.

So, the committee was down to one last idea: underexcavation. If they couldn’t get the soil below the tower to consolidate, they could just take some out. And again, they would need to test it out first. So, in 1995, they built a large concrete footing on the Piazza grounds not far from the Tower. Then, they used inclined drills to bore underneath the footing and gradually remove some of the underlying soil. Guide tubes kept the boring in the right direction, and a hollow stem auger inside two casings was advanced below the footing. The outer casing stayed in place while the inner casing moved with the auger. The auger and the inner casing were advanced past the outer casing to create a void, and when they were retracted, the cavity would gently close. At first, it wasn’t looking good. After an initial tilt in the right direction, the test footing started leaning the wrong way. But the crew continued refining the process and eventually got it to work, even finding it was possible to steer the movements by changing the sequence of underexcavation. It was finally time to try it on the real thing.

Knowing the risks and uncertainties involved, the engineers first designed a safeguard system for the tower if things started to go awry. Cable stays were attached between the tower and anchoring frames. The cables could each be tightened individually, giving the engineers the opportunity to stop movement in any undesirable direction if the drilling didn’t go as planned. In 1999, they started a preliminary trial with 12 holes. And the plan went perfectly. Over the course of 5 months, the underexcavation reduced the tilt by 90 arcseconds, and after a few more months, the improvement settled in at 130 arcseconds, about four hundredths of a degree. This gave the committee confidence to move on to the final plan.

Starting in 2000, 41 holes were drilled to slowly tilt the tower upright. Over the course of a year, 38 cubic meters of soil were removed from below the tower, roughly 70 tonnes. The lead counterweights were removed. A drainage system was installed to control the fluctuating groundwater levels that exacerbated the tilt. And, the tower was structurally attached to the Catino, increasing the effective area of the foundation. In the end, the project had reduced the tilt of the tower by about half a degree, in effect reversing time to the early 1800s when its likelihood of toppling was much lower. Of course, they didn’t straighten it all the way. The lean isn’t just a fascinating oddity; it is integral to the historical character of the tower. It’s a big part of why we care. Tilting is in the Campanile’s DNA, and in that way, the stabilization project was just a continuation of an 850-year-old process. Unlike the millions of photos with tourists pretending to hold the tower up, the contractors, restoration experts, and engineers actually did it (for the next few centuries, at least).

December 19, 2023 /Wesley Crump

Why Railroads Don't Need Expansion Joints

December 05, 2023 by Wesley Crump

[Note that this article is a transcript of the video embedded above.]

One of the most common attributes folks imagine when they think of trains is the clickety-clack sound they make as they roll down the tracks. The thing is, most trains don’t make that sound anymore. Or really, I should say, most rails don’t make that sound anymore. Trains are still pretty clickety-clacky, but they’re far less so than they used to be. And here’s why: those rhythmic clicks and clacks came from joints in the tracks. Those joints were a solution to a transportation problem: you can only roll out a length of rail so long before it gets difficult to move around. It’s easier to have short segments of rail that can be bolted together in place. But, they were also a solution to a thermal problem.

You might be familiar with the idea of an expansion joint: a gap in a sidewalk or handrail or bridge deck or building meant to give a structure room to expand or contract from changes in temperature. I actually made a video on that topic a few years back. The joints on railroads were bridged by fish plates, but with a gap, so on hot days, the rail would have room to grow. But look for a joint on a modern railway, and you might have a hard time finding one.

We’re in the middle of a deep dive series on railway engineering, so don’t forget to check out the other videos after this one. A lot of new track these days uses continuous welded rail, or CWR, which eliminates most joints. Large structures subjected to swings in temperature can run into serious problems or even fail if they don’t account for thermal expansion and contraction. So how do modern railways get away with it? I have a bunch of demonstrations to show you. I’m Grady, and this is Practical Engineering. Today we’re talking about continuous welded rail.

As much as I enjoy a good conspiracy, the railroad companies don’t have access to some kind of special steel that doesn’t expand or contract. Rails really do experience thermal contraction and expansion. In the US, they would be installed in roughly 39-foot sections. In general, tracks would be laid out so that on the hottest days, the gap between sections would just barely close. But, this style of jointed rail (although it solved some of the practical problems of railroad construction) had some serious drawbacks, too. First, it was noisy! The famous clickety-clack of railroads was caused by each wheel passing over each joint on the track. I’m a simple man. I grew up listening to that clickety clack, or as they say in Korean, “chikchikpokpok”. It brings a certain nostalgia. But when you consider how long a train is, and the fact that most cars have at least eight wheels, and that train journeys can be hundreds of miles long, that’s a lot of clicks and clacks.

The railroad companies might say too many, because noise is just a symptom. Each time a wheel clacks over a joint in the rail, that impact batters the steel, eventually wearing it down at each location. Try as they might, railroads could never make these joints quite as rigid as the rest of the rail, meaning that (in addition to the extra wear) they would create additional load on the ballast below, and the flexing would cause freight cars to rock side-to-side in a phenomenon called rock and roll. All this creates a maintenance headache, increasing the cost of keeping railroads in service. And it’s why most modern railroads use continuous welded rail: it’s a huge reduction in the maintenance costs associated with the wear and tear from joints. In CWR, rail segments are welded together using electric flash butt welding, arc welding, or in some cases, THERMITE welding. These welds have much higher stiffness than the old joints and, of course, are ground smooth, so they lack clickety clacks. But they still expand and contract with changes in temperature like most materials do. Let me show you how this works.

I’ve set up an aluminum rod on the workbench with one end clamped down and the other free to move. I put a dial indicator at the end so we can observe even tiny changes in the length of the rod. You can see on the thermal camera that we’re already starting at a fairly warm temperature; that’s Texas for you. But, rather than wait for the weather to get even warmer, I’ll speed things up with my sunny day simulator. Notice the dial on the indicator climbing steadily as the heat is applied.

This is an example of unrestricted thermal expansion. That just means nothing is keeping the rod from growing under the increase in temperature. And engineers can predict the change in length for most materials with a pretty simple formula: multiply the original length by the difference in starting and ending temperatures and by a coefficient of thermal expansion that’s easy to look up in a table. This aluminum rod expands by about 0.002% for every degree Celsius it increases in temperature. Steel is about half that. Structures like bridges with expansion joints and jointed rail are designed to allow unrestricted thermal expansion. When the hot day comes, the materials expand into the gap. That’s usually a good thing. The structure doesn’t build up stress, and stress is what breaks things. But, part of the reason CWR can get away from expansion joints is that changes in temperature aren’t the only way to change the dimensions of a material.
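
If you want to play with that formula yourself, here’s the same calculation in a few lines of Python. The coefficients are typical handbook values; exact numbers vary from alloy to alloy:

```python
# Unrestricted thermal expansion: change in length = coefficient x original
# length x temperature change. Coefficients are typical handbook values.

ALPHA = {"aluminum": 23e-6, "steel": 12e-6}  # fractional length change per degree C

def length_change_m(material: str, original_length_m: float, delta_T_c: float) -> float:
    return ALPHA[material] * original_length_m * delta_T_c

# A 1-meter aluminum rod warmed by 10 degrees C grows by about a quarter millimeter:
print(f"{length_change_m('aluminum', 1.0, 10) * 1000:.2f} mm")  # 0.23 mm
# Steel expands about half as much for the same swing:
print(f"{length_change_m('steel', 1.0, 10) * 1000:.2f} mm")     # 0.12 mm
```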

I’ve set up another demo using that same aluminum rod. This time I put it inside this length of pipe and put a nut and washer on both sides. I put the dial indicator on the end, just like before. Now, watch what happens when I turn one of the nuts. Well, if you’re not careful, the whole rod twists. But if you can keep the rod centered in the pipe, and the nut on the other end from twisting, you can see the dial indicator registering the rod getting longer. There’s no change in temperature here; this is a totally different phenomenon: elastic deformation. Turning this nut applies a tension force to the rod, and it stretches out in response.

Just as all materials have a mostly linear relationship between temperature change and length change, all materials also have a similar relationship between stress and fractional change in length (called strain). If you stress a metal too far, it will undergo a permanent (or plastic) deformation. But within a certain range, the behavior is elastic. It will return to its original length if the stress is removed. And just like the slope of the line for thermal expansion is the thermal coefficient, the slope of the elastic part of a stress/strain curve is called the elastic modulus. And this is part of the secret to continuous welded rail: restrained thermal expansion. You can overcome one with the other. Let me show you a demonstration.

Here you can see me using a hydraulic press in a way that’s not exactly how it was designed. First, I get this iron pipe set up in the press with enough pressure to hold it tight between the cylinder and table, about 3 tons. Then I heat up the pipe with the sunny day simulator. What do you think will happen? Will the hydraulic press break as the steel expands, or something else? Well, it wasn’t quite as dramatic as I was hoping, but that little movement in the gauge still corresponds to about a quarter of a ton of additional force in the hydraulic cylinder. You can kind of think of this in two separate steps: the steel expanded from the heat, but then the additional force from the hydraulic press unexpanded it back to its original size. The thermal and elastic deformations canceled each other out and the pipe stayed the same size. In reality, the force required to counteract thermal expansion should have been more than that, so I think the frame of my hydraulic press wasn’t quite stiff enough to hold the ends perfectly rigid. But you still get the point: you can trade temperature changes for stress and keep the material from changing in size. With a little recreational math, we can combine the two equations to get a single one that gives you the stress in a restricted material from a change in temperature.

So that’s just what railroads with CWR do: they connect the rail at each tie to hold it tight and restrict its movement, allowing it to build up tensile or compressive stress as its temperature changes. Of course, too much stress can fail a material, but steel can handle quite a bit before it gets close to that. Rails here in Texas can range in temperature from below freezing to over 100 degrees F (40 C). That means every mile of steel wants to be more than 2 feet longer on the hot days than the cold ones. In metric, every kilometer of rail would expand by roughly half a meter, if it wasn’t restrained. Using the formula we developed here, we can see that fully restraining the rail across that temperature range results in a stress of about 15,000 psi or 100 megapascals, way below the tensile or compressive strength of any modern steel, especially the fancy alloys they use these days. But it’s not quite that simple, particularly for compression. Just because a material has a high compressive strength (and steel does), that doesn’t mean it won’t fail under compressive loading. Let me show you another demo.
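
Quick aside before that demo: those rail figures fall right out of the combined formula, which says the stress in a fully restrained material is the elastic modulus times the thermal expansion coefficient times the temperature change. Here’s the arithmetic, with typical handbook values for rail steel and my rough reading of that Texas range as about a 40 degree Celsius swing:

```python
# Restrained thermal expansion: stress = elastic modulus x expansion
# coefficient x temperature change. Material values are typical handbook
# numbers for rail steel, and the 40 C swing is an assumed round figure.

E_STEEL = 200e9       # elastic modulus, pascals
ALPHA_STEEL = 12e-6   # thermal expansion coefficient, per degree C
DELTA_T = 40          # temperature swing, degrees C

# Unrestrained, a mile (or a kilometer) of rail would change length by:
print(f"{ALPHA_STEEL * 5280 * DELTA_T:.1f} ft per mile")   # ~2.5 ft
print(f"{ALPHA_STEEL * 1000 * DELTA_T:.2f} m per km")      # ~0.5 m

# Fully restrained, that same expansion shows up as stress instead:
stress_pa = E_STEEL * ALPHA_STEEL * DELTA_T
print(f"{stress_pa / 1e6:.0f} MPa ({stress_pa / 6895:.0f} psi)")  # ~96 MPa, ~14,000 psi
```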

We’re back to the aluminum rod, but this time I clamped both ends to create a restricted condition. Now watch what happens when I apply the blowtorch. Our equation says the rod should build up stress so that the elastic strain is equal to the thermal expansion. But that’s not what happens. Instead, the rod just deflects sideways, an effect known as buckling. Even though aluminum is relatively strong under compression, the long skinny shape of the rod (just like the rails on tracks) is particularly prone to buckling. Obviously, if a rail buckles on a hot day, it’s a pretty serious problem. The material itself doesn’t fail, but the track does fail at being a railway since trains need rails to be precisely spaced without crazy curves. Many train derailments have happened because a continuous welded rail got too hot and buckled, an effect colloquially known as sun kink. So railroad owners have to be really careful about compressive stress in a rail, and in the US, safety regulations require them to follow detailed procedures for installing, adjusting, inspecting, and maintaining continuous welded rail.

One of the tricks they use to manage buckling is adding restraint. I’ve got one more formula and one more demo for you. The formula for the critical force required to buckle a structural member like this is pretty simple. Notice that the force goes up in inverse relation to the length of the structural member squared. This is much clearer in a demonstration. I have a length of welding wire, and I can apply a force with my finger that is measured by the scale. You can see it takes about 375 grams to buckle the rod. But watch what happens when I restrain the rod at the centerpoint, effectively halving its length. I can still buckle it, but it takes a lot more force from my finger. It happens right around 1500 grams, exactly what is predicted by the formula. Halve the length, quadruple the critical force for buckling. The spacing of railroad ties is really important because it affects whether or not a rail will buckle under thermal stress. And one of the most important jobs of all that crushed rock, called ballast, is to hold the ties in place and keep them from sliding horizontally and allowing the rail to buckle.
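
That inverse-square relationship comes from Euler’s formula for the critical buckling load of a slender member. Here’s a quick numeric check; the wire diameter and length below are assumptions I made up for illustration (and my finger-on-a-scale demo doesn’t have ideal pinned ends), so the absolute forces won’t match the demo, but the halve-the-length, quadruple-the-force behavior does:

```python
# Euler's critical buckling load for a slender, pin-ended member:
# force = pi^2 x E x I / L^2. The wire dimensions here are assumed values
# for illustration; the point is the length-squared relationship.
import math

def critical_load_N(E_pa: float, I_m4: float, L_m: float) -> float:
    return math.pi**2 * E_pa * I_m4 / L_m**2

E = 200e9                     # steel wire, pascals
d = 0.0015                    # assumed 1.5 mm wire diameter
I = math.pi * d**4 / 64       # second moment of area of a round section
L = 0.30                      # assumed 30 cm free length

full = critical_load_N(E, I, L)
halved = critical_load_N(E, I, L / 2)
print(f"{full:.1f} N vs {halved:.1f} N -> ratio {halved / full:.1f}x")  # ratio 4.0x
```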

The other way railroads manage buckling is, I think, the most clever: just keeping rails from undergoing compression at all. Any continuous welded rail has a neutral temperature, which is essentially the rail’s temperature on the day it was installed. It’s the temperature at which the rail experiences no stress at all. If it’s colder than the neutral temperature, the rail experiences tensile stress, and if it’s hotter than the neutral temperature, the rail experiences compressive stress. The secret is that railroads use a really high neutral temperature to ensure the rail almost never undergoes compression. The Central Florida Rail Corridor has a neutral temperature of 105 F or just over 40 C. They only install rail on hot days, and if they can’t do that, they use heaters to bring the temperature up. And if they can’t do that, they use massive hydraulic jacks to induce enormous tensile forces in the rails before they’re welded together. On cold days when stresses are highest, they have to go out and inspect the rails to make sure they haven’t pulled apart, but a small break in a rail is nothing compared to a buckled track when it comes to the risk of derailment, so it just makes sense to use as high a neutral temperature as you can get away with.
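
One way to see why that strategy works: with tension counted as positive, the stress in a fully restrained rail is roughly the elastic modulus times the expansion coefficient times the difference between the neutral temperature and the rail temperature. Here’s a quick sketch, using the 105 F neutral temperature quoted above, typical steel properties, and a couple of example rail temperatures:

```python
# Stress in a fully restrained rail relative to its neutral temperature
# (tension positive). Steel properties are typical handbook values; the
# rail temperatures are just example days.

E_STEEL, ALPHA_STEEL = 200e9, 12e-6

def rail_stress_mpa(neutral_c: float, rail_c: float) -> float:
    return E_STEEL * ALPHA_STEEL * (neutral_c - rail_c) / 1e6

NEUTRAL_C = (105 - 32) * 5 / 9   # 105 F is about 40.6 C

print(f"{rail_stress_mpa(NEUTRAL_C, 0):.0f} MPa")    # freezing rail: ~97 MPa of tension
print(f"{rail_stress_mpa(NEUTRAL_C, 46):.0f} MPa")   # 115 F rail: only about -13 MPa (mild compression)
```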

Of course, you always get to the end of a continuously welded section at a bridge or an older length of jointed rail. To keep the CWR from buckling at these locations, you need something more than a small gap. Instead, expansion joints on rails (sometimes called breathers) use diagonal tapers. This oblique joint allows train wheels to transition smoothly from one section of rail to another while still leaving enough room for thermal movement. And joints are also needed to break up the electrical circuits used for grade crossings and signals. So railroads often use stiff plates surrounded by insulation material to electrically isolate two sections of rail while keeping the joint stable in the field.

Even with its challenges, continuous welded rail extends the life of rails and wheels and makes for a much smoother and quieter ride. Even if you’re nostalgic for the soothing clickety-clack of jointed rail, it’s comforting to know that railways are continuously innovating with continuous welded rail.

December 05, 2023 /Wesley Crump

Engineering The Largest Nuclear Fusion Reactor

November 21, 2023 by Wesley Crump

[Note that this article is a transcript of the video embedded above.]

This is my friend Jade, creator of the Up and Atom channel. She makes these incredible math and physics explainers that I absolutely love, and she recently got the opportunity to visit ITER (eater) in France. You may have seen this place in the news: 35 nations working together to build an enormous, industrial-scale nuclear fusion reactor. The size of the project is mind-boggling. It’s been under construction since 2013, and… I like construction. So, when Jade and I were chatting about her tour, she said, “Why don’t you just make a video about it too?!”

If everything goes to plan, ITER’s tokamak reactor will house plasma at temperatures in the hundreds of millions of degrees, ten to twenty times hotter than the center of the sun, hopefully paving the way for an entirely new form of electricity generation. I don’t know much about superconducting coils or cyclotron resonance heating or breeder blankets, but I do know it takes a lot of earthwork and steel and concrete to build the biggest nuclear fusion reactor on earth. So let me give you the civil engineer’s tour of what might be the most complicated science experiment in human history. I’m Grady,

Jade: And I’m Jade, and this is Practical Engineering. Today we’re exploring the ITER megaproject.

Jade: I was fairly new to fusion when I went to visit, and although I'm still no expert, I still felt I should explain it rather than let a civil engineer do it. Before we dive into the mechanics, here's a question: why is the world so interested in nuclear fusion? Basically, it comes down to the huge potential payoff: what if we could harness the power of nuclear fusion here on Earth?

It would be a way more powerful energy source than fossil fuels, without the environmental baggage. This water bottle full of seawater plus one gram of lithium could provide electricity to a family of four for a whole year. Unlike nuclear fission, there's no long-lived waste and no chance of nuclear meltdowns. It's a clean, sustainable, and powerful energy source.

Some scientists go so far as to say that commercial nuclear fusion is the next step for humanity. That's exactly what ITER, which translates to “the way” in Latin, aims to do: to nail down the technologies needed for a fully functioning commercial fusion reactor. To give you an idea of how ambitious their goal is, they plan to put in 50 megawatts of thermal power and get out 500 megawatts of fusion power, a gain of ten in fusion talk.

Nothing close to this has ever been achieved or even attempted in fusion history. So how are they going to do it? Right in here. "So this is where the Tokamak is going to be built?" This is the Tokamak pit where ITER is assembling the largest nuclear fusion device in the world, a giant tokamak. Here's a man for comparison. It's going to be huge.

A tokamak is a nuclear fusion machine that works by magnetic confinement. It will hold about 840 cubic meters of piping hot plasma. Why plasma? Plasma is what the sun is primarily made of. And it has the perfect conditions for fusion. To get fusion started in the ITER tokamak, two isotopes of hydrogen, deuterium and tritium are pumped into a large donut shaped chamber.

This is just one of the six vessels that will make up the chamber. The fuel is heated to temperatures of up to 150 million degrees Celsius. When they fuse, the energy they unleash is of epic proportions. But here's a question for you engineers: how is it possible to contain so much plasma? No regular material can withstand those kinds of insane temperatures.

Imagine trying to hold onto a piece of the sun. These giant magnets produce magnetic fields of almost 12 tesla, over 200,000 times stronger than Earth's magnetic field. Plasma is electrically charged. And just like iron filings align with magnetic fields, so does plasma. How cool is that? But how does this fusion stuff actually lead to electricity? ITER itself will not actually produce any electricity.

It's our learning ground, an experimental arena to fine-tune how a real reactor might operate. But in a real reactor, the walls of the tokamak will be filled with cooling fluid. When the deuterium and tritium atoms fuse, they release a neutron and a helium atom. About 80% of the energy released is carried by the neutrons, and, being electrically neutral, they pass straight through the magnetic field.

When these high energy neutrons strike the tokamak walls, they heat up the fluid, turning it into steam. Then, just like a regular power plant, the steam will spin turbines, which will generate electricity. But how will ITER heat the plasma to such insane temperatures? And when can we expect commercial nuclear fusion to get off the ground? Check out my video after you've finished watching Grady's and find out.

Grady: Jade’s video goes into a lot more of the groundbreaking science at ITER, but all that science requires a lot of actual breaking ground. This is a bird’s eye view of the whole facility, and this is where the Tokamak lives. So if all the nuclear fusion is going to happen in there, what are all these other buildings and structures for? Fortunately, there’s a civil engineer there in France amongst all the technicians and scientists who knows the answer, and I was lucky enough to chat with him. This is Laurent Patisson, the civil engineering and interface section leader at ITER, and he’s been there almost since the very beginning, including taking delivery of some truly massive pieces of equipment.

Laurent: “So the largest one is the vacuum vessel sector which is more or less 600 tons. And which is 600 tons, okay, 600-tons yes, on a multi-wheel truck. Very impressive. And with the protection around, it's like transporting an house, two-story house. It's very large. So all the roads are closed. They are dismantling some traffic light just for the passage, some specific display, you know...”

Laurent walked me through the whole campus and gave me an overview of how construction is progressing across the facility. Many of those big deliveries get stored in one of the many tents scattered around the site until they’re ready to be installed, and then they move on to one of the various buildings. For example, the poloidal field coils that form superconducting magnets to help shape and contain the plasma in the reactor are just too big to be completed offsite and shipped to ITER, so instead, they built a manufacturing facility right on campus in this long building on the south side. Similarly, the cryostat workshop was built to assemble the massive, vacuum-tight structure that will surround the reactor and magnets. The cryostat parts, the poloidal field coils, and lots of other truly large pieces of equipment destined for the Tokamak itself are then moved to the adjacent assembly hall as needed. Pretty much every part of the Tokamak reactor is not only huge but sensitive to environmental conditions too, so this building makes it possible to protect, stage, assemble and install each one without having to worry about temperature or weather.

Laurent: “It’s one of the highest building and longest buildings, 120 meter long, 70 meter high, very large, 80 meter wide, and actually very large place dedicated really for assembly purpose.”

That’s about 21 stories tall and longer (and wider) than an American football field, end zones included! And maybe the most critical part of the whole building is what runs along the top of it.

Laurent: “We have two 700-tonne overhead cranes. I didn’t mention that. But those are coupled to transfer the modules, the central solenoid. So those are very impressive cranes.”

These two bridge cranes combine to become one of the largest cranes in the world, with a combined capacity of 1500 tonnes, needed to assemble all the parts of the tokamak. And everything is tested and tested again with dummy loads before each critical lift operation happens for real. But material and equipment aren’t the only things flowing through this project site. There’s also a lot of electricity. Imagine what your utility bill would be if your toaster got as hot as the sun!

ITER connects to the European power grid through a 400-kilovolt transmission line. During peak periods of plasma production, the facility may need upwards of 600 megawatts! That’s the capacity of a small nuclear power plant. Obviously you can’t just turn the reactor on with a flip of a switch. ITER has to coordinate with the power grid manager to carefully time the huge power draws with surrounding power plants to make sure it doesn’t cause brownouts or surges on the grid. The 400 kV line feeds a large switchyard and substation on the ITER campus. Electricity is stepped down to a lower voltage using transformers. Then it flows through busbars, cables, and breakers to feed all the various buildings and equipment.

Like many electronic devices, the superconducting magnets that surround the tokamak run on direct current, DC. So the AC power from the grid has to be rectified. For a phone or a flashlight, an AC-to-DC converter looks like this. But at ITER, it takes up two full buildings. The magnet power converter buildings have enormous rectifiers dedicated to each one of the magnet systems. Once energized, though, those magnets can collectively store upwards of 50 gigajoules of energy in their fields, so you also need a way to quickly get rid of that energy if the magnets lose superconductivity (called a quench). Fast discharge units, located in this building, allow ITER to dissipate that stored energy as heat in a matter of seconds. There are also a lot of critical safety systems at ITER, plus expensive and delicate equipment to maintain, that require power 24/7/365. So, there are two huge diesel generators that can provide backup power in case the grid goes down.
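Just to put that stored energy in perspective, here's a rough back-of-the-envelope sketch in Python. The 50-gigajoule figure is from above, but the discharge time is purely an assumption I picked for illustration, since "a matter of seconds" covers a lot of ground.

    # Rough power involved in a magnet fast discharge: power = energy / time.
    # The stored energy figure is from above; the discharge time is an assumption.
    stored_energy_j = 50e9       # roughly 50 gigajoules in the magnet fields
    assumed_discharge_s = 10.0   # assumed: dump that energy over about 10 seconds

    average_power_gw = stored_energy_j / assumed_discharge_s / 1e9
    print(f"Average dissipation: about {average_power_gw:.0f} GW")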

The flow of electricity is closely tied to the flow of heat through all the parts of ITER. Really the whole thing is an experiment in heat, and there are so many ways things are being warmed or cooled throughout the campus. Of course, you have heating, ventilation, and air conditioning in all the buildings, and it’s not just for the comfort of the people working in them. Even tiny temperature swings can affect the size of these huge components, complicating the assembly.

Laurent: “What we are facing for civil is to merge, at the end, tolerances of equipment which are at the level of millimeter with tolerance of construction building which is at centimeter. And the main challenge that we face in the past and we are continuing to face is that, not to merge but to make compliant, to make compliant the tolerance scales.”

And it’s not just temperature, but humidity and cleanliness as well. So, ITER has a robust ventilation and chilling system located in the site services building along with a lot of the other industrial support systems like air compressors, water treatment, pipes, pumps, and more.

Heat is also important for the electromagnets, which have to be cooled to cryogenic temperatures so they act as superconductors. That’s made possible by the Cryoplant, a soccer-field-sized installation of helium refrigerators, liquid nitrogen compressors, cold boxes, and tanks that keep the various parts of the tokamak supercool during operation. But, although some parts of the machine have to be cryogenically cooled, to create nuclear fusion, you need to heat the plasma to incredible temperatures, and there are three external heating systems at ITER. One, called neutral beam injection, fires particles into the plasma where they collide and transfer energy. The other two, ion and electron cyclotron heating (say that three times fast), use radio waves, like huge microwave ovens. Those systems are located in the RF Heating building near the Tokamak complex.

And then there’s the matter of the heat output. The whole point of exploring nuclear fusion is to use it as an energy source, to convert tiny amounts of tritium and deuterium into copious amounts of heat. ITER’s goal is to produce a Q of ten, to get ten times as much thermal energy out as it puts into the reactor. But there’s no electrical generator on site. In a commercial fusion facility, you would need to convert that output heat to electricity, probably using steam generators like typical nuclear fission plants. That part of the process is pretty well understood, so it’s not part of this research facility. Instead, ITER needs a way to dissipate all that heat energy they hope the fusion will create. That’s the job of the water cooling system and the enormous cooling tower nearby. Water is circulated around the tokamak and then to the tower where it can reject all that heat into the atmosphere.
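To put some rough numbers on "a Q of ten": ITER's widely publicized target is on the order of 500 megawatts of fusion power from about 50 megawatts of heating power injected into the plasma. Here's that ratio as a trivial sketch (and note that Q only counts the heating power delivered to the plasma, not everything else the facility draws from the grid):

    # Fusion gain factor: Q = fusion power produced / heating power injected.
    heating_power_mw = 50.0    # external heating delivered to the plasma (approximate target)
    fusion_power_mw = 500.0    # fusion power the plasma is designed to produce (approximate target)

    q = fusion_power_mw / heating_power_mw
    print(f"Q = {q:.0f}")      # Q = 10: ten times as much thermal energy out as in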

That brings us back to where we started, the Tokamak complex itself. That machine, once it’s finished, will weigh an astounding 23,000 tonnes, more than most freight trains. And with all the heating and cooling going on, there are some serious challenges in just holding the thing up. As the tokamak is cooled cryogenically, it shrinks, but the building stays the same size.

Laurent: “And actually, we had to find out some solution to decouple, physically, the movement of the machine and the building. And for that purpose, we designed some specific bearings allowing displacement, but keeping always the capacity to support and to restrain the machine. So it's one important thing, I could speak about that hours, because it was maybe one of the most challenging parts we had in the design of the building. The support of the machine, which is quite simple when now it is built, but to reach this robust supporting system, it took years.”

And, because, you know, this is an actual nuclear reactor, it has to follow all the safety regulations of any nuclear power plant. No one will be inside the Tokamak complex when it’s running. They’ll be nearby in a separate control building, physically distant from the reactor. And the complex itself has been engineered to withstand a host of disastrous conditions, from floods to plane crashes to explosions on the nearby highway. Like all nuclear power plants, it has a containment structure to keep any fusion products from being released into the atmosphere in the event of an accident. And that’s made using a special concrete formula, developed over two years just for this application, that contains extra-heavy aggregate and boron to provide radiation shielding.

Laurent: “So you can see the dark, those are the, the aggregate with content of iron inside, okay. And the white inclusions are colemanite, okay?”

And, it’s not just thermal movement that the designers planned for, but seismic movement too. An earthquake could ruin the entire structure in an instant if the Tokamak was violently shaken, so engineers had to get creative.

Laurent: “One thing I need to mention as well, that the Tokamak complex building is built on elastomeric bearings. For seismic reason, allowing to decouple as much as possible horizontal movement of the soil with the building. And we have 493 anti-seismic bearings. The same type of bearing that you can see underneath bridges. So not large, 90 by 90 centimeters, 18 cm high, but we have a forest of plinths supporting those anti-seismic bearings, and then all the buildings are located on the anti-seismic bearings. It's incredible, incredible.”

Big thanks to the folks at ITER for taking the time to help me understand all this. I only had time to scratch the surface of all the incredible engineering involved. And, go check out Jade’s video to learn more about this awesome project; she actually got to be inside some of the buildings we showed. The civil engineering at the Tokamak building just wrapped up, but there’s a long way to go before fusion experiments start. Like all ambitious projects, this one has struggled through its share of setbacks and iterations. But with 10 times the plasma volume of any fusion reactor operating today, they’re hoping to eventually demonstrate the potential for fusion as a viable source of energy. And that might eventually change the world. Only time will tell if it happens, but it’s exciting right now to see countries across the world collaborating on such a grand scale to invest in the long-term future of energy infrastructure.

November 21, 2023 /Wesley Crump

Which Is Easier To Pull? (Railcars vs. Road Cars)

November 07, 2023 by Wesley Crump

[Note that this article is a transcript of the video embedded above.]

Imagine the room you’re in right now was filled to the top with gravel. (I promise I’m headed somewhere with this.) I don’t know the size of the room you’re in, but if it’s anywhere near an average-sized bedroom, that’s roughly 70 tons of material. Fill every room in an average-sized apartment, and now we’re up to 400 tons. Fill up an average-sized house. That’s 900 tons. Fill up 30 of those houses, that’s roughly 25,000 tons of gravel. A city block of just pure gravel. Imagine it with me… gravel… chicken soup for the civil engineer’s soul. And now imagine you needed to move that material somewhere else several hundred miles away. How would you do it? Would you put it in 25,000 one-ton pickup trucks? Or 625 semi-trucks? Imagine the size of those engines added together and the enormous volume of fuel required to move all that material. You know what I’m getting at here. That 25,000 tons is around the upper limit of the heaviest freight trains that carry raw materials across the globe. There are heavier trains, but not many.
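If you want to sanity-check that mental image, here's a quick back-of-the-envelope sketch in Python. The room sizes and the gravel density are round-number assumptions I picked for illustration, so don't read too much into the exact figures.

    # How much does a room full of gravel weigh? All inputs are assumed, round values.
    GRAVEL_DENSITY_T_PER_M3 = 1.7   # tonnes per cubic meter, typical for loose gravel

    def gravel_tons(floor_area_m2, ceiling_height_m=2.4):
        """Approximate mass of gravel filling a room, in tons."""
        return floor_area_m2 * ceiling_height_m * GRAVEL_DENSITY_T_PER_M3

    print(round(gravel_tons(17)))        # a roomy bedroom: on the order of 70 tons
    print(round(gravel_tons(17 * 6)))    # an apartment's worth of floor area: ~400 tons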

I’m not trying to patronize you about freight trains. It’s not that hard to imagine how much they can move. But it is harder to imagine the energy it takes. Compare those 625 semi trucks to a handful of diesel locomotives, and the difference starts to become clear just by looking at engines and the fuel required to move that mountain of material. We’re in the middle of a deep dive series on railway engineering, and it turns out that a lot of the engineering decisions that get made in railroading have to do with energy. When you’re talking about thousands of tons per trip, even the tiny details can add up to enormous differences in efficiency, so let’s talk about some of the tricks that railroads use to minimize energy use by trains. And I even tried to pull a railcar myself. I’m Grady, and this is Practical Engineering. In today’s episode we’re running our own hypothetical railway to move apartments full of gravel (and other stuff too, I guess).

By energy, I’m not just talking about fuel efficiency either. If it was that simple, do you think there would be a 160-page report from the 1970s called “Resistance of a Freight Train to Forward Motion”? I’ll link it below for some lightweight bedtime reading. Management of the energy required to pull a train affects nearly every part of a railroad. Resistances add up as forces within the train, meaning they affect how long a train can be and where the locomotives have to be placed. Resistances vary with speed, so they affect how fast a train can move. Of course they affect the size and number of locomotives required to move a train from one point to another and how much fuel they burn. And they even affect the routes on which railroads are built. Let me show you what I mean. Here’s a hypothetical railroad with a few routes from A to B. Put yourself in the engineer’s seat and see which one you think is best. Maybe you’ll pick the straightest path, but did you notice it goes straight over a mountain range?

If you've ever read about The Little Engine That Could, you’re familiar with one of the most significant obstacles railways face: grade. A train moving up a hill has to overcome the force of gravity on its load, which can be enormous. Grade is measured in rise over run, so a 1% grade rises 1 unit across a horizontal distance of a hundred units. There’s a common rule of thumb that you need 20 pounds or 9 kilograms of tractive effort (that’s pull from a locomotive) for every ton of weight times every percent of grade. By the way, I know kilograms are a unit of mass, not weight, but the metric world uses them for weight so I’m going to too in this video. And metric tonnes are close enough to US tons that we can just assume they’re equal for the purposes of this video.

A wheelchair ramp is allowed to have a grade of up to 8.3 percent in the US. Pulling our theoretical gravel train up a slope that steep would require a force of more than 4 million pounds or nearly 2 million kilograms, way beyond what any railcar drawbar could handle. That’s why heavy trains have locomotives in the middle, called distributed power, to divide up those in-train forces. But it’s also why railway grades have to be so gentle, often less than half a percent. Next time you’re driving parallel to a railway, watch the tracks as you travel. The road will often follow the natural ground closely, but the tracks will keep a much more consistent elevation with only gradual changes in slope.
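Here's that rule of thumb as a tiny Python function, applied to our hypothetical gravel train. It's just the 20-pounds-per-ton-per-percent figure from above, so treat the output as the same rough estimate, not a real drawbar calculation.

    # Rule of thumb: ~20 lb of tractive effort per ton of train per percent of grade.
    def grade_effort_lb(train_tons, grade_percent, lb_per_ton_per_pct=20):
        return train_tons * grade_percent * lb_per_ton_per_pct

    # The 25,000-ton gravel train on a wheelchair-ramp-steep 8.3% grade:
    effort = grade_effort_lb(25_000, 8.3)
    print(f"{effort:,.0f} lb")                        # about 4,150,000 lb
    print(f"{effort * 0.4536:,.0f} kg")               # nearly 1.9 million kg

    # The same train on a more typical mainline grade of 0.5%:
    print(f"{grade_effort_lb(25_000, 0.5):,.0f} lb")  # 250,000 lb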

You might think, “So what?” We’ll spend the energy on the way up the mountain, but get it back on the other side. Once the train crests the top, we can just shut off the engines and coast back down. And that’s true for gentle grades, but on steeper slopes, a train has to use its brakes on the way down to keep from getting over the speed limit. So all that energy that went into getting the train up the hill, instead of being converted to kinetic energy on the way down, gets wasted as heat in the brakes. That’s why direct routes over steep terrain are rarely the best choice for railroads. So let’s choose an alternative route.
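And here's a rough sense of how much energy is at stake on a climb: it's just mass times gravity times height. The climb height below is an assumption I picked for illustration; the train mass is our gravel train from earlier.

    # Potential energy gained on a climb = mass * g * height.
    # On a steep descent, much of it ends up as heat in the brakes instead of useful motion.
    train_mass_kg = 25_000 * 1_000    # ~25,000 tonnes expressed in kilograms
    g = 9.81                          # m/s^2
    assumed_climb_m = 300             # assumed: a 300-meter climb over the pass

    energy_mwh = train_mass_kg * g * assumed_climb_m / 3.6e9
    print(f"About {energy_mwh:.0f} MWh of energy")    # roughly 20 MWh

That's a lot of diesel to spend just warming up brake shoes.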

How about the winding path that avoids the steep terrain by curving around it? Of course, the path is longer, and that’s an important consideration we’ll discuss in a moment, but those curves also matter. Straight sections of track are often called tangent track. That’s because they connect tangentially between curved sections of rail that are usually shaped like circular arcs. Outside the US, curves are measured by their radius, the distance from the center of curvature to the centerline of the track. Of course, in the US, our systems of measurement are a little more old-fashioned. We measure curvature by the angle subtended by a 100-foot chord. A 1-degree curve is super gentle, appropriate for the highest speeds. Once you get above 5 degrees, the speed limit starts coming down, with a practical limit at slow-speed facilities of around 12 degrees. In an ideal world, you only have to accelerate a train up to speed once, but on a winding path with speed restrictions, slowing and accelerating back up to speed takes extra energy.
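If you're curious how the two conventions line up, the conversion is simple geometry. The sketch below assumes the 100-foot-chord definition described above.

    import math

    # Chord definition: the degree of curve D is the angle subtended by a 100-foot chord,
    # so the radius is R = 50 / sin(D / 2), in feet.
    def curve_radius_ft(degree_of_curve):
        return 50.0 / math.sin(math.radians(degree_of_curve) / 2.0)

    for d in (1, 5, 12):
        r = curve_radius_ft(d)
        print(f"{d}-degree curve: radius ~{r:,.0f} ft ({r * 0.3048:,.0f} m)")
    # 1 degree ~ 5,730 ft; 5 degrees ~ 1,146 ft; 12 degrees ~ 478 ft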

But those curves don’t just affect the speed of a train, they also affect the tractive effort required to pull a train around them. Put simply, curves add drag. As you might have seen in the previous video of this series, the wheels of most trains are conical in shape. This allows the inside and outside wheels to travel different distances on the same rigid axle. But it’s not a perfect system. Train wheels do slip and slide on curves somewhat, and there’s flange contact too. Listen closely to a train rounding a sharp curve and you’ll hear the flanges of each wheel squealing as they slide on the rail. A 1-degree curve might add an extra pound (or half a kilogram) of resistance for every ton of train weight (not much at all). A 5-degree curve quadruples that resistance and a 10-degree curve doubles it again. When you’re talking about a train that might weigh several thousand tons, that extra resistance means several thousand more pounds pulling back on the locomotives. It adds up fast. So, depending on the number of curves along the route, and more importantly, their degree of curvature, the winding path might be just as expensive as the one straight up the mountain and back down.
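And here's that curve-drag rule of thumb in the same spirit, using roughly a pound per ton per degree as quoted above. Railroads use slightly different coefficients in practice, so the exact number is a placeholder.

    # Curve resistance rule of thumb: ~1 lb of extra drag per ton of train per degree of curvature.
    def curve_drag_lb(train_tons, degree_of_curve, lb_per_ton_per_deg=1.0):
        return train_tons * degree_of_curve * lb_per_ton_per_deg

    # A 3,000-ton train working through a 2-degree curve:
    print(f"{curve_drag_lb(3_000, 2):,.0f} lb of extra drag")   # ~6,000 lb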

Sometimes terrain is just too extreme to conquer using just grades and curves. There comes a point in the design of a railroad where the cost of going around an obstacle like a mountain or a gorge is so great that it makes good sense and actually saves money to just build a bridge or a tunnel! Many of the techniques pioneered for railroad bridges influenced the engineering of the massive road bridges that stir the hearts of civil engineers around the world. And then there’s tunnels. You know how much I like tunnels. There are even spiral tunnels that allow trains to climb or descend on a gentle grade in a small area of land. I could spend hours talking about bridges and tunnels, but they’re not really the point of this video, so I’ll try to stay on track here. Hopefully you can see how major infrastructure projects might change the math when developing efficient railroad routes.

Of course, I’ve talked about grades, curves, and acceleration, but even pulling a train on a perfectly straight and level track without changing speed at all requires energy. In a perfect world, a wheel is a frictionless device and an object in motion would tend to stay in motion. But our world is far from perfect. I doubt you need that reminder. And there are several sources of regular old rolling resistance. Let me give you something to compare to.

I put a crane scale on a sling and hooked it to my grocery hauler in the driveway to demonstrate. This car just keeps showing up in demos on the channel. Doing my best to pull the car at a constant speed, I could measure the rolling resistance. With no friction, my car would just keep rolling once I got it up to speed, but those squishy tires and friction in the bearings mean I have to constantly pull to keep the car moving. It was pretty hard to keep this consistent, so the scale jumps around quite a bit, but it averages around 30 pounds or 14 kilograms. Very roughly, it’s about a percent of the car’s weight. I put half the car on the gravel road to compare the resistance, and it took about twice the force to keep it rolling. 60 pounds (around 2% of the car’s weight) is a little much for a civil engineer, so I had to get some help pulling. We tried it with a lighter car, but the scale must not have been working right.

At slow speeds like in the demo, drag mostly comes from the pneumatic rubber tires we use on cars and trucks. They’re great at gripping the road and handling uneven surfaces or defects, but they also squish and deform as they roll. Deforming rubber takes energy, and that’s energy that DOESN’T go into moving the load down the road. It’s wasted as heat. At faster speeds, a different drag force starts to become important: fluid drag from the air. I didn’t demo that in my driveway, but it’s just as important for trains as it is for cars. Let’s take a look back at that 1970s report to see what I mean.

One of the most commonly used methods for estimating train resistance is the Davis Formula, originally published in 1926 and modified in the 70s after roller bearings became standard on railcars. It says there are three main types of resistance in a train for a given weight. The first is mechanical resistance that only depends on the weight of the train. This comes from friction in the bearings and deflections of the wheels and track. Steel is a stiff material, but not infinitely so. As a steel wheel rolls over a steel track, they squish against each other creating a contact patch, usually around the size of a small coin. The pressure between the wheel and track in this contact patch can be upwards of 100,000 psi or 7,000 bar, higher than the pressure at the deepest places in the ocean. There is an entire branch of engineering about contact mechanics, so we’ll save that for a future video, but it’s enough to say that, just like the deformation of a rubber tire down a road, this deformation of steel wheels on steel rails creates some resistance.
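That pressure figure is easy to sanity-check with some rough division: wheel load over contact area. The car weight and patch size below are assumptions in the typical range for a fully loaded freight car, not measured values.

    # Rough average contact pressure = load per wheel / contact patch area.
    # Both inputs are assumed, ballpark values for a heavily loaded freight car.
    car_weight_lb = 286_000     # assumed: maximum gross weight of a modern freight car
    wheels = 8                  # four axles, two wheels each
    patch_area_in2 = 0.4        # assumed: a coin-sized contact patch

    pressure_psi = (car_weight_lb / wheels) / patch_area_in2
    print(f"~{pressure_psi:,.0f} psi")   # on the order of 90,000 psi, and the peak at the center of the patch is higher still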

The second component of resistance in the Davis formula is velocity dependent. The faster the train goes, the more resistance it experiences. This is mainly a result of the ride quality of the trucks. As the train goes faster, the cars sway and jostle more, creating extra drag. The final term of the Davis formula is air resistance. Drag affects the front, the back, and the sides of the train as it travels through the air. This is velocity dependent too, but it varies with velocity squared. Double the speed, quadruple the drag. Add all three factors together and you get the total resistance of the train, the force required to keep it moving at a constant speed.
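Putting the three terms together, a Davis-type resistance equation has the general form R = A + B·V + C·V², in pounds per ton. The exact coefficients depend on which version of the formula you use and what equipment you're modeling, so the values below are placeholders just to show the shape of the curve, not a published spec.

    # Davis-type train resistance: R(V) = A + B*V + C*V^2, in lb per ton.
    #   A: speed-independent mechanical term (bearings, wheel/rail deflection)
    #   B: term that grows linearly with speed (swaying and ride quality)
    #   C: aerodynamic term that grows with the square of speed
    # Coefficients are illustrative placeholders, not from any published spec.
    def resistance_lb_per_ton(v_mph, a=1.5, b=0.03, c=0.0005):
        return a + b * v_mph + c * v_mph ** 2

    train_tons = 10_000
    for v in (10, 30, 60):
        per_ton = resistance_lb_per_ton(v)
        print(f"{v} mph: {per_ton:.1f} lb/ton, {per_ton * train_tons:,.0f} lb for the whole train")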

But why use an equation when you can just measure the real thing? I took a little trip out to the Texas Transportation Museum in San Antonio to show you how this works in practice. Take a look at these classic Pullman passenger cars. You can see the square doors on the bearings where lubrication would have been added to the journal boxes by crews. This facility has a running diesel locomotive, a flat car outfitted with seats for passengers, and a caboose. This little train’s main job these days is to give rides to museum patrons, but today it’s going to help us do a little demonstration.

First [choo choo] we had to decouple the car from the caboose. Then we used the locomotive to move the flat car down the track. This car was built in 1937 and used on the Missouri Pacific railroad until it was acquired by the museum in the early 1980s. The painted labels have faded, but it weighs in the neighborhood of 20 tons empty (about 15 times the weight of my car). So I set up a small winch with the force gauge and attached it to the car. The locomotive provides an ideal anchor point for the setup. But on the first try, the scale maxed out before the car started to move. It turns out the rolling resistance of a rail car is pretty high if you don’t fully disengage the brakes first. Who would’ve thought?

Now that the wheels are allowed to turn, it’s immediately clear that the tracks aren’t perfectly level. Even without the car rolling at all, it’s pulling on the scale with around 100 pounds or 45 kilograms. Once I start the winch to pull the car, the force starts jumping around just like the car, but it averages around 150 pounds or 68 kilograms. If I subtract the force from the grade, the rolling resistance of the car (the force required just to keep it moving at a constant speed) is about 50 pounds or 23 kilograms. That’s about the same force required to move my car on the gravel road even though this car is 15 times its weight. And it’s not far off from what the Davis Formula would predict either.
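Here's the arithmetic from both demos laid out in code, so you can compare the rolling resistance as a fraction of each vehicle's weight. I'm backing out a rough 3,000-pound weight for my car from the one-percent figure earlier; everything else comes from the measurements above.

    # Rolling resistance as a fraction of vehicle weight, using the demo numbers.
    def rolling_fraction(drag_lb, weight_lb):
        return drag_lb / weight_lb

    # My car: ~30 lb of drag on a roughly 3,000 lb car.
    print(f"car:     {rolling_fraction(30, 3_000):.2%}")                     # about 1%

    # The 20-ton flat car: ~150 lb measured, minus ~100 lb from the grade.
    railcar_drag_lb = 150 - 100
    print(f"railcar: {rolling_fraction(railcar_drag_lb, 20 * 2_000):.2%}")   # a bit over 0.1%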

We tried this a few times, and the results were pretty much the same each time. This is an old rail car on an old railway, so there’s quite a bit of variation to try and average out of the results. Little imperfections in the wheels and rail make a huge difference when the rolling resistance is so low. A joint in the track can double or triple the force required to keep the car moving, if only for a brief moment. Kind of like getting a pebble under the wheel of a shopping cart: It seems insignificant, but if it’s happened to you, you know it’s not.

Watching the forces involved, I couldn’t help but wonder if I could move the car myself. But there was no safe way for me to start pulling the car once it was already moving. I would have to try and overcome the static friction first… aaaaand that turned out to be a little beyond my capabilities. If you look close, you can see the car budging, but I couldn’t quite get it started. On a different part of the track with the wheels at a different position, maybe I could have moved it, but considering most of the working out I do is on a calculator, this result might not be that surprising. Those joints between rails add not only drag but maintenance costs too, and that’s the topic of the next episode in this series, so stay tuned if you want to learn more. It’s still remarkable that the rolling resistance between a 20-ton freight railcar and my little hatchback is in the same ballpark. And that’s a big part of why railways exist in the first place. Those steel wheels on steel rails get the friction and drag low enough that just a handful of locomotives can move the same load as hundreds of trucks with a lot less energy and thus a lot less cost.

November 07, 2023 /Wesley Crump

HEAVY CONSTRUCTION of a Sewage Pump Station - Ep 5

October 24, 2023 by Wesley Crump

This is the fifth and final episode of a five-part pilot series to gauge your interest in "How It's Made"-esque heavy construction videos. Drop a comment or send me an email to let me know what you think! Watch on YouTube above or ad-free on Nebula here.

October 24, 2023 /Wesley Crump

Why There's a Legal Price for a Human Life

October 17, 2023 by Wesley Crump

[Note that this article is a transcript of the video embedded above.]

One of the very first documented engineering disasters happened in 27 AD in the early days of the Roman Empire. A freed slave named Atilius built a wooden amphitheater in a town called Fidenae outside of Rome. Gladiator shows in Rome were banned at the time, so people flocked from all over to the new amphitheater to attend the games. But the wooden structure wasn’t strong enough. One historian put it this way: “[Atilius] failed to lay a solid foundation and to frame the wooden superstructure with beams of sufficient strength; for he had neither an abundance of wealth, nor zeal for public popularity, but he had simply sought the work for sordid gain.” When the amphitheater fell, thousands of people were killed or injured. That historian put the number at 50,000, but it’s probably an exaggeration. Still, the collapse of the amphitheater at Fidenae is one of the most deadly engineering disasters in history.

Engineering didn’t really even exist at the time. Even with the foremost training in construction, Atilius would have had almost no ability beyond rules of thumb to predict the performance of materials, joints, or underlying soils before his arena was built. But there’s one thing about this story that was just as true then as it is today: The people in the amphitheater share none of the blame. They needn’t have considered (let alone verified) whether the structure they occupied was safe and sound. This idea is enshrined in practically every code of ethics you can find in engineering today: protection of the public is paramount. An engineer is not just someone who designs a structure; they are the person who takes the sole responsibility for its safety.

But if it were strictly true that safety is paramount, we would never engineer anything, because every part of the built environment comes with inherent risks. It’s clear that Atilius’s design was inadequate, and history is full of disasters that were avoidable in hindsight. But, it’s not always so obvious. The act of designing and building anything is necessarily an act of choosing a balance between cost and risks. So, how do engineers decide where to draw the line? I’m Grady, and this is Practical Engineering. Today, we’re exploring how safe is safe enough.

You might be familiar with the trolley problem or one of its variations. It’s a hypothetical scenario of an ethical dilemma. A runaway trolley is headed toward an unsuspecting group of five workers on the tracks. A siding only has a single worker. You, a bystander, can intervene and throw the switch to divert the trolley, killing only one person instead of five. But, if you do, that one person lost their life solely by your hand. There’s no right answer to the question, of course, but if you think harder about this ethical dilemma, you can find a way to blame an engineer. After all, someone engineered the safety plan for the track maintenance without an officer or lookout who could have warned the workers. And someone designed the brakes on that trolley that failed.

Hopefully, you never find yourself in such a philosophically ambiguous situation, but a large part of engineering involves making decisions that can be boiled down to a tug-of-war between cost and safety, and comparing those two can be an enormous challenge. On one side, you have dollars, and on the other, you have people. And you probably see where I’m going with this: sometimes you need a conversion factor. It sounds morbid, but it’s necessary for good decision-making to put a dollar price on the value of a human life. More technically, it’s the cost we’re willing to bear to reduce risks such that the expected number of fatalities goes down by one. But that’s not quite as easy to say.

Of course, no one is replaceable. You might say your life is priceless, but there are countless ways people signal how much value they put on their own safety. How much are people willing to pay for vehicles with higher safety ratings versus those that rank lower? How much life insurance do people purchase, and for what terms? What’s the difference in wages between people who do risky jobs and those who aren’t willing to? Economists much smarter than me can look at this type of data, aggregate it, and estimate what we call the Value of a Statistical Life or VSL. The US Department of Transportation, among many other organizations, actually does this estimation each year to help determine what safety measures are appropriate for projects like highways. The 2022 VSL is 12.5 million dollars.

Whether that number seems high or low, you can imagine how this makes safety decisions possible. Say you’re designing a new highway. There are countless measures that can be taken to make highways more safe for motorists: add a median, add a barrier, add rumble strips to warn drivers of lane diversions, increase the size of the clear zones, add guardrails, increase the radius of curves, cover the whole thing in bubble wrap, and so on. Each of these increases the cost of the highway, reducing the feasibility of building it in the first place. In other words, you don’t have the budget to make sure no one ever dies on this road. So, you have to decide which safety measures are appropriate and which ones may not be justified for the reduction in risk they provide. If you have a dollar amount for each fatality that a safety measure will prevent, it makes it much simpler to draw that line. You just have to compare the cost of the measure with the cost of the lives it saves.
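In code, that comparison is almost embarrassingly simple. Here's a minimal sketch using the VSL figure from above; the costs and crash-reduction numbers are made up just to show how the comparison works, and a real analysis would also discount benefits over time.

    # Is a safety measure justified? Compare its cost to the statistical value
    # of the fatalities it's expected to prevent over its life.
    VSL = 12_500_000   # 2022 US DOT Value of a Statistical Life, in dollars

    def measure_is_justified(cost_dollars, expected_fatalities_prevented, vsl=VSL):
        return cost_dollars < expected_fatalities_prevented * vsl

    # Hypothetical example: a median barrier expected to prevent 0.2 fatalities
    # per year over a 30-year service life, at a cost of $40 million.
    print(measure_is_justified(40_000_000, 0.2 * 30))   # True: $40M cost vs $75M of benefit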

But, really, it’s almost never quite so unequivocal. During the construction of the Golden Gate Bridge, the chief engineer required the contractor to put up an expensive safety net, not because it was the law, but just because it seemed prudent to protect workers against falls. The net eventually saved 19 people from plunging into the water below. That small group, who called themselves the Halfway to Hell Club, easily made up for the cost of that net, and that little example points to a dirty truth about the whole idea of weighing benefits and costs in terms of dollars: it’s predicated on the idea that we can actually know with certainty how much any one change to a structure will affect its safety over the long term (not to mention that we’ll know how much it actually costs, but I’ve covered that in a separate video). The truth is that we can only make educated guesses. Real life just comes with too many uncertainties and complexities. For example, in some northern places, the divots that form rumble strips on highways collect melted snow and de-icing salt, effectively creating a salt lick for moose and elk. What should be a safety measure, in some cases, can have the exact opposite effect, inviting hooved hazards onto the roadway. Humanity and the engineering profession have learned a lot of lessons like that the hard way because there was no other way to learn them. Sometimes, we have opportunities to be proactive, but it’s rare. As they say, most codes and regulations are written in blood. It’s a grim way to think about progress, but it’s true.

Look at fires and their consideration in modern building design. Insulated stairwells, sprinkler systems, emergency lights and signs, fire-resistant materials, and rated walls and doors - none of that stuff is free. It increases the cost of a building. But through years of studying the risks of fires through the tragedies of yesteryear, the powers that be decided that the costs of these measures to society (which we all pay in various ways) were worth the benefits to society through the lives they would save. And, by the way, there are countless safety measures that aren’t required in the building code or other regulations for the same reason.

Here’s an example: Earlier this year, a fuel tanker truck crashed into a bridge in Philadelphia, starting a fire and causing it to collapse. I made a video about it if you want more details. Even though there have been quite a few similar events in the recent past, bridge safety regulations don’t have much to say about fires. That’s because this kind of collapse is pretty well understood to take a while to develop. In almost every case, the timespan between when a fire starts and when it affects the structural integrity of the bridge is enough for emergency responders to arrive and close the road. Bridge fires, even if they end in a collapse, rarely result in fatalities. We could require bridges to be designed with fire-resistant materials, but (so far, at least), we don’t do it because the benefits through lives saved just wouldn’t make up for the enormous costs.

You can look at practically any part of the built world and find similar examples: flood infrastructure, railroads, water and wastewater utilities, and more. You know I have to talk about dams, and in the US, the federal agencies who own the big dams, mainly the Corps of Engineers and the Bureau of Reclamation, have put a great deal of thought and energy into how safe is safe enough. A dam failure is often a low-probability event but with high consequences, and those types of risks (like plane crashes and supervolcano eruptions) are the hardest for us to wrap our heads around. And dams can be enormous structures. They provide significant benefits to society, but the costs to upgrade them can be sky-high, so it’s prudent to investigate and understand which upgrades are worth it and which ones aren’t.

There’s an entire field of engineering that just looks at risk analysis, and federal agencies have developed a framework around dam safety decision-making by trying to put actual numbers to the probability of any part of a dam failing and the resulting consequences. Organizations around the world often use a chart like this, called an F-N chart, to put failure risks in context. Very roughly, society is less willing to tolerate a probability of failure the more people who might die as a result. Hopefully, that’s intuitive. So, a specific risk of failure can be plotted on this graph based on its probability and consequences. If the risks are too high, it’s justified to spend public money to reduce them. Below the line, spending more money to increase safety is just gold plating.
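Here's a minimal sketch of what checking a point against a chart like that looks like, assuming a simple tolerability line where the acceptable annual probability falls off inversely with the number of potential fatalities. The anchor value on that line is an illustrative placeholder; real agencies publish their own criteria with a lot more nuance than a single function.

    # Toy F-N check: is a (probability, fatalities) pair above a tolerability
    # line of the form F = limit / N? The 'limit' value is purely illustrative.
    def risk_reduction_justified(annual_probability, potential_fatalities, limit=1e-3):
        return annual_probability > limit / potential_fatalities

    # A hypothetical failure mode: 1-in-10,000 chance per year, 50 lives at risk.
    print(risk_reduction_justified(1e-4, 50))   # True: spend money to reduce the risk
    print(risk_reduction_justified(1e-6, 50))   # False: further spending is gold plating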

But above a certain number of deaths and below a certain probability, we kind of just throw up our hands. This box is really an acknowledgment that we aren’t brazen enough to suggest that society could tolerate any event where more than 1,000 people would die. The reality is that we’ve designed plenty of structures whose failure could result in so many deaths, but those structures’ benefits may outweigh the risks. Either way, such serious consequences demand more scrutiny than just plotting a point on a simple graph.

All this is, of course, not just true for civil structures, but every aspect of public safety in society. Workplace safety rules, labeling of chemicals, seatbelt rules, and public health measures around the world use this idea of the Value of a Statistical Life to justify the cost of reducing risks (or the savings of not reducing them). A road, bridge, dam, pipeline, antenna tower, or public arena for gladiatorial fights can always be made safer by spending more resources on design and construction. Likewise, resources can be saved by decreasing a structure’s strength, durability, and redundancy. Someone has to make a decision about how safe is safe enough. There’s a popular quote (unattributable, as far as I can tell) that gets the point across pretty well: “Any idiot can build a bridge that stands, but it takes an engineer to build a bridge that barely stands.” But there’s a huge difference between a bridge that barely stands and one that barely doesn’t. When it’s done correctly, people will consider you a good steward of the available resources. And, when it’s done poorly, your name gets put in the intro of online videos about structural failures. Thank you for watching, and let me know what you think.

October 17, 2023 /Wesley Crump

HEAVY CONSTRUCTION of a Sewage Pump Station - Ep 4

October 10, 2023 by Wesley Crump

This is the fourth episode of a five-part pilot series to gauge your interest in "How It's Made"-esque heavy construction videos. Drop a comment or send me an email to let me know what you think! Watch on YouTube above or ad-free on Nebula here.

October 10, 2023 /Wesley Crump

Why Are Rails Shaped Like That?

October 03, 2023 by Wesley Crump

[Note that this article is a transcript of the video embedded above.]

Maybe more than any other type of infrastructure, railways have a contingent of devoted enthusiasts. “Railfans” as they call themselves; or should I say “ourselves”? Maybe it's the nostalgia of an earlier era or the simple appeal of seeing enormous machinery up close. But railroads and the trains that ride along them are just plain fascinating. Train drivers are often known as engineers, but operating a locomotive is far from the only engineering involved in railways. In fact, building and maintaining a railroad is a big feat full of complexity. And I’d like to share some of that complexity with you, starting where the rubber meets the road, or in this case, where the steel meets the… other steel? It might sound like a simple topic, but don’t say that to the attendees of the annual Wheel Rail Interaction Conference. This stuff is complicated, so this is the first in a series of videos I’m doing on the engineering behind railways. Why do the rails of railroads have such a weird shape? The answer is pretty ingenious. I’m Grady and this is Practical Engineering. In today’s episode, we’re talking about train wheels and rails.

Why do we build railroads anyway? They might seem self-evident now and even kind of elementary. But modern railroads are the result of hundreds of years of innovation. And like many kinds of innovation, the development of railroads was really just a series of solving problems. For example, how can we move upwards of 100 tons per vehicle without tearing up the road in the process? Well, instead of compacted gravel, asphalt, or concrete, we can build the road out of steel. But steel is expensive, so rather than a ribbon, we can save cost by using two narrow steel rails directly below the wheels. But wooden or rubber tires have a lot of rolling resistance because they deform under load, and that resistance adds up with each individual train car. So, we use steel for the wheels too. I built this model to show exactly how this works. My wheels are plastic and rails are aluminum, but I think you’ll still get the point. Steel wheels on steel rails are just so much more efficient than…[wheel falls off track]

Well, there is the problem of turning, too. Just because you put a rail below a wheel doesn’t mean it will follow the same path. You have to have some way for the rail to correct the direction of the wheel and keep it on track, literally. And, if you look at railway wheels, the answer is obvious: flanges. The wheels on railway vehicles all have them: a lip that projects below the rail to guide the wheel as it rolls along, keeping the position side to side. You could put flanges on the outside of wheels like this, but if a horizontal force like a hard turn caused one of the wheels to lift, the flange won’t help keep the wheel on track. We put flanges on the insides of wheels so they can keep a train from derailing even if one wheel lifts off the track. Let’s put some flanges on my wheels and try that demo again. [wheels bind up on track].

You can see we haven’t fully solved the problem. Unlike a wheel that has a tiny contact point with the rail, a flange is a big surface that creates a lot of friction around every curve. If you’ve heard that characteristic squeal of a train going around a corner, that’s the sound of flanges rubbing and grinding along the side of a rail. Rails on tight curves are often made of higher-grade hardened steel compared to straight portions of the track, and sometimes they’re even greased up to minimize friction between flanges and the edges of rails. But, there’s a bigger problem at play in this demonstration than simple friction.

Instead of independent wheels, most railway cars use solid axles attached to both wheels called a wheelset. They need that design to withstand the incredible loads each axle carries, but it poses a problem around bends. A solid axle means both wheels turn at the same rate, but the length of the outer portion of track in any given curve is longer than the inside of the curve. Two wheels of the same diameter spinning at the same rate will, kind of obviously, have to roll the same distance. Since there’s a mismatch between the distances the wheels need to travel, solid-axled wheelsets with cylindrical wheels would always experience some degree of slipping around a turn. That would not only create a bunch of additional friction, but also keep the wheels from following the curved path, and a flange can only do so much.

The trick to railway wheels is something that’s not so obvious at first glance. The wheels are actually conical. The profile of the wheel is wider on the inside next to the flange, and gently narrows toward the outside of the wheel. A wheelset with conical wheels will naturally tend to center itself between the two rails. On a straight section of track, a wheel that rides up higher on one rail will naturally fall back down, keeping the wheelset roughly centered on the rails. In a sense, conical wheels want to stay on the tracks. There’s always a little bit of wobble (exaggerated here), so trains actually move down tracks in a sinusoidal side-to-side pattern that you can sometimes feel if you’re paying attention. Incidentally, that helps the wheels wear evenly.
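That gentle wobble even has a classic formula behind it, Klingel's formula, which estimates the wavelength of the side-to-side motion from the wheel radius, the spacing between the rails, and the cone angle. The input values below are ballpark assumptions for a standard-gauge freight wheelset, not measurements.

    import math

    # Klingel's kinematic wavelength: L = 2 * pi * sqrt(r * s / conicity), where
    # r is the wheel rolling radius and s is half the spacing between contact points.
    def klingel_wavelength_m(wheel_radius_m, half_spacing_m, conicity):
        return 2 * math.pi * math.sqrt(wheel_radius_m * half_spacing_m / conicity)

    # Assumed ballpark inputs: 0.46 m wheel radius, 0.75 m half-spacing, 1-in-20 taper.
    print(f"~{klingel_wavelength_m(0.46, 0.75, 1 / 20):.0f} m per full side-to-side cycle")

But where this geometry really counts is on a curve.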

The turning forces on a train cause it to tend toward the outside track. This shifts the wheels over as well. The outer wheel will ride on the thicker part of its tread nearest to the flange, while the inner wheel will ride toward its edge, which has a smaller circumference. This way, the effective diameter of each wheel changes in a curve and solves the slip problem that cylindrical wheels would face. Take a look at the way these conical wheels that I 3D printed behave as they make this corner. You can see the outside wheel rolling on the wider part, effectively increasing its diameter and thus distance traveled per rotation. Conversely, the inside wheel rides on the narrower part of the cone, and so it has a smaller diameter and travels a shorter distance per rotation.

It really is kind of ingenious. Most vehicles have a differential gearbox to deal with this challenge of navigating curves; train cars just use some clever geometry. But that’s not the end of the story. You might even be thinking, “Richard Feynman already taught me this in the 80s… It’s nothing new.” But there’s more engineering involved in how train wheels and rails interact, including the interesting shape of modern rails. Think about that taper angle first. One standard in the US uses a 1:20 ratio. For the main part of the wheel, that means the outside diameter is roughly a quarter inch or 6 millimeters less than the inside diameter, and that difference has a big effect on the allowable radius of curves in a railroad. A steeper cone can navigate sharper curves, since there’s a bigger difference in the circumference from the inside to outside. You can see my wheelset can’t navigate this s-curve, despite the exaggerated conicity.
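You can also estimate how sharp a curve a given taper can handle before the geometry runs out. Roughly speaking, pure rolling works as long as the curve radius is bigger than about the wheel radius times half the rail spacing, divided by the conicity times how far the wheelset can shift sideways before the flange touches. Here's a sketch with assumed ballpark dimensions; it's a simplification of the real wheel-rail kinematics, not a design calculation.

    # Sharpest curve a coned wheelset can roll around with no slip (rough kinematic limit):
    #   R_min ~ (wheel radius * half rail spacing) / (conicity * max lateral shift)
    def min_curve_radius_m(wheel_radius_m, half_spacing_m, conicity, max_shift_m):
        return wheel_radius_m * half_spacing_m / (conicity * max_shift_m)

    # Assumed values: 0.46 m wheel radius, 0.75 m half-spacing, 1-in-20 taper,
    # and about 8 mm of sideways play before flange contact.
    print(f"~{min_curve_radius_m(0.46, 0.75, 1 / 20, 0.008):,.0f} m radius")   # roughly 860 m

Tighter than that, and the wheels have to slip and lean on their flanges, which is part of why sharp curves squeal.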

This challenge is partly solved with trucks, called bogies in the UK. You can kind of think of trucks as big rollerskates under each end of a train car. The trucks can rotate relative to the rest of the car, and they usually have some pretty serious springs and suspension systems to keep a smooth ride rolling. Most trucks keep the wheelsets parallel, but some can even allow them to steer radially through each curve.

However, even with trucks or bogies, wheels can overshoot their optimal orientation on the tracks. When the simple sinusoidal motion created by the tapered wheels is amplified by the speed of the car, the oscillation can violently slam the trucks side-to-side on the rails. This is called hunting behavior. The violent motion can even cause a train to derail. It’s worst with empty cars, and usually only happens at higher speeds, so a lot of engineering goes into developing wheel profiles and truck designs that raise the hunting onset speed so that it doesn’t limit how fast a train can go. That’s a lot of innovation on the wheel side, but what about the rails?

Just like all parts of a railroad, the rails themselves have evolved over time. Turns out there are a lot of shapes they can take and still serve the same basic function, but modern railway rails are shaped that way for a reason. Weight is equivalent to cost for big steel structures, so there’s nothing on these rails that isn’t absolutely necessary. In a sense, rails are I-beams, a shape that is well-known for its strength and something we see in plenty of other heavy load bearing steel structures. But there’s more to it than that. The bottom part of the rail, called the foot, distributes enormous loads, converting the extreme contact pressure of a steel wheel into something that can be withstood by a wooden or concrete tie. The web elevates the train above the ground, giving clearance for the flanges of the wheels and keeping everything clear of small debris that might end up on the tracks.

The head of the rail is where the action happens. This thick rounded section of steel takes an awful lot of abuse over its life, and thus experiences the bulk of the wear. An old rail section, especially on the high side of a curve, looks remarkably different than a newly forged rail. Here’s why: Theoretically, the speed of a spinning wheel exactly matches the speed of the rail at a mathematically precise point. But trains don’t care about math. For one, even steel wheels on steel rails deform a little bit as they roll. Rather than a single point, there is a small contact patch between the two. That tiny area, roughly the size of a small coin, carries all the weight of the train into the rail. But, because the contact patch is spread across the tapered wheel, the wheel is turning at many different speeds on the same piece of rail. Only the center of the contact patch actually moves at the exact speed of the train. This results in a small amount of grinding as the train moves along, slowly wearing down both the wheel and the rail. Eventually they start to conform to each other, and that’s mostly a bad thing.

Wheels can wear down to get a vertical face that wants to climb up the rail or a hollow profile with a quote-unquote “second flange” that takes the wrong direction at a switch. Most rail wheels have some amount of hollow to them, which changes how conical they actually are. Some wheels are even designed to be taken off and machined back into spec to extend their life. The best way to reduce this wear is to use hardened materials and reduce the size of the contact patch by curving the top of the rail so that the wheel only touches a tiny part of it as it rolls by. After that, it’s just a decision about how much wear you want before needing to replace the rail. The more metal you include in the rail head, the more it will cost, but the longer it will last. In fact, not all rails are equal. The lightest rails are used on straight sections and small commuter service lines. The largest rails are used on curves and heavy-haul freight tracks. Once they get worn down on the main line, they often get reinstalled for a second life in a yard or a siding where they can still bear train cars and locomotives at slow speeds.

So, rails are shaped in that funny way for a reason: they’re bulbous both to reduce the size of the contact patch and to provide enough steel to wear away before needing to be replaced. And the shape of rails and wheels is still a topic of research and innovation. Just in the past few years, the standard profile of North American freight train wheels was updated to the new AAR-2A standard. Just a tiny change in the shape of the wheel was tested to have 40% less wear than the previous spec. That means trains will start seeing better steering, lower friction, better fuel consumption, and longer-lasting infrastructure.

In many ways, railroads might seem like old technology, a solved problem that doesn’t need more engineering. But it’s just not true. Modern railroad companies use sophisticated software, like the Train Energy and Dynamics Simulator, to keep track of all the complexities involved in how wheels and rails interact. Simulators can let you adjust factors like train makeup, different track conditions, operating conditions, suspensions, and more to characterize how trains will handle and how much energy they’ll use. That’s the topic of the next video in this series, so stay tuned if you want to learn more.

In the 19th century, railway engineering was all about how to build railroads, finding routes through difficult terrain and efficient forms of construction. Modern rail engineering is all about getting the most out of the system. It might not look like much when you see a train passing by, but a huge amount of research, testing, and engineering went into the shape of those rails and wheels and we’re still improving them today.

October 03, 2023 /Wesley Crump

HEAVY CONSTRUCTION of a Sewage Pump Station - Ep 3

September 26, 2023 by Wesley Crump

This is the third episode of a five-part pilot series to gauge your interest in "How It's Made"-esque heavy construction videos. Drop a comment or send me an email to let me know what you think! Watch on YouTube above or ad-free on Nebula here.

September 26, 2023 /Wesley Crump

Every Type of Railcar Explained in 15 Minutes

September 19, 2023 by Wesley Crump

[Note that this article is a transcript of the video embedded above.]

A train is a simple thing at first glance: a locomotive (or several) pulls a string of cars along a railroad. But not all those railcars are equal, and there are some fascinating details if you take a minute to notice their differences. I’m about to start a deep dive series on railway engineering, but I thought, before I do that, we should cover some of the basics first. How many of these cars have you spotted before? I’m Grady, and this is Practical Engineering. Let’s get started.

All trains have at least one locomotive that provides the power. They can pull from the front (called the head) of the train, push from the tail, or act as so-called distributed power somewhere in between. There’s a ton of types of locomotives, but they deserve their own video, so today I’ll focus mainly on the unpowered cars they push or pull. We’ll start with passenger cars, move on to freight, and then talk about a few of the more unusual cars you might be lucky enough to spot on the rails.

Unless you work in the railroad industry, passenger trains are the only ones you’ll ever get a chance to interact with. The standard passenger car or coach is what you’ve probably seen the most of: aisle in the center with rows of seats on either side. Some coach cars can be disconnected and rearranged, but most modern passenger cars come in “train sets” that are rarely split up in normal operation.

Some passenger cars are bilevel, also called double-decker. This can double the capacity of a car, but it’s kind of rare. That’s not only because of height and weight restrictions on railroads, but also because the added time it takes to load and unload the cars can cause congestion at busy stations.

Long-haul passenger trains may include a baggage car for checked luggage like the cargo hold of an airliner. In most cases, they’re designed to look like the rest of the passenger cars, although often with fewer windows since bags rarely enjoy the view. Combine cars have a section for passengers and one for luggage or freight.

Although tricky to identify from the outside, a common sight on passenger trains is a diner car, essentially a rolling restaurant. These cars gave rise to the quintessential American restaurant of the same name, many of which are converted railcars themselves. Some passenger routes even include a lounge car, a bar on rails, that sometimes even has live music.

If you’re sleepy after dinner, you might find yourself in a sleeper car. Open section cars have the beds in bunks with only a curtain for privacy. Most modern sleeping cars have private rooms and bathrooms akin to rolling hotels.

These days, especially in the US, passenger rail is used by people who find the journey itself to be the destination. Some passenger trains include dome cars for better sightseeing along the trip. A bulbous glass dome provides a panoramic view from the side of the car. Similarly, observation cars are sometimes included at the end of a train to give passengers a view out the back.

Of course, we can’t forget crew cars. All trains have a team of people who work aboard for operation, maintenance, and other tasks, and they sometimes need their own quarters for breaks or sleep. Especially in areas like Australia where there are huge stretches of rail without stops at cities, a whole second crew might wait in the crew car, ready to swap when the working time limits of the first crew are reached.

Passenger trains are cool, of course, but I’m more of a freight train railfan myself. There’s just something awesome about seeing a single car weighing sometimes more than 100 tons move almost effortlessly down the steel rails. And with the huge variety of types of freight that move overland come a huge variety of railcars.

Boxcars are a common sight with their huge sliding doors. They can be loaded by hand or forklift and accommodate a wide range of sizes and types of cargo that require protection from the elements. And they have a few variations too. A refrigerated boxcar is exactly what it sounds like: a giant insulated fridge or freezer on rails. They usually feature a diesel-powered refrigeration system that’s easy to spot from the outside.

If the goods being transported in a boxcar are relatively light, you end up completely filling the car before coming close to its weight capacity, sometimes called “cubing out” the car. To maximize the use of a boxcar for lightweight cargo, there are taller versions called High Cubes. Not all railroads can fit such a tall car because of tunnels or bridges, so you might see the excess height portion of the car marked in white to make sure it doesn’t inadvertently end up on a route without the necessary clearance.
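The "cubing out" versus "weighing out" question boils down to a one-line comparison. The capacity figures below are assumptions in the right ballpark for a large boxcar, not the spec of any particular car.

    # Does a cargo fill the car's volume first ("cubes out") or hit the weight limit first ("weighs out")?
    # Capacity figures are assumed, ballpark values for a large boxcar.
    def limiting_factor(cargo_density_lb_per_ft3,
                        volume_capacity_ft3=6_000,
                        load_limit_lb=200_000):
        weight_if_full = cargo_density_lb_per_ft3 * volume_capacity_ft3
        return "cubes out" if weight_if_full < load_limit_lb else "weighs out"

    print(limiting_factor(10))   # light goods like paper products: cubes out
    print(limiting_factor(60))   # dense goods like canned food: weighs out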

If you want a train car full of cars, then you’re looking for an autorack, designed to carry consumer cars and trucks. Many have three levels and carry dozens of vehicles at once. Freight rail moves automobiles more cheaply and with better protection compared to driving each one individually from factories to distribution centers. A few passenger trains pull autoracks as well, like Amtrak’s Auto Train between the Washington, DC and Orlando areas. You can take your car on your rail trip and have it at your destination.

When it comes to freight cars, it doesn’t get much more straightforward than a flat car. A simple name for a simple function: just a rolling platform that can be used for all sorts of cargo, especially big stuff that needs to be loaded with a hoist or crane and cargo that can handle a little rain or snow. You might see flat cars used to transport heavy equipment and machinery, pipes or steel beams, or even see multiple flat cars outfitted to transport enormous wind turbine blades. Some flatcars feature bulkheads at the front and rear. These help keep loads like steel plates, pipes, and wood products from shifting forwards or backwards when the train accelerates or brakes.

Another flat car variant is the centerbeam car, used to haul lumber, plywood, wallboard and fencing. The central beam helps stiffen the car, making it possible to stack products higher. It also provides a place to secure the loads from either side of the car. Some centerbeam railcars hold enough lumber to frame out half a dozen houses!

Flatcars are also used for intermodal shipping, or using more than one mode of transportation like trucks, trains, and ships. Trailer-on-flatcar, or TOFC, isn’t exactly a distinct type of railcar, but it is a distinct use of one. A semi-trailer is lifted or driven onto a flatcar at one terminal, and it’s ready to connect back to a truck once it reaches the next intermodal facility to be driven to its final destination. This is sometimes called piggy-backing and it can be a cheaper alternative than trucking the trailer for its entire route.

Most intermodal freight these days comes in containers, standardized steel boxes that fit on trucks, trains, and ships. Container-on-flatcar, or COFC, again isn’t a different kind of car but simply a specialized use. The cast corners of steel containers have holes that make them easy to secure with latches or twist lock devices so they can be quickly loaded and unloaded.

One of the great advantages of containerization is that modern intermodal containers can be stacked. An interbox connector slots between the corner castings and holds each box together. But, you don’t see double-stacked containers on flatcars very often, because of height restrictions and issues with center of gravity. Instead, well cars recess the bottom of a container between the wheels, lowering the top of a double-stack and making it safer at speed. Not every line has the clearance, but well cars have made it possible to double-stack intermodal freight on a lot more routes than before.

Coils of sheet metal are used in countless manufacturing processes, so you can see them on freight railroads fairly frequently in coil cars. Steel coils are challenging to load and unload, and challenging to secure as well, so that’s why they get their own specialized cars. Many are covered with a hood to protect the steel or other metal cargo from the elements.

Gondola (GON-dola) cars, or gon-DO-la, depending on where you live, are used for bulk materials like scrap metal, sand, ore, and coal. They’re basically enormous wagons. Gondolas have to be loaded and unloaded from the top with a crane or bucket. Some can be turned upside down and unloaded using a rotary dumper. Look for the different color of paint on the side with the rotary coupler.

Hopper cars are like gondolas in that they’re loaded from the top, but they have sloped sides and bottoms that funnel material so they can be unloaded through hatches at the bottom. Hoppers can have open tops when carrying loads that aren’t sensitive to the weather, but covered hoppers are used for cargo that needs protection from the elements like sugar and grains.

Another option for unloading bulk goods is to tip the car sideways. This is a side dump car, and it’s not one you’ll see very often. They’re mostly used to maintain the railroad itself rather than to move and deliver bulk goods to customers.

This next car is very rare, but it’s so cool I just had to include it. Behold the behemoth that is a Schnabel car. There are actually two cars with far more axles than normal, each sporting a heavy lift arm for truly enormous cargo, such as power transformers used in substations. One of the largest of these is used in the US to transport nuclear reactor containment vessels on 36 axles.

Tank cars are used to carry liquids and gases on rails. Like all railcars, there are plenty of variations, but in general, they’re split up into two types. Non-pressurized tank cars handle all kinds of liquids from milk to oil. They may have specialized coatings that match their specific cargo needs, can be insulated or even refrigerated, and they usually have a bottom outlet so that they can unload by gravity.

Pressurized cars are designed to transport liquids and gases under pressure. These tanks have thicker walls and higher standards for containment of cargo. Pressurized cars always have protective housings covering the fittings on top of the tank. But, some non-pressurized cars have them too, so you'll have to look for other subtle clues (or memorize the DOT classification numbers) to know which type each one is for sure. Tank cars designed for hazardous cargo are heavily regulated and have special features like reinforced ends called head shields, specialized couplings that reduce the impacts of a derailment, and pressure relief valves to minimize the chances of an explosion.

I can’t be totally comprehensive in this short video. If you can dream it, there’s probably a freight railcar for it somewhere, but that covers most of what you’re likely to see in the wild, plus a few that you’d be really lucky to spot. But passenger and freight cars aren’t the only things you’ll see on the tracks. Non-revenue cars are those used by the railroad companies themselves. After all, building and maintaining railroads is a complicated and expensive endeavor, and it takes a lot of interesting equipment to do it well. I’ll rattle some of these off, but every railroad is different in the type of equipment they use to keep things running smoothly.

Ballast is the name for the gravel bedding that railroad ties sit on. It distributes the enormous pressure of trains to the subgrade, provides lateral support to keep tracks from sliding side-to-side, and facilitates drainage to keep the subgrade from getting soggy. Ballast tampers shake and pack the ballast under the tracks, restoring the support if the ballast has settled and sometimes correcting the rail alignment too. Ballast regulators use blades and brushes to distribute the ballast material evenly around the tracks and keep excess ballast from covering the ties. A ballast cleaner picks up all the rock, separates it from any dirt, and replaces it on the tracks to improve its ability to drain water and lock together to support the railroad.

Rail grinders do just that: grind the rails to restore their shape and remove irregularities that show up as rails wear down. A tie exchanger takes out the old ties and inserts new ones without having to remove the rails. A spiker drives the spikes that hold the rails tightly to the ties. A railroad crane is used for heavy lifting along the rails where it might be difficult to access with an overland crane. Some railways in the north use a rotary snowplow during severe winter weather to keep the tracks clear.

Sometimes you might see a work truck driving around on regular old paved roads with an extra set of flanged metal wheels. This is a road-rail vehicle, also called hi-rail (since they can run both on the highway and the railroad). There’s a whole host of hi-rail vehicles out there; really, any kind of work truck you can imagine on the highway could find itself doing work on the railroad. And this is probably the only rail vehicle you’ll have a chance of seeing without also seeing a railroad itself!

Railroads depend on large scales to measure the weight of equipment and cargo. And of course, if you’ve got a scale, you need a way to calibrate it, which is where the scale test car comes into play. These cars are basically rolling hunks of metal with very precisely known weights, kind of like a huge railroad version of the little weights you might have used in school science classes.

A particularly rare car that you’d be lucky to see is a track geometry car. They carefully measure the gauge, position, curvature, and alignment of the railroad, helping to ensure the safety and smoothness of tracks without interrupting service. Unlike manual measurements of rail geometry, the measurements of track geometry cars account for loading conditions since the car itself is a full-scale railroad car.

And finally, bringing up the rear, a train car we’ve all heard of, but one you won’t really see too much of anymore: the caboose. Historically, cabooses housed crewmembers who had a host of jobs: helping with switching and shunting cars around, looking for damaged cars and dangling equipment, monitoring brake line air pressure, and spotting overheating bearings and axles. With the advent of roller bearings and wayside defect detectors, the role of the caboose was diminished, and eventually the laws requiring them on trains were relaxed. Today the last car of a freight train is often just a regular cargo car, but with a small device on the back called an End-Of-Train Device. The most sophisticated versions monitor brake line pressure and movement of the back of the train, relaying the information to the engineer at the head. And a flashing red light lets anyone know that that’s the whole train and there aren’t any cars inadvertently left behind on the tracks.

Trains are one of the most fascinating engineered systems in the world, and they’re out there, right in the open for anyone to have a look! Once you start paying attention, it's pretty satisfying to look for all the different types of railcars that show up on the tracks, and in future videos, I’m going to show you a lot more. If you’ve been inspired to keep your eye out, we put together a checklist that you can use to keep track of the cars you’ve seen. It’s linked below in the description, but that’s not all.

If you’ve watched my channel for any length of time, you know that almost every video I make is connected to something you can see in your own surroundings. You might even know I released a book about it: Engineering in Plain Sight: An Illustrated Field Guide to the Constructed Environment. And now, I’m launching a companion game too. This is Infrastructure Road Trip Bingo. Our brains have a stupendous capacity to ignore all the fascinating details that are hidden in plain sight, and road trips are the perfect opportunity to open your mind’s eye.

Infrastructure Road Trip Bingo is just what it sounds like: a spotting game to play with your fellow passengers. Each sheet has 24 engineered structures that you might see on a typical road trip. Some you’re sure to spot. For some, you might need to influence the driver to take a special detour. Get a line of 5 before anyone else, and you win. All the icons were designed by the illustrator for my book, and there’s a cross-reference table inside the cover if you want to learn more about a particular square. 100 tear-off sheets mean you’ll have plenty of chances to play and win, and the squares are randomized so that no two games end the same.

Is this a silly idea? Of course it is. But, what I’ve learned from you over all these years is that you’re enthusiastic about the built environment just like me. Engineering In Plain Sight hit the Publishers Weekly best seller list, and it’s still topping out categories on Amazon nearly a year later. So I wanted to give you a chance to put those observation skills to the test. Infrastructure Road Trip Bingo goes on pre-sale today, only on my website, and they’ll start shipping later this year. And if you still don’t have my book, you can get a copy bundled with your game for a huge discount as well. You can get it from any retailer, but if you buy from my website, I signed every single copy in our warehouse. These are awesome gifts, or a way to treat yourself to something fun and cool, and support what we’re doing on Practical Engineering while you’re at it. That link’s in the description. Thank you for watching, and let me know what you think!

September 19, 2023 /Wesley Crump

HEAVY CONSTRUCTION of a Sewage Pump Station - Ep 2

September 12, 2023 by Wesley Crump

This is the second episode of a five-part pilot series to gauge your interest in "How It's Made"-esque heavy construction videos. Drop a comment or send me an email to let me know what you think! Watch on YouTube above or ad-free on Nebula here.

September 12, 2023 /Wesley Crump

Do Droughts Make Floods Worse?

September 05, 2023 by Wesley Crump

[Note that this article is a transcript of the video embedded above.]

Do you remember the summer of 2022 when a record drought had gripped not only a large part of the United States, but most of Europe too? Reservoirs were empty, wildfires spread, crop yields dropped, and rivers ran dry. It seemed like practically the whole world was facing heatwaves and water shortages. But there was one video that warned against hoping for rain, at least not for a big storm right at first. Rob Thompson, a meteorologist and professor at the University of Reading, shared a little backyard experiment: cups of water being inverted on top of grass with varying moisture levels in the soil. The results seemed to show that the dry soil absorbed the water much more slowly than the wet grass or normal summer conditions. This video was shared across the internet as a viral reminder that, contrary to what you might think, droughts can increase the impact of flooding. But is that actually true? Does dry soil absorb moisture more slowly than wet soil, and could a storm after a drought cause more runoff and worse damage than if the ground was already wet? No matter what your intuitions say, the answer’s a little more complicated than you might think. And of course, I built some garage demonstrations to show why. I’m Grady, and this is Practical Engineering. Today we’re exploring the relationship between droughts and floods.

Of all of the natural disasters we face, floods are among the worst. There have been more than 30 floods in the US since 1980 that caused over a billion dollars in damages each! And that’s not including hurricanes. In fact, floods are so impactful that I’ve already made a whole series of videos about how dangerous they are and many of the ways that engineers work to reduce the risk of flooding or at least reduce the damage they cause. Many of those flood infrastructure projects are based on a “design storm,” essentially a made up flood used to set the capacity or height of a structure. For example, the storm gutters on your street might be designed to carry the 25-year storm. Many spillways for dams are designed for a flood that is unlikely to ever occur called the Probable Maximum Flood. Of course, we just can’t run full-scale tests on flood infrastructure. Despite architects and contractors saying we always rain on the parade, civil engineers can’t call down a flood of a particular magnitude and duration from the heavens. And even if they could, it would be an ethical gray area. So, engineers who design water infrastructure instead use models to help estimate various magnitudes of flooding and predict how the built environment will respond.

There are all kinds of hydrologic models that can simulate just about every aspect of the water cycle you could imagine, but modeling basic storm hydrology is actually pretty simple. It’s usually broken into three steps. Precipitation is exactly what you would expect: how much rain actually falls from the sky and hits the surface of the earth. Transformation describes what actually happens to those raindrops as they run along the ground and the timing of how they combine and concentrate. But in between precipitation and transformation, there’s a third step: losses. Not all those raindrops run off and reach a river or stream. Some of them get stuck in puddles and ponds (called abstractions), some evaporate, and some soak into the ground.

I say all this to point out that the engineers and scientists who study flooding have put a great deal of thought and research into the how, where, how much, and why rainfall soaks into the ground. It’s the third leg of the “estimating how bad floods can be” stool (a stool, by the way, I spent a good part of my education and professional experience sitting on). And of course, there’s a litany of factors that affect how much precipitation is lost to infiltration into the earth versus how much runs off into rivers and creeks: temperature, vegetation, season, land use, soil type, and more. But one of the factors is more important than any other: soil moisture. And it shouldn’t be that surprising. How much water is being held between those tiny grains of silt, sand, or clay plays a pretty big role in how much more water can flow in.

Maybe you’re starting to see what I’m getting at here (and I promise the demos are coming, but I think it’s important to know the theory first). One of the most beloved mathematical expressions of hydrologists everywhere is Horton’s equation. It looks a little intimidating, but it’s much simpler as a graph. This decaying exponential curve shows the infiltration rate we can expect during a rain event of a given magnitude over time. At first, when the soil is driest, the rate of infiltration is highest. As rainfall continues to soak the soil, less water is absorbed, and the infiltration rate slowly approaches a steady state.
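For readers of the transcript who can’t see the graphic, the standard textbook form of Horton’s equation is below. The symbols are the usual ones from hydrology references, not anything specific to this video: f(t) is the infiltration rate at time t, f₀ is the initial (dry-soil) rate, f_c is the final steady-state rate, and k is a decay constant that depends on the soil.

```latex
% Horton's infiltration equation (standard textbook form)
f(t) = f_c + (f_0 - f_c)\,e^{-kt}
```

Plug in t = 0 and you get f₀; as t grows, the exponential dies away and the rate settles at f_c, which is exactly the curve described above.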

The inputs to Horton’s equation are fine for a laboratory, but they’re not really easy to estimate in a real world scenario, so most hydrologic models don’t use it. One of the simpler infiltration models actually used in engineering is the Curve Number method, originally developed by the Soil Conservation Service in the 1950s. Here, instead of esoteric laboratory variables, infiltration rates are tied to actual soil types and land uses we can estimate in the field, and this is meant to be dead simple. You too can be a civil engineer by simply picking the right number from a table and feeding it into a model. In fact, let’s try it out. My backyard is an open space, I would say in good condition, with mostly clay which is hydrologic soil group D. So my curve number should be 80.
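To make that concrete, here’s a minimal sketch in Python of the runoff equation behind the Curve Number method, using the standard SCS relationships and the curve number of 80 from my backyard. The storm depths are just illustrative values I picked; real modeling software layers timing, routing, and a lot more on top of this.

```python
def scs_runoff_inches(precip_in: float, curve_number: float) -> float:
    """Estimate direct runoff (inches) from a storm depth (inches)
    using the SCS Curve Number relationships."""
    s = 1000.0 / curve_number - 10.0   # potential maximum retention, inches
    ia = 0.2 * s                       # initial abstraction (puddles and ponds)
    if precip_in <= ia:
        return 0.0                     # everything is lost; no runoff yet
    return (precip_in - ia) ** 2 / (precip_in - ia + s)

# My backyard: open space in good condition on group D soil -> CN of 80
for p in [0.5, 1.0, 2.0, 4.0]:          # illustrative storm depths, inches
    q = scs_runoff_inches(p, 80)
    print(f"{p:.1f} in of rain -> {q:.2f} in of runoff, {p - q:.2f} in of losses")
```

Run it and you can watch the losses dominate for small storms and the runoff fraction grow as the storm gets bigger, the same behavior the model shows.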

I won’t make you go through the calculations, because we can make the computer do them. This is the Hydrologic Modeling System, a free piece of software available from the US Army Corps of Engineers. I’ll plug in my backyard curve number, plug in a storm with a constant rainfall over a day, and push go. The bars show the total amount of precipitation for each time step. The red portion shows the losses and the blue portion shows the runoff. At first, all the precipitation goes toward losses as the rainfall gets caught in abstractions. But once the puddles fill up, some runoff starts to occur. You can see that, for a constant rate of precipitation, runoff increases over time, and infiltration goes down, just like we saw with Horton’s equation. I know we’re in the weeds just a bit, but I think it’s important to know that we have technically rigorous ways to describe our intuitions of how floods work. The Curve Number method (along with many others) is used across the world by engineers to characterize floods and even to calibrate hydrological models to actual floods. Of course models are never perfect, but at least they’re based on real science. Water fits into the spaces, the interstices, between soil particles. The more water that’s already there, the harder it is for more water to flow in. But you don’t need a graph. You can see it for yourself.

I hammered a clear tube into my Curve Number 80 backyard, and we can watch the water flow into that clay soil with grass in quote-unquote “good condition.” This is a crude version of an actual scientific test apparatus called an infiltrometer, but it isn’t strictly scientific. The real test involves hammering the tube deeper to prevent lateral spread and maintaining a constant level to remove water pressure as a variable. But, hey, this is just a youtube demo, and I wanted to push my kid on the swing instead of babysitting the water level in a clear tube for 45 minutes.

I did take the time to graph the water level for the duration of the experiment so you can see the results more clearly. The level drops quickly at first and slows down to roughly a constant rate, just as the theory predicted. Some of that slowdown is because of the decreased water pressure over time, the variable I didn’t control, but it’s mostly because the soil became saturated, making it harder for water to infiltrate.

Just for fun, I ran another experiment in the garage with a tube full of sand. FYI, that’s roughly equivalent to “Natural Desert Landscaping” with an associated curve number of 63. Are you feeling like an expert at this yet? It’s a little harder to tell in the sand because the water flows so quickly, but it does in fact flow more quickly at the beginning before the sand is saturated. Once it saturates, the infiltration is more or less constant, just like we would expect. The reason for the sand demo is this: we’ve left out a key consideration so far which is the initial conditions. How much water is in the soil at the start of the event? If it’s a lot, you would assume there would be less infiltration. If the soil is dry, you would assume infiltration would be greater. Is it true? Let’s try it out!

Again, the sand is maybe a little bit too porous for this demonstration, and my method for adding the water isn’t so precise either. But, just paying attention to how quickly the tube fills up with water with the valve fully opened, the dry sand takes longer. That’s because more of the water is infiltrating into the soil. The wet sand is like starting halfway down the Horton curve. But that wasn’t a super satisfying result, so I put some potting soil into the tube next (Curve Number 86). I ran it once dry, then ran it again after the soil was saturated, and lined the shots up side by side. This time you can clearly see the difference. Water infiltrates into the unsaturated soil much more quickly, but once it does, it infiltrates at about the same rate as the already wet example.
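If you want to put rough numbers on that “starting halfway down the Horton curve” idea, here’s a little Python sketch comparing cumulative infiltration over an hour of steady rain for a dry start versus an already-wet start. The parameter values are made up for illustration; they’re not measurements from my tube.

```python
import math

def horton_rate(t_min: float, f0: float, fc: float, k: float) -> float:
    """Horton infiltration rate (inches/hour) at time t (minutes)."""
    return fc + (f0 - fc) * math.exp(-k * t_min)

def cumulative_infiltration(duration_min: float, f0: float, fc: float, k: float,
                            dt: float = 1.0) -> float:
    """Numerically integrate the Horton curve over a storm, in inches."""
    total, t = 0.0, 0.0
    while t < duration_min:
        total += horton_rate(t, f0, fc, k) * dt / 60.0  # (in/hr) * (hr)
        t += dt
    return total

fc, k = 0.3, 0.06   # steady-state rate (in/hr) and decay constant (1/min), illustrative
dry = cumulative_infiltration(60, f0=3.0, fc=fc, k=k)   # dry antecedent soil
wet = cumulative_infiltration(60, f0=1.0, fc=fc, k=k)   # already-wet soil, partway down the curve
print(f"Dry start soaks up about {dry:.2f} in; wet start only about {wet:.2f} in.")
```

With these made-up numbers, the dry soil absorbs roughly twice as much of the storm as the pre-wetted soil, which is the whole point of antecedent conditions.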

We have a word for this: antecedent conditions. Most of the factors we talked about that affect infiltration rates don’t stay the same over time. They change. Many hydrologic models use average conditions as a starting point, but the real world isn’t very average. Vegetation is seasonal; temperatures fluctuate; watersheds experience fires and droughts (hint hint). How wet a watershed was before the storm is an important factor in determining how much runoff will occur. According to all the theory and practical examples I’ve shown, a wet watershed will absorb less precipitation, so flooding will be worse. And the opposite is true for a dry one. More water will soak in, making flooding less impactful. But, that seems contrary to the video I showed in the introduction, and do you really think I would make a video called “Do Droughts Make Floods Worse” if the answer was just, “no”?

It turns out that certain kinds of soil, when they become very dry, also become hydrophobic. They actually repel water. This is not a super-well-understood phenomenon, but it seems that under very dry conditions, waxes, plant root excretions, and the action of bacteria and fungi create a layer at the surface that reduces a soil’s affinity for water. If you’ve ever forgotten to water a houseplant for a while, you may have experienced this yourself. It’s hard to get the water to soak in at first, and many gardeners will actually fully submerge a potted plant to properly water it.

Because it’s a finicky phenomenon, I had a little trouble creating water-repellent soil in the garage, but luckily, hydrophobicity is interesting enough to be a fun kids’ toy. I bought some hydrophobic sand and put a layer of it on top of my regular sand to simulate this effect of soil water repellency. You can see clearly that the repellent layer slows down the infiltration of the water. It still gets through, but it happens a lot more slowly compared to if it weren’t there. So, why doesn’t this effect show up in the theory (or at least the theory of flood modeling)?

There are a few reasons: number 1 is that most hydrophobic soil effects disappear pretty quickly after the soil gets wet. It just doesn’t last that long, as you know if you’ve dealt with it in your potted plants. Number 2 is that it’s a phenomenon that hasn’t been well-characterized in terms of what soils experience repellency and under what conditions. There’s no nice table for an engineer to look up values. But number 3 is the biggest one: there are other antecedent factors that just end up being more important. Very high soil moisture before a flood is much more likely to lead to severe flooding than very low soil moisture in most cases. The extreme example of this is rain-on-snow flooding, which contributed to the 2022 flooding in Yellowstone National Park. But there is one big exception to this rule: fires.

When organic stuff burns, some of that volatile material creates hydrophobic properties in the underlying soil, reducing its ability to absorb rainfall. That effect plus the loss of vegetation on the surface means that the potential for flooding after a fire increases dramatically. Storms after wildfires are known to create massive floods, mudslides, and erosion, so there is a lot of research into understanding this phenomenon.

So what’s the answer? Are floods worse after a drought? Dry conditions do kill plants and grasses that slow down runoff, and they create hydrophobic soils that briefly keep water from soaking into the ground. And, they often make fire conditions worse, which, in turn, can lead to more impactful floods. But droughts also leave the soil drier than average, increasing its ability to soak up rainfall. In many cases, a flood after a good soaking rainfall is going to generate far more runoff than a flood after a drought.

Rob told me he was completely surprised by the response to his video, especially since he only spent a few hours making it. His goal was to show that, under certain conditions, flash floods can be worse when the underlying soil is very dry. But I suspect if his demo lasted a little bit longer (and his setup was a little more rigorous), the results may have looked a little different. And on the other hand, most models used by engineers to estimate floods assume that infiltration always goes up as soil moisture goes down, completely neglecting the fact that some soils lose their affinity for water at very low moisture levels. One statistician famously said that, “All models are wrong, but some are useful.” And even something as simple as the flow of water into the soil has so many complexities to keep track of. Like most answers to simple questions in engineering and in life: the answer is that it’s complicated.

September 05, 2023 /Wesley Crump

HEAVY CONSTRUCTION of a Sewage Pump Station - Ep 1

August 30, 2023 by Wesley Crump

Check out our new series! This is the first episode of a five-part pilot series to gauge your interest in "How It's Made"-esque heavy construction videos. Drop a comment or send me an email to let me know what you think! Watch on YouTube above or ad-free on Nebula here.

August 30, 2023 /Wesley Crump

Every Construction Machine Explained in 15 Minutes

August 15, 2023 by Wesley Crump

[Note that this article is a transcript of the video embedded above.]

We talk about a lot of big structures on this channel. But, it takes a lot of big tools to build the roads, dams, sewage lift stations, and every other part of the constructed environment. To me, there’s almost nothing more fun than watching something get built, and that’s made all the better when you know what all those machines do. So, in this episode, we’re going to try something a little bit different. I’m Grady, and this is Practical Engineering. Let’s get started!

A big part of construction is just shifting around soil and rock. If you’ve ever had to dig a hole, you know how limited human effort is in moving earth. Almost no major job site is complete without at least one excavator because they’re just so versatile. Depending on size, the heavy steel bucket of an excavator can match an entire day’s digging of one guy or girl with a single scoop. But excavators get used for more than just digging. They are a lifter, pusher, crane, and hammer all in one.

A skid steer is second only to an excavator when it comes to versatility. These little machines are often equipped with a bucket, but you can attach almost any type of tool as well. While there are often purpose built machines that can do the same job, none of them can convert from loader to mower to forklift to drill rig quite so quickly, and in tight confined spaces, a skid steer is the perfect tool.

A loader is one of many machines meant to carry soil and rock across a distance. They’re often articulated in the center for tighter turns and use a large bucket on the front for lifting and dumping. They’re meant to carry materials over short distances, like the length of a construction site.

Longer hauls use a dump truck. These trucks feature a large open-topped tub meant to withstand repeated loading with various heavy materials. A typical dump truck features a hydraulic cylinder that can lift the bed, tilting it at a steep angle and allowing material to dump out of the back. Since dump trucks carry heavy loads, lots of them have auxiliary axles that can be lowered to distribute the weight over more tires and keep the truck in compliance with roadway and bridge weight limits. Articulated haulers are dump trucks used in off-road and difficult terrain.

If you want to move a lot of soil around a large construction site, another option is a scraper. Rather than loading from the ground into a dump truck, these machines do it all in one. A huge blade scrapes material directly from the ground into a hopper. It’s carried directly to where it’s needed and unloaded with a hydraulic ejector. Scrapers are often used on large embankments, like those for highways and dams.

Another Swiss army knife of the construction yard is the backhoe, which is kind of a combination excavator and loader. It’s great for small sites where it doesn’t make sense to have two pieces of equipment.

And don’t forget the bulldozer that specializes in moving material at ground level. They can’t move material over large distances, but they can spread out literal tons with their tank-like tracks.

The last stop on the digging train is the trencher. There are a huge variety of styles and sizes, but ultimately they all specialize in digging long holes for pipes and utilities. Many use a tooth chain like a giant chainsaw for the Earth!

By the way, there are about a hundred different colloquial names for almost every piece of large equipment. Different sites, suppliers, regions, and countries use different words for the same machine; it’s part of the fun. One easy tip to sound like a pro is just to add the drive style to the front of the name. It’s not a loader, it’s a wheel loader, or a tracked excavator and so on.

Now let’s hit the road. Roadwork is something we’ve all seen, and while it can be a bit frustrating if you’re stuck in a traffic jam from it, roads might be the largest engineered structures on earth. Our modern lives depend on them, and it takes some pretty cool tools to get them built.

A grader is technically an earthwork tool, but it’s used mostly on roadways. The extra long wheelbase makes it well suited for precisely leveling surfaces and evening out bumps, leaving a nice even grade.

Once all that soil is in the right place, it needs to be solidified so it doesn’t settle over time. A roller compactor is the main tool for this job. There are a few varieties of these depending on the material being compacted. Smooth drums are used for most soils and asphalt. Sheep’s foot and padded drums have protrusions that work best on clay and silt. Pneumatic tire rollers are best to knead and seal the surface. And a lot of roller compactors have a vibration feature to shake the soil into place.

An asphalt paver is the machine where the road meets the road. Hot asphalt is loaded into the machine, which spreads it into an even layer onto the subgrade using a screed. Many paving machines have a wand that follows a stringline as a reference to the exact elevation required for the roadway.

If we’re talking about making a road out of concrete, then the tool for the job is a slip former. It’s usually more efficient, and produces better quality work, when pavement, curbs, and highway barriers are installed continuously rather than built with forms and cast in batches. Careful control of the mix makes it possible for a slip form machine to create long concrete structures without any formwork at all.

If we just added another layer of pavement to the road every time it started to wear out, pretty soon, we’d have walls! Roads are designed to be extraordinarily tough, so removing the top layer isn’t easy. That’s a job for an asphalt mill or planer. These specialized tools grind and remove the surface with a large rotating drum. The material is routed up a conveyor system and can be loaded into a following dump truck.

It’s actually fairly common to see multiple vehicles following one another in roadwork like this. An interesting example is the so-called paving train. On one end, we have a dump truck full of asphalt fresh from the plant. This is loaded into the asphalt paver, which continuously lays a layer of asphalt that is then compacted by one or more rollers. Workers on the ground also continuously monitor the process to ensure a nice even road surface.

Not everything at a construction site is a machine with wheels or tracks. A lot of equipment gets hauled in on a trailer, or is a trailer itself. A light tower lets you work outside of daylight hours, illuminating the site so you can work at night or underground. An air compressor enables the use of lots of tools on a job site, like jackhammers, sandblasters, and painting rigs. If you need electric power instead of compressed air, diesel generators offer access to power when grid service isn’t available.

So far, the material we’ve seen moved has been in bulk, like earth or asphalt. Often in construction, the materials we need to lift or move are objects like girders or concrete pipes. For that you need a crane or similar material-handling equipment.

This is a pipe layer. The name is a bit confusing since the workers that operate them are also often called pipe layers. And it's no surprise what kind of jobs they do. They specialize in handling large sections of pipe and precisely lowering them and placing them into trenches.

A telescopic handler, also called a telehandler or teleporter, is like an all-terrain forklift. The boom can have attachments like a bucket, pallet forks, or a winch, and it telescopes to make it easy to deliver materials and equipment exactly where you need them.

If you happen to be the load that needs elevating, then you’ll need a boom lift or its cousin, the scissor lift. The operator controls the platform while standing on it, allowing for positioning of people that’s much more precise, and usually safer, than a ladder. Another relative of the boom lift is a bucket truck, which has a boom lift in the back and is used a lot for electric and utility work on poles.

Stepping up in size, we have road-rated all-terrain cranes. If you’ve passed a giant crane driving down the highway, it was one of these, since most other types of cranes have to be hauled to a site in pieces and assembled.

As the name implies, all-terrain cranes don’t require perfectly level, paved surfaces to get to work. However, if your job site is particularly rough, you need a rough-terrain crane. The giant rubber tires on these mean you’ll need to have them transported, but once rolling, they can go where highway-rated vehicles might struggle.

If the crane you’re looking at is mounted on tracks, you’ve got a crawler crane. These heavy-duty cranes, while slower and bulkier than all-terrain cranes and also requiring modular transport to job sites, can carry immense loads and extend to even greater heights than any of the cranes we’ve seen so far. Most crawler cranes can be configured according to the job with different lengths of booms, amounts of counterweight, and extensions called jibs. A particularly fun configuration is for demolition where a crawler crane might be fitted with a wrecking ball.

Most can move from place to place, but not all. Tower cranes use large counterbalanced horizontal booms with an integrated operator cab on top of a large, well… tower. Like most of the cranes we’ve seen so far, these come in a wide range of sizes but can be absolutely enormous, almost a construction project themselves requiring other cranes for assembly.

One way to build bridges uses a specialized crane called a launching gantry. You may have heard the term gantry before for a bridgelike overhead crane; these are used in all kinds of industries. A launching gantry uses the existing structure of the bridge as a base and often lifts whole pre-built sections of the bridge.

Turning from the sky and looking underground, let’s talk about a few foundation-specific machines.

The biggest and heaviest structures are supported on bedrock or some deeper geological layer. Even if the usable soil is just clay for hundreds of feet, sinking deep subterranean columns or piles below a heavy structure can keep it from settling too much over time. One way to install a pile is to dig a very deep hole, place a reinforcing steel cage in the hole, then fill the whole thing with concrete. This is the exact job that a pile drill rig is designed to do. These large-scale drills are pretty closely related to the machines used for oil and gas exploration.

Another way to install piles is to drive them into the earth, the job of a pile driver. Just like the name implies, they repeatedly strike wooden, steel, or concrete piles to sink them to the required depth.

Speaking of concrete, there’s a whole subset of construction machines that are specifically designed to handle, transport, and place this important material. You’ve probably seen a mixer truck before, and I’ll forgive you for calling them cement trucks, even though cement is just one of the ingredients of a concrete mix. The truck can be loaded with dry materials and water, and the mixing occurs en route to the job site, since concrete generally has a limited time before it begins to cure.

Concrete is often placed directly from the truck using a chute, but that’s not always the easiest way. Concrete pumps are used to pump concrete to job site locations that are hard to access with a truck, often with a huge overhead boom. Since concrete is more than twice as dense as water, these pumps operate at extremely high pressures, sometimes over 100 times atmospheric pressure!
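As a rough back-of-the-envelope illustration (my numbers, not the video’s): just holding up the column of concrete in a 40-meter boom takes close to a megapascal of pressure, and friction losses in long lines and high-rise pours are what push pumps past the hundred-atmosphere mark.

```latex
% Illustrative hydrostatic head only (ignores friction in the line):
% concrete density of roughly 2400 kg/m^3 and an assumed 40 m of vertical lift
p = \rho g h \approx 2400 \times 9.81 \times 40\ \mathrm{Pa} \approx 0.94\ \mathrm{MPa} \approx 9\ \mathrm{atm}
```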

Finishing concrete is mostly a hand-tool job, but there are some machines for big jobs, like ride-on trowels, that speed up the job of floating a slab smooth once it has started to set up.

Big jobs with lots of concrete might just mix it onsite with a mobile batching plant. This is helpful if you need to produce vast volumes of concrete over a long period in a way that would be too inconvenient or maybe even impossible for mixer trucks to handle.

Sometimes concrete needs to be placed on a sloped or vertical surface to stabilize a rock face, shore up a tunnel, or even just install a pool! The catch-all term for the different varieties of sprayed concrete is shotcrete (although some pool installers might disagree). Shotcrete machines use compressed air to apply concrete to all kinds of surfaces in the construction world.

When projects require the installation of new or additional utility lines in areas that are already built up, the traditional method of digging trenches isn’t feasible. This kind of job calls for a directional drilling machine. While these are technically boring tools, they are anything but uninteresting. I actually have a dedicated video just to talk about how they work, and specifically how they steer that bit below the ground. Go check that out after this if you want to learn more.

Hopefully there have been a few machines in the list so far that are new to you, but if not, I have a few more specialized machines you might be lucky enough to spot on a site:

Fans of the channel might recognize a soil nail rig, a specialized machine that drills out more or less horizontal shafts in an earthen slope and then adds soil nails to greatly enhance stability.

Jobs that require grout often use mobile batch plants, called grout plants. You can even inject grout into the ground at high pressures using a hydraulic pump to fill voids and stabilize soils.

A wick drain machine installs prefabricated vertical drains into the soil at regular intervals to speed up the drainage of water in clay soils, which accelerates the inevitable settling of the soil so construction can get started faster.

One option for repairing existing pipelines in place without trenching is cured-in-place pipe lining. Inverting a liner impregnated with epoxy-resin into an existing pipeline using air pressure essentially puts a brand new pipe inside an old or damaged line.

One of the least boring machines that you’d be really lucky to see above ground is a tunnel boring machine. These behemoths use a complicated face of various cutting tools followed by a material removal and shoring installation apparatus to efficiently bore full scale tunnels!

Obviously I can’t be exhaustive here. The construction industry is just full of machines. There is such a variety in the type and scale of projects that manufacturers are always coming up with new and improved equipment that can get a particular job done better. And lots of industries outside of construction use heavy machinery, including mines, oil and gas, and railroads. Let me know what you think I missed or if you want a similar list within a different industry. But I think this is a good starting point for any burgeoning construction spotter, and I hope it’s thorough enough that if you see something that didn’t make the list, you can puzzle out its purpose on your own. That’s part of the satisfaction of construction spotting anyway, so get out there and see what kinds of machines you can find.

August 15, 2023 /Wesley Crump

Where Does Grounded Electricity Actually Go?

August 01, 2023 by Wesley Crump

[Note that this article is a transcript of the video embedded above.]

Imagine this scenario: You have a diesel-powered generator on a stand that is electrically isolated from the ground. Run a wire from the energized slot of an outlet to an electrode driven into the ground. Don’t connect anything to the ground or neutral slots. Now imagine starting the generator. What happens? Does current flow from the energized wire into the ground or not? Your answer depends completely on your mental model of what the earth represents in an electrical circuit. After all, the idea of a circuit is just an abstraction of some really complicated electromagnetic processes, and that’s even more true on the grand scale of the power grid. Grounding is one of the most confusing and misunderstood aspects of the grid, so you can be pardoned for being a little perplexed.

For example, if I run a wire from the positive side of a battery into the ground, nothing happens. But, when an energized power line falls from a pole, there’s definitely current flowing into the ground then. Cloud-to-ground lightning strikes move huge electrical currents into or out of the earth, but my little thought experiment of a generator connected to a grounding electrode won’t create any current at all. I’ll explain why in a minute. Even on an electrical diagram, ground is just this magical symbol that hangs off the circuit willy-nilly. But, connections between an electrical circuit and the ground serve quite a few different and critical purposes. And I have some demonstrations set up in the studio to help explain. I think you’re going to look at the power grid in a whole new way after this, but just don’t try these experiments at home. I’m Grady and this is Practical Engineering. In today’s episode, we’re talking about electrical grounding.

Why do we ground electrical circuits in the first place? Maybe the easiest way to answer that question is to show you what happens when we don’t. For as much importance as it gets in the electrical code, it might surprise you that it’s not always such a big deal, and in some cases, can even be beneficial. After all, lots of small electrical circuits lack a connection to the ground, even if part of the circuit is literally called, “ground.” In that case, that term really just refers to a common reference point from which voltages are measured. That’s one thing that can be confusing about voltage: it doesn’t actually refer to a single wire or trace or location, but the difference in electrical potentials between two points. By convention, we pick a common reference point, assume it has zero potential to make the math simple, and call it ground, even if there’s no reference to the actual ground below our feet. On small, low voltage devices (like battery powered toys), the difference in potential between components on the circuit board and the actual earth isn’t all that important, but that’s not true for high voltage systems connected to the grid. Let me show you why:

This is a diagram of a typical power system on the grid. The coils of a generator are shown on the left. When a magnetic field rotates past these coils, it generates electric current on the conductors, and (very generally) this is how we get the three phase AC power that is the backbone of most electric grids today. Look at nearly any transmission line, and you’ll see three main conductors that (again, very generally) correspond to this diagram. But what you don’t see here is a connection to ground. Let me put another diagram underneath where distance is equal to voltage. You can see our three conductors all have the same phase-to-phase voltage, and they have the same phase-to-ground voltage too. Everything is balanced. But, in this example, that connection to the ground isn’t very strong, resulting just from the electromagnetic fields of the alternating current (called capacitive coupling).

Watch what happens during a ground fault. This could be a tree branch knocking down a power line or a conductor being blown into contact with a steel tower or any other number of problems that lead to a short between one phase and ground. Now, all of a sudden, that weak coupling force keeping the phase-to-ground voltages balanced is overpowered, and all the phases experience a voltage shift with respect to the ground. But, the phase-to-phase voltages don’t change. In fact, a ground fault on an ungrounded power system usually doesn’t cause any immediate problems. The motors and transformers and other loads on the system don’t really care about the phase-to-ground voltage because they’re hooked up between phases. This is one benefit of an ungrounded power system: in many cases it can keep working even during a ground fault. But, of course, there are some downsides too.

In the example I showed, the phase-to-ground voltages of the two unfaulted conductors rise to almost twice what they would be in a balanced condition. Here’s why that matters: Higher voltage requires more insulation which means more cost. Especially on large transmission lines where insulation means literally holding the conductors great distances away from each other and the ground, those costs can add up quick. It might seem like an esoteric problem for an electrical engineer, but in practice, it just means that ungrounded power systems can be a lot more expensive (a problem anyone can understand). But that’s just the start.
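Here’s what that looks like with illustrative numbers (mine, not the video’s) for a common 12.47-kilovolt distribution voltage. On an ungrounded system, a solid ground fault on one phase pushes the two healthy phases’ phase-to-ground voltage up to the full phase-to-phase value, a factor of about 1.73, and transient, restriking faults can drive it higher still, which is where “almost twice” comes from.

```latex
% Illustrative 12.47 kV (phase-to-phase) system
V_{LG,\,\mathrm{normal}} = \frac{V_{LL}}{\sqrt{3}} = \frac{12.47\ \mathrm{kV}}{\sqrt{3}} \approx 7.2\ \mathrm{kV}
% Solid single-phase-to-ground fault on an ungrounded system:
% the two unfaulted phases rise to the full phase-to-phase voltage
V_{LG,\,\mathrm{unfaulted}} \approx V_{LL} = 12.47\ \mathrm{kV} \approx 1.73 \times 7.2\ \mathrm{kV}
```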

Look back at our diagram and you can see the faulted phase potential is equal to ground potential. In other words, their difference is zero. There’s no voltage, and when you have zero voltage, you also have zero current. No electricity is flowing from the conductor into the ground. Or at least not very much is. You still have the capacitive coupling between the unfaulted conductors that allows a little bit of current to flow, but it’s not much. And that matters because nearly all the devices that would protect a system from a problem (like a ground fault) need some current to flow.

If you know much about wiring in buildings, you might be familiar with the classic example of a toaster with a metal case. It could be any appliance, but let’s use a toaster. Under normal conditions, current flows from the live or hot wire through a heating element and into the neutral wire to return to the grid, completing the circuit. But, if something comes loose inside the toaster, the live or energized side of your electrical supply could come into contact with that metal case, making it energized too. This could start a fire, or in the worst case, shock someone who touches the case. So, many appliances are required to have another conductor attached to the housing, giving the current a parallel, low-resistance return path. That low resistance means lots of current will flow, triggering a breaker to shut off the circuit.
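Just as a back-of-the-envelope illustration with numbers I’ve assumed (a 120-volt circuit and a fault loop resistance of a fifth of an ohm): Ohm’s law puts the fault current far above the rating of a typical household breaker, which is exactly why it trips.

```latex
% Illustrative values only: 120 V supply, 0.2-ohm fault loop
I_{\mathrm{fault}} = \frac{V}{R_{\mathrm{loop}}} = \frac{120\ \mathrm{V}}{0.2\ \Omega} = 600\ \mathrm{A}
% ...far above the 15- or 20-amp rating of a typical household breaker, so it opens almost instantly
```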

And, it’s not just the breakers in your house that work this way. Nearly all the protective devices, called relays, that monitor parts of the power grid for problems rely on fault current to tell the difference between normal electrical loads and short circuits. The simplest way to do that is make sure the fault current is much higher than the normal loads. In the case of the damaged toaster, that fault current flowed through a conductor that is called “ground” (but is actually just a parallel wire that connects to the neutral in your electrical panel). But, in the case of substations and transmission lines, the fault current path is the actual ground.

Let’s look back at the diagram and convert it to a grounded system. If I add a strong bond to ground at the generator, things don’t look much different in the unfaulted condition. But as soon as you add a phase-to-ground short circuit, the diagram looks much different. First, the other phases don’t experience a shift in their phase-to-ground potential. But secondly, there’s now a path for fault current to flow through the ground back to the source. And that’s the answer to the question in the title of this video: electrical current (in nearly all cases) doesn’t flow into the earth; it flows through the earth. The ground is really just another wire. Although not a great one. Let me show you an example.

I have a narrow acrylic box full of dry sand. I put a copper rod into the sand on either side of the box and connected a circuit with a lightbulb so that the current has to flow across the sand from one electrode to the other. When I turn on the switch, nothing happens. It turns out that dry sand is a pretty good insulator. In fact, soil and rock vary widely in how well they conduct electrical current. The resistivity changes with soil type, seasons, weather, temperature, and moisture content. For example, let’s try to wet this sand and see if it makes a difference. Still nothing. Even completely saturating the sand with tap water, only a tiny current flows. You can barely see anything in the lightbulb, but the current meter shows a tenth of an amp now.

Soil resistivity also changes with the chemical constituents in the soil, which is why I’m having trouble getting any current to flow through the sand. There just aren’t enough electrolytes. Even a layer of standing water on top of the sand doesn’t conduct much current at all. If I add just a little bit of salt water to that standing water, immediately you see that the resistivity goes down and the lightbulb is able to light. And if I let that salt water soak into the soil, now the sand is able to conduct electricity too.

This resistance of the soil to conducting current is pretty important. Earth isn’t a great wire, but what it lacks in conductivity, it makes up for in size. You can kind of imagine current flowing from a ground electrode into the surrounding soil as a series of concentric shells, each representing a drop in voltage between the faulted conductor and the ground potential. Each shell has more surface area for current to flow and so has lower resistance, until eventually there’s practically no resistance at all. But up close to the electrode, the shells are spaced tightly together toward a single point or line. That spacing is related to the resistance of the soil, and it can represent a pretty serious safety issue. Here’s a little demonstration I set up to show how this works.

This is a length of nichrome wire connected across mains voltage with a few power resistors in between to limit the current. When I flip the switch, electrical current flows through the wire, simulating a ground fault. This length of nichrome wire is resistive to the flow of current just like the soil would be in a ground fault condition. You can see it heat up when I flip the switch. That means the electric potential along this wire is different at every point. I can show that just by measuring the voltage with a meter at a few different locations.

Remember that voltage is the difference in potential between two points, or in the case of Zap McBodySlam here, between two feet. When Zap steps on the wire, his legs are at two different electric potentials, and unfortunately, human bodies are better conductors than the ground. That difference in electric potential creates a voltage that drives current up into one leg and down out of the other. In this case, I just have that voltage turning on a little light, but depending on how high that voltage is, and how well Zap is insulated from it, this step potential can be a matter of life or death. In fact, power line technicians are often encouraged to hop on one foot away from a ground fault to reduce the chance of a step potential. It sounds silly, but it might save their life.
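There’s a tidy idealization behind those tightly spaced shells, by the way. If you model the fault point as a hemispherical electrode of radius r₀ at the surface of soil with uniform resistivity ρ (my simplifying assumptions, not anything from the demo), the resistance between shells falls off with distance, so nearly all of the voltage drop, and therefore the worst step potential, is concentrated close to the electrode.

```latex
% Soil resistance between two hemispherical shells of radii r_1 < r_2 around the electrode
R(r_1, r_2) = \frac{\rho}{2\pi}\left(\frac{1}{r_1} - \frac{1}{r_2}\right)
% Letting r_2 go to infinity gives the total resistance to "remote earth"
R_{\mathrm{total}} = \frac{\rho}{2\pi r_0}
```

Under those assumptions, most of that total resistance shows up within a few electrode-radii of the fault, which is why even a few big hops away make such a difference.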

Similarly, power technicians regularly come into contact with the metal cases around equipment. So, if a ground fault happens on a piece of equipment, and the resistance of the grounding system is too high, there can be a voltage between the ground and the metal case, again creating the possibility of a voltage across a person’s body, called touch potential. The engineers who design power plants, substations, and transmission lines have to consider what touch potentials and step potentials can be safely withstood by a person and design grounding systems to make sure that they never exceed that level. For example, most substations are equipped not just with a single grounding electrode but a grid of buried conductors to minimize resistance in the earth connection. You might also notice that many substations use crushed rock as the ground surface. That’s not just because linesmen don’t like to mow the grass. It’s because the crushed rock, like the dry sand in my demo, doesn’t conduct electricity well and minimizes the chance of standing water.

But, not all power systems use the ground just as a safety measure. There are systems where the earth is actually the primary return path for current to flow. The ground is essentially the neutral line. Electrical distribution systems called “Single Wire Earth Return” or SWER are used in a few places around the world to deliver electrical power in rural areas. Using the earth as a return path can save cost, since you only have to run a single wire, but of course there are safety and technical challenges too.

Similarly, there are some high voltage transmission lines across the world that use direct current (like a battery) instead of AC. We’ll save a detailed discussion of these systems for another day, because there is a lot of fascinating engineering involved. But, I did want to mention them here, because many of these lines are equipped with really elaborate grounding systems. Although most high voltage DC transmission lines use two conductors (positive and negative), some only use one, with the return current flowing through the earth or the sea. And, even the bipolar lines often include grounding systems so that they can use ground return during an outage or emergency if one pole is out of service. For example, the Pacific DC Intertie that carries power from the Pacific Northwest into Los Angeles has elaborate grounding systems at both ends. In Oregon, over 1000 electrodes are buried in a ring with a circumference of 2 miles or 3.2 kilometers. In California, the grounding system consists of huge electrodes submerged in the Pacific Ocean a few miles off the shore.

Unlike AC return currents that generally follow a path that matches the transmission line, DC currents can flow through the entire earth. In essence, the electrodes are completely decoupled. That does mean they’re susceptible to some environmental issues though. They create magnetic fields that can affect compass readings and magneto-sensitive fish like salmon and eels. In ocean electrodes, the current can cause electrolysis, breaking down seawater into toxic chemicals like chloroform and bromoform. And, stray electrical currents in the ground can flow into pipelines and other buried structures, causing them to corrode. This is also a problem with some electric trains that use the rail as a return path. You may have heard that electricity takes the path of least resistance, but that’s not really true. Electricity takes all the paths it can in accordance with their relative conductivity. So, even though a big steel rail is a lot more conductive than the earth, return current from traction motors can and does flow into ground, sometimes corroding adjacent pipelines, and occasionally interfering with buried telecommunication lines too.

I’ve conveniently left out lightning from this discussion until now. Unlike a conventional circuit where current is always moving, lightning is a type of static electricity. It’s not flowing… until it is. And unlike fault current that only uses the ground as a conduit, the current from a lightning strike really does just flow into the ground, or most frequently, out of the ground and into the atmosphere, correcting an imbalance of charge created by the movement of air or water… or something else. We really don’t understand lightning that well. But an additional and vital reason we ground electrical systems is so that, if lightning strikes, that current has a direct path to the ground. If it didn’t, it might arc across gaps or build up charge in the system, creating a fire or damaging equipment.

It’s not just lightning, ground faults, and circuit return current that flow through the earth. Lots of other natural mechanisms cause current to flow below our feet, including solar wind, changes in earth’s magnetic field, and more. These are collectively known as telluric currents, and they intermingle below the surface with the currents that we send into the ground.

A common question I get about the electrical grid is how to know specifically which power plant serves a city or a building. It’s kind of like asking what tree or plant created the oxygen that you breathe. Technically, it’s more likely to be one close to you than very far away, but that’s not quite how it works. Power gets intermingled on the grid - that’s why it’s called the grid in the first place - and it just flows along the lines in accordance with the difference in potential. And the ground works in a similar way. You can’t necessarily draw lines of current flow between sources and loads, lightning strikes, and telluric phenomena. The truth of how current flows in the ground is a little more complicated than that; it all kind of mixes together down there to some extent. But above the surface, it really isn’t so complicated. Current doesn’t flow to the ground; it flows through the ground and back up. If there is electricity moving into the ground from an energized conductor, go back to the source of that conductor and see what’s happening. For the grid, it’s probably a transformer or electrical generator, in either case, a simple coil of wire. And, the electrical current flowing out of the coil has to be equal to the electrical current flowing into it, whether that current is coming from one of the other phases, a neutral line, or an electrode buried in the ground.

August 01, 2023 /Wesley Crump