Practical Engineering


How Sewers Work

July 06, 2021 by Wesley Crump

A sewage collection system is not only a modern convenience but also one of the most critical pillars of public health in an urban area. Humans are kind of gross. We collectively create a constant stream of waste that threatens city-dwellers with plague and pestilence unless it is safely carried away. Sewers convert that figurative stream into a literal one that flows below ground away from public view (and hopefully public smell). There are a lot of technical challenges with getting so much poop from point A to point B, and the fact that we do it mostly out-of-mind, I think, is cause for celebration. So, this post is an ode to the grossest and probably most underappreciated pieces of public infrastructure. I’m Grady, and this is Practical Engineering. Today, we’re talking about sewers.


As easy as it sounds to slap a pipe in the ground and point it toward the nearest wastewater treatment plant, designing sanitary sewer lines - like a lot of things in engineering - is a more complex task than you would think. It is a disruptive and expensive ordeal to install subsurface pipes, especially because they are so intertwined with roadways and other underground utilities. If we’re going to go to the trouble and cost of installing or replacing them, we need to be sure that these lines will be there to stay, functioning effectively for many decades. And speaking of decades, sewers need to be designed not just for present conditions, but also for the growth and changes of the city over time. More people usually means more wastewater, and sewers must be sized accordingly. Joseph Bazalgette, who designed London’s original sewer system, famously doubled the proposed sizes of the tunnels, saying, “We’re only going to do this once.” Although wantonly oversizing infrastructure isn’t usually the right economic decision, in that case, the upsizing was prescient. Finally, these lines carry some awful stuff that we do not want leaking into the ground or, heaven forbid, into the drinking water supply whose lines are almost always nearby. All this is to say that the stakes are pretty high for the engineers, planners, and contractors who make our sewers work.


One of the first steps of designing a sewage collection system is estimating how much wastewater to expect. There are lots of published studies and guidelines for estimating average and peak wastewater flows based on population and land use. But, just counting the number of flushes doesn’t tell the whole story. Most sanitary systems are separated from storm drains, which carry away rainfall and snowmelt. That doesn’t mean precipitation can’t make its way into the sewage system, though. Inflow and infiltration (referred to in the business as I&I) are the enemies of utility providers for one simple reason: precipitation finding its way into sewers through loose manholes, cracks in pipes, and other means can overwhelm the capacity of the system during storms. The volume of the fabled “super flush” during the halftime of the Super Bowl is usually a drop in the bucket compared to a big rainstorm. I&I can lead to overflows, which create exposure to raw sewage and environmental problems. So utilities try to limit I&I to the extent possible through system maintenance, and engineers designing sewers try to take it into account when choosing the system capacity.
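
As a rough illustration of how those estimates come together, here’s a sketch in Python. The per-capita flow, the I&I allowance, and the use of the classic Harmon peaking factor are generic textbook-style assumptions, not values from any particular design standard:

```python
import math

def design_peak_flow(population, per_capita_lpd=300.0, ii_allowance=0.15):
    """Rough peak design flow (m^3/day) for a sanitary sewer service area.

    per_capita_lpd: assumed average wastewater generation, liters/person/day
    ii_allowance: extra fraction added to account for inflow and infiltration
    Peaking uses the classic Harmon formula: PF = 1 + 14 / (4 + sqrt(P/1000)).
    """
    avg_flow = population * per_capita_lpd / 1000.0  # liters -> m^3/day
    peaking_factor = 1.0 + 14.0 / (4.0 + math.sqrt(population / 1000.0))
    return avg_flow * peaking_factor * (1.0 + ii_allowance)

# A hypothetical district of 25,000 people:
print(round(design_peak_flow(25_000)), "m^3/day")
```

Note how the peaking factor shrinks as the population grows - lots of small, random flushes average out across a big service area.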


Once you know how much sewage to expect, then you have to design pipes to handle it. It’s often said that a civil engineer’s only concerns are gravity and friction. I’ll let you take a guess at which one of those makes poop flow downhill. It’s true that almost all sewage collection systems rely mostly on gravity to do the work of collecting and transporting waste. This is convenient because we don’t have to pay a gravity bill - it comes entirely free. But, like most free things, it comes with an asterisk, mainly that gravity only works in one direction: down. This fact constrains the design and construction of modern sewer systems more than any other factor.


We need some control over the flow in a sewer pipe. It shouldn’t be so fast as to damage the joints or walls of the pipe. But it can’t flow too slowly, or you risk solids settling out of suspension and building up over time. We can’t adjust gravity up or down to reach this balance, and we also don’t have much control over the flow of wastewater. People flush when they flush. The only things engineers can control are the size of the sewer pipe and its slope. Consider what happens when the slope is too low. The water moves too slowly and allows solids to settle on the bottom. Over time, these solids build up and reduce the capacity of the pipe. A line can even clog completely. Pipes without enough slope require frequent and costly maintenance from work crews to keep the lines clear. If you increase the slope of the line without changing the flow rate, the velocity of the water increases. This not only allows solids to stay in suspension, but it also allows the water to scour away the solids that have already settled out. The minimum speed to make sure lines stay clear is known as the self-cleaning velocity. It can vary, but most cities require that flow in a sewer pipe be at least three feet (about one meter) per second.
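
To see how slope, diameter, and velocity trade off, here’s a quick sketch using Manning’s equation for a pipe flowing full. The roughness coefficient and the one-meter-per-second target are typical textbook assumptions, not any particular city’s standard:

```python
def min_self_cleaning_slope(diameter_m, n=0.013, v_min=1.0):
    """Minimum slope (m/m) for a circular pipe flowing full to reach v_min.

    Manning's equation (SI): V = (1/n) * R**(2/3) * S**(1/2), where the
    hydraulic radius R = D/4 for a full circular pipe. Solving for slope:
    S = (v_min * n / R**(2/3))**2. n = 0.013 is typical for concrete pipe.
    """
    r = diameter_m / 4.0
    return (v_min * n / r ** (2.0 / 3.0)) ** 2

for d in (0.2, 0.45, 0.9):
    print(f"{d:.2f} m pipe needs a slope of at least {min_self_cleaning_slope(d):.4f} m/m")
```

Bigger pipes get away with flatter slopes, which is one reason large trunk sewers can run nearly level while the small laterals near your house need a noticeable grade.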


So far I’ve been using sand to simulate the typical “solids” that could be found in a wastewater stream. But, you might be interested to know that we’re, thankfully and by design, only scratching the surface of synthetic human waste. Laboratories doing research on urban sanitation, wastewater treatment, and even life support systems in space often need a safe and realistic stand-in for excrement, and there are many interesting recipes published in the academic literature. Miso (or soybean) paste is one of the more popular constituents. Feel free to take your own journey down the rabbit hole of simulated sewage after this. I mean that figuratively, of course.


The slope of a sewer pipe is not only constrained by the necessary range of flow velocities. It also needs to consider the slope of the ground above. If the pipe is too shallow compared to the ground, the sewer can get too close to the surface, losing the protection of the overlying soil. If it is too steep compared to the ground, the sewer can eventually become too deep below the surface. Digging deep holes to install sewer pipes isn’t impossible or anything, but it is expensive. Beyond a certain depth, you need to lay back the slopes of the trench to avoid having it collapse. In urban areas where that’s not possible, you instead have to install temporary shoring to hold the walls open during construction. You can also use trenchless excavation like tunneling, but that’s a topic for another post. All this is to say that choosing a slope for a sewer is a balance. Too shallow or too steep, and you’re creating extra problems. Another topographic challenge faced by sewer engineers is getting across a creek or river.


It is usually not cost-effective to lower an entire sewer line or increase its slope to stay below a natural channel. In these cases, we can install a structure called an inverted siphon. This allows a portion of a line to dip below a depressed topographic feature like a river or creek and come back up on the other side. The hydraulic grade line, which is the imaginary line representing the surface of the fluid, comes up above the pipe, but the pipe contains the flow below the surface. The problem with inverted siphons is that, because they flow full, the velocity of the flow goes down. That means solids are more likely to settle out, something that is especially challenging on a structure with limited access for maintenance. This is similar to the p- or u-trap below your sink, that spot where everything seems to get stuck. Even though the pipe is the same size along the full length, settling only happens within the siphon. To combat this issue, inverted siphons often split the flow into multiple smaller pipes. This helps to keep the velocity up above the self-cleaning limit. A smaller pipe obviously means a lower capacity, which is partly why siphons often include two or three barrels. Even though some settling happens, it’s not increasing over time. The velocity of the flow in the smaller siphons is high enough to keep most of the solids in suspension.
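
The arithmetic behind splitting the flow is simple. Here’s a sketch with made-up numbers - the flow rate and diameters below are purely illustrative, not from any real siphon:

```python
import math

def full_pipe_velocity(flow_m3s, diameter_m):
    """Velocity (m/s) of flow Q through a circular pipe flowing full."""
    area = math.pi * diameter_m ** 2 / 4.0
    return flow_m3s / area

q = 0.10  # assumed dry-weather flow, m^3/s
# One large 600 mm barrel versus a single 300 mm low-flow barrel:
print(f"600 mm barrel: {full_pipe_velocity(q, 0.60):.2f} m/s")  # below 1 m/s: solids settle
print(f"300 mm barrel: {full_pipe_velocity(q, 0.30):.2f} m/s")  # above 1 m/s: self-cleaning
```

During dry weather, all the flow goes through the small barrel at a healthy velocity; the extra barrels only come into play when flows rise.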


The volume and hydraulics of wastewater flow aren’t the only challenges engineers face. Sewers are lawless places by nature. There are no wastewater police monitoring what you flush down the toilet, thank goodness. However, that means sewers often end up conveying (or at least trying to convey) substances and objects for which they were not designed. For a long time, grease and oil were the most egregious of these interlopers since they congeal at room temperature. However, the rising popularity of so-called “flushable” wipes has only made things worse. Grease and fat combine with wet wipes in sewers to create the unsettling but aptly named “fatbergs,” disgusting conglomerates that, among other things, are not easily conveyed through sanitary sewer lines. Conveniently, most places in the world have services available to carry away your solid wastes so you don’t have to flush them. But they usually do it in trucks - not pipes.


Obviously, this issue is more complicated than my little experiment. The labeling of wipes has turned into a controversy that is too complex to get into here. My point, though, and indeed the point of this whole post, is that your friendly neighborhood sewage collection system is not a magical place where gross stuff goes to disappear. It is a carefully planned, thoroughly tested system designed to keep the stuff we don’t want to see - unseen. What happens to your flush once it reaches a wastewater treatment plant is a topic for another post, but I think the real treasure is the friends - sewers - it meets along the way.



What Really Happened at the Hernando de Soto Bridge?

June 15, 2021 by Wesley Crump

In May of 2021, inspectors on the Hernando de Soto Bridge between West Memphis, Arkansas, and Memphis, Tennessee discovered a crack in a major structural member. They immediately contacted emergency managers to shut down this key crossing over the Mississippi River to vehicle traffic above and maritime traffic below. How long had the crack been there, and how close did this iconic bridge come to failing? I’m Grady and this is Practical Engineering. Today, we’re discussing the Memphis I-40 bridge incident.


The Hernando de Soto Bridge carries US Interstate 40 across the Mississippi River between West Memphis, Arkansas and Memphis, Tennessee. Opened for traffic in 1973, the bridge’s distinctive double arch design gives it the appearance of a bird gliding low above the muddy river. I-40 through Tennessee and Arkansas is one of the busiest freight corridors in the United States, so the Mississippi River bridge is a vital east-west link, carrying an average of 50,000 vehicles per day. Although it was built in the 70s, the bridge has had some major recent improvements. It’s located in a particularly earthquake-prone region called the New Madrid Seismic Zone. Starting in 2000 and continuing all the way through 2015, seismic retrofits were added to the bridge to help it withstand a major earthquake and serve as a post-earthquake lifeline link for emergency vehicles and the public. ARDOT and TDOT share the maintenance responsibilities for the structure, with ARDOT in charge of inspections.


On May 11, 2021, a climbing team from an outside engineering firm was performing a detailed inspection of the bridge’s superstructure. During the inspection, they noted a major defect in one of the steel members below the bridge deck. The crack went through nearly the entire box beam with a significant offset between the two sides. Recognizing the severity of the finding, several of the engineers called 911 to alert local law enforcement agencies and shut the bridge down to travel above and below the structure. This decision to close the bridge snarled traffic, forcing cars and trucks to detour over the older and smaller I-55 bridge nearby. It also created a backup of hundreds of barges and ships needing to pass north and south on the Mississippi River below the bridge. Knowing how significant an impact closing the bridge would have on such a vital corridor, how did engineers know to act so quickly and decisively? In other words, how important is this structural member? To explain that, we need a quick lesson on arch bridges. There are so many ways to span a gap, all singular in function but remarkably different in form. One type of bridge takes advantage of a structural feature that’s been around for millennia: the arch.


Most materials are stronger against forces along their axis than those applied at right angles (called bending forces). That’s partly because bending forces introduce tension in structural members. Instead of beams that are loaded perpendicularly, arch bridges use a curved element to transfer the weight of the bridge to the substructure using almost entirely compressive forces. Many of the oldest bridges used arches because it was the only way to span a gap with materials available at the time (stone and mortar). The Caravan Bridge in Turkey was built nearly 3,000 years ago but is still in use today. Even now, with the convenience of modern steel and concrete, arches are a popular choice for bridges. When the arch is below the roadway, we call it a deck arch bridge. Vertical supports transfer the load of the deck onto the arch. If part or all the arch extends above the roadway with the deck suspended below, it’s a through-arch bridge like the Hernando de Soto.


Arches can be formed from many different materials, including steel beams, reinforced concrete, or even stone or brick masonry. The I-40 Mississippi River bridge has two arches made from a lattice of steel trusses. One result of compressing an arch is that it creates horizontal forces called thrusts. So, arch bridges normally need strong abutments on either side that can withstand these extra horizontal loads. Why, then, do the arches of this bridge sit on top of spindly piers? Just from looking at it, you can tell that this support was not designed for horizontal loading. That’s okay, because the Hernando de Soto uses tied arches. Instead of transferring the arch thrusts into an abutment, you can tie the two ends together with a horizontal chord. This tie works exactly like a bowstring, balancing the arch’s thrust forces with its resistance to tension. Tied arch bridges don’t transfer thrust forces to their supports, meaning they can sit atop piers designed primarily for vertical loads.
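
For a sense of why the tie matters so much, the horizontal thrust of an idealized parabolic arch under a uniform load is H = wL²/(8f), and in a tied arch the tie carries all of it. The load, span, and rise below are invented numbers for illustration, not the actual bridge’s:

```python
def tie_tension(w_kn_per_m, span_m, rise_m):
    """Horizontal thrust (kN) of a parabolic arch under uniform load w,
    which a tied arch must resist entirely in its tie: H = w * L**2 / (8 * f).
    """
    return w_kn_per_m * span_m ** 2 / (8.0 * rise_m)

# Hypothetical: 100 kN/m of deck load on a 270 m span with a 40 m rise.
print(tie_tension(100.0, 270.0, 40.0), "kN")
```

Notice the rise in the denominator: a flatter arch means a much larger tie force, which is part of why losing the tie on a tied-arch bridge is so serious.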


This tension member is the subject of our concern. The crack in the Hernando de Soto Bridge went right through one of the two arch ties on the eastern span. It’s hard to overstate the severity of the situation. These ties are considered fracture-critical members - those non-redundant structural elements subject to tension whose fracture would be expected to result in a collapse of the entire bridge. Obviously, this member did fracture without a collapse, so there may be a dispute about whether it truly qualifies as fracture-critical, but suffice it to say that losing the tie on a tied-arch bridge is not a minor issue. So why would a tension member like this crack?


Let me throw in a caveat here before continuing. Structural engineering is not an armchair activity. Forensic analysis of a failure requires a tremendous amount of information before arriving at a conclusion, including structural analysis, material testing, and review of historical information. Without such an investigation, the best we can do is speculate. A detailed forensic review will almost certainly be performed, and then we’ll know for sure. With all that said, there’s really only one reason that a steel member would crack like what’s shown in the photos of the I-40 bridge.


When steel fails, it is usually a ductile event. In other words, the material bends, deforms, and stretches. But, steel can experience brittle failures too, called fractures, where little deformation occurs. And the primary reason that a crack would initiate in a steel tension member of a bridge is fatigue. Fatigue in steel happens because of repeated cycles of loading. Over time, microscopic flaws in the material can grow into cracks that open a small amount with each loading cycle, even if those loading cycles are well below the metal’s yield strength. If not caught, a fatigue crack will eventually reach a critical size where it can propagate rapidly, leading to a fracture. Bridges are particularly susceptible to fatigue because traffic loads are so dynamic. This bridge sees an average of 50,000 vehicles per day. That is tens of millions of load cycles every year.
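
To get a feel for these numbers, here’s a back-of-the-envelope sketch using the Paris crack-growth law. Every constant below - the Paris coefficients, the stress range, the flaw sizes - is a generic textbook-style assumption, not a measured property of this bridge:

```python
import math

def cycles_to_grow(a0_m, af_m, stress_range_mpa, C=6.9e-12, m=3.0):
    """Paris-law estimate of load cycles for a crack to grow from a0 to af,
    taking dK = stress_range * sqrt(pi * a) and integrating da/dN = C * dK**m
    in closed form (this expression is specific to m = 3)."""
    coeff = C * stress_range_mpa ** m * math.pi ** (m / 2.0)
    return 2.0 * (a0_m ** -0.5 - af_m ** -0.5) / coeff

# A 2 mm weld flaw growing to 50 mm under an assumed 20 MPa stress range:
n = cycles_to_grow(0.002, 0.05, 20.0)
print(f"{n:.2e} cycles, or roughly {n / (50_000 * 365):.1f} years of traffic")
```

The point isn’t the specific result - it’s that tens of thousands of crossings a day rack up millions of stress cycles per year, so even tiny flaws can matter over a bridge’s life.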


Fatigue is common in steel members that have been welded because welding has a tendency to introduce flaws in the material. When weld metal cools, it shrinks, generating residual stresses in the steel. These stress concentrations are where most fatigue cracks occur. And the box tie member on the I-40 bridge is a built-up section. That means it was fabricated by welding steel plates together. It’s a common way to get structural steel members in whatever shape the design requires. But, if not carefully performed, the welds have the potential to introduce flaws from which a fatigue crack can propagate.


Of course, these ties aren’t purely tension members holding the two sides of the arch together. If they were, the load cycles would probably be a lot less dynamic. The ties don’t support the lateral beams below the road deck - that’s done by the suspender cables hanging from the arch above - but they do have a rigid connection to them. That means when the deck moves, the tension ties move with it, potentially introducing stresses that could exacerbate the formation of a crack. Again, without a detailed structural model, it’s impossible to say how the dynamic cycles of traffic forces are distributed through each member. We can’t say whether the original design or the seismic retrofits had a flaw that could have been prevented. Fatigue and fractures are difficult to characterize, and in some cases inevitable given the construction materials and methods, even with a good design. That’s why inspections are so important. One of the biggest questions everyone is asking, and rightly so given the severity of the situation, is “how long has this structural member been cracked?”


National bridge standards require inspections for highway bridges every two years. Bridges with fracture-critical members, like this one, are usually inspected more frequently than that, and inspection of those members has to be hands-on. That means no drones or observations from a distance - a person has to check every surface of the steel from, at minimum, an arm’s length away. Given those requirements, you would think that this crack, discovered in May of 2021, did not exist the year before. Unfortunately, ARDOT provided a drone inspection video from two years earlier, clearly showing the crack on the tie beam. Although it hadn’t yet grown to its eventual size, the crack is nearly impossible to miss. And it could have been there well before that video was shot. One amateur photographer who took a canoe trip below the bridge in 2016 shared a photo of the same spot, and it sure looks like there’s a crack.


Bridge inspections are not easy. Even on simple structures they often require special equipment - like snooper trucks - and closing down lanes of traffic. Complicated structures like the I-40 bridge require teams of structural engineers trained in rope access climbing to put eyes on every inch of steel. And even then, cracks are hard to identify visually and can be missed. Inspectors are humans, after all. But, none of that justifies this incident, especially given how large and obvious the fracture was. ARDOT announced that they fired an unnamed inspector who was presumably responsible for the annual inspections on this bridge. We don’t know many details of that situation, but I just want to clarify that it’s not a solution to the problem. If your ability to identify a major defect in a fracture-critical member of a bridge hinges on a single person, there’s something very wrong with your inspection process. Quality management is an absolutely essential part of all engineering activities. We know we’re human and capable of mistakes, so we build processes that reduce their probability and consequences.


That includes quality assurance - the administrative activities of verifying that work is being performed correctly, such as making sure that bridges are inspected by teams and that inspectors are properly trained. It also includes quality control - the checks and double-checks of work products like inspection reports. And quality management should be commensurate with the level of risk. In other words, if an error would threaten public safety, you can’t just leave it up to a single person. Put simply and clearly, there is absolutely no excuse for this crack to have sat open on the bridge’s tie member for as long as it did.


This story is ongoing. As of this writing, the bridge is closed to traffic indefinitely. But, that doesn’t mean the incident is over. There’s a chance that, as the forces in the bridge redistributed with the damage to this vital member, other structural elements became overloaded. The second tension tie may have taken up much of its partner’s stress, and the pier supporting the arch may have been subjected to a lot more horizontal force than it was designed to withstand. In addition, bridges are full of repetitive details. If this crack could happen in one place, there’s a good chance similar cracks may exist elsewhere. The Federal Highway Administration recommends that, when a fatigue crack is found, a special, in-depth inspection be performed to look for more. That will involve hands-on checking of practically every square inch of steel on the bridge, and probably non-destructive tests that can identify defects, like x-rays, magnetic particles, or dyes that make cracks more apparent.


The repair plan for the bridge is already in progress. Phase 1 was to temporarily reattach the tie using steel plates to make the bridge safe for contractors. The design for Phase 2 will depend entirely on the findings of detailed structural analysis and forensic investigation. In the meantime, it’s clear that ARDOT and TDOT have some work ahead of them. Most importantly, they need to do some reckoning with their bridge inspection procedures, and thank their lucky stars that this fracture didn’t end in catastrophe. There’s no clear end in sight for the inconvenienced motorists needing to cross the Mississippi River, but I’m thankful that they’re all still around to be inconvenienced. Thank you, and let me know what you think.


The Fluid Effects That Kill Pumps

June 01, 2021 by Wesley Crump

The West Closure Complex is a billion-dollar piece of infrastructure that protects parts of New Orleans from flooding during tropical storms. Constructed partly as a result of Hurricane Katrina, it features one of the largest pumping stations in the world, capable of lifting the equivalent of a fully-loaded Boeing 747 every second. When storm surge threatens to raise the levels of the sea above developed areas on the west bank of the Mississippi River, this facility’s job is to hold it back. The gates close, and the pumps move rainwater and drainage from the city’s canals back into the Mississippi River and out to the Gulf. This pump station may be the largest of its kind, but its job is hardly unique. We collectively move incredible volumes of fresh water, drainage, and wastewater into, out of, and around our cities every day. And, we mostly do it using pumps. I love pumps. But, even though they are critical for the safety, health, and well-being of huge populations of people, there are a lot of things that can go wrong if they aren’t properly designed and operated. I’m Grady, and this is Practical Engineering. Today, we’re exploring some of the problems that can happen with pumps.


The first of the common pitfalls that pumps can face is priming. Although liquids and gases are both fluids, not all pumps can move them equally. Most types of pumps that move liquids cannot move air. It’s less dense and more compressible, so it’s often just unaffected by impellers designed for liquids. That has a big implication, though. It means if you’re starting a pump dry - that is, when the intake line and the housing are not already full of water - nothing happens. The pump can run and run, but because it can’t draw air out of the intake line, no water ever flows. This is why many pumps need to be primed before starting up. Priming just means filling the pump with liquid to displace the air out of the housing and sometimes the intake pipe. If you raise the discharge line to let water flow backwards into the pump, priming happens quickly. As soon as the air is displaced from the housing, the pump is primed and water starts to flow. There are a lot of creative ways to accomplish this for large pumps. Some even have small priming pumps to do this very job. “But what primes the priming pumps?” Well, there are some kinds of pumps that are self-priming. One is submersible pumps, which sit below the water where air can’t find its way in. Another is positive displacement pumps, which can create a vacuum and draw air through. They may not be as efficient or convenient to use as the main pump, but they work just fine for the smaller application of priming.


However a pump is primed, it’s critical that it stays that way. If air finds its way into the suction line of a pump, it can lose its prime and stop working altogether. When you lift a pump out of water, the prime is lost. And if you put the pump back down into the water, it doesn’t start back up. This can be a big problem if it goes unnoticed, not just because the pump isn’t working, but also because running a pump dry often leads to damage. Many pumps depend on the fluid in the housing for cooling, so without it, they overheat. In addition, the seals around the shaft that keep water from intruding on the motor depend on the fluid to function properly. If the seals dry out, they get damaged and require replacement which can be a big job.


The next problem with pumps is also related to the suction side. Pumps work by creating a difference in pressure between the inlet and outlet. In very simple terms, one side sucks and one side blows. A problem comes when the pressure gets too low on the suction side. You might know that the phase of many substances depends not just on their temperature, but also on the ambient pressure. That’s why the higher you are in elevation, the lower the temperature needed to boil water. If you continue that trend into lower and lower pressures, eventually some liquids (including water) will boil at normal temperatures without any added heat. It’s a pretty cool effect as a science demonstration, but it’s not something you want happening spontaneously inside your pump. Just like they don’t work with air, most pumps don’t work very well with steam either. But, the major problem comes when those bubbles of steam collapse back into a liquid. Liquids aren’t very compressible, so these collapsing bubbles send out powerful shockwaves that can damage pump components. This phenomenon is called cavitation, and I have a blog post covering it in a lot more detail that you can check out after this one to learn more. It usually doesn’t lead to immediate failure, but cavitation will definitely shorten the life of a pump significantly if not addressed.


The key to avoiding this problem is a parameter known as Net Positive Suction Head, and with a name like that, you know it’s important. Manufacturers of large pumps will tell you the required Net Positive Suction Head (or NPSH), which is the minimum pressure needed at a pump inlet to avoid cavitation. The engineer’s job is to make sure that a pump system is designed to provide at least this minimum pressure. The available NPSH depends on the vertical distance between the sump and inlet, the frictional losses in the intake pipe, the temperature of the fluid, and the ambient air pressure. Here’s an example: With a valve wide open, the suction pressure at the inlet is a vacuum of about 20 kPa, or 5 inches of mercury. Now, when you move the pump to the height of a ladder but leave the bucket on the ground, the suction pressure just about doubles. A constriction in the line also decreases the available NPSH. If you close the valve on the intake side of a pump, you immediately see the pressure in the line becoming more negative (in other words, a stronger vacuum). This pump isn’t strong enough to cavitate, but it will make a bad sound when there isn’t enough Net Positive Suction Head at the inlet. It’s an easy demonstration of how a poor intake design can dramatically affect the pressure in the intake line and quickly lead to failure of a pump.
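
The available NPSH is just bookkeeping of pressure heads. Here’s a sketch with assumed numbers for water near room temperature - the lift and friction losses below are placeholders for illustration:

```python
def npsh_available(p_atm_kpa=101.3, p_vap_kpa=2.3, static_lift_m=3.0,
                   friction_loss_m=0.8, rho=998.0, g=9.81):
    """Available NPSH in meters of water at a pump inlet:
    NPSHa = (P_atm - P_vapor) / (rho * g) - static suction lift - friction.
    A vapor pressure of ~2.3 kPa is water near 20 C; hotter fluid raises it
    and shrinks the margin against cavitation."""
    pressure_head = (p_atm_kpa - p_vap_kpa) * 1000.0 / (rho * g)
    return pressure_head - static_lift_m - friction_loss_m

print(f"{npsh_available():.2f} m available")                   # modest suction lift
print(f"{npsh_available(static_lift_m=7.0):.2f} m available")  # pump up a ladder
```

If the available head drops below the manufacturer’s required NPSH, cavitation is on the table - which is exactly what raising the pump or choking the intake does.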


The last problem that can occur at pumps is also the most interesting: vortices. You’ve probably seen a vortex form when you drain a sink or bathtub. These vortices occur when the water accelerates in a circular pattern around an outlet. If the vortex is strong enough, the water is flung to the outside, allowing air to dip below the surface. This is a problem for pumps if that air is allowed to enter the suction line. We talked a little about what happens when a pump runs dry in the discussion about priming, but air is a problem even if it’s mixed with water. That’s because it takes up space. A bubble of air in the impeller reduces the pump’s efficiency since the full surface of the blades can’t act on the water. This causes the pump to run at reduced performance and may cause it to lose prime, creating further damage.


The easiest solution to vortexing is submergence - just getting the intake pipe as far as possible below the surface of the water. The deeper it is, the larger and longer a vortex would have to be before air could find its way into the line. This is achieved by making the sump - that is the structure that guides the water toward the intake - deeper. That solution seems simple enough, except that these sumps are often major structural elements of a pump station that are very costly to construct. You can’t just indiscriminately oversize them. But how deep is deep enough? 
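
One widely cited rule of thumb for that question is the Hydraulic Institute’s minimum submergence formula, S = D(1 + 2.3·Fr), where Fr is the Froude number at the intake bell. The flow and bell diameter below are hypothetical:

```python
import math

def min_submergence(flow_m3s, bell_diameter_m, g=9.81):
    """Rule-of-thumb minimum intake submergence (m) from Hydraulic Institute
    guidance: S = D * (1 + 2.3 * Fr), with Fr = V / sqrt(g * D) computed
    from the velocity at the intake bell."""
    area = math.pi * bell_diameter_m ** 2 / 4.0
    velocity = flow_m3s / area
    froude = velocity / math.sqrt(g * bell_diameter_m)
    return bell_diameter_m * (1.0 + 2.3 * froude)

# A hypothetical 500 L/s intake with a 600 mm bell:
print(f"{min_submergence(0.5, 0.6):.2f} m of submergence")
```

Faster flow through the same bell raises the Froude number and demands a deeper - and more expensive - sump, which is exactly the trade-off described above.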


It turns out that’s a pretty complicated question because a vortex is hard to predict. Even sophisticated computational fluid dynamics models have trouble accurately characterizing when and if a vortex will form. That’s an issue because you don’t want to design and construct a multi-million-dollar pumping facility just to find out it doesn’t work. And there aren’t really off-the-shelf designs. Just about every pumping station is a custom-designed facility meant for a specific application, whether it’s delivering raw water from a reservoir or river to a treatment plant, sending fresh water out to customers, lifting sewage to be treated at a wastewater plant, pumping rainwater out of a low area, or any number of other reasons to move large volumes of water. So if you’re a designer, you have some options.


First, you can just be conservative. We know through lots of testing that vortices occur mostly due to non-uniform flow in the sump. Any obstructions, sharp turns, and even vertical walls can lead to flow patterns that evolve into vortices. Organizations like the Hydraulic Institute have come up with detailed design standards that can guide engineers through the process of designing a pump station to make sure many of these pitfalls are avoided. Things like reducing the velocity of the flow and maintaining clearance between the walls and the suction line can reduce the probability of a vortex forming. There are also lots of geometric elements that can be added to a sump or intake pipe to suppress the formation of vortices.


The second option for an engineer is to build a scale model. Civil engineering is a little different from other fields in that there aren't as many opportunities for testing and prototyping. Infrastructure is so large and costly, you usually only have one shot to get the design right. But, some things can be tested at scale, including hydraulic phenomena. In fact, there are many laboratories across the world that can assemble and test scale models of pump stations, pipelines, spillways, and other water-handling infrastructure to make sure they work correctly before spending those millions (or billions) of dollars on construction. They give engineers a chance to try out different configurations, gain confidence in the performance of a hydraulic structure, and avoid pitfalls like loss of prime, cavitation, and vortices at pump stations.

June 01, 2021 /Wesley Crump

What Really Happened at the Oroville Dam Spillway?

May 18, 2021 by Wesley Crump

In February 2017, concrete slabs in the spillway at Oroville Dam failed during releases from the floodgates, starting a chain of events that prompted the evacuation of nearly 200,000 people downstream. The dam didn't fail, but it came too close for comfort, especially for the tallest structure of its kind in the United States. Oroville Dam falls under the purview of the Federal Energy Regulatory Commission, in a state with a progressive dam safety program and regular inspections and evaluations by the most competent engineers in the industry. So how could a failure mode like this slip through the cracks, both figuratively and literally? Luckily, an independent forensic team got deep in the weeds and prepared a 600-page report to try and find out. This is a summary of that report. I'm Grady and this is Practical Engineering. Today, we're talking about the Oroville Dam Crisis.


Oroville Dam, located in northern California, is the tallest dam in the United States at 770 feet or 235 meters high. Completed in 1968, and owned and operated by the California Department of Water Resources, the facility is massive in every part. It consists of an earthen embankment which forms the dam itself, a hydropower generation plant that can be reversed to create pumped storage, a service spillway with 8 radial floodgates, and an emergency overflow spillway. The reservoir created by the dam, Lake Oroville, is also immense - the second biggest in the state. It's part of the California State Water Project, one of the largest water storage and delivery systems in the U.S., supplying water to more than 20 million people and hundreds of thousands of acres of irrigated farmland. The reservoir is also used to generate electricity with over 800 megawatts of capacity. Finally, the dam also keeps a reserve volume empty during the wet season. In case of major flooding upstream, it can store floodwaters and release them gradually over time, reducing the potential damage downstream.


No dam is built to hold all the water that could ever flow into the reservoir at once. And yet, having water overtop an unprotected embankment will almost certainly cause a breach and failure. So, all dams need spillways to safely release excess inflows and maintain the level of the reservoir once it's full. Spillways are often the most complex and expensive components of a dam, and that is definitely true at Oroville. The service spillway has a chute that is 180 feet or 55 meters wide and 3,000 feet long. That's nearly a kilometer for the metric folks. Radial gates control how much water is released and massive concrete blocks at the bottom of the chute, called dentates, disperse the flow to reduce erosion as it crashes into the Feather River. This spillway is capable of releasing nearly 300,000 cubic feet or 8,000 cubic meters of water per second. That's more than three olympic-sized swimming pools every second, which I know is not that helpful in conceptualizing this incredible volume. If you somehow put that much flow through a standard garden hose, it would travel at around 15% of the speed of light, reaching the moon in about 9 seconds. How's that for a flow rate equivalency? But even that is not enough to protect the embankment.
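If you want to check that garden hose figure yourself, the arithmetic is just flow rate divided by cross-sectional area. This little sketch assumes a 5/8-inch hose bore (the answer swings quite a bit with that assumption) and the round 8,000 m³/s capacity:

```python
import math

# Sanity check on the garden-hose equivalency (illustrative only)
Q = 8000.0                      # spillway capacity, m^3/s
hose_d = 0.0159                 # assumed 5/8-inch hose bore, in meters
area = math.pi * hose_d**2 / 4  # hose cross-sectional area, m^2
v = Q / area                    # velocity needed to pass Q through the hose

c = 299_792_458.0               # speed of light, m/s
moon = 384_400_000.0            # mean Earth-Moon distance, m
print(f"{v/c:.0%} of light speed, moon in {moon/v:.1f} s")
# -> about 13% of c and roughly 9.5 s with this bore
```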


Large dams have to be able to withstand extraordinary flooding. In most cases, their design is based on a synthetic (or made-up) event called the Probable Maximum Flood, which is essentially the runoff from the most rain that could ever physically fall out of the sky. It usually doesn't make sense to design the primary spillway to handle this event, since such a magnitude of flooding is unlikely to ever happen during the lifetime of the structure. Instead, many dams have a second spillway, much simpler in design - and thus less expensive to construct - to increase their ability to discharge huge volumes of water during rare but extreme events. At Oroville, the emergency spillway consists of a concrete weir set one foot above the maximum operating level. If the reservoir gets too high and the service spillway can't release water fast enough, this structure overflows, preventing the reservoir from reaching and overtopping the crest of the dam.
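The capacity of an overflow weir like this scales with its crest length and, more steeply, with the depth of water flowing over it. Here's a hedged sketch using the textbook weir equation; the discharge coefficient and dimensions are illustrative, not Oroville's actual numbers:

```python
def weir_flow(coeff, crest_length_m, head_m):
    """Textbook weir discharge: Q = C * L * H^1.5 (SI units)."""
    return coeff * crest_length_m * head_m ** 1.5

# Hypothetical 500 m long weir with a coefficient of about 1.7 (SI)
for h in (0.5, 1.0, 2.0):
    print(f"head {h} m -> {weir_flow(1.7, 500.0, h):,.0f} m^3/s")
```

Because discharge grows with head to the 3/2 power, doubling the depth over the crest nearly triples the flow - which is why even a low, simple weir can pass enormous volumes during an extreme event.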


Early 2017 was one of northern California’s wettest winters in history with several major flood events across the state. One of those storms happened in February upstream of Oroville Dam. As the reservoir filled, it became clear to operators that the spillway gates would need to be opened to release excess inflows. On February 7, early during the releases, they noticed an unusual flow pattern about halfway down the chute. The issue was worrying enough that they decided to close the gates and pause the flood releases in order to get a better look. What they saw when the water stopped was harrowing. Several large concrete slabs were completely missing and a gigantic hole had eroded below the chute.


There was a lot more inflow to the reservoir in the forecast, so the operators knew they didn’t have much time to keep the gates closed while they inspected the damage, and no chance to try and make repairs. They knew they would have to keep operating the crippled spillway. So, they started opening gates incrementally to test how quickly the erosion would progress. Meanwhile, more rain was falling upstream, contributing to inflows and raising the level of the reservoir faster and faster. It wasn’t long before the operators were faced with an extremely difficult decision: open more gates on the service spillway which would further damage the structure or let the reservoir rise above the untested emergency spillway and cascade down the adjacent hillside.


Several issues made this decision even more complicated. On one hand, the service spillway was in bad shape, and there was the possibility of the erosion progressing upstream toward the headworks which could result in an uncontrolled release of the reservoir. Also, debris from the damaged spillway was piling up in the Feather River, raising its level and threatening to flood out the power plant. Finally, electrical transmission lines connecting the power plant to the grid were being threatened by the erosion along the service spillway. Losing these lines or flooding the hydropower facility would hamstring the dam’s only backup for making releases from the reservoir. Operators knew that repairing the spillway would be nearly impossible until the power plant could be restored. These factors pointed towards closing the spillway gates and allowing the reservoir to rise.


On the other hand, the emergency spillway had never been tested, and operators weren't confident that it could safely release so much water, especially after witnessing how quickly and aggressively the erosion happened on the service spillway nearby. Also, its use would almost certainly strip at least the top layer of soil and vegetation from the entire hillside, threatening adjacent electrical transmission towers. A huge contingent of engineers and operations personnel was all hands on deck, running analyses, forecasting weather, and reviewing geologic records and original design reports to decide on the best course of action. Of course, all of this was happening over the course of only a couple of days, with conditions constantly changing and no one having slept, further complicating the decision-making process. Operators worked to find a sweet spot in managing these risks, limiting releases from the service spillway as much as possible while still trying to keep the reservoir from overtopping the emergency spillway. But, every new forecast just showed more rain and more inflows.


Eventually it became clear to operators that they would have to pick a lesser evil: Increase discharges and flood the powerhouse or let the reservoir rise above the emergency spillway. They decided to let the reservoir come up. The morning of February 11, about four days after the damage was initially noticed, Lake Oroville rose above the crest of the emergency spillway for the first time in the facility’s history. Almost immediately, it was clear that things were not going to go smoothly.


As it flowed across and down the natural hillside, water from the emergency spillway began to channelize and concentrate. This quickly accelerated erosion of the soil and rock, creating features called headcuts, which are a sign of unstable and incising waterways. Headcuts are vertical drops in the topography eroded by flowing water, and they always move upstream, oftentimes aggressively. In this case, upstream meant toward the emergency spillway structure, threatening its stability. This hillside was a zone many had assumed to be solid, competent bedrock. It only took a modest flow through the emergency spillway to reveal the true geologic conditions: the hillside was composed almost entirely of highly erodible soil and weathered rock. If the headcuts were to reach the concrete structure upstream, it would almost certainly fail, releasing a wall of water from Lake Oroville that would devastate downstream communities. Authorities knew they had to act quickly.


On February 12, only about a day and a half after flow over the emergency spillway began, an evacuation order was issued for downstream residents, displacing nearly 200,000 people to higher ground. At the same time, operators elected to open the service spillway gates to double the flow rate and accelerate the lowering of the reservoir. The level dropped below the emergency spillway crest that night, stopping the flow and easing fears about an imminent failure. Two days later, on Valentine's Day, the evacuation order was changed to a warning, allowing people to return to their homes. But there was still more rain in the forecast, and the emergency spillway was in poor condition to handle additional flow if the reservoir were to rise again. California DWR continued discharging through the crippled service spillway to lower the reservoir by 50 feet or 15 meters in order to create enough storage that the spillway could be taken out of service for evaluation and repairs. The gates stayed open until February 27th, nearly three weeks after the whole mess started, revealing the havoc to the dam's right abutment. Water that started its journey as tiny drops of rain in a heavy storm - funneled and concentrated by the earth's topography and turbulently released through massive human-made structures - had carved harrowing scars through the hillside. But, how did it happen?


Like all major catastrophes, there were a host of problems and issues that coincided to cause the failure of the concrete chute. One of the most fundamental issues was geologic. Although it was well-understood that some areas of the spillway’s foundation were not good stuff (in other words, weathered rock and soil), the spillway was designed and maintained as if the entire structure was sitting on hard bedrock.


That mischaracterization had profound consequences that I’ll discuss. As for how the spillway damage started, the issue was uplift forces. How do concrete structures stay put? Mostly by being heavy. Their weight pins them to the ground so they can resist other forces that may cause them to move. But, water complicates the issue. You might think that adding water to the top of a slab just adds to the weight, making things more stable. And that would be true without cracks and joints. The problem with the Oroville Dam service spillway chute was that it had lots of cracks and joints, for reasons I’ll discuss in a moment. These cracks allowed water to get underneath the slabs, essentially submerging the concrete on all sides. Here’s the issue with that: structures weigh less under water, or more accurately, their weight is counteracted by the buoyant force of the water they displace. So, being underwater already starts to destabilize them, because it adds an uplift force. But, concrete still sinks underwater, right? The net force is still down, holding the structure in place. That’s true in static conditions, but when the water is moving, things change.


We talk about Bernoulli's principle a lot, and he's got something to say about the flow of water in a spillway. In this case, the issue was what happens to a fast-moving fluid when it suddenly stops. Cracks and joints in a concrete spillway have an effect on the flow inside. Any protrusion into the stream redirects the flow. If a joint or crack is offset, that redirection can happen underneath the slab. When this happens, all the kinetic energy of the fluid is converted into potential energy, in other words, pressure. When 100% of the kinetic energy is converted, we call it the stagnation pressure. If you point the open end of a tube into flowing water, the level in the tube rises above the surrounding surface - that height is a measure of the stagnation pressure. Stagnation pressure is a function of velocity squared, so if you double the speed of flow, you get four times the resulting pressure and thus four times the height the water rises in the tube. And the water in the Oroville spillway was moving far faster than any tabletop demonstration. When this stagnation pressure acts on the bottom of a concrete slab, it creates an additional uplift force. If all the uplift forces exceed the weight of the slab, it's going to move. That's exactly what happened at Oroville. And once one slab goes, it's a chain reaction: more of the foundation is exposed to the fast-moving water, and more of that water can inject itself below the slabs, causing a runaway failure.
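To get a feel for the magnitudes involved, here's a back-of-the-envelope sketch. Every number is an assumption for illustration - the slab size, thickness, and flow velocity are not Oroville's actual figures - and it takes the worst case where full stagnation pressure acts over the entire underside of the slab:

```python
RHO = 1000.0    # water density, kg/m^3
RHO_C = 2400.0  # concrete density, kg/m^3
G = 9.81        # gravity, m/s^2

def stagnation_pressure(v):
    """Full conversion of kinetic energy to pressure: p = 0.5 * rho * v^2."""
    return 0.5 * RHO * v ** 2

# Hypothetical chute slab: 6 m x 6 m x 0.4 m thick, flow at 25 m/s
slab_area = 6.0 * 6.0
slab_weight = RHO_C * slab_area * 0.4 * G  # N, dry weight of the slab
buoyancy = RHO * slab_area * 0.4 * G       # N, slab submerged on all sides
uplift = stagnation_pressure(25.0) * slab_area  # N, worst-case uplift

print(f"net holding force: {(slab_weight - buoyancy) / 1e3:,.0f} kN")
print(f"worst-case uplift: {uplift / 1e3:,.0f} kN")
```

Even with generous assumptions, the worst-case uplift dwarfs the net weight holding the slab down by more than an order of magnitude - and because the pressure goes with velocity squared, doubling the flow speed quadruples it. This is why anchors, waterstops, and drains matter so much.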


Of course, we try to design around this problem. The service spillway had drains consisting of perforated pipes to relieve the pressure of water flowing beneath the slabs. Unfortunately, the design of these drains was a major reason for the cracking chute. Instead of trenching them into the foundation below the slabs, the designers reduced the thickness of the concrete to make room for the drains. The crack pattern on the chute essentially matched the layout of the drains beneath it perfectly. So, in this case, the drains inadvertently let more water below the slab than they let out from underneath it. The chute also included anchors - steel rods tying the concrete to the foundation material below. Unfortunately, those anchors were designed for strong rock, and their design wasn't modified when the actual foundation conditions were revealed during construction.


The root cause wasn’t just a bad design, though. There are plenty of human factors that played into the lack of recognition and failure to address the inherent weaknesses in the structure. Large dams are regularly inspected, and their designs periodically compared to the state of current practice in dam engineering. Put simply, we’ve built bigger structures on worse foundations than this. Modern spillway designs have lots of features that help to avoid what happened at Oroville. Multiple layers of reinforcement keep cracks from getting too wide. Flexible waterstops are embedded into joints to keep water from migrating below the concrete. Joints are also keyed so individual slabs can’t separate from one another easily. Lateral cutoffs help resist sliding and keep water from migrating beneath one slab to another. Anchors add uplift resistance by holding the slabs down against their foundation. Even the surface of the joints is offset to avoid the possibility of a protrusion into the high velocity flow. All these are things that the Oroville Spillway either didn’t have or weren’t done properly. Periodic reviews of the structure’s design, required by regulators, should have recognized the deterioration and inherent weaknesses and addressed them before they could turn into such a consequential chain of tribulations.


As for the emergency spillway, the fundamental cause of the problem was similar: a mischaracterization of the foundation material during and after design. Emergency spillways are just that: intended for use only during a rare event where it’s ok to sustain some damage. But, it’s never acceptable for the structure to fail, or even come close enough to failing that the residents downstream have to be evacuated. That means engineers have to be able to make conservative estimates of how much erosion will occur when an emergency spillway engages. Predicting the amount and extent of erosion caused by flowing water is a notoriously difficult problem in civil engineering. It takes sophisticated analysis in the best of times, and even then, the uncertainty is still significant. It is practically impossible to do under the severe pressure of an emergency. The operators of the dam chose to allow the reservoir to rise above the crest of the emergency spillway rather than increase discharges through the debilitated service spillway, trusting the original designer that it could withstand the flows. It’s a decision I think most people (in hindsight) would not have made.


The powerhouse was further from flooding and the transmission lines further from failing than initially thought, and they eventually ramped up discharges from the service spillway anyway, after realizing the magnitude of the erosion happening at the emergency spillway. But, it’s difficult to pass blame too strongly. The operators making decisions during the heat of the emergency did not have the benefit of hindsight. They were stuck with the many small but consequential decisions made over a very long period of time that eventually led to the initial failure, not to mention the limitations of professional engineering practice’s ability to shine a light down multiple paths and choose the perfect one.


The forensic team’s report outlines many lessons to be learned from the event by the owner of the dam and the engineering community at large, and it’s worth a read if you’re interested in more detail. But, I think the most important lesson is about professional responsibility. The people downstream of Oroville Dam, and indeed any large dam across the world, probably chose their home or workplace without considering too carefully the consequences of a failure and breach. We rarely have the luxury to make decisions with such esoteric priorities. That means, whether they realized it or not, they put their trust in the engineers, operators, and regulators in charge of that dam to keep them safe and sound against disaster. In this case, that trust was broken. It’s a good reminder to anyone whose work can affect public safety. The repairs and rebuilding of the spillways at Oroville Dam are a whole other fascinating story. Maybe I’ll cover that in a future post. Thank you, and let me know what you think!


May 18, 2021 /Wesley Crump

Do Pumps Create Pressure or Flow?

May 04, 2021 by Wesley Crump

There's a popular and persistent saying that pumps only create flow in a fluid, and resistance to that flow is what creates the pressure in a pipe. That may be helpful in conceptualizing what's happening in a pump system, but it's not the whole story. In fact, it's almost identical to another popular but misleading belief, this one about electrical safety, that says, "It's not the voltage that kills you. It's the current." Well, if you know anything about Ohm's Law, you know that voltage and current go hand in hand, and the same is true of pressure and flow rate in pipe systems. This is not rocket science, but it's not common knowledge either, even though almost everyone has used or interacted with a pump before. Today, we're talking about how pumps work!


Let me just say right at the start that I love pumps. They’re one of my favorite topics, so this is the first of two blogs I’m doing about them. Let me know if you want to hear more because there are a ton of topics we can cover. Funny enough, most engineers working with pumps aren’t all that concerned with the physics of what’s happening inside one. They mostly care about performance. That’s because the most important job of an engineer designing a pump system is choosing the right one. That might sound silly at first. For a small aquarium pump or sump pump, you usually don’t have to be very thoughtful about selection - the difference between them on such a small scale is not that significant. But, like most things in our industry, those small variances turn into large ones at scale. As pumps get bigger, and their roles become more important, selecting the right one for the job becomes a critical task. For example, choosing the wrong pump to supply a city with fresh water or get rid of floodwaters can be life-or-death. Today we’ll walk through some of the considerations engineers use to select the right pump using demonstrations in my video and even give you some tips if you ever have to choose one yourself.


Most pumps used in civil engineering, and indeed most pumps you’ll encounter in your everyday life, are centrifugal pumps. That means they use an impeller connected to a motor to accelerate the liquid into the discharge line. If you go searching for pumps online for smaller applications, you’ll likely see them listed according to flow rate. That makes sense because it’s usually what you care about. How many gallons or liters per minute can I move? But, for centrifugal pumps, it’s not quite that simple. Let me show you what I mean in the video. I have a small fountain pump here rated for 2 liters per minute. If I turn it on and pump this water into a beaker, it does just about that. It takes just about 30 seconds to fill one liter. But watch what happens if I raise or lower the vertical distance of the beaker above the pump. In fact, through the magic of video compositing, I can show you all three at the same time. 


It’s very easy to see the effect that the discharge pressure has on the pump’s flow rate. The higher the beaker, the greater the pressure. And the greater the pressure, the lower the flow. To illustrate this further, here’s a graph of my little experiment with flow rate on the x-axis and pressure on the y-axis. In this case, I’m measuring pressure as the height of a fluid column, also known as head. You can see that my experiment created a curve on this graph. In fact, all centrifugal pumps have a curve like this, called the characteristic curve. And, at the risk of this just becoming a video about cool graphs (even though some might argue that it has inherent value on its own just by being a cool graph), in this case, it’s also a means to an end. Let me show you why it’s so important. 


No matter what you connect a pump to - whether a single hose or a complex citywide network of water mains - it is going to have its own curve describing how much flow will occur under different pressure conditions. You can see when I change the supply pressure by adjusting this valve, the flow rate through the pipe changes accordingly. The graph of this relationship is called the system curve, and it’s different for every network of pipes from the simple to the complex. A system with lots of constriction will have a more vertical curve where, no matter what the pressure is, the flow rate doesn’t change much. A system with less constriction will have a flatter curve where more pressure equals a lot more flow. A system at a much higher elevation or higher pressure will have a curve high up on the graph. System curves can even change over time. A city’s fresh water distribution system will have a flatter curve during the day when more people are using their taps and a steeper curve at night when the demand for water is lower. 


Stay with me, because here’s why this matters: If you plot a pump’s characteristic curve on top of the system curve to which it is connected, you can see they intersect. This point of intersection tells you the pressure and flow rate at which the pump will operate. It’s conceptually both simple and confusing. The pump doesn’t decide what pressure and flow rate it will deliver. What it’s connected to does. When I change my system curve by opening or closing this valve, both the pressure and flow rate created by the pump respond accordingly. So, to select the right pump for an application, you have to know how your system will respond to being supplied with a range of different pressures.
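That intersection is easy to find numerically once you can write down both curves. Here's a minimal sketch with made-up quadratic curves for a small pump - the shapes are typical, but the coefficients are invented for illustration:

```python
def pump_head(q):
    """Hypothetical centrifugal pump curve: head falls off as flow rises."""
    return 40.0 - 0.08 * q ** 2          # head in m, q in L/min

def system_head(q):
    """Hypothetical system curve: static lift plus friction losses."""
    return 10.0 + 0.04 * q ** 2

# Find the operating point by bisection: the pump curve starts above the
# system curve at q = 0 and falls below it at high flow.
lo, hi = 0.0, 50.0
for _ in range(60):
    mid = (lo + hi) / 2
    if pump_head(mid) > system_head(mid):
        lo = mid
    else:
        hi = mid
q_op = (lo + hi) / 2
print(f"operating point: {q_op:.1f} L/min at {system_head(q_op):.1f} m head")
```

Closing a valve makes the friction coefficient in the system curve larger, which slides the operating point up and to the left along the pump curve - less flow at more head, exactly what the valve demonstration shows.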


Flow and pressure are important, but they’re not the only considerations that go into pump selection. A pump curve sometimes also shows other important information like efficiency. Even if a pump can operate in the extreme ranges of its performance curve, it usually can’t do it efficiently. Listen to the sound of this pump as I close the valve and you can tell that it’s not performing its best over the full range of flow rates. That might not matter in some applications, but if it’s a big pump that requires a lot of energy or one that will run 24/7/365, this is something to be thoughtful about. Again, think about scale. On your fish tank pump, a little inefficiency is not a huge deal. If you are delivering water to millions of customers 24 hours per day, small inefficiencies add up quickly. And it’s especially challenging if your system curve changes over time.


It might seem cheaper to use a single pump that can handle a wide range of flow rates, but it’s often more cost-effective to use multiple pumps so that you can always operate in the most efficient part of each one’s characteristic curve. A pump curve also shows you the pressure at which it can’t create any flow. Watch what happens when I raise the tube from my aquarium pump to above its maximum pressure. The liquid reaches the maximum head and stops. I have to say, despite what you will read in nearly every internet forum about pumps, this one is not creating flow, but it is creating pressure.


I'm being a little facetious here talking only about centrifugal pumps when there is another major category that behaves a little bit differently. Positive displacement pumps trap a fixed volume of fluid and force it into the discharge line. Unlike centrifugal pumps, where the impeller can spin without actually moving any fluid, positive displacement pumps directly couple the motor's motion to a fixed volume of fluid, no matter what the pressure is in the discharge line. As long as the motor has enough power to force that fluid out, it will happen at a constant rate. Essentially, their characteristic curve is just a flat line. I think, in most cases, people who say that pumps only create flow and not pressure are specifically referring to positive displacement pumps. But, if the pressure wouldn't be there without the pump, I have to contend that the pump created it. That said, I think I understand the sentiment of this idea that pumps only create flow and why it's so often repeated.


It is a little bit confusing that a pump itself is not directly responsible for the flow rate and pressures under which it operates. Those properties depend on the characteristics of the system to which the pump is connected. In the case of a positive displacement pump, only the pressure is determined by the system curve. The flow rate is a fixed value. Both are still created by the pump, but only one is “decided” by it. For a centrifugal pump, both the flow and the pressure depend on the system curve. Given this discussion, I’d like to propose this new mantra for the internet pump enthusiasts as a more correct answer to the question of whether pumps create pressure or flow: Pumps impart flow and pressure to a fluid in accordance with their characteristic curve and the corresponding system curve. Not a great catchphrase, but it is accurate. Maybe one of you can come up with something a bit more catchy. Thank you, and let me know what you think!


May 04, 2021 /Wesley Crump

What Really Happened at the Suez Canal?

April 20, 2021 by Wesley Crump

On March 23, 2021, the massive container ship Ever Given ran aground in the Suez Canal. The wedged vessel obstructed the entire channel, blocking one of the most important trade routes in the world for nearly a week. The cause and details of this event are still under investigation, but there’s a lot we already know. How could something like this happen, and why did it take so long to fix? I’m Grady and this is Practical Engineering. Today, we’re exploring some of the engineering principles behind the 2021 Suez Canal obstruction.


Before we get into the event itself, let’s learn a little bit about the Suez Canal. Built in the 1860s, the Suez Canal is a constructed waterway in Egypt, allowing shipping and other maritime traffic to go from the Mediterranean Sea to the Red Sea and vice versa. This means ships don’t need to navigate all the way north around the European and Asian continents or all the way south around the African continent to travel between the Atlantic and Indian oceans. It’s basically a global shortcut. That makes it one of the most important routes for global commerce, handling roughly ten percent of the entire world’s ocean trade.


For as important as it is to the global economy, the Suez Canal is a relatively straightforward structure: essentially a trapezoidal channel cut through the sand of the low-lying Isthmus of Suez, taking advantage of the existing Great Bitter Lake at its center. Unlike the Panama Canal, which uses locks to raise vessels up for transit, the Suez Canal is entirely at sea level with no gates or locks. Minor differences in level between the Mediterranean and Red Seas create gentle currents in the canal, but they're not strong enough to trouble the ships. In 2016, an expansion to the Suez Canal opened, essentially doubling its capacity. The project involved adding a second shipping lane to part of the canal, and deepening and widening some of the choke points so larger ships could pass through. It's now about 200 meters (650 feet) wide and about 24 meters (80 feet) deep.


All ships passing through the Suez Canal are required to have a Canal Authority pilot to help navigate each step. These pilots aren’t fully responsible for the safety of the ship during transit, but they have special knowledge about the processes, procedures, and challenges required to navigate these massive vessels through the canal. It’s tricky, and ships have been stuck in the canal before, including a 3-day blockage in 2004. So, each ship’s Master (sometimes called the captain) and the canal authority pilot work together to maneuver the ship through. It takes about half a day to get from one end to the other, and on average, about 50 ships make their way through the canal each day.


Navigating through the Suez Canal is a careful dance since some parts of the channel only have a single shipping lane with no room to pass. That’s why ships are required to go through in convoys. Early each morning, the convoys line up to enter the canal. The southbound group begins their journey from about 3AM to 8AM at Port Said, following the western channel. At around the same time, the northbound convoy enters the canal at Suez. On a normal day, everything is carefully timed so that the two convoys can pass each other in the Great Bitter Lake and the dual lane section of the canal without any stopping or interruptions. Unfortunately, March 23rd was not a normal day. One of the first ships in the northbound convoy, the Ever Given, had barely entered the canal at Suez when it veered into the eastern bank, smashing its bow into the sandy embankment and wedging the massive vessel diagonally across the channel’s entire width. Amazingly, there was not a single injury and the cargo was completely unharmed. 


As I mentioned, the exact reason the ship ran aground is still under investigation. Some reporting suggested the Ever Given experienced a loss of power, but that was denied by the ship’s technical manager. Sources also say that there was an ongoing dust storm that morning creating high winds and limited visibility. Many have suggested that the Ever Given’s unscheduled and unfortunate landing in the canal may have been hastened by a hydraulic phenomenon called the Bank Effect, unique to vessels transiting through shallow water. Before we explore this further, first a little info on this ship.


Leased and operated by international shipping company Evergreen, the Ever Given is one of the eleven Golden Class container ships, all confusingly named “Ever” combined with a seemingly arbitrary g-word. Weird names aside, these ships are truly massive. In fact, the Ever Given will never get a chance to go through the Panama Canal because it’s too long for the locks at 400 meters (or over 1,300 feet long). The ship’s beam is 60 meters (or nearly 200 feet) with a fully-loaded draft of 15 meters (or about 49 feet). You can see how small the margin for error is with a ship this size in the canal.


If you remember your lessons on buoyancy, you know that a ship displaces its own weight in water. That means for every pound of steel and cargo aboard, a pound of water below the ship has to get out of the way. For the Ever Given, that is hundreds of thousands of tons of liquid being pushed to either side of the ship as it cuts through the water. On the open sea, that’s not a problem. The displacement forms a wake, but the water otherwise doesn’t have trouble finding a new place to go. In a shallow canal, though, things are a little different.
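
Archimedes’ principle makes it easy to put a rough number on that displaced water. Here’s a quick Python sketch; the 200,000-tonne figure for ship plus cargo is an illustrative round number I’m assuming, not a reported spec:

```python
SEAWATER_DENSITY = 1025  # kg per cubic meter

def displaced_volume(mass_kg, density=SEAWATER_DENSITY):
    """Archimedes: a floating ship displaces its own weight in water."""
    return mass_kg / density  # cubic meters

ship_mass = 200_000 * 1000  # 200,000 tonnes in kg (assumed for illustration)
volume = displaced_volume(ship_mass)
print(f"Displaced water: {volume:,.0f} cubic meters")
# → Displaced water: 195,122 cubic meters
```

That’s roughly 80 Olympic swimming pools’ worth of water that has to move out of the way as the hull passes.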


In a shallow canal, all the water displaced by a ship has to essentially squish through the small areas along the sides and bottom of the vessel. The smaller the area, the faster the water has to move to get out of the way. The water builds up at the bow (or front) of the ship. As the water accelerates through the narrow gaps on either side, its level drops. This is a well-known phenomenon that creates some unusual effects on ships. That’s because, in accordance with Bernoulli’s law, a fluid’s pressure goes down when its speed goes up. When traveling in a shallow area, the squished and sped-up flow below the hull creates a suction force pulling the ship further into the water, a phenomenon known as “squatting”. One massive ship even used the effect by speeding up as it went below the Great Belt Bridge in Denmark to create some extra margin above the deck. But, the exact same effect can happen on the side of a ship as well. If a vessel gets too close to the bank of a shallow canal, the water it displaces on that side essentially has nowhere to go. It has to pick up speed as it squishes through the narrow gap, lowering the pressure, and thus pulling the ship toward the bank. In reality, the Bank Effect is not that well understood. Research is ongoing to better characterize how depth, distance, speed, propeller action, and other factors can affect the way a ship moves in a restricted waterway. We still have a lot to learn both in an academic sense and in nautical practice, a fact made very clear when this massive vessel found the edge of the Suez Canal.
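
The pressure side of this is just Bernoulli’s equation. Here’s a small Python sketch of the pressure drop you get when flow speeds up through a narrow gap; the velocities are made-up illustrative numbers, not measurements from the canal:

```python
RHO = 1025  # seawater density, kg/m^3

def bernoulli_pressure_drop(v_slow, v_fast, rho=RHO):
    """Pressure drop (Pa) when a stream accelerates from v_slow to v_fast,
    per Bernoulli's principle (ignoring elevation change and losses)."""
    return 0.5 * rho * (v_fast ** 2 - v_slow ** 2)

# Illustrative: flow beside the hull accelerating from 2 m/s to 4 m/s
dp = bernoulli_pressure_drop(2.0, 4.0)
print(f"Pressure drop: {dp:.0f} Pa")  # → Pressure drop: 6150 Pa
```

A few thousand pascals doesn’t sound like much, but acting over the huge wetted area of a hull, it adds up to a serious sideways pull.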


Images of the first responder to the accident, a tiny excavator removing soil from the Ever Given’s gigantic hull, circulated around the internet like wildfire. The yawning gap between the machine’s assignment and its capability was just too ripe for parody - you could hardly check a single social media feed without being overwhelmed by the memes. In a long period of collective unrest and despondency during a global pandemic and the seemingly constant uncertainty surrounding who or what to believe about so many complicated issues, here was a story that anyone could understand: A boat was stuck in a canal. It was in the way of other boats that needed to get through. Simple as that. So why did it take so long to dislodge?


Humanity has a long and storied history of driving stuff into the ground so it will stay put, from the small (like tent stakes) to the massive (like the earth anchors used to hold guy wires for antenna masts). It’s pretty intuitive how this works. The pullout force is resisted by the friction between the soil and anchor. This ability to resist pullout is a function of the pressure against the soil and the surface area of the anchor. And when your anchor is a ship the size of a skyscraper, you obviously have both of those in abundance. It’s really no wonder that salvage crews struggled to unstick the Ever Given. But, there is a geotechnical phenomenon that I suspect made things even worse. And just a warning that I’m straying a little into speculation here, since the geotechnical details of the extraction have not been widely reported.
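
To make that scale difference concrete, here’s a deliberately simplified sliding-friction sketch in Python. Real geotechnical pullout analysis involves much more than this, and every number below is invented purely for illustration:

```python
def pullout_resistance(contact_area_m2, normal_stress_pa, friction_coeff):
    """Simplified sliding-friction model of pullout resistance (N).
    Real soil mechanics is far more involved; this just shows how
    contact area and pressure both scale the resisting force."""
    return friction_coeff * normal_stress_pa * contact_area_m2

# All numbers below are made up for illustration.
small_stake = pullout_resistance(0.05, 20_000, 0.5)   # tent-stake scale
buried_bow = pullout_resistance(2_000, 100_000, 0.5)  # ship-scale contact
print(small_stake, buried_bow)  # → 500.0 100000000.0 (newtons)
```

Even with these hypothetical inputs, the ship-scale anchor resists on the order of 200,000 times more force, which is why tugboats alone weren’t enough.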


Soils with large grains, like sand, have an interesting property called dilatancy. Essentially, when they’re deformed, they expand in volume. If you’ve ever walked on the beach, this is probably something you’ve seen before. The water disappears from the surface because it soaks into the extra space created when the sand was deformed. This dilation occurs because the grains of sand, which were interlocked, rotate and lever against each other, pressing outwards as they do. This would not be a major issue except for one detail about the Ever Given’s hull: the bulbous bow, a feature included on many large ships to reduce drag. The hydrodynamics of bulbous bows are definitely worth discussing in a future video, but here is why it was such a problem for the Ever Given. Unlike if only the triangular hull was wedged in the sand, the bulbous bow was surrounded by soil on all sides. Essentially, the Ever Given put its appendage into a gigantic finger trap toy. Any movement of the ship would dilate the sand, effectively clamping down harder on the bulbous bow.


Ultimately it was impossible to simply pull the ship out. Removal took a much more extensive operation of dredging the sand from around the hull and lightening the ship by releasing ballast water, both to relieve the friction from the soil. Even the moon joined in on the operation, raising the tide in the canal to give a little more buoyancy to the stranded ship. After six days aground, the Ever Given was finally dislodged and traffic through the canal could resume. At the time, there were about 400 vessels waiting to make their pass and many more that had already diverted around the Cape of Good Hope. With a capacity of only around 90 ships per day, the backlog took about a week to clear up. That doesn’t mean the problem is resolved though. A weeklong disruption in such a big portion of global shipping traffic doesn’t untangle itself so quickly. The investigation into the exact cause of the incident is ongoing, and I’m sure many insurance claims are as well. In the meantime, I hope this helps you understand a few of the engineering challenges associated with navigating massive ships through tiny canals and what can happen when they run aground. Thank you, and let me know what you think!

April 20, 2021 /Wesley Crump

Flow and Pressure in Pipes Explained

April 06, 2021 by Wesley Crump

All pipes carrying fluids experience losses of pressure caused by friction and turbulence of the flow. It affects seemingly simple things like the plumbing in your house all the way up to the design of massive, way more complex, long-distance pipelines. I’ve talked about many of the challenges engineers face in designing piped systems, including water hammer, air entrainment, and thrust forces. But, I’ve never talked about the factors affecting how much fluid actually flows through a pipe and the pressures at which that occurs. So, today we’re going to have a little fun, test out some different configurations of piping, and see how well the engineering equations can predict the pressure and flow. Even if you’re not going to use the equations, hopefully, you’ll gain some intuition from reading how they work in a real situation. Today, we’re talking about closed conduit hydraulics and pressure drop in pipes.


I love engineering analogies, and in this case, there are a lot of similarities between electrical circuits and fluids in pipes. Just like all conventional conductors have some resistance to the flow of current, all pipes impart some resistance to the flow of the fluid inside, usually in the form of friction and turbulence. In fact, this is a lovely analogy because the resistance of a conductor is both a function of the cross-sectional area and length of the conductor—the bigger and shorter the wire, the lower the resistance. The same is true for pipes, but the reasons are a little different. The fluid velocity in a pipe is a function of the flow rate and the pipe’s area. Given a flowrate, a larger pipe will have a lower velocity, and a small pipe will have a higher velocity. This concept is critical to understanding the hydraulics of pipeline design because friction and turbulence are mostly a result of flow velocity.
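
That relationship between flow rate, area, and velocity is simple enough to sketch in a few lines of Python; the flow rate and diameters below are arbitrary examples:

```python
import math

def flow_velocity(flow_m3s, diameter_m):
    """Mean velocity (m/s) of flow through a circular pipe:
    velocity = flow rate / cross-sectional area."""
    area = math.pi * (diameter_m / 2) ** 2
    return flow_m3s / area

# Same flow rate through two pipe sizes: since area scales with
# diameter squared, halving the diameter quadruples the velocity.
q = 0.001  # 1 liter per second
print(flow_velocity(q, 0.05), flow_velocity(q, 0.025))
```

Since friction and turbulence grow with velocity, that factor of four shows exactly why undersized pipes are so costly in pressure.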


I built a demonstration in my video that should help us see this in practice. This is a manifold to test out different configurations of pipes and see their effect on the flow and pressure of the fluid inside. It’s connected to my regular tap on the left. The water passes through a flow meter and valve, past some pressure gauges, through the sample pipe in question, and finally through a showerhead. I picked a showerhead since, for many of us, it’s the most tangible and immediate connection we have to pressure problems in plumbing. It’s probably one of the most important factors in the difference between a good shower, and a bad one. Don’t worry, all this water will be given to my plants which need it right now anyway.


I used these clear pipes because they look cool, but there won’t be much to see inside. All the information we need will show up on the gauges (as long as I bleed all the air from the lines each time). The first one measures the flow rate in gallons per minute, the second one measures the pressure in the pipe in pounds per square inch, and the third gauge measures the difference in pressure before and after the sample (also called the head loss) in inches of water. In other words, this gauge measures how much pressure is lost through friction and turbulence in the sample - this is the one to keep your eye on. Put simply, it shows how much pressure is being used up just to push a given rate of flow through the sample. I know the metric folks are giggling at these units. For this video, I’m going to break my rule about providing both systems of measurement because these values are just examples anyway. They are just nice round numbers that are easy to compare with no real application outside the demo. Substitute your own preferred units if you want, because it won’t affect the conclusions.


There are a few methods engineers use to estimate the energy losses in pipes carrying water, but one of the simplest is the Hazen-Williams equation. It can be rearranged in a few ways, but this way is nice because it has the variables we can measure. It says that the head loss (in other words the drop in pressure from one end of a pipe to the other) is a function of the flow rate, and the diameter, length, and roughness of the pipe. Now - that’s a lot of variables, so let’s try an example to show how this works. First, we’ll investigate the effect the length of the pipe has on head loss. I’m starting with a short piece of pipe in the manifold, and I’m testing everything at three flow rates: 0.3, 0.6, and 0.9 gallons per minute (or gpm).
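
For reference, here’s the SI form of the Hazen-Williams equation as a small Python function. The 10.67 constant, 1.852 flow exponent, and 4.87 diameter exponent are the standard SI values; the flow rate, length, diameter, and roughness numbers in the example are arbitrary:

```python
def hazen_williams_head_loss(flow_m3s, length_m, diameter_m, c=150):
    """Head loss in meters, SI form of the Hazen-Williams equation.
    c is the roughness coefficient (about 150 for smooth plastic pipe)."""
    return 10.67 * length_m * flow_m3s ** 1.852 / (c ** 1.852 * diameter_m ** 4.87)

base = hazen_williams_head_loss(0.0002, 1.0, 0.01)
# Length has an exponent of 1, so 20x the length gives 20x the loss:
print(hazen_williams_head_loss(0.0002, 20.0, 0.01) / base)  # ≈ 20
```

The diameter term works the same way: shrink the diameter to ⅔ of the original and the head loss multiplies by (3/2)^4.87, or about 7.2 times.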


At 0.3 gpm, we see pressure drop across the pipe is practically negligible, just under half an inch. At 0.6 gpm, the head loss is about an inch. And, at 0.9 gpm, the head loss is just over 3 inches. Now I’m changing out the sample for a much longer pipe of the same diameter. In this case, it’s 20 times longer than the previous example. Length has an exponent of 1 in the Hazen-Williams equation, so we know if we double the length, we should get double the head loss. And if we multiply the length times 20, we should see the pressure drop increase by a factor of 20 as well. And sure enough, at a flow rate of 0.3 gpm, we see a pressure drop across the pipe of 7.5 inches, just about 20 times what it was with the short pipe. That’s the max we can do here - opening the valve any further just overwhelms the differential pressure gauge. There is so much friction and turbulence in this long pipe that I would need a different gauge just to measure it.


Length is just one factor that influences the hydraulics of a pipe. This demo can also show how the pipe diameter affects the pressure loss. If I switch in this pipe with the same length as the original sample but which has a smaller diameter, we can see the additional pressure drop that occurs. The smaller pipe has ⅔ the diameter of the original sample, and diameter has an exponent of 4.87 in our equation. That’s because, as I mentioned before, changing the diameter changes the fluid velocity, and friction is all about velocity. We expect the pressure drop to be 1 over (⅔)^4.87, or about 7 times higher than the original pipe. At 0.3 gpm, the pressure drop is 3 inches. That’s about 6 times the original. At 0.6 gpm, the pressure drop is 7.5 inches, about 7 times the original. And at 0.9 gpm, we’re off the scale. All of that is to say, we’re getting close to the correct answers, but there’s something else going on here. To explore this even further, let’s take it to the extreme.


We’ll swap in a pipe with a diameter 5 times larger than the original sample’s. In this case, we’d expect the head loss to be 1 over 5^4.87, basically a tiny fraction of that measured with the original sample. Let’s see if this is the case. At 0.3 gpm, the pressure drop is basically negligible just like last time. At 0.6 and 0.9 gpm, the pressure drop is essentially the same as the original. Obviously, there’s more to the head loss than just the properties of the pipe itself, and maybe you caught this already. There is something conspicuous about the Hazen-Williams equation. It estimates the friction in a pipe, but it doesn’t include the friction and turbulence that occurs at sudden changes in direction or expansion and contraction of the flow. These are called minor losses, because for long pipes they usually are minor. But in some situations like the plumbing in buildings or my little demonstration here, they can add up quickly.


Every time a fluid makes a sudden turn (like around an elbow) or expands or contracts (like through these quick-release fittings), it experiences extra turbulence, which creates an additional loss of pressure. Think of it like you are walking through a hallway with a turn. You anticipate the turn, so you adjust your path accordingly. Water doesn’t, so it has to crash into the side - and then change directions. And, there is actually a formula for these minor losses. It says that they are a function of the fluid’s velocity squared and this k factor that has been measured in laboratory testing for any number of bends, expansions, and contractions. As just another example of this, here’s a sample pipe with four 90-degree bends. If you were just calculating pressure loss from pipe flow, you would expect it to be insignificant. Short, smooth pipe of an appropriate diameter. The reality is that, at each of the flow rates tested in the original straight pipe sample, this one has about double the head loss, maxing out at nearly 6 inches of pressure drop at 0.9 gpm. Engineers have to add these “minor” losses to the calculated frictional losses within the pipe to estimate the total head loss. In my demo here, except for the case of the 20’ pipe, most of the pressure drop between the two measurement points is caused by minor losses through the different fittings in the manifold. It’s why, in this example, the pressure drop is essentially the same as the original. Even though the pipe is much larger in diameter, the expansion and contraction required to transition to this large pipe make up for the difference.
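
The minor loss formula itself is short enough to sketch directly. The k value below is a typical handbook number for a 90-degree elbow, not something I measured on my manifold, and the velocity is an arbitrary example:

```python
G = 9.81  # gravitational acceleration, m/s^2

def minor_loss(velocity_ms, k):
    """Minor head loss (m) for a fitting: h = k * v^2 / (2g)."""
    return k * velocity_ms ** 2 / (2 * G)

# Illustrative: four 90-degree elbows at k ~ 0.9 each, 1.5 m/s flow.
total = sum(minor_loss(1.5, 0.9) for _ in range(4))
print(f"{total:.3f} m of head lost to the bends")  # → 0.413 m of head lost to the bends
```

Note the velocity-squared term: double the flow velocity and each fitting costs four times as much head, which is why these “minor” losses dominate short, fitting-heavy runs.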


One clarification to this demo I want to make: I’ve been adjusting this valve each time to keep the flow rate consistent between each example so that we make fair comparisons. But that’s not how we take showers or use our taps. Maybe you do it differently, but I just turn the valve as far as it will go. The resulting flow rate is a function of the pressure in the tap and the configuration of piping along the way. More pressure or less friction and turbulence in the pipes and fittings will give you more flow (and vice versa).


So let’s tie all this new knowledge together with an example pipeline. Rather than just knowing the total pressure drop from one end to another, engineers like to draw the pressure continuously along a pipe. This is called the hydraulic grade line, and, conveniently, it represents the height the water would reach if you were to tap a vertical tube into the main pipe. With a hydraulic grade line, it’s really easy to see how pressure is lost through pipe friction. Changing the flow rate or diameter of the pipe changes the slope of the hydraulic grade line. It’s also easy to see how fittings create minor losses in the pipe. This type of diagram is advantageous in many ways. For example, you can overlay the pressure rating of the pipe and see if you’re going above it. You can also see where you might need booster pump stations on long pipelines. Finally, you can visualize how changes to a design like pipe size, flow rate, or length affect the hydraulics along the way.
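
Drawing a hydraulic grade line is really just a running subtraction: start with the head at the source and knock off each segment’s losses as you go. Here’s a minimal sketch, with made-up per-segment losses:

```python
def hydraulic_grade_line(start_head_m, segment_losses_m):
    """Running head along a pipeline: the starting pressure head minus
    the cumulative losses (friction plus minor losses) of each segment."""
    heads = [start_head_m]
    for loss in segment_losses_m:
        heads.append(heads[-1] - loss)
    return heads

# Illustrative segment losses, in meters of head:
print(hydraulic_grade_line(50.0, [5.0, 12.0, 3.0]))
# → [50.0, 45.0, 33.0, 30.0]
```

If any of those running values dips below the pipe’s elevation (or above its pressure rating), you’ve found where you need a booster pump or a stronger pipe.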


Friction in pipes? Not necessarily the most fascinating hydraulic phenomenon. But, most of engineering is making compromises, usually between cost and performance. That’s why it’s so useful to understand how changing a design can tip the scales. Formulas like the Hazen-Williams and the minor loss equations are just as useful to engineers designing pipelines that carry huge volumes of fluid all the way down to homeowners fixing the plumbing in their houses. It’s intuitive that reducing the length of a pipe or increasing its diameter or reducing the number of bends and fittings ensures that more of the fluid’s pressure makes it to the end of the line. But engineers can’t rely just on intuition. These equations help us understand how much of an improvement can be expected without having to go out to the garage and test it out like I did. Pipe systems are important to us, so it’s critical that we can design them to carry the right amount of flow without too much drop in pressure from one end to the other.


April 06, 2021 /Wesley Crump

What Really Happened During the Texas Power Grid Outage?

March 23, 2021 by Wesley Crump

This February of 2021, a major winter storm made its way through the U.S. central plains, setting all-time records for low temperatures across the country. One of the biggest impacts of the storm happened here in Texas where people across the state suffered extended outages of electricity and water. It was one of the worst winter weather events in history, creating loss of life and economic impacts that will take years to unfold. When disaster strikes, the flurry of political positioning and finger-pointing can make it difficult to understand what really happened. This is especially true for complex systems of infrastructure like the power grid, where most people need a little more context and background than can be provided in a 500-word news story, but something a little more boiled down than the discussions between economists and electrical engineers on Twitter.


So - for the first time ever, by the way - I’m talking about a current event. This is a developing story, and we’re still learning the details and consequences of what happened during the storm. But, I’ve received a lot of requests for a video like this, and I hope it can provide some clarity and technical knowledge to elevate the dialogue surrounding this disaster. I’m Grady and this is Practical Engineering. Today’s blog is the story of the 2021 Texas Power Grid Emergency.


Before diving into the chronology of events, I want to provide just a quick overview of some important technical topics. I actually have a series of videos explaining the power grid in greater detail linked below, so check that out if you want to learn more. A wide-area interconnection, which is the technical term for a power grid, is the solution to the number one problem of the supply and demand of electricity: volatility. There aren’t many feasible ways to store large quantities of electricity, so for the most part, the supply and demand have to be matched simultaneously. Electricity is produced, transmitted, and consumed all in the exact same instant. Managing those ebbs and flows is a very difficult thing to do unless large groups of power producers and users are connected together, smoothing out the volatility of demands (making them more predictable) and the supply (making it possible to have larger, more efficient generation facilities). Interconnection increases the efficiency and reliability of electricity supply, and the majority of Texans are served by a single interconnection that covers most of the state. I’ll be referring to it as the Texas Power Grid.


Just because there are so many power producers and users interconnected doesn’t mean that there is no volatility in supply and demand. These are some example demand curves on the Texas grid. You can see that demand is always changing throughout the day. When power demand drops due to mild weather or at night when people are asleep, we need generators to shut down. Otherwise, the frequency of the AC power will speed up above 60 hertz. When demand spikes, we need generators spun up to match it. Otherwise, the extra load will bog down the physical generators, and frequency of the AC power will fall below 60 hertz. The consequences of the AC frequency deviating by too much are massive because every generator in the entire system is magnetically coupled. If parts of the system lose synchronization, both generators and equipment connected to the grid can tear themselves apart. Because of that, most parts of the grid (including generation facilities) have breakers that trip to isolate equipment if the frequency deviates too far. The breakers in your house monitor electrical current. If you plug in too many things to one circuit, the breaker will trip. These work the same way except they monitor frequency. And unlike in your house where restoring power is a quick fix, large power generating stations don’t turn on and off with a flip of the switch. So, matching supply and demand to maintain a stable frequency is the most critical part of managing the power grid.
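
To get a feel for how fast this plays out, here’s a toy version of the swing equation that grid engineers use to relate a supply-demand imbalance to frequency drift. Every number here is illustrative, and real grid dynamics include governor response and load damping that this sketch ignores:

```python
def frequency_drift(f0_hz, imbalance_mw, system_mw, inertia_s, dt_s):
    """Toy swing-equation estimate of grid frequency after dt_s seconds
    when generation and load are mismatched. inertia_s is an assumed
    aggregate inertia constant H (seconds); all inputs are illustrative."""
    dfdt = f0_hz * imbalance_mw / (2 * inertia_s * system_mw)
    return f0_hz + dfdt * dt_s

# A hypothetical 2,000 MW generation shortfall on a 70,000 MW system:
print(frequency_drift(60.0, -2000, 70_000, 4.0, 1.0))  # ≈ 59.79 Hz after one second
```

The takeaway is the time scale: with no correction, even a modest imbalance walks the frequency away from 60 hertz in seconds, not minutes.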


In Texas, the entity in charge of this task is ERCOT, the Electric Reliability Council of Texas. This is a non-profit corporation with board members representing basically every segment of the electricity market from generators to utilities to consumers. ERCOT’s job is to oversee the entire system. They don’t own or operate any facilities themselves, but they tell the operators of generation facilities when to start and stop running depending on the electrical demand. ERCOT also manages scheduled outages. Owners have to have approval from ERCOT before they take facilities offline for maintenance. Finally, ERCOT manages the wholesale market of electricity by setting prices and handling the transactions between power sellers and buyers.


The week before Valentine’s day 2021 had already been a wintry one in many parts of Texas, but as the weekend drew near, the forecasts started to make clear that the next week would be extraordinarily cold. On February 8, a full week before things really hit the fan, ERCOT had already started issuing public communications about very high expected electrical demands across the state. They canceled or delayed approval for a large number of scheduled outages to make sure as much electrical infrastructure as possible would be in service in anticipation of the storm. They worked with the Texas Railroad Commission, which confusingly is in charge of the oil and gas industry, to increase the priority of delivering natural gas to power plants. They also worked with the U.S. Department of Energy to get permission for some power plants to temporarily exceed their emission limits from the EPA during peak needs for electricity. All this to say, the folks managing the power grid were expecting exceptional strain on the system and had already started working the week beforehand to prepare.


When the storm did hit on Valentine’s Day, it really was one for the history books. Three major facts to illustrate this point: First, it was extremely cold. The National Weather Service shows the departure from normal temperatures for three historical winter weather events in the southern U.S. plains. Essentially the entire southern U.S. had average temperatures more than 25 degrees Fahrenheit (about 14 degrees Celsius) below normal. All-time low temperature records were set in nearly every city across the region. It was practically the coldest many places had ever been. The next point is that it was not just a local event. The storm had significant impacts across the entire state of Texas and beyond. Finally, the duration of frigid temperatures was just so long. Large portions of the state were below freezing for more than 7 continuous days. That might not sound like a lot to those of you in northern climes, maybe even a welcome respite. But, in most parts of Texas, that is unheard of.


The state started Sunday with about a quarter of its total electrical capacity already out of service, mainly due to weather the week before. These outages were about half wind power and half natural gas generators. Even with that lack of capacity, as night began to fall that Sunday, Texas hit its all-time winter peak electrical demand of nearly 70,000 MW, and it met that full demand. The previous peak was about 66,000 MW. Everyone in the state was running their heaters to the max trying to stay warm. As the evening continued, though, it became clear to ERCOT and utilities that the amount of electricity available on the grid might not be able to continue meeting the demand. An advisory was issued that electrical reserves were low at around 11:30 PM. Not long after that, generation facility after facility started to trip offline, reducing the capacity to meet the high demand.


Texas has a diverse portfolio of power generators. The largest segment of that is natural gas making up about half of the capacity. The second segment is wind turbines at about 30%. The rest of the generation fleet is made up mostly of coal powered plants, nuclear plants, and solar farms. This graph shows the outages of each generation type during the winter storm. You can see that wind and natural gas make up the majority of the lost capacity but no type of power plant was spared during the storm. The most important part of this figure is the natural gas line. Plant after plant went offline to the tune of 15,000 MW of capacity within the span of 8 hours. All the details are still coming out about what really happened, but there was a lot we know that went wrong.


Natural gas wells and pipelines are particularly vulnerable to cold temperatures. Not only does a gas stream contain water vapor that can freeze by itself, but that water vapor can also combine with hydrocarbons to create hydrates that solidify at temperatures well above freezing. Combine this with the fact that many roads were completely impassable during the storm, and it was nearly impossible for some gas suppliers to keep things flowing. That’s despite the fact that the wholesale price of natural gas during the storm skyrocketed to more than 100 times its normal price due to the incredible demand, with power plants competing with residential homes that use gas for heat. At that price, suppliers were doing essentially everything in their power to deliver gas to customers, but it just wasn’t enough. Or, in some cases, it was enough, but the generators couldn’t afford to operate their plants with such a high cost for fuel.


But it wasn’t just gas power plants that struggled, and it wasn’t just about fuel. Wind turbines were shut down due to icing. Solar panels were covered in snow. One of the few nuclear units in Texas tripped offline because of cold weather issues with its water supply. Basically, the entire system was ill-prepared for a storm of this magnitude. Texas does have a few connections to other power grids to help alleviate supply problems during emergencies, but those grids were suffering under similar conditions without much extra electricity to spare. This graph shows the total outages during the event. With that huge spike of generators going offline the morning after Valentine’s day, the state had nearly half of its total capacity gone during one of the highest periods of electrical demand on record. Without any remaining reserve of resources, ERCOT had only one option left to keep supply and demand in sync: shed load.


This is the technical term for what is essentially turning off parts of the power grid (in other words, disconnecting customers) to reduce total demands on the system. Here’s a simplified version of how it works: ERCOT tells the transmission operators they need to take X megawatts off the grid. Those megawatts are distributed among the operators you may have heard of, like Oncor, Centerpoint, CPS Energy, Austin Energy, etc., in proportion to their share of the total load. Each operator has a plan in place for how to shed load within their own system when required. It’s not something they decide on the fly. Certain circuits critical to public health and safety like hospitals are prioritized. The other non-critical circuits are usually shut off in a rolling manner at 15-30 minute intervals. That way the inconvenience of lost service is spread out more evenly across the entire service area. Of course, in many places during the storm, that is not what happened.
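
The proportional split itself is simple arithmetic. Here’s a sketch with made-up operator names and loads; the real allocation process involves many more rules than this:

```python
def allocate_shed(total_shed_mw, operator_loads_mw):
    """Split a load-shed order among transmission operators in
    proportion to each operator's share of the total load (a
    simplified sketch of the process described above)."""
    total_load = sum(operator_loads_mw.values())
    return {name: total_shed_mw * load / total_load
            for name, load in operator_loads_mw.items()}

# Hypothetical operators and loads, purely for illustration:
print(allocate_shed(1000, {"Operator A": 30_000, "Operator B": 20_000}))
# → {'Operator A': 600.0, 'Operator B': 400.0}
```

Each operator then decides which of its own non-critical circuits actually go dark to hit its assigned number.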


As more and more resources tripped offline so quickly, the frequency of the Texas power grid began to drop. ERCOT continued ordering additional load to be shed from the system trying to keep up with both the rising demand from the cold weather and the quickly failing power plants. At about 1:50 AM, the frequency fell below 59.4 hertz. It doesn’t sound that significant, but this is a critical threshold for grid stability. Power plant controls are set to automatically disconnect if the frequency stays below 59.4 hertz for more than 9 minutes. In such a situation, as each generator trips, the frequency would quickly plummet until just about every circuit breaker on the grid had disconnected. Four minutes and 37 seconds is all that separated Texas from a complete grid collapse. Without the urgent action to continue shedding load from the system, many might still be without power in Texas a month later. That’s because recovering from a complete collapse, called a “black start” of the power grid, is an immense technical challenge. So much equipment would need to be inspected, repaired or replaced from the damage caused by the collapse. Only then could we start bringing generators and customers online slowly but surely to maintain balance between supply and demand throughout the process. If anything goes wrong during a black start and frequency deviates too far, breakers will trip to protect the equipment and you have to start all over.


I am not an economist, but the energy market is an important part of this story, so I’ll do my best to summarize the key points here. Unlike other markets that pay generators to secure capacity for the future, the wholesale electricity market in Texas is energy-only. That means if you put power on the grid, you get paid for it. You get no extra points for having generation capacity when it wasn’t needed. The only reason it’s feasible to invest in future capacity is the scarcity pricing of wholesale electricity. When demand is high, the price goes way up. In theory, this incentivizes generators to not only make short-term investments to ensure their facilities are up and running during peak demands but also long-term investments in plants that can spin up during these times when prices are sky high to capitalize on the energy scarcity. This type of price model also favors intermittent sources of electricity like wind and solar which would struggle to compete in a market that valued firm capacity.


While it normally varies between $30 and $50 per megawatt-hour on an average day, the wholesale electricity price went up to the cap of $9,000 per megawatt-hour during the storm and stayed that high for days. The result was that providers spent more money on wholesale electricity in a week than would normally be spent in 4 average years. That’s with the extreme load shedding that occurred and doesn’t include the incredible prices that some utilities paid for natural gas. Most energy users won’t see those massive costs, at least not right away. That’s because nearly every retail provider offers a fixed or at least tiered rate for electricity to their customers, bearing the dips and swings of the wholesale price using a wide variety of financial tools and long-term contracts to try and hedge against the extreme volatility. Prudent planning can only take you so far in an event like this, though, and at least one utility has already filed for bankruptcy protection after the extreme energy bill came due. Other retail energy providers used a different strategy to manage the market unpredictability: pass the risk on to their customers by offering direct access to wholesale energy rates for a monthly fee. The idea is that some users may prefer to manage their own demand, cutting back on electricity usage when rates are high and shifting usage to times when rates are low. Unfortunately, many of these customers were misled about or misunderstood the incredible volatility of the wholesale energy market to which they were being exposed, and there are many reports of residential energy bills in the thousands of dollars.
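The “a week of spending equals four normal years” comparison falls out of simple arithmetic on the prices above. The numbers here are illustrative midpoints; actual loads and prices varied hour to hour.

```python
# Back-of-the-envelope check on the wholesale-cost claim, using
# illustrative round numbers (actual prices varied hour to hour).

normal_price = 40.0     # $/MWh, midpoint of the typical $30-50 range
cap_price = 9_000.0     # $/MWh, the price cap reached during the storm

# For a fixed hourly load, cost scales directly with price, so one
# week at the cap costs as much as (cap/normal) weeks at normal prices.
equivalent_weeks = cap_price / normal_price      # 225 weeks
equivalent_years = equivalent_weeks / 52
print(round(equivalent_years, 1))   # → 4.3
```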


During the peak of the event, ERCOT ordered 20,000 MW of load to be shed from the system. That’s the equivalent of turning off half of Texas on a normal day. And the load shed orders lasted for three days straight, from early morning on the 15th to the end of the 18th. What should have been rolling outages couldn’t roll because many utilities just didn’t have any non-critical circuits left on their system to turn off. The result was that millions of Texans were plunged into darkness, in some cases for days, without heat or light during one of the coldest winter storms on record. My house lost power in the middle of the second night of outages, and it was already 42 degrees Fahrenheit (6 degrees Celsius) inside that morning before we relocated to a family member’s house. And, we were the lucky ones. Many weren’t fortunate enough to have friends or family with power nearby or even to have roads clear enough to safely make the trek. The selection of which circuits were left on versus off seemed arbitrary or even capricious to many. Water utilities started losing service, both because of frozen lines and lack of power available for pumping, reducing the availability of yet another basic human necessity to huge swaths of the population. Even though we had avoided the catastrophe of a total grid collapse, we did not avoid a crisis. Many lives were lost, and the economic impacts of this emergency are untold.


I have my own opinions about what could have or should have been done better during the storm and by whom, but this is not the place. My goal with this video is to try and summarize the facts of this tragic event in a way that is approachable by people who don’t have a working knowledge of the intricacies of the power grid. Many are still recovering from the storm and will be for years to come. Please feel free to share your thoughts and opinions in the comments below. All I ask is that you please be kind and respectful to one another. Thank you, and let me know what you think!


March 23, 2021 /Wesley Crump

What is Storm Surge?

March 02, 2021 by Wesley Crump

Most of the world’s biggest cities and about half of the global population live within 100 kilometers (60 miles) of the ocean. That’s pretty remarkable, especially given the huge amount of land that isn’t anywhere near a coastline. We humans just tend toward the ocean - it’s got food, it’s got ships, it’s got beaches and waves. It’s got unimaginable beauty. But not everything about the coast is great, especially when all that ocean water starts finding its way up the shore and into developed areas. We’ve talked about riverine flooding caused by intense precipitation. But, there’s another type of flooding that has almost nothing to do with rain and almost everything to do with air. Hey, I’m Grady and this is Practical Engineering. Today, we’re talking about storm surge and coastal flood protection.

Whether you call them hurricanes or typhoons, tropical cyclones are some of the most devastating phenomena that Mother Nature has to throw at us. They get their own names and their own elite aircrew reconnaissance squadrons and their own cult following of hardcore weather nerds (including me). These gigantic rotating storms coalesce over the ocean, fed by warm tropical waters. And, when they happen to make landfall, the results can be catastrophic. Hurricanes and typhoons are a study in extremes, producing some of the fastest sustained winds and the highest precipitation depths across the globe. But, the most damaging part of a tropical cyclone is an effect called storm surge which produces flooding and inundation of coastal areas that can be practically unimaginable. In fact, the vast majority of lost lives and dollars of damage caused by hurricanes every year can be attributed not to the wind, lightning, or rainfall but to the storm surge. So, what is it?

Storm surge is an increase in sea level that results mostly from all that wind. Still water is level - that means its surface is perpendicular to the local gravity vector - which at small scales is just a straight line. But, when you start introducing other forces, things change. For example, the gravity from the sun and the moon causes seas to bulge and contract, leading to high and low tides. The other force that can affect the level of the sea is wind. When air passes along the surface of a waterbody, it creates a shear force. The viscosity - or stickiness - of the air transfers some of the wind’s momentum, carrying the water along with it. When that water encounters an obstacle, like a shoreline, it has nowhere to go but up. The effect is that the wind blows the water against the shore and it bulges up above the normal sea level. But, the ocean is always windy, so why don’t we see surge all the time? This effect is so pronounced for tropical cyclones because of the intensity and consistency of their winds. If you’ve ever experienced one of these storms, you know how bizarre it is. Hurricanes aren’t just gusty; the wind is faster than highway speeds and it’s constant. That’s what allows so much of the sea to move toward the coast and up the shoreline.

Characterizing storm surge might seem pretty simple at first glance. Just measure how high the sea goes for various wind speeds. Connect the dots and you’ll know, for any hurricane, the height of the surge as long as you know the speed of the wind. But, like all real-world challenges (and especially those involving weather), things aren’t quite so simple. First off, just knowing the wind speed of a hurricane is pretty challenging on its own. It varies from the center of the storm to the outside and from the top to the bottom. The magnitude of storm surge also depends on how fast the storm itself is moving and where it makes landfall. Hurricane winds move in a circular motion rather than a straight line, so every part of the shore sees a different wind direction. In fact, one side of the storm can actually pull water away from the shore, creating a reverse storm surge. Tropical cyclone winds change over time as the storm itself changes intensity, direction, and size. Storm surge height is also sensitive to the air pressure and the ocean bathymetry, the depth and shape of the terrain below water. A steep coastal shelf keeps the water deeper for longer, which leads to lower storm surge. A gradual, shallow shelf creates a higher surge. All this complexity combined means it is not that easy to predict the height of storm surge we’ll see along the coast during a hurricane or typhoon. In the U.S., the National Weather Service uses a numerical model called Sea, Lake, and Overland Surge from Hurricanes, or just SLOSH, to try and capture all these details. This model gets run to develop an approximate map of potential storm surge that can be used by emergency managers for purposes like coordinating evacuations.
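The bathymetry effect in particular can be sketched with a crude steady-state balance: the slope of the water surface adjusts until gravity balances the wind stress, so shallower water means a steeper slope and more setup at the shore. Real surge models like SLOSH solve the full two-dimensional equations; the numbers below (drag coefficient, wind, fetch, depths) are just illustrative assumptions.

```python
# A crude steady-state wind-setup estimate: the water-surface slope
# balances wind stress, d(eta)/dx ~ tau / (rho_sea * g * h). All the
# input numbers are illustrative; real surge models solve much more.

RHO_AIR, RHO_SEA, G = 1.2, 1025.0, 9.81
CD = 2.5e-3        # air-sea drag coefficient (assumed high-wind value)

def wind_setup(wind_mps, fetch_m, depth_m):
    """Approximate surge height from wind blowing over a uniform shelf."""
    tau = RHO_AIR * CD * wind_mps**2          # wind stress, Pa
    slope = tau / (RHO_SEA * G * depth_m)     # water-surface slope
    return slope * fetch_m                    # rise at the shore, m

# 50 m/s winds over a 100 km fetch: a shallow shelf piles up far more
# water than a steep one, matching the bathymetry effect described above.
print(round(wind_setup(50, 100_000, 10), 1))    # shallow, 10 m shelf
print(round(wind_setup(50, 100_000, 100), 1))   # steep, 100 m shelf
```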

It’s difficult to overstate how vulnerable our cities are to storm surge. We build so much stuff along coastlines: ports, houses, roads, railways, airports, and more. Around half of the world’s economic activity happens in coastal areas. Storm surge can raise the ocean’s level by upwards of 8 meters or 26 feet in the worst cases, like when a storm makes landfall during a normal high tide. Once that water’s there, waves crash against buildings and other things that weren’t meant to withstand the enormous force of the sea. The damage can be incredible. Storm surge also makes freshwater flooding worse, something that can be particularly bad during a hurricane because there’s so much rain. Urban drainage systems mostly work by gravity, so storm sewers are sloped downward to carry floodwaters away. If storm surge has pushed seawater inland, you have a smaller difference in elevation between where the water is and where you need it to be, slowing down the drainage of rainwater and making the flooding even worse.

Coastal flooding is a difficult challenge to address, but we do have ways of managing it. The first is detection and monitoring. Real-time sensors track sea levels and relay the data online so we can see the timing, extent, and magnitude of storm surge as it happens. This can help us make critical public safety decisions like where to evacuate. These instruments also help us evaluate coastal flooding after it occurs so we can better predict what will happen the next time. Another way we deal with storm surge is just to build stuff further up. The higher you go in elevation, the lower your vulnerability to storm surge. That can be tough to do in shallow coastal plains where the ground isn’t much higher than sea level. We use elevated foundations like pilings to try and keep water from damaging structures. We can also put more resilient parts of buildings, like parking structures, on the lower floors to minimize flood damage. Another major way we deal with coastal flooding is with barrier infrastructure: walls or dams that hold back the sea when it rises too high. These can be as simple as earthen coastal levees or as sophisticated as the Delta Works in the Netherlands, a complex series of dams, locks, and levees which protect the vulnerable Dutch coastline from storm surge. Many of these structures, like the Maeslantkering, are normally opened to reduce impacts on the environment and allow for the passage of ships. They only close when a storm threatens to raise the sea level and flood the coast.

Take a look at a generalized hazard map and you can truly get a sense of how exposed coastlines can be to this type of flooding. And, it’s not getting any better. The Intergovernmental Panel on Climate Change synthesized a huge body of research on tropical storms and specifically how their frequency and intensity might change in the future. Their conclusion was that the frequency of tropical cyclones may stay the same or even go down over time, but their intensity - that is, the wind speeds and rainfall amounts - is likely to increase due to greenhouse warming. Combine that with the expected rise in sea level and it has some pretty important implications for coastal areas. Those risks of flood damage are baked into the cost of development in hurricane-prone regions. Most importantly, it means we need to continue to innovate solutions to not only reduce the likelihood of storm surge with protective infrastructure but also to reduce the consequences of it by making cities more resilient to flooding so that when the next storm comes, they can recover more easily and quickly than ever.

March 02, 2021 /Wesley Crump

Why Do Beaches Disappear?

February 02, 2021 by Wesley Crump

We humans are fascinated with the coast. It’s not just that the sea facilitates commerce and travel. It’s not only because it’s fun to swim in the water and lie in the sun on the beach. There’s something inherently interesting about seeing the place where two things meet; where the vast expanse of ocean touches the land on which we live. Just like campfires, we are naturally drawn to the coast, even if just to watch and hear the waves crash ashore. It might not seem like it, but there’s an endless battle going on between land and sea along every coastline in the world (and just a hint: the sea is almost always winning). They may look static and unmoving on a map, but coastlines are some of the most dynamic areas in the world. Hey, I’m Grady and today, we’re talking about coastal erosion and the ways we fight against it.


The position of the coastline over time is highly variable. Tides create fluctuations in the level of the sea, moving the shore in and out, sometimes hundreds of meters, over the course of a day. But, it’s not just the level of the ocean that influences the shape and topography of the shore, that infinitesimal line between land and sea. The material that makes up the land, soil and rock, is in constant flux, largely due to the relentless power exerted by seawater over time. Although the currents sometimes deposit more sediment than was there already, usually things work the other way around. Rock and sediment are carried out to sea in a process we all know as erosion. The big difference between coastal erosion and other types is the timescale. The sea steals away land much more quickly than the forces acting on inland areas, for many reasons.


Ocean currents move beaches constantly, but the biggest component of coastal erosion is waves. If you’ve ever played in the ocean or even in a wave pool, you’ve probably been surprised at the power behind them. Just like waves wash around swimmers with no hesitation, they can also wash away the coastline. Simply put: waves are destructive because water is heavy. This isn’t exactly a precise law of physics, but it is a good rule of thumb in engineering: when you bash heavy stuff against something, it’s liable to break. When you combine this helpful hint with the fact that a good proportion of coastlines are made of not-very-erosion-resistant loose sandy beaches, you get a recipe for serious erosion.


What happens along coastlines across the world is mostly a physical process where the relentless crashing of water exerts pressure that can separate soil particles and even splinter and remove pieces of rock. A single wave can smash tons of force into a small area, easily washing away loose sediment or wearing away at rocks. Waves also carry sand and sediment from the seabed which gets bashed against the rocks, grinding, scraping, and chipping them over time. In some cases, the seawater can actually dissolve the rocks themselves, a process called chemical weathering. This destructive environment certainly creates some serious erosion, but it gets even worse. All of these processes are amplified during storm events like hurricanes and typhoons which produce some of the fastest sustained winds on earth. That high wind leads to high waves, which accelerate erosion way beyond normal levels. 
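The scale of the forces involved is easy to underestimate. One standard result from linear wave theory is that the mean energy stored in waves grows with the square of their height, which is why storm waves do so much more damage than everyday swell. The example below is just for a sense of scale.

```python
# Rough sense of the energy in ocean waves: for linear (Airy) waves the
# mean energy per unit of sea-surface area is E = (1/8) * rho * g * H^2,
# where H is the wave height. Numbers below are purely for scale.

RHO_SEA, G = 1025.0, 9.81

def wave_energy_per_m2(height_m):
    """Mean wave energy density in joules per square meter of sea surface."""
    return RHO_SEA * G * height_m**2 / 8

# A 1 m swell vs 6 m storm waves: energy grows with the square of wave
# height, so a storm concentrates years of ordinary erosion into hours.
print(round(wave_energy_per_m2(1.0)))
print(round(wave_energy_per_m2(6.0)))
```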


That would be fine if the coast wasn’t such a popular place to put stuff - and by stuff I mean houses, commercial buildings, apartments, condos, etc. - basically cities and all the expensive infrastructure that comes with them. Erosion literally steals land away from the shore, carrying it piece by piece out to sea or depositing it somewhere else along the shore. That means development nearest to the coast is constantly at risk of being claimed by the sea. In addition to that, beaches support massive local economies, providing millions of jobs and billions of dollars of economic activity. As I mentioned before, people love the beach, and they’ll spend lots of money to see and hear and swim in those waves. So, just by adding humans to the mix, what was a perfectly natural geologic process of coastal erosion is now a certified hazard in many places, threatening structures along the shore and the livelihood of huge portions of coastal populations. That’s bad and we don’t want it to happen. So, over time, we’ve developed some solutions to try to mitigate these adverse impacts.


A lot of engineers' solutions to coastal erosion involve armoring the shore with structures like seawalls, bulkheads, and revetments. These involve building some kind of hardened structure that can withstand the continued impacts from waves. Some seawalls even include a recurve to make sure waves don’t crash over the top and erode the area beyond the wall. Another protective structure, called a groin, protrudes into the sea to reduce the currents directly along the shore and retain the soil and sand. Finally, breakwaters are structures built parallel to shorelines to break up waves before they make it to the shore. Hard armoring often provides a longer-term solution to erosion, but it also creates a lot of unintended consequences. Smooth seawalls, like those made of concrete, reflect waves rather than absorbing them. This is not ideal because waves can be sent towards other parts of the coast, worsening erosion at the edges of walls or further downshore. Improperly designed groins can also worsen erosion on the downdrift side. These structures can also affect the quality of habitat in the sea, creating environmental challenges.


So, when possible, we look toward softer solutions to erosion. These might not last as long, but they have fewer unintended consequences. One of those solutions is planting mangrove forests. These are trees and shrubs that grow in tidal zones along coasts. They can’t grow everywhere, but where they can, they provide a natural stabilization of the coastline, reducing erosion from tides, waves, and storm surge. The other soft solution is simply to reverse the process of erosion by replacing the material that has been lost. This is commonly known as beach nourishment. Beaches are not only important recreation areas and economic drivers, they also serve as buffers between development and the sea. Replenishing lost sand by dredging it from the seafloor and pumping it back to the shore protects coastal structures and creates important areas for recreation. It’s not without its own environmental impacts, and it’s certainly not a permanent solution, but beach nourishment is one of the primary tools for addressing coastal erosion.


Just like with riverine flooding, sometimes the cheapest option to protect development from erosion is for it not to be there in the first place. For coastal structures, this strategy is called “retreat”: either purchase property and condemn it to serve as a buffer or relocate housing and infrastructure further from the shore. The National Oceanic and Atmospheric Administration projects that, in 50 years, the global mean sea level will be at least a foot higher than it was in the year 2000 and potentially more than 3 feet higher (that’s about a meter). Higher sea levels mean more inundation, more exposure to tides, waves, and storm surge, and ultimately more erosion. This is a real threat that is already affecting coastal areas and will only continue to worsen over time. It’s not necessarily something to panic over, but it is an ongoing challenge for property owners, government officials, politicians, and in some cases, even for engineers. We have to be thoughtful about our relationship to the sea and what solutions are appropriate to manage its constant battle with the land. In many cases, the best option is simply to let nature do what it does best, maintaining the coastline as the vibrant and dynamic place that draws humans to it in the first place.

February 02, 2021 /Wesley Crump

How Do Flood Control Structures Work?

January 05, 2021 by Wesley Crump

Every year floods make their way through populated areas, costing lives and millions of dollars in damages, devastating communities, and grinding local economies to a halt. If you’ve ever experienced one yourself, you know how powerless it feels to be up against mother nature. And if you haven’t, be careful in thinking it can’t happen to you. Nearly every major city across the world is susceptible to extreme rainfall and has areas that are vulnerable to flood risk. Luckily, we’ve developed strategies and structures over the years to reduce our vulnerability and mitigate our risk. We still can’t change how much it rains (at least in the short term), but we’ve found lots of ways to manage that water once it reaches the earth to limit the danger it poses to lives and property. Today, we’re talking about how large scale flood control structures work on rivers.


We all know generally what a flood is: too much water in one place at one time. But, I think there’s still uncertainty in how floods actually occur. Part of the reason for that confusion, I think, is the huge variety of scales we have when talking about flooding. Most river systems are dendritic. The topography of the land and the long-term geologic processes mean that streams join and concentrate the further you move downstream just like the branches of a tree. A watershed is the entire area of land where precipitation collects and drains into a common outlet; it’s a funnel. And as you move downstream, those funnels start to combine. The further you go, the larger the watershed becomes as more and more streams contribute to the drainage. So watersheds can be tiny or gigantic.


Your front yard is a watershed to the gutter on the street. If it happens to be raining hard directly on your house, the gutter will flood, maybe even overtop the road onto the sidewalk. At the complete opposite end of the spectrum, more than a million square miles (or three million square kilometers) make up the drainage area of the Mississippi River in the U.S. A big rainstorm in one city is not going to make a dent in the total flow of this river. But, if everywhere in the basin is having an unseasonably wet year, that can add up into major flooding as all that water concentrates into a single waterway. This seems simple, but it is a real conceptual challenge in understanding flooding, not to mention trying to control it. Smaller watersheds only flood during single intense storm events, called flash floods. Usually, this water is already long gone by the time the next storm comes. In contrast, large watersheds flood in response to widespread and sustained wet weather. They aren’t really affected by single storm events. Of course, in a dendritic system, there’s everything in between which means a flood can be a local event affecting a few houses and streets for a couple of hours during an intense thunderstorm or a months-long ordeal impacting huge swaths of land and multiple communities.
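The way watersheds compound downstream can be sketched as a simple tree: each stream’s total drainage area is its own local area plus everything upstream of it. The stream names and areas below are invented purely for illustration.

```python
# A toy dendritic network: drainage areas accumulate as streams join,
# just like the branches of a tree. All names and areas are made up.

network = {
    # stream: (local area in sq km, list of upstream tributaries)
    "creek_a": (30, []),
    "creek_b": (45, []),
    "fork":    (60, ["creek_a", "creek_b"]),
    "river":   (200, ["fork"]),
}

def drainage_area(stream):
    """Total area draining to a stream: local area plus all tributaries."""
    local, tribs = network[stream]
    return local + sum(drainage_area(t) for t in tribs)

print(drainage_area("creek_a"))  # small: floods only in local cloudbursts
print(drainage_area("river"))    # large: responds to basin-wide wet weather
```

The small headwater creek sees flash floods from single storms; the downstream river, with ten times the drainage area, only floods when the whole basin is wet, which is exactly the scale problem described above.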


Riverine flooding is also a challenge because it’s not linear. In a cross section through a river, you have the main channel where most normal flows occur. Its banks are relatively steep, so every unit of rise in the river doesn’t add that much extra width of inundation. Plus there’s not much development within the banks of a river: maybe some low bridges and a few docks. But, above the channel banks, things change. The slopes aren’t so steep and you end up with wide, flat areas of land. And you know what we humans like to do with wide, flat areas near waterways - we build stuff, like entire cities. That or use it as farm land. The problem is that, once a channel overbanks, every unit of rise in the river equals much wider extents of inundation. You can see now why this is called the floodplain. And looking at a cross sectional view, it’s easy to see one of the most common structural solutions to flooding: levees. If overtopping the banks of the river creates the problem, we can just make the banks of the river higher by building earthen embankments or concrete walls. Levees protect developed areas by confining rivers within artificial banks. That means areas outside the levees flood less frequently. It doesn’t mean they have zero flood risk, since it’s always possible to have an extreme event that overwhelms the levees. For earthen structures, overtopping of a levee can cause erosion and even failure (or breach) of the berm. That can make the flooding even worse than it would have been otherwise, especially if people weren’t evacuated from the area ahead of time. So, even though they are a pretty simple solution to the problem of flooding, levees aren’t perfect.
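That nonlinearity is easy to see with an idealized compound cross-section: a narrow main channel below the banks and wide, flat floodplains above them. All dimensions here are invented for illustration.

```python
# Sketch of why flooding is nonlinear: an idealized cross-section with a
# narrow main channel and wide, flat floodplains. Dimensions are invented.

BANK_DEPTH = 3.0          # m, depth of the main channel
CHANNEL_WIDTH = 20.0      # m, channel top width at bankfull
FLOODPLAIN_FLARE = 400.0  # m of extra width per m of rise above the banks

def inundation_width(stage_m):
    """Wetted top width for a given river stage (depth above the bed)."""
    if stage_m <= BANK_DEPTH:
        # roughly triangular-to-trapezoidal growth inside the channel
        return CHANNEL_WIDTH * stage_m / BANK_DEPTH
    return CHANNEL_WIDTH + FLOODPLAIN_FLARE * (stage_m - BANK_DEPTH)

# In-channel, a meter of rise adds only a few meters of width...
print(inundation_width(2.0) - inundation_width(1.0))
# ...but once the river overbanks, the same rise swallows whole blocks.
print(inundation_width(5.0) - inundation_width(4.0))
```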


Sometimes getting that water out of the channel is exactly what you want though. Another tried and true flood control technique is diversion canals. These are human-made channels used to divert flood waters to undeveloped areas where it won’t be as damaging. Often it’s not possible to widen an existing river because there’s already too much development or for environmental reasons. So instead, we create a separate channel to divert floodwater around developed areas and back into the natural waterway downstream. In most cases there will be some kind of structure at the head of the diversion channel to help control which route the water takes. For normal conditions, water will flow through the natural river, but when a flood comes, most of that water will be diverted, reducing the flood risk to the developed areas.


But, it would be nice if all that water didn’t make it into the river in the first place. That’s only possible with the other major type of flood control infrastructure: dams. These are structures meant to impound or store large volumes of water, creating reservoirs. Dams meant for flood control are kept partially or completely empty so that, when a major flood event occurs, all that water can be stored and released slowly over time. The theory here isn’t too complicated. We can’t change the volume of water that comes from a flood, but with enough storage, we can change the time period over which it gets released into the river. Big sloshes of water into this bucket come out slowly over time. As long as the sloshes are far enough apart and the bucket is big enough, you almost never see significant flooding out on the other side. But, not all dams are built specifically for flood control. Many reservoirs are intended to stay as full as possible so the water can be used for hydropower, supplying cities, or irrigation of crops. If a water supply reservoir happens to be empty at the time of a big flood, it will work just like a flood control reservoir, storing the water for later use. But, if the reservoir is already full, the operators have to open the floodgates to let the water through. This can be frustrating for the residents downstream who may have thought they had protection from the dam.
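The bucket analogy above can be sketched as a minimal level-pool routing loop: inflow fills storage, and the outlet releases water no faster than some safe downstream rate. The inflow pulse and release rate are made-up numbers.

```python
# Minimal level-pool routing sketch: a flood-control reservoir stores the
# inflow "slosh" and releases it at a safe constant rate. Numbers invented.

def route(inflows, max_release, storage=0.0):
    """Return outflow volumes given inflow volumes per time step."""
    outflows = []
    for q_in in inflows:
        storage += q_in
        q_out = min(storage, max_release)   # release up to the safe rate
        storage -= q_out
        outflows.append(q_out)
    return outflows

# A sharp flood pulse in, a long gentle release out - same total volume,
# but the peak is knocked down to whatever the channel can handle.
flood = [0, 50, 100, 50, 0, 0, 0, 0, 0, 0]
print(route(flood, max_release=25))
```

Real reservoir operations add gated outlets, stage-storage curves, and forecast-based rules, but the core idea is exactly this: trade peak flow for duration.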


In many cases, a dam can serve multiple purposes at the same time. Different zones, called pools, are established for the different uses. One pool is kept full to be used for hydropower or water supply, and one is kept empty to be used for storage in the event of a flood. Finding the right balance point between how much storage to keep full versus empty is a complicated challenge that considers climate, weather, and the maximum amount of flow that can be released without damaging property downstream. Some dams vary the size of these pools over the course of a year depending on the seasonality of flooding, and some even use risk indicators like the depth of the snowpack within the watershed to dynamically adjust the volume available to store a potential flood.


I’ve been using the term “flood control”, but the truth is that term is falling out of favor. Now if you ask an engineer or hydrologist, they’re more likely to talk about “flood risk management.” Our ability to quote-unquote “control” mother nature is tenuous at best, and the more we try, the more we realize this: even if expensive infrastructure is helpful in a lot of circumstances, at best it is an incomplete strategy to reduce the impacts of flooding over the long term. For one, flood control structures (especially levees) can protect some areas while exacerbating flooding in other places. For two, overbanking flows are actually beneficial in a lot of ways. Just like wildfires, flooding is a natural phenomenon that has positive effects on the floodplain like improving habitat, ecology, soils, and groundwater recharge. And for three, we are understanding more and more the true value of resiliency - that is, instead of reducing the probability of flooding, reducing its consequences. This is normally accomplished with strategic development like reserving (or converting) the floodplain for natural wetlands, parks, trails, and other purposes that aren’t as easily damaged by flooding. In fact, flood buyouts, where high-risk property is purchased and converted to green space, are often the most cost-effective way to reduce flood damages in the long term (even if not the most politically popular strategy).

It’s not likely we’ll ever have the ability to reduce the volume of rainfall during major storms, and in fact, many locations are already experiencing more extreme rainfall events than they ever have due to climate change. But, we will continue to develop strategies, both structural and non-, to reduce the risk to lives and property posed by flooding.


January 05, 2021 /Wesley Crump

Why Do Engineers Invent Floods?

December 01, 2020 by Wesley Crump

Although it’s an entirely normal and natural process on earth, flooding represents a huge problem for people. Every year we collectively throw billions of dollars essentially into the trash because of flood damage to property, buildings, vehicles, and equipment. But, it’s not just private property that is affected. Nearly every part of the constructed environment is vulnerable in some way to heavy rainfall. Culverts, bridges, sewers, canals, dams, and drainage infrastructure: they all have to be designed to withstand at least some amount of flooding. But how do we decide how much is enough, and how do we estimate the magnitude of any particular storm event? Hey I’m Grady and this is Practical Engineering. Today, we’re talking about synthetic floods for designing infrastructure.


A big portion of the constructed environment has at least something to do with drainage. If it’s exposed to the outdoors, and almost all infrastructure is, it’s going to get wet or deal with some water. Designers and engineers have to be thoughtful about how and where that water will go during a storm. This might seem self-evident, but someone had to decide how long to make this storm drain inlet, how high above the river to build this bridge, how wide to make this spillway, and how big to build this culvert. And these types of decisions aren’t arbitrary, because infrastructure is expensive, and it’s always built on a budget. You can’t waste dollars installing pipes that are too big, bridges that are too high, or spillways too wide because then that money can’t be used to fund other projects or improvements. But how much is too much? After all, if you can imagine a flood that meets the capacity of a given structure, you can probably imagine a bigger one that exceeds it. On one hand you have the structure’s cost and on the other, you have its capacity, in other words, its ability to withstand flooding. Finding a balance point between the two is a really important job, and it usually has to do with statistics.


Weather is sporadic; it’s noisy data. Some days it rains, some days it doesn’t. Some years it rains nearly every day, some years hardly at all. But, behind all that noise, there is a hidden beauty to weather data, which is the relationship between a storm’s magnitude and its probability. Small storms happen all the time, multiple times a year. Big storms happen rarely, only every few years. Massive floods occur only once every tens or hundreds of years. Their probability of occurring in a given year is low. This is all relative of course (especially depending on location), but I hope you’re seeing why this matters. Because, if you know the probability that a particular storm will occur, you also know the average number of times it will happen over a given period of time.
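That relationship between annual probability and long-run occurrence falls out of basic probability, and it’s worth seeing the numbers. Assuming each year is independent:

```python
# If a storm has annual exceedance probability p, the chance of seeing it
# at least once over n years is 1 - (1 - p)**n (assuming independent
# years), and on average you expect n * p occurrences in that window.

def chance_at_least_once(p_annual, years):
    """Probability of at least one exceedance over the given period."""
    return 1 - (1 - p_annual) ** years

# Even a "rare" storm with a 1% annual chance is a roughly one-in-four
# bet over a 30-year window like a mortgage or a design life.
print(round(chance_at_least_once(0.01, 30), 2))   # → 0.26
```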


And why does that matter? Let’s use a simple case as an example. Say you have a roadway crossing a stream and you want to install a culvert. (By the way, if you want to learn a lot more about culverts, check out my blog post on that topic after this!) Say you choose a tiny pipe for your culvert to save some money. That’s fiscal responsibility, right? But every time even a small amount of rain comes along, the culvert’s capacity will be exceeded, and the roadway will overtop and wash out. Your cheap pipe actually ends up being pretty expensive when you have to replace it every year. On the other hand, you can go for broke on a massive pipe that never gets full, even during huge rainstorms. You’ll never have to replace it, but you wasted money by building a much bigger structure than was necessary. That might not seem like a big deal for a single culvert, but if it’s your policy to do it every time you have to cross a stream, you’ll run out of money in a hurry. We can’t just overbuild all our infrastructure to avoid any exposure to flood risk. Usually the most cost-effective solution is somewhere in the middle, where you’re willing to accept some risk of being overwhelmed - maybe on average once every 10 years or once every 50 years - to save the cost of overbuilding every single piece of drainage infrastructure.
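You can make that tradeoff concrete with a toy expected-cost calculation: amortize the capital cost over the structure’s life, then add the annual failure probability times the repair cost. Every dollar figure here is invented purely for illustration:

```python
# Toy cost comparison for culvert sizing: capital cost amortized over a
# 50-year life, plus expected annual washout cost (probability x repair).

def expected_annual_cost(capital, life_years, annual_fail_prob, repair_cost):
    return capital / life_years + annual_fail_prob * repair_cost

options = {
    # name: (capital cost $, design return period in years, repair cost $)
    "tiny pipe":    (20_000,   1, 50_000),   # overtops almost every year
    "10-yr design": (60_000,  10, 50_000),
    "huge pipe":   (400_000, 500, 50_000),   # essentially never overtops
}

for name, (capital, T, repair) in options.items():
    cost = expected_annual_cost(capital, 50, 1.0 / T, repair)
    print(f"{name:>12}: ~${cost:,.0f}/year")
```

With these made-up numbers, the tiny pipe costs around $50,000 a year in washouts, the oversized pipe around $8,000 a year in capital, and the middle option wins at roughly $6,000 a year - exactly the balance point the paragraph above describes.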


This works the same way as the floodplain - the area along rivers and coasts most likely to be impacted by flooding. In the U.S. at least, we arbitrarily decided to use 1% as the dividing line between at-risk for flooding and not. If the land has a 1% probability or greater of being inundated by a flood in a given year, it’s inside the “floodplain,” and the storm that would completely flood this floodplain is colloquially called the 100-year flood. That’s a confusing name, and I made a blog post on that topic quite a while back, so I won’t rehash it here. This binary approach of drawing a line in the sand is also a little misleading, because it implies that one side of the line is safe and the other isn’t, when the reality is that there’s a continuum of flood risk. Those considerations aside, the concept of the floodplain is still really valuable. Knowing our vulnerability to flooding helps us make good decisions about how to manage or mitigate it. But actually figuring out that vulnerability is pretty challenging.
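Part of why the “100-year flood” name misleads people is that the odds compound over time. The chance of seeing at least one during a typical 30-year mortgage is much higher than 1%:

```python
# Probability of at least one T-year flood over a multi-year period:
# the complement of it not happening in any of those years.

def prob_at_least_one(annual_prob: float, years: int) -> float:
    return 1.0 - (1.0 - annual_prob) ** years

p30 = prob_at_least_one(0.01, 30)
print(f"Chance of at least one '100-year' flood in 30 years: {p30:.0%}")
```

That works out to about a 26% chance - roughly one in four - over the life of the mortgage.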


The truth is that the only way we have to estimate how vulnerable different areas are to flooding is to look at how they’ve flooded in the past. In the U.S., we do have a network of stream gages dutifully recording the level of creeks and rivers, and some of them have been doing so for over a hundred years now. These instruments record the magnitude of floods through history so we can try to understand the relationship between the size of a flood and how often it recurs. But, these stream gages are relatively expensive, time-consuming to maintain, and their data is only applicable to the watershed in which they are installed, which means not every location where you might want to build something has a historical flood record to review. However, there is a type of instrument that does exist practically everywhere with long-duration historical records: a rain gauge.


Rain gauges are simple and cheap, and luckily, in the U.S., our government has seen fit to collect huge volumes of rainfall data, synthesize it, and provide the information back to us citizens for our practical application or just our curiosity. The latest version of this is called Atlas 14, and you can use the online web map to get statistical relationships between rainfall volume, duration, and probability for nearly everywhere in the U.S. But, estimating the magnitude of a flood doesn’t stop with knowing how much rain is falling from the sky. It may not surprise you to know that the 100-year storm doesn’t really exist. It’s a synthetic storm event invented by engineers and hydrologists. We fabricate it by taking that statistical amount of rain for a given watershed and use models to estimate how much flooding will result and where that flooding will occur within the landscape. These simulations allow us to understand flood risk so we know where not to build our buildings, how big to make our culverts, how tall to make our bridges, and how wide to make our drainage channels.
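For small watersheds, one of the oldest and simplest of those rainfall-to-flood models is the rational method: peak flow equals a runoff coefficient times rainfall intensity times drainage area. This is just a sketch of that classic formula; the coefficient and intensity below are assumed example values, not pulled from Atlas 14:

```python
# Rational method for peak runoff from small watersheds:
# Q = C * i * A. In US customary units, Q comes out in cubic feet per
# second when i is in inches/hour and A is in acres.

def rational_method_peak_cfs(C: float, intensity_in_per_hr: float,
                             area_acres: float) -> float:
    """Peak discharge in cubic feet per second."""
    return C * intensity_in_per_hr * area_acres

# A 40-acre site, C = 0.8 (mostly paved), design intensity of 4 in/hr:
q = rational_method_peak_cfs(0.8, 4.0, 40.0)
print(f"Design peak flow: ~{q:.0f} cfs")
```

That design flow is then what you’d size a culvert or storm sewer segment to carry.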


But, flooding doesn’t just cost money. It also affects public safety. In fact, some of the worst floods in history, like the Johnstown Flood in Pennsylvania, actually occurred because a storm overwhelmed a dam, causing it to fail and release a sudden wave of water downstream. In that case, over 2000 people lost their lives. With critical infrastructure like this, the calculus changes because it’s not just dollars on the other side of the balance, it’s also human lives. We are much less willing to accept the risk of overwhelming a dam if there are people who could be affected downstream. So how do we know how big spillways should be? Turns out there’s another type of synthetic flood in the toolbox: the probable maximum precipitation. This is the most extreme rainstorm that could ever occur given our knowledge of meteorology and atmospheric science. If all the factors perfectly aligned to carry and drop the maximum amount of rainfall in the shortest period of time, could our infrastructure withstand it? In the case of dams, the answer is usually yes. That’s because they’re required to. We’ve spent lots of time, money, and effort researching storms to estimate this probable maximum precipitation across the U.S. for this exact reason: so we can build spillways big enough to safely discharge it without being overwhelmed.


The field of engineering hydrology is huge. Many engineers focus their entire careers on this one topic that we’ve just dipped our toes into. Flooding is one of the biggest challenges of building and developing the modern world. The ways we deal with it are constantly evolving, hopefully in a direction that puts a greater emphasis on natural watershed processes and ecosystem services. But no matter how we deal with it, the first step will always be to understand our vulnerability to it. I hope I gave you a little peek into the world of water resources engineering and how we make good decisions about infrastructure’s ability to handle flooding.

December 01, 2020 /Wesley Crump

How Do Cities Manage Stormwater?

November 03, 2020 by Wesley Crump

Cities, those dense congregations of people and buildings, have made possible economies and lifestyles our early ancestors could never have imagined. Whether you thrive in or despise the concrete jungle, there’s no denying its benefits. Putting all the people, houses, jobs, stores, offices, and diversions in one place gives us humans opportunities that wouldn’t be possible if we all lived agrarian lifestyles spread out across the countryside. But, there are some negative consequences that come from cramming so much into such a small area. At no time is this more clear than when it rains. Managing the flow of runoff through a city is an immensely complex challenge that affects us in so many ways from public safety to property rights, from the environment to the health and welfare of citizens. Hey, I’m Grady, and this is Practical Engineering. Today, we’re talking about urban stormwater management.


The water cycle is one of the most basic science lessons we learn. So basic, in fact, that it’s easy to forget how relevant and important it is to our lives. Take a look out your window when it’s raining, even when it’s raining hard, and it doesn’t seem that significant. Some of the rain soaks into the ground, some gets taken up by plants, some gets caught in puddles, and some runs off downhill, usually into the street. One of the biggest challenges in a city is the proportions of all these different paths the water can take. All those streets, sidewalks, buildings, and parking lots cover the ground with impervious surfaces, which means that instead of water infiltrating, it runs off toward creeks and rivers, swelling them faster and higher and filling them with more pollution. One of the biggest impacts on the environment of building anything is its effect on how water moves above and below the ground during storms. Multiply that to the scale of a city and you can see how remarkably we modify our landscape. Instead of acting like a sponge to absorb rainwater as it falls, urban watersheds act like funnels, gathering and concentrating rainwater runoff. I want to walk you through some of the infrastructure cities use to manage this massive challenge and a few new ideas in stormwater management that are slowly taking hold in urban areas.
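To put rough numbers on that sponge-versus-funnel difference, here’s a sketch using simple runoff coefficients (the fraction of rainfall that runs off instead of soaking in). The coefficients and watershed size are assumed, textbook-style values:

```python
# Rough illustration of how impervious cover changes runoff volume.

ACRE_INCH_CUFT = 3630  # one acre-inch of water is about 3,630 cubic feet

def runoff_cubic_feet(coefficient: float, rain_in: float,
                      area_acres: float) -> float:
    """Runoff volume from a storm over a watershed."""
    return coefficient * rain_in * area_acres * ACRE_INCH_CUFT

# A 2-inch storm on a hypothetical 100-acre watershed:
for cover, c in (("forest/meadow", 0.15), ("suburban", 0.40),
                 ("dense urban", 0.85)):
    print(f"{cover:>14}: ~{runoff_cubic_feet(c, 2.0, 100.0):,.0f} cubic feet")
```

With these assumed numbers, paving over a forested watershed multiplies the runoff volume from the same storm by a factor of five or so - and all of it arrives at the creek faster, too.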


Like most of the biggest challenges of building and maintaining a civilization, the negative impacts from adding impervious cover don’t befall the property owner doing the adding, but rather the people downstream. Just as pollution dumped into a river gets carried away to the next guy, it’s easy to turn bad drainage decisions into someone else’s problem. That’s why most large cities have rules about how to manage runoff and flooding when new buildings or neighborhoods get built. Drainage reviews are just a normal part of the process of obtaining a building permit these days. If you live in a major city, just do a search for your local drainage manual to see the kinds of things that are required. Increased runoff has been a problem since people started living in cities in the first place, and the first way we handled it was simply to get the water out and away as quickly as possible. That’s because runoff creates flooding, and flooding causes billions of dollars of property damage and claims many lives each year. This solution is in the name we still use for how cities manage storms: “drainage.” When it rains or when it pours, we try to give that runoff somewhere to go.


Most cities are organized so the streets serve as the first path of flow for rainfall. Individual lots are graded with a slope toward the street so that water flows away from buildings where it would otherwise cause problems. The standard city street has a crown in the center with gutters on either side for water to flow. This keeps the road mainly dry and safe for vehicle travel while providing a channel to convey runoff. But the streets aren’t the end of the line. Eventually, the road will reach a natural low point and start back uphill or will have collected so much runoff that it can’t hold it all in the gutter.


At this point, the water needs a dedicated system to carry it away. In the past, it was common to simply put all the runoff from the streets directly into the sewage system. It’s a well-developed network of pipes flowing by gravity out of the City… why not use it for stormwater too? Well, actually there’s a really good reason not to do that. At the end of each sanitary sewer system is a wastewater treatment plant that was almost certainly not designed to process a massive influx of combined sewage and stormwater runoff at the whims of mother nature. In the worst cases, these plants have to release untreated wastewater directly into waterways when it is too much to be stored or processed. That’s why most cities now use municipal separate storm sewer systems, usually abbreviated as MS4s. These are networks of ditches, curbs, gutters, sewer pipes, and outfalls solely dedicated to moving runoff from everywhere in the city to the natural waterways that eventually carry it away. These inlets aren’t just places for clowns to hang out, they usually represent a direct path between the street and the nearest creek or river. Just to be clear, there’s not usually any type of treatment happening along the way. These sewers are not for waste. Whatever you put into the storm sewer system goes directly into a waterway, so please don’t dump stuff in there.


It’s easy to see why cities try so hard to get stormwater out as fast as possible if you look at the floodplain. This is just the area most likely to be inundated during a major flood. Land is one of the most valuable things within a city, but its value goes way down if it is exposed to flood risk. No one wants to build something on land that could be flooded. That being said, humans are notoriously bad at assessing risk, and no matter where you look, you’re likely to find development near creeks and rivers. Getting the water out quickly reduces the depth of flooding and thus shrinks the floodplain. That’s a big reason why you see natural waterways in cities enlarged, straightened, and lined with concrete. For the same amount of flow, a channel with lots of vegetation moves water more slowly and thus at a greater depth, while a channel with smooth sides gets the water moving faster, reducing the depth of flooding. But, channelization isn’t all it’s cracked up to be. It’s ugly, for one. No one wants a big, dirty concrete channel as a part of their surroundings. But, channelization also worsens flooding downstream for the next guy and degrades the habitat of the original waterway. It didn’t take long for cities to realize you can’t just keep widening and lining channels to keep up with the increased runoff from more and more development.
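The difference a smooth lining makes can be quantified with Manning’s equation, the standard formula for open-channel flow velocity. Here’s a minimal sketch comparing two linings; the channel geometry, slope, and roughness values are assumed examples:

```python
# Manning's equation (US customary form): V = (1.49/n) * R^(2/3) * S^(1/2),
# where n is the roughness coefficient, R the hydraulic radius (ft),
# and S the channel slope. Lower n means faster flow at the same depth.

def manning_velocity_fps(n: float, hydraulic_radius_ft: float,
                         slope: float) -> float:
    return (1.49 / n) * hydraulic_radius_ft ** (2 / 3) * slope ** 0.5

# Same channel shape and slope, different linings:
for lining, n in (("heavy vegetation", 0.10), ("smooth concrete", 0.013)):
    v = manning_velocity_fps(n, 2.0, 0.002)
    print(f"{lining:>16}: ~{v:.1f} ft/s")
```

With these values the concrete channel moves water roughly eight times faster than the vegetated one, which is exactly why lining a channel shrinks the flood depth for a given flow.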


That’s why most cities now require developers to take responsibility for their own increase in runoff. By and large, that means on-site storage for stormwater. Retention and detention ponds act like mini-sponges, absorbing all the rain that rushes off the buildings, streets, and parking lots and releasing it slowly back into waterways. This shaves off the peak of the runoff with the goal of reducing it back down to or less than it was before all those buildings and parking lots got built. They also help reduce pollution by slowing down the water so suspended particles can settle out.
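The peak-shaving behavior above can be sketched with a simple level-pool routing simulation: water flows in, the pond fills, and an orifice lets it back out slowly. Everything here - pond size, orifice, and the triangular inflow hydrograph - is invented for illustration:

```python
# Minimal level-pool routing sketch: a detention pond shaving the peak
# off a runoff hydrograph. All geometry and flows are assumed examples.
import math

POND_AREA = 50_000.0   # pond surface area, sq ft (assumed vertical sides)
ORIFICE_AREA = 1.0     # outlet orifice area, sq ft
CD, G = 0.6, 32.2      # orifice discharge coefficient, gravity (ft/s^2)
DT = 10.0              # simulation time step, seconds

def inflow_cfs(t: float) -> float:
    """Triangular inflow hydrograph peaking at 100 cfs at t = 1 hour."""
    peak, t_peak, t_end = 100.0, 3600.0, 7200.0
    if t <= t_peak:
        return peak * t / t_peak
    if t <= t_end:
        return peak * (t_end - t) / (t_end - t_peak)
    return 0.0

depth_ft, peak_outflow = 0.0, 0.0
for step in range(int(6 * 3600 / DT)):  # simulate six hours
    q_in = inflow_cfs(step * DT)
    q_out = CD * ORIFICE_AREA * math.sqrt(2 * G * depth_ft)  # orifice eq.
    depth_ft = max(0.0, depth_ft + (q_in - q_out) * DT / POND_AREA)
    peak_outflow = max(peak_outflow, q_out)

print(f"Peak inflow: 100 cfs; peak outflow: ~{peak_outflow:.0f} cfs")
```

The pond releases only a small fraction of the incoming peak, spread out over many hours - the same volume of water, but slow enough for the downstream channel to handle.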


Onsite storage is a pretty effective solution, and one you’ll see everywhere if you’re paying attention. But it still treats stormwater as a waste product, something to be gotten rid of. The reality is that rain is a resource, and natural watersheds do a lot more than just getting rid of it. They serve as habitat for wildlife, they naturally clean runoff with vegetation, they divert rain into the ground to recharge aquifers, and they reduce flooding by slowing down the water at the source rather than letting it quickly wash away and concentrate. That’s why many cities are moving toward ways to replicate and recreate natural watershed functions within developed areas. In the U.S., this is called low-impact development and it includes strategies like rain gardens, vegetated rooftops, rain barrels, and other ways to bring more harmony between the built environment and its original hydrologic and ecological functions. It can also include better management of the floodplain by using it for purposes less vulnerable to flooding like parks and trails. One low-impact strategy is permeable pavement, and I have a post just on that topic if you want to check it out after this one.


One thing I have to mention when talking about flooding is vehicle crossings. Any location where a waterway and a road cross paths, whether it’s a bridge, a culvert, or a low water crossing, there’s always a chance of flooding getting so bad that it overtops the road. If you ever see water over the top of a roadway, just turn around. Half of all flood-related deaths happen when someone tries to drive a car or truck through water over a road. If you can’t see the road you have no idea how deep the water is, and even if you can, it only takes a small amount of swift water to push a vehicle down into a river or creek. Water is heavy. Even when it’s flowing slowly, floodwaters can impart a massive force on a vehicle. Even if it didn’t, most cars will float once the water reaches the floorboard anyway. Some cities have warning systems to help block roads when they’re overtopped by floods, but it’s not something you should count on. It just isn’t worth the risk. Find another way. As they say: Turn around, don’t drown.
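Some back-of-the-envelope numbers show just how heavy that water is. The car dimensions, flow speed, and drag coefficient below are rough assumptions, treating the car body as a simple box:

```python
# Rough forces on a car in one foot of moving floodwater. All vehicle
# dimensions and coefficients are assumed illustration values.

RHO_LB = 62.4     # weight density of water, lb per cubic foot
RHO_SLUG = 1.94   # mass density of water, slugs per cubic foot
CAR_LENGTH_FT, CAR_WIDTH_FT = 15.0, 6.0
WATER_DEPTH_FT = 1.0   # water roughly up to the floorboard
FLOW_SPEED_FPS = 6.0   # a modest current, about 4 mph
DRAG_COEFF = 1.1       # bluff body broadside to the flow (assumed)

# Buoyant lift if the body displaces water one foot deep:
buoyancy_lb = RHO_LB * CAR_LENGTH_FT * CAR_WIDTH_FT * WATER_DEPTH_FT

# Lateral drag on the submerged side of the car: F = 0.5 * rho * Cd * A * v^2
side_area = CAR_LENGTH_FT * WATER_DEPTH_FT
drag_lb = 0.5 * RHO_SLUG * DRAG_COEFF * side_area * FLOW_SPEED_FPS ** 2

print(f"Buoyant lift: ~{buoyancy_lb:,.0f} lb, lateral push: ~{drag_lb:,.0f} lb")
```

With these assumptions, the buoyant lift alone exceeds the weight of many passenger cars, and even a modest current adds hundreds of pounds of sideways push - hence: turn around, don’t drown.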


Just like cities represent a colossal alteration of the landscape and thus the natural water cycle, we’re also going through a colossal shift in how we think about rainfall and stormwater and how we value the processes of natural watersheds. Look carefully as you travel through your city and you’ll notice all the different pieces and parts of infrastructure that help manage water during storm events. You’ll see plenty of ways to get water out and away from buildings and streets, but hopefully you’ll also notice elements of Low Impact Design - ways of harnessing and benefitting from stormwater on-site, treating it like the resource it truly is.


November 03, 2020 /Wesley Crump

How Does Permeable Pavement Work?

October 06, 2020 by Wesley Crump

As much as I love infrastructure and the urban environment, it definitely has its downsides. Cities represent a remarkable transformation of the landscape from natural to human made. We change almost everything: cut down trees, level the ground, and slice and dice the land into individual plots. But one of the most significant changes to the landscape that comes with urbanization is impervious cover. I’m talking about anything that prevents rain from soaking into the subsurface: buildings, sidewalks, driveways, and the biggest culprits - streets and parking lots. Impervious cover is a big issue. When it rains, that water has to go somewhere. If it can’t soak into the ground, it washes off into creeks and rivers. That means increasing the magnitude of floods and the amount of pollution in waterways. It also means less water goes to recharge groundwater resources. When you pave paradise to put up a parking lot, you cause a pretty significant disruption to some really important natural processes in a watershed. But, not all cover has to be impervious. Today, we’re talking about permeable pavement.

Management of stormwater in urban areas is a vast field of study. Pretty much since humanity started building stuff, we also started building ways to keep that stuff dry. Traditional engineering had a single goal in mind - get stormwater off of the streets and property and into a creek, ditch, or river as quickly as possible. It’s not hard to see the problem with this strategy. Every new road and building means a higher volume of runoff in the waterways during a storm event. As cities grew, flooding problems became more severe and more frequent, streams were eroded, and receiving waterways were polluted. So, over time, municipalities adopted rules to try and curb these problems, focusing primarily on flooding. Now, in nearly every large city (at least in the U.S.), land developers are required in some way or another to make sure their projects won’t worsen downstream flooding. The traditional solution to this is control of flood peaks through onsite detention: essentially having a small pond to store runoff during a storm, allowing gradual release to mitigate flooding.

Detention and retention ponds have a lot of complexity and deserve their own separate post. They definitely help reduce flooding, but they don’t really replace the other functions of the natural landscape: the filtration and reduction of runoff volume that comes from water infiltrating into the ground. Also, these basins are usually pretty ugly and kind of gross, since they concentrate polluted runoff in one mucky area, and beauty is already in short supply in many urban areas. For all these reasons, cities are encouraging (and sometimes requiring) developers to take even greater responsibility for impacts on the natural landscape through a process called Low Impact Design, or just LID. LID practices are ways to integrate stormwater management as a part of land development and mimic natural hydrologic processes. There is a considerable variety of LID strategies that help manage urban stormwater, reduce erosion, minimize pollution, and help with flooding. These are things like rain gardens, green roofs, and vegetated filter strips. If you live in a big city, there’s a good chance your municipality has a manual describing the strategies that work best for your area. One of my favorites of these addresses the problem at its root: just make the cover less impervious.

Pavement serves a vital role in a city. A quick glance at the condition of dirt roads after a good rain is all you need to understand this. Pavement equals accessibility. In most places, the soil making up the ground isn’t a stable, durable surface for people to walk, roll, scoot, or drive. Particularly when the earth gets wet, it loses strength and turns to muck. You can see why we normally prefer pavement to be impermeable to water. Pavements protect against erosion and weakening of the soil. A poorly designed pervious pavement works about as well as if it wasn’t paved at all, since it doesn’t provide any protection against water. If you read my previous post on potholes, you know the cruel fate of pavement that inadvertently lets water through. So, how is it possible to achieve the good parts without the bad, to allow water to infiltrate into the subsurface through a pavement without softening and weakening its foundation?

Luckily we have a pretty good example to help understand how this works. Some might even call it the OG permeable pavement. I’m talking about steel grating. You’ve almost certainly seen grating used on roads, sidewalks, or other surfaces to allow water in while keeping most everything else out. We can do precisely the same thing with traditional pavement as well. Concrete is a mix of cement, rocks, sand, and water. If you leave out the sand, you get something really cool: a material that behaves almost exactly like regular concrete, but that is full of voids and holes that can let water pass through.  

This is a really cool effect that is almost an optical illusion. Our brains are so used to seeing water run off a paved surface that they almost can’t make sense of it flowing straight through. This has led to quite a few viral clips of water disappearing into parking lots or roadways. And this isn’t just possible with concrete. Asphalt can be made similarly porous, along with different kinds of pavers. The permeability of the pavement isn’t the end of the story, though. Going back to our permeable pavement proxy: steel grates don’t just sit directly on the ground. Look through one, and you’ll see that the water passing through has to have somewhere to go. Soil usually can’t absorb 100 percent of the water when it rains. If it could, we’d never have any runoff and hardly ever any floods. That means, even if we can get rain to percolate through pavement, it needs somewhere to go after that.

The pavement itself gets all the glory, but the real workhorse of a permeable pavement system is the reservoir below. This is generally made from a layer of stones of uniform size to create voids that temporarily store water coming through the surface pavement. The design of the stone reservoir is just as crucial as the pavement above because it depends on how much water must be stored and how quickly that water can infiltrate into the ground. Both of these require careful engineering. For certain types of impermeable soils, like clay, it may not be feasible to try and get all that water to infiltrate, so some permeable pavements work like detention ponds, where the water is stored temporarily and released gradually over time through drains. Whether it soaks into the ground or is discharged into a waterway little by little, the permeable pavement has made a considerable improvement over the alternative of having rainwater wash right off the surface.
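As a rough sketch of that sizing logic: the reservoir depth follows from the rain depth to capture and the stone’s void ratio, and the drawdown time from the soil’s infiltration rate. All the values below are assumed examples, not design guidance:

```python
# Sketch of sizing the stone reservoir under a permeable pavement. The
# stone layer only stores water in its voids, so the required depth is
# the captured rain depth divided by the porosity.

DESIGN_RAIN_IN = 2.0     # inches of runoff to capture (assumed)
STONE_POROSITY = 0.40    # void ratio of uniformly graded stone (assumed)
SOIL_INFILTRATION_IN_PER_HR = 0.10   # slow clayey soil (assumed)

required_depth_in = DESIGN_RAIN_IN / STONE_POROSITY
drain_time_hr = DESIGN_RAIN_IN / SOIL_INFILTRATION_IN_PER_HR

print(f"Stone reservoir depth: ~{required_depth_in:.0f} in")
print(f"Drawdown time into soil: ~{drain_time_hr:.0f} hours")
```

If the drawdown into the soil takes too long (say, more than a day or two), that’s a signal the system needs underdrains to release water gradually instead, just like the detention-pond-style designs described above.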

This is a really helpful strategy to address stormwater in urban areas, but it’s not without challenges. Most importantly, permeable pavement isn’t that strong. If you make concrete or asphalt with a bunch of holes and voids, it makes sense that it can’t hold up the same loads as traditional mixes. That’s why we really don’t use these systems in areas with heavy traffic. Permeable pavements are mainly relegated to parking lots and road shoulders. We also need to keep them away from buildings, where you don’t really want a lot of water soaking into the foundation soils. And we can’t use them on slopes either, because the stored water would just flow along the slope through the reservoir and eventually back out, rather than staying in storage. The pavement itself can be clogged by dirt and leaves over time, so it has to be swept or washed regularly to remain permeable. Finally, although they help snow and ice melt faster naturally, using porous pavements in colder climates requires special consideration to avoid damage from freezing water and deicing salts. Even given its simplicity and use over the past few decades, permeable pavement is still a fairly new and innovative way to manage urban stormwater. There’s still a lot to learn about how to implement it effectively and efficiently. It’s a great example of using engineering to bring more harmony between constructed and natural environments.

October 06, 2020 /Wesley Crump

How Do Potholes Work?

September 01, 2020 by Wesley Crump

If you consider it, having paved roadways is somewhat of a luxury. Streets have always been around, but they haven’t always been safe, comfortable, or able to accommodate the enormous number and weight of vehicles that use our present system of roadways every day. Whether or not you love how much roads dominate the landscape, you have to marvel at the fact that, in most parts of the modern world, anyone can get in a bus, car, bike, truck, motorcycle, or scooter, and go almost anywhere else in relative ease and comfort. In fact, roads make travel so convenient that not having them - or having them be in poor condition - is a significant source of frustration. There are definitely times when driving does not feel that luxurious, and one of them is something we’ve all experienced once or twice. Hey, I’m Grady, and this is Practical Engineering. Today, we’re talking about potholes in paved roadways.

I remember the excitement of getting my first car as a teenager and finally being able to drive. Sad to say, that was probably the most joy that driving a vehicle will ever give me. Now, it’s kind of a chore. And I hope I’m not out of line by saying this, but I think for most people, driving is a little dull. It’s the thing we do in between where we are and where we’re trying to be. I don’t know about you, but I don’t wake up in the morning excited to jump in the car for my morning commute. Driving is something that most of us take for granted. But, the only reason we’re able to do that - to regard vehicle travel so indifferently - is because roadways are so well designed and constructed. 

There are lots of ways to build a road. From yellow bricks to rainbows to simple dirt and water, the combinations of materials and construction techniques are practically endless. And yet, across the world, there’s really one design that makes up the vast majority of our roadways. It consists of one or more layers of angular rock called a base course and then a layer of asphalt concrete (also called blacktop or tarmac). It turns out that this design strikes the perfect balance between being cost-effective while creating a smooth and durable road surface. But, asphalt roadways aren’t invincible, and they do suffer from a few common problems, one of those being potholes.

The formation of a pothole happens in steps. And the first of those steps is the deterioration of the surface pavement. Asphalt stands up to a lot of abuse. Exposure to the constant barrage of traffic in addition to harsh sunlight, rain, snow, sleet, and freezing weather will eventually wear down any material, no matter how strong. When that happens to asphalt, the first sign is cracking. They might seem innocuous, but cracks are the Achilles heel of pavement systems. Why? Because they let in water. And not just let it in, but let it come back out as well. A hole is a lack of substance or material. It’s the only thing that gets bigger the more you take away. If you started without a hole and now you have one, that material had to go somewhere. In the case of a pothole, the material is the soil below the road (called the subgrade), and where it goes has everything to do with water.

As water finds its way into cracks and below the pavement, it can get trapped above the subgrade. Eventually, these soils get waterlogged, softening and weakening, and then the traffic shows up. Cars and trucks are heavy, and they pass over the road at rapid speeds. Because of this, traffic is just a generally destructive environment. It’s a lot for any road to stand up to, let alone one that’s waterlogged and weakened. Asphalt is called a “flexible pavement” because it doesn’t distribute these loads across a large area like something more rigid would. So, every time a tire hits this soft area, it pushes some of the water back out of the pavement. That water carries particles of soil with it. 

This is a slow process at first, but every little bit of subgrade eroded from beneath the pavement means less support, and less support means more free volume below the pavement for water to be pumped in and out by traffic. This, in turn, creates more erosion in a positive feedback loop. Eventually, the pavement loses enough support that it fails, breaking off and crumbling, and you’ve got a pothole.

Of course, this whole process is made even worse in climates with freezing weather. Water expands when it freezes, and it does so with tremendous force. Thin layers of water between pavement and base freeze and grow into formations called lenses. When those lenses thaw out, all the ice that was supporting the pavement goes away, creating voids. In addition, the lower layers of soil stay frozen, trapping that meltwater between the pavement and the subgrade and accelerating the erosion. Potholes exist everywhere you have asphalt concrete roadways, but they’re worse in areas with cold climates and much worse in the spring as the ground begins to thaw.

They’re annoying, yes, but they’re not just that. Potholes cause billions of dollars of damage to tires, shocks, and wheels of vehicles. Even worse, they’re dangerous. Cars swerve to miss them, sometimes at high speeds, and if a bike, motorcycle, or scooter hits one, it can be bad news. So, roadway owners spend a lot of time and money fixing them. There are many types of pothole fixes, depending on the materials, cost, and climate conditions. But, they all mostly do the same thing: replace the soil and pavement that was lost and (hopefully) seal the area off from further intrusion of water. That second part is obviously critical but much harder to do. A pothole repair is a bandage after all, and it doesn’t always create a perfect connection to the rest of the roadway. This is why, even after they’re repaired, potholes seem to recur in the same location over and over again.

After understanding how these annoying and sometimes damaging defects occur, the next logical question is, how do we prevent them in the first place. Obviously, we could build our roadways out of more robust and more durable materials. Many highways are paved with concrete for this exact reason. But, roads are unusual in that even a tiny change in design has a significant overall impact on cost. Choosing a pavement that’s even just a centimeter thicker could mean millions of tons of additional asphalt because that centimeter gets multiplied by a vast area. So, we balance the cost of the original pavement with the expense of maintaining it over its lifetime. In the case of asphalt pavement, that maintenance primarily means sealing cracks to prevent intrusion of water. If you can do that and do it regularly, you can extend the life of asphalt pavement for many years.
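To see why a single centimeter matters, here’s the back-of-the-envelope arithmetic for a hypothetical large street network. The lane-miles, width, and density are round assumed numbers:

```python
# Quick arithmetic on why "just one more centimeter" of asphalt is a big
# deal at network scale. All quantities are hypothetical round numbers.

LANE_MILES = 10_000          # a big city's street network, in lane-miles
LANE_WIDTH_FT = 12.0
EXTRA_THICKNESS_FT = 0.0328  # about one centimeter
ASPHALT_TON_PER_CUFT = 145.0 / 2000.0  # assuming ~145 lb per cubic foot

area_sqft = LANE_MILES * 5280 * LANE_WIDTH_FT
extra_tons = area_sqft * EXTRA_THICKNESS_FT * ASPHALT_TON_PER_CUFT
print(f"Extra asphalt for +1 cm of thickness: ~{extra_tons:,.0f} tons")
```

With these assumptions, that extra centimeter comes out to on the order of a million and a half tons of additional asphalt - which is why pavement thickness gets balanced so carefully against maintenance costs.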

Since roadways are mostly public infrastructure, their condition (at least to a certain extent) reflects the importance we all place on vehicle travel. In the broadest and most general sense, we choose potholes by choosing how much tax we pay, how much of those taxes we’re willing to budget toward streets, and how large and how many vehicles we drive over them. Pavement is one of the highest value assets owned by a City, County, or DOT. It’s essential, and it’s expensive, which means there’s an entire industry surrounding how to design, build, and maintain roadways as safely and cost-effectively as possible. Politicians, government officials, engineers, and contractors drive on the same roads as everyone else, so they all have a vested interest in keeping those roads as pothole-free as possible so that we all can enjoy the luxury of driving on paved streets in safety and comfort. Thank you for reading, and let me know what you think!

September 01, 2020 /Wesley Crump

The World’s Most Recycled Material

August 04, 2020 by Wesley Crump

Of all the ubiquitous things in our environment, roads are probably one of the least noticed. They’re pretty hard to get away from, and yet, most of us don’t give much consideration to how they’re made. Turns out, there are a lot of ways to make a road. Not to get too philosophical, but there’s really no right answer to what a road even is. How much improvement of the ground is needed before it stops being just the ground and becomes a road? Depending on the capabilities of your vehicle, sometimes not much. Over the years, the demands on roadways have increased as more people and goods are on the move, and the designs have evolved alongside them. The Romans were famous for their stone-paved roads, many of which still exist a couple of thousand years later. In modern times, the design of pavement has converged significantly. The vast majority of roadways worldwide, if they’re paved at all, are paved with one material. Today, we’re talking about asphalt concrete for roadways.

When you hear the word concrete, asphalt isn’t the first thing you think of. In fact, in some ways, it’s the opposite of what we traditionally know as concrete. But we engineers can be pedantic, especially when our designs can affect public safety. When the cost of making a mistake is severe, it’s super important that communication is crystal clear. The strict definition of concrete is essentially rocks plus a binder material. For the hard grey concrete we’re all familiar with, that binder is portland cement. And in fact, we do use cement concrete as pavement for roadways. It is really hard and really durable, akin to those Roman roads I mentioned in the intro. You’ll mostly see concrete used for pavement on highways with lots of truck traffic, because it can withstand those loads much better and lasts a lot longer than other types of pavement.

But, concrete isn’t the ultimate solution for roadway surfaces. It’s harder to repair because it takes a long time to cure, extending the duration of road and lane closures. It’s not as grippy, so it has to be grooved for traction with tires. It’s not flexible, so it cracks if the ground settles or shifts. And most importantly, it’s expensive. Even when you compare lifecycle costs, which account for the fact that concrete lasts longer and requires less maintenance over time, it often still comes out less cost-effective. Luckily, other materials can bind rocks together, the most prevalent of them by far being asphalt.

Asphalt concrete just ticks so many of the boxes needed for modern roadways: It’s easy to construct. The materials are readily available. It provides excellent traction with tires without needing grooves. That means it’s relatively quiet, which can matter a lot depending on the location. It’s flexible, so it can accommodate some movement of the subgrade without failure. It’s also easy to fix and ready to drive on almost right after it’s placed. This is why so many of our roadways use asphalt concrete for pavement. But what is it? On the one hand, it’s a straightforward question to answer, because asphalt concrete really only has two ingredients: rocks (known as aggregate in the industry) and asphalt, also sometimes called bitumen. The asphalt is a thick, sticky binder material that is occasionally found naturally occurring but most often comes from the refining of crude oil.

On the other hand, the question of what asphalt pavement really is turns out to be much more complicated. The science of pavement is huge because the pavement industry is huge. The average person makes several trips to various places on a given day by car, bike, or public transportation, and all those vehicles need roads. We collectively spend tremendous amounts of money on building and maintaining roadways each year. It might not seem like it, but we ask a lot of our roads: we want them to be stable and durable, resistant to skidding, impermeable to water intrusion, and we’d like it if they were quiet to boot. Accomplishing all this in various geographic regions with different material availability, varied climates and weather patterns, and different types of traffic is next to impossible. That’s why, just like cement concrete, the mix design of asphalt can be pretty complicated.

You might think rock is rock, and asphalt is the same as any other refined residue from crude oil. But you’d be wrong, and if you just mix any old aggregate with any old bitumen, you could end up with a pavement that doesn’t work very well as a roadway surface. The only way to know for sure is either to mix the same materials in the same proportions as some previous mixture that you know was successful, or to test a bunch of small batches with different blends of materials. In the U.S., we’ve combined both of those processes into a system called Superpave, which provides guidelines for the qualities of materials and the various testing needed to mix up a successful and high-performance batch of asphalt concrete.

But, even once you get the rocks and binder right, there’s more to the mix. We include a wide variety of additives that can extend the life of pavement by improving various properties of the asphalt. Polymers, hydrocarbons, and even recycled tires get added to the mix to improve fatigue resistance, reduce sensitivity to moisture, and, most importantly, help a pavement perform better at extreme temperatures. This is because, unlike cement concrete, which goes through a chemical process to cure and harden, asphalt is the same stuff when you’re installing it as it is when you’re driving over it. The only difference is its temperature. If you graph the viscosity (or stiffness) of asphalt over a range of temperatures, you can see that the hotter it gets, the less stiff it becomes. Most asphalt used in roadways is known as “hot mix” because you have to get it hot for it to be workable enough to mix, transport, place, and compact. As it cools down, the asphalt gains the stiffness that makes it strong and durable against traffic.
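To make that temperature dependence concrete, here’s a toy model of the trend. The exponential form and every coefficient here are invented purely for illustration; real binder characterization relies on measured viscosity-temperature curves, not this formula:

```python
import math

# Illustrative only: asphalt binder stiffness drops steeply as temperature
# rises. This toy exponential (made-up coefficients, not a binder spec)
# just captures the qualitative trend.

def viscosity_pa_s(temp_c, visc_ref=200_000.0, temp_ref=25.0, k=0.11):
    """Toy model: viscosity decays exponentially with temperature."""
    return visc_ref * math.exp(-k * (temp_c - temp_ref))

for t in (0, 25, 60, 135):  # cold day, room temp, hot pavement, hot mix
    print(f"{t:>4} degC -> {viscosity_pa_s(t):,.1f} Pa.s")
```

The spread across those four temperatures spans many orders of magnitude, which is exactly why the same material can be poured and raked at 135 degrees and carry trucks at 25.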

But, when it gets too cold, asphalt can also get too stiff. Without the ability to flex under the weight of traffic, it can begin to crack apart. Those cracks reduce the life of the pavement, but they can cause worse problems by letting in water that can soften and weaken the base and subgrade materials beneath. In that same vein, on warm sunny days, the asphalt can get too soft, leading to ruts and deformation of the pavement. Ideally, the road surface would maintain a single stiffness across all expected temperatures and only become soft and workable at the temperatures used to place it. Additives and mix design help get us closer to that ideal performance.

The other way we have to improve the serviceability of pavement is to make it thicker. Asphalt is considered a flexible pavement, which means exactly what it sounds like. Instead of distributing loads over a large area as a concrete slab would, it relies on the strength of the base course below it, which is usually a layer of crushed rock that sits on top of the subgrade. Choosing the thickness of the base course and surface pavement is mostly a question of economics. You can estimate how long a pavement will last based on the strength of the subgrade soils and how much traffic you expect. Then it’s just a matter of balancing the initial cost of installation vs. the costs associated with maintenance and, ultimately, replacement. Of course, there’s a lot more that goes into it, which is why we have transportation engineers.
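That balancing of initial cost against maintenance and eventual replacement is, at its core, a present-worth comparison. Here’s a minimal sketch with invented costs and an assumed discount rate, just to show the mechanics of weighing a cheaper, shorter-lived pavement against a thicker one:

```python
# Toy lifecycle comparison (all dollar amounts invented): a thin pavement
# that is cheap up front but needs sealing and an earlier overlay, vs. a
# thicker one. Future costs are discounted to present worth at rate r.

def present_worth(cash_flows, r=0.04):
    """cash_flows: list of (year, cost) pairs."""
    return sum(cost / (1 + r) ** year for year, cost in cash_flows)

thin  = [(0, 1_000_000), (5, 80_000), (10, 80_000), (15, 600_000)]
thick = [(0, 1_400_000), (8, 80_000), (16, 80_000)]

print(f"thin : ${present_worth(thin):,.0f}")
print(f"thick: ${present_worth(thick):,.0f}")
```

With these made-up numbers the two options land close together, which is the point: the winner depends on real traffic, subgrade, and cost data, and that analysis is part of what transportation engineers do.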

It’s also why we have weight limits. Roadways have to be designed to withstand the heaviest traffic that passes through. It’s not worth all the extra cost to build our highways for the occasional gigantic truck that might come along. So, instead, we say “sorry” and cap the maximum weight at something that can accommodate most truck traffic without breaking the bank to construct. It’s just like a weight limit on a bridge, but if you break the rules, it doesn’t lead to spectacular failure, only accelerated deterioration of the roadway. But what do we do when the road does start to break down? There are lots of ways to rejuvenate asphalt pavement without full-depth replacement. One option, called a chip seal, involves spreading a thin layer of tar or asphalt onto the roadway and then rolling gravel into it. This helps seal cracks and fill in gaps for a very low cost, but it does make the road rough and loud and can leave a mess of loose rocks and tar if not applied well.

Most pavement rehabilitation takes advantage of asphalt’s most interesting property: it is nearly 100% recyclable. In fact, asphalt concrete is the world’s most recycled material. As I mentioned, asphalt doesn’t go through a chemical reaction to cure. We only use temperature as a way to transform it from a workable mix to a stable driving surface, and that process is entirely reversible and repeatable. Many of the roads you drive on every day probably came, at least in part, from other nearby streets or highways that reached the end of their life. We even have equipment that can recycle pavement in place, minimizing interruptions of traffic and the costs of hauling all that material to the job site. 

We don’t usually recognize the incredible feat that roadway engineering is. We notice the ruts, potholes, cracks, and endless orange cones. We see an ancient Roman roadway that lasted over a thousand years and think “They just don’t build things like they used to.” But we also drive heavier trucks than we used to. Our roads see tremendous volumes of traffic and withstand considerable variations in weather and climate, and they do it on a pretty tight budget. That’s really only possible because of all the scientists, engineers, contractors, and public works crews keeping up with this simple but incredible material called asphalt.

August 04, 2020 /Wesley Crump

How Are Highway Speed Limits Set?

July 07, 2020 by Wesley Crump

Laying out a new roadway seems like a simple endeavor. You have two points to connect, and you’re trying to create a simple, efficient path between them. But, there are lots of small decisions that make up a roadway design, nearly every one of which is made to keep motorists safe and comfortable. Although many of us are regular drivers, we rarely put much thought into roads. That’s on purpose. If you’re thinking about the roadway itself at all while you’re driving, it’s probably because it was poorly designed. Either that, or you, like me, are just innately curious about the constructed environment. If you put it in the context of human history and evolution, it’s a remarkable thing that we’re able to put ourselves in metal boxes that hurtle from place to place at incredible speeds. It’s not entirely safe, but it’s safe enough that most of the world chooses to do it on a regular basis. And where that level of safety and comfort starts isn’t immediately evident to the casual observer. Hey, I’m Grady, and this is Practical Engineering. Today, we’re talking about roadway geometrics and the shape of highways.

Designing a road is like designing anything complicated. There are a multitude of conflicting constraints to balance and hundreds of decisions to make. In an ideal world, every road would be a straight, flat path with no intersections, driveways, or other vehicles at all. We could race along at whatever speed we wanted. But reality dictates that engineers choose the maximum speed of a roadway based on a careful balancing act of terrain, traffic, existing obstacles, and of course, safety. If you’re going to sign your name on a roadway design, and especially if you’re going to choose a speed motorists are allowed to travel, you have to be confident that vehicles can traverse the road at that speed safely. That confidence has everything to do with the roadway’s geometry. You would never put a 60 mile per hour (100 kph) speed limit on a city street. Why? Because hardly any competent driver could navigate a turn that fast, let alone avoid a hazard, maneuver through traffic, or survive a speed bump. So how do we know what kinds of road features are manageable for a given speed?

There are three main features of roadway geometry that are decided as a part of the design: the cross-section, the alignment, and the profile, and there are fascinating details involved in each one. The first one, the cross-section, is the shape of the road if you were to cut across it. The roadway cross-section shows a lot of information: the number of lanes, their widths and slopes, and whether there’s a median, shoulders, sidewalks, or curbs. One thing you might notice looking at roadway cross-sections is that they’re almost never flat. The reason is that a flat surface doesn’t shed water quickly. Water accumulating on the road is dangerous to vehicles, making the surface slippery and prone to icing in the winter. So, nearly all roads are crowned, which means they have a cross slope away from the center. This accelerates the drainage of precipitation and keeps the surface of the road dry.

But, not all roadways are crowned. There’s another type of cross slope that helps make roads safer. In curved sections, engineers make the outside edge higher, or superelevated, above the centerline. This, too, comes down to friction. Any object going around a curve needs a centripetal force toward the center of the turn. Otherwise, it will just continue in a straight line. For a vehicle, this centripetal force comes from the friction between the tires and the road. Without this friction - on a flat surface - there would be no way to make a turn at all. For example, if I roll a ball down a flat roadway, it’s not going to go around the corner of the road because there’s no traction. Rubber tires provide this traction against a road surface, but it’s not entirely reliable. Rain, snow, and ice significantly reduce friction. Different vehicle weights and tire conditions create variability too. Rather than design every curve for the worst-case scenario, it would be nice not to have to count on tire friction for this needed centripetal force.

Superelevating a roadway around a curve reduces the need for tire friction by utilizing the normal, or perpendicular, force from the pavement instead. If I roll the ball again and get the bank angle just right, the ball goes around the corner perfectly, even without any lateral friction with the track. Banking roadways also makes them more comfortable, because the centrifugal force pushes passengers into their seats rather than out of them. If the superelevation angle is just right, and you’re traveling at precisely the design speed of the roadway, your cup of coffee won’t spill at all around the bend. Superelevation also helps reduce rollover risk, since more of the cornering force passes through the tires and suspension instead of pushing the vehicle sideways. If you pay attention on a highway, you’ll notice that the cross slope changes direction on the outside of curves as you go from a crown to a superelevation. The faster the design speed of the road, the steeper the bank around the bend.
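The physics behind that perfectly banked curve is compact enough to compute directly. With no reliance on friction, the horizontal component of the pavement’s normal force must supply all of the centripetal force, which gives the classic relation tan(theta) = v^2 / (g R). The example speed and radius below are hypothetical:

```python
import math

# Ideal (friction-free) bank angle for a curve:
#   tan(theta) = v^2 / (g * R)

G = 9.81  # m/s^2, gravitational acceleration

def ideal_bank_angle_deg(speed_kph, radius_m):
    v = speed_kph / 3.6  # convert km/h to m/s
    return math.degrees(math.atan(v**2 / (G * radius_m)))

# Hypothetical example: 100 kph design speed on a 600 m radius curve.
print(f"{ideal_bank_angle_deg(100, 600):.1f} degrees")
```

In practice, superelevation rates are capped at modest values (often around 6 to 8 percent) so that slow-moving and stopped vehicles aren’t troubled by the cross slope, and tire friction is trusted to carry the remainder of the cornering force.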

The shape of curves themselves is the second aspect of roadway geometry I want to discuss. Just like superelevation, the radius of a curve has a significant impact on safety—the tighter the turn, the more centripetal force needed to keep a vehicle in its lane. Crashes are most likely when radii are small, so engineers follow guidelines based on the design speed to make sure curves are sufficiently gentle. It’s not only the curves that need to be gentle but also the transitions between straight sections. At first glance, connecting circular curves to straight sections of roadway looks like a perfectly smooth ride. But forces experienced by vehicles and passengers are a function of the radius of curvature. So if you go directly from a straight section (which has an infinite radius) to a circular curve, the centrifugal force comes on abruptly. Another way to think about this is by using the steering wheel. Every position of your wheel corresponds to a certain radius of turn. If straight sections of roadway were connected directly to circular curves, you would have to turn the steering wheel at the transition instantaneously. That’s not really a feasible or safe thing to ask drivers to do. So instead, we use spiral easements that gradually transition between straight and curved sections of roadway. Spirals use variable radii to smooth out the centrifugal force that comes from going around a bend, and they allow the driver to steer gradually into and out of each curve without having to make sudden adjustments. 
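Design guides express this tradeoff as a minimum curve radius for a given design speed, superelevation rate e, and allowable side friction factor f, using the relation R = v^2 / (g (e + f)). The example values below are mine, roughly in the range design tables use, and not pulled from any specific standard:

```python
def min_radius_m(speed_kph, e, f, g=9.81):
    """Smallest curve radius at which superelevation rate e plus side
    friction factor f (both as decimals) can supply the needed
    centripetal acceleration: R = v^2 / (g * (e + f))."""
    v = speed_kph / 3.6  # km/h to m/s
    return v**2 / (g * (e + f))

# Hypothetical inputs: 100 kph design speed, 6% superelevation, and a
# side friction factor of 0.12.
print(f"{min_radius_m(100, 0.06, 0.12):.0f} m")
```

Notice how the radius grows with the square of speed: doubling the design speed quadruples the minimum radius, which is why high-speed highways sweep through such gentle curves.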

Even with all those measures to make curves safe and easy to navigate, drivers still usually have a little bit of trouble staying centered in a lane around a bend. This is partly because tires don’t track perfectly inline with each other when turning (especially for large vehicles like trucks), but also because the forces are changing, and that takes compensation. Because of this, engineers often widen the lanes around curves to provide a little more wiggle room for vehicles. This happens gradually, so it’s relatively imperceptible. But if you pay attention on a highway around a curve, you may notice your lane feeling a little more spacious.

One other important aspect of designing a curve comes from the simple but crucial fact that drivers need to see what’s coming up to be able to react accordingly. Sight distance is the length of roadway a driver needs to recognize and respond to changes, and it varies with driver reaction time and vehicle speed. The slower you react and the faster you’re going, the more distance you need to observe turns or obstacles and decide how to respond. Sight distance also varies by what is required of the driver. The amount of roadway necessary to bring the vehicle to a stop is different from the amount needed to safely pass another vehicle or avoid a hazard in the lane. Even if a curve is gentle enough for a car to traverse, it may not have enough sight distance for safety due to an obstacle like a wooded area. In that case, the sight distance requirement forces the engineer to make the curve even gentler.
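Stopping sight distance, the most fundamental of these, is typically computed as reaction distance plus braking distance. The sketch below uses a 2.5-second reaction time and a 3.4 m/s² deceleration, values in the range common design guides assume; check the governing standard before using numbers like these in real work:

```python
def stopping_sight_distance_m(speed_kph, reaction_s=2.5, decel=3.4):
    """Reaction distance (v * t) plus braking distance (v^2 / 2a).
    Default reaction time and deceleration are typical design-guide
    assumptions, not universal constants."""
    v = speed_kph / 3.6  # km/h to m/s
    return v * reaction_s + v**2 / (2 * decel)

# Hypothetical example: a driver traveling at 100 kph.
print(f"{stopping_sight_distance_m(100):.0f} m")
```

At 100 kph, a driver needs nearly two football fields of visible roadway just to come to a stop, which puts the rest of this discussion about curves and hills into perspective.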

The final aspect of roadway geometry is the profile, or vertical alignment. Roads rarely traverse areas that are perfectly flat. Instead, they go up and over hills and down into valleys. Engineers have to be thoughtful about how that happens as well. The slope, or grade, of a roadway is obviously essential. You don’t want roads that are too steep, mainly because it would be hard for trucks to go up and down. You also want smooth transitions between grades for the comfort of drivers. But, on top of all that, vertical curves have the same issue with sight distance.

Crest curves - the ones that are convex upwards - cause the roadway to hide itself beyond the top. If you’re traveling quickly up a hill, a stalled vehicle or animal on the other side could take you by surprise. If that curve is too tight, you may not have enough distance to recognize and react to the obstacle. So, crest curves must be gentle so that you can still see enough of the roadway as you go up and over. Sag curves - the ones that are concave upwards - don’t have this same issue. You can see all of the roadway on both sides of the curve. Or at least you can during the day. At night things change. Vehicles rely on headlights to illuminate the road ahead, and sometimes this can be the limiting factor for sight distance. If a sag curve is too tight, your lights won’t throw as far. That has the effect of obscuring some of your sight distance, potentially making it difficult to react to obstacles at night. So, sag curves also need to be gentle enough to maintain headlight sight distance.
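For crest curves, the required curve length follows from the geometry of the driver’s eye height, the object’s height, and the sight distance. Here’s a sketch using typical metric design heights (a 1.08 m eye and a 0.60 m object) and the standard form for the case where the sight distance is shorter than the curve:

```python
import math

# Minimum crest vertical curve length so a driver can see a low object
# over the hilltop. A is the algebraic difference in grades (percent),
# S is the required sight distance (m). Formula applies when S < L.

def min_crest_length_m(A_percent, sight_dist_m, h_eye=1.08, h_obj=0.60):
    denom = 100 * (math.sqrt(2 * h_eye) + math.sqrt(2 * h_obj)) ** 2
    return A_percent * sight_dist_m**2 / denom

# Hypothetical: a +2% grade meeting a -2% grade (A = 4) with 185 m of
# required stopping sight distance.
print(f"{min_crest_length_m(4, 185):.0f} m")
```

So even a modest pair of grades demands a curve a couple hundred meters long before a driver can reliably spot an obstacle on the far side of the hill.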

Of course, there are equations for all of these different parts of roadway geometry that can tell you, based on the design speed and other factors, how much crown is required, or how high to superelevate, or the allowable radius of a curve, etcetera. Different countries and even different states, counties, and cities often have their own guidelines for how roadway design is done. And even then, the speed used by the engineers to design the roadway isn’t always the one that gets posted as the speed limit. There are just so many factors that go into highway safety, many of which are more philosophical or psychological than pure physics and engineering. It may seem like you could just plug your criteria into some software that would spit out a roadway project wrapped in a nice neat bow. But to a certain extent, highway design is an art form. Designers even consider how the driver’s view will unfold as they travel along. If you pay attention, you’ll notice newer roadways are less of a series of straight lines connected by short curves and more of a continuous flow of gradual turns. This is not only more enjoyable, but it also helps keep drivers more alert. There are so many factors and criteria that go into the design of a roadway, and it takes significant judgment to keep them in balance and make sure the final product is as safe and comfortable for drivers as possible. Thank you for reading, and let me know what you think.

July 07, 2020 /Wesley Crump

Why Does Road Construction Take So Long?

June 02, 2020 by Grady Hillhouse

From rugged dirt paths to modern superhighways, roads are one of those consistent background characters in nearly every person’s story. And, if you’ve ever been a driver, I know another similar character in your life: road construction. Most of us love having wide, smooth roadways to take us to work, to home, and everywhere else we travel. But, we’re hardly ever excited to see a construction project starting on our favorite roadway. I’m here to change that - or at least to try. I love construction - always have - and when it happens along my commute, I love it even more, because I get to see the slow but steady progress each day. And, I think - or at least I hope - that if you can know a little bit more about what’s going on behind those orange cones, you might appreciate it a little more as well. So, I’ll start with step one, and if people are interested, I’ll keep this series going. Hey, I’m Grady, and this is Practical Engineering. Today, we’re talking about earthwork for roadways.

The first roads in history were probably formed as people or animals followed the same trail long enough to tamp down the vegetation and establish a route between two points. But that’s not enough for the roads of today. Why? Because the earth is full of irregularities that aren’t conducive to safe, efficient, and convenient travel. There’s a reason we have the distinction of off-road vehicles. ATVs and dirt bikes are fun, but most of us don’t want to wear a protective bodysuit for our daily commute. Safe and efficient travel means smooth curves, both horizontally and vertically. It means grades that aren’t too steep, and it means paths that are relatively direct between points of interest. In a very general sense, that means to build a roadway, we need a way to smooth out the surface of the earth.

A lot of people use words and writing to communicate. But, roadway engineers and contractors use the cross-section. This is a special kind of drawing that shows a slice through a particular location, and it’s the literal language of road building. On it, you can see the level of the earth before construction and the proposed surface afterward. Any difference between these two lines means some earthwork is going to be required. Areas above the proposed roadway need to be excavated away, also called cut. And, areas below the proposed road need to be filled in. Cut and fill are the most fundamental concepts in any earthwork project. And, keeping cut and fill in balance with one another is a critical part of roadway engineering.

After all, if you need to fill in some areas, that soil is going to have to come from somewhere. Rather than importing soil to a project, it makes a lot more sense to take it from somewhere that already needs it removed. And if you’re going to have to excavate tons of soil from some part of your project, it sure would be nice if rather than having to dispose of it, you could take it to some other part of your project that needed additional material. If the amount of cut and fill on a project is balanced, every shovelful of dirt is doing two jobs: taking soil away from where it’s not needed, and gathering soil for where it is. So, engineers designing roadways keep track of these quantities between each cross-section.
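The bookkeeping between cross-sections is usually done with the average end area method: the volume between two stations is the mean of their two areas times the distance between them. A minimal sketch with invented sections:

```python
# Earthwork volumes by the average end area method:
#   V = (A1 + A2) / 2 * L
# Stations and areas below are made up for illustration.

sections = [
    # (station_m, cut_area_m2, fill_area_m2)
    (0,   12.0,  0.0),
    (50,   8.0,  2.0),
    (100,  1.0,  9.0),
    (150,  0.0, 14.0),
]

total_cut = total_fill = 0.0
for (s1, c1, f1), (s2, c2, f2) in zip(sections, sections[1:]):
    length = s2 - s1
    total_cut += (c1 + c2) / 2 * length
    total_fill += (f1 + f2) / 2 * length

print(f"cut  = {total_cut:,.0f} m^3")
print(f"fill = {total_fill:,.0f} m^3")
print(f"net  = {total_cut - total_fill:+,.0f} m^3")
```

A negative net means the project is short on material and soil has to be imported; a positive net means there’s a surplus to haul away. Either one costs money, which is why designers try to drive that number toward zero.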

Of course, earthwork may seem simple when you’re just looking at a drawing, but here are a couple of things to keep in mind: soil is heavy, and roads are long. Just because you have the same volume of excavation as you have fill doesn’t necessarily lead to efficiency. Because if all the cut is miles away from all the fill, you’re going to have to make a lot of trips. So, roadway design not only needs to balance cut and fill but also try to minimize the haul distance. Mass haul diagrams show the net change in earthwork volume over the length of the roadway. This gives the pros a quick understanding of the amount and distance of earthwork for an entire roadway project.
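A mass haul diagram is just the running total of cut minus fill along the alignment. Here’s a sketch with made-up segment volumes: where the curve returns to zero, the earthwork balances, and long excursions away from zero signal long, expensive hauls:

```python
# Mass haul: cumulative (cut - fill) along the road. Segment volumes are
# invented; a real diagram comes from the cross-section quantities.

stations = [0, 50, 100, 150, 200]   # m, locations of the cross-sections
seg_net = [500, 250, -300, -450]    # m^3 of cut minus fill, per segment

mass = [0.0]                        # cumulative ordinate at each station
for v in seg_net:
    mass.append(mass[-1] + v)

for s, m in zip(stations, mass):
    print(f"station {s:>3} m: cumulative {m:+,.0f} m^3")
```

In this toy example, the curve peaks mid-project and comes back to zero at the end, meaning the cut early in the alignment exactly supplies the fill later on, with no import or export needed.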

But we’re still not there yet. Because, once you get all the soil in the right place, you can’t just build a road on top. I’ve said it before, and I’ll say it again: soil’s not that strong, especially in loose piles fresh from the bed of a dump truck or scraper. We have to compact it down. But, even that’s not so simple. There may be no other material more tested than soil - maybe blood, but if you measure by weight, I don’t know. In testing labs all over the world, probably at this very moment, there are people looking at and taking pictures of soil, shaping and rolling it, inserting it into equipment, taking measurements, and writing those measurements down on clipboards. Why? Because soil is really important. The cost of building roads varies from place to place, but very roughly, it’s about $3M for a mile of 2-lane roadway. That’s about $2M for a kilometer. Roads might be the most expensive thing you touch in a typical day because they take a lot of work and a lot of material to build. So if we’re going to go to all that expense just to make it easier to drive our cars from place to place, we need to make sure that the roads we build have a good foundation.

That mainly means proper compaction. Soil settles and compresses over time, and if this happens with something on top (like a road or any other structure), it can lead to damage and deterioration. Compaction speeds up that settlement process so it all happens during construction instead of afterwards. If soil is compacted to its maximum density, it can’t settle further over time. But how do we know whether it’s compacted enough? That’s where the testing comes in. Soil labs run a ubiquitous analysis called a Proctor test. If you add different amounts of water to soil and try to compact it, you’ll see that you get different densities. With low moisture content, it’s nearly impossible to do any compaction, and the same goes for high moisture content. But, somewhere in the middle, you’ll get the maximum density. This estimate of the maximum density is one of the most crucial measurements in earthwork. In the field, there are a few ways to test density, but we mostly use nuclear gauges that measure radiation passing through the soil to estimate its degree of compaction.
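Reading the optimum off a Proctor curve can be sketched numerically: fit a parabola through the lab points bracketing the peak and take its vertex. The data points below are invented for illustration, and this assumes the peak falls inside the data, not at an endpoint:

```python
# Estimating the peak of a Proctor curve from invented lab data: fit a
# parabola through the three points around the maximum dry density and
# take its vertex as the optimum moisture content.

points = [  # (moisture content %, dry density kg/m^3)
    (8, 1780), (10, 1850), (12, 1890), (14, 1870), (16, 1810),
]

i = max(range(len(points)), key=lambda j: points[j][1])
(x0, y0), (x1, y1), (x2, y2) = points[i - 1], points[i], points[i + 1]

# Vertex of the parabola through three equally spaced points:
h = x1 - x0
opt_moisture = x1 + h * (y0 - y2) / (2 * (y0 - 2 * y1 + y2))
max_density = y1 - (y0 - y2) ** 2 / (8 * (y0 - 2 * y1 + y2))

print(f"optimum moisture ~ {opt_moisture:.1f}%")
print(f"max dry density ~ {max_density:.0f} kg/m^3")
print(f"95% compaction target: {0.95 * max_density:.0f} kg/m^3")
```

That last line hints at how the number gets used: specifications commonly require field densities to reach some percentage of the lab maximum (95 percent is a typical figure) before the next lift can be placed.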

Soil used for filling areas is first placed in roughly the correct locations by a dump truck or scraper. Then it’s smoothed into a consistent layer, called a lift, by a bulldozer or motor grader. Finally, each lift is compacted using a compactor. This is at the heart of why earthwork takes so long to complete. You can’t compact soil more than around a foot at a time (that’s 30 centimeters). Rolling over thicker layers will only compact the surface, leaving the rest loose and free to settle over time. So areas of fill, and especially tall embankments (like the approaches to a bridge), need a lot of individual layers. By necessity, they come up slowly, little by little, lift by lift. Every so often along the way, someone tests the density of the compacted soil. We compare that measurement with the maximum density measured in the lab. If it’s close, it’s okay. If not, we keep compacting until it is. That gives engineers and contractors the confidence that when the roadway surface is placed, it’s going to be there to stay. But, it’s one of the biggest reasons that roadway projects take so long to complete. We can move a lot of earth in a short period of time, but placing and densifying it into a foundation that will stand the test of time is a process, and it takes some time.

One last thing I want to point out: during the construction of a roadway (or really construction of just about anything), this earthwork causes a lot of disturbance. What used to be grass, plants, or some other type of covering over the ground is now just bare soil. That may not seem like a big deal, but to all the aquatic wildlife in nearby creeks and rivers, it is. That’s because any time it rains, all that unprotected soil gets quickly washed away from the construction site into waterways where it reduces the quality and quantity of habitat. So, pretty much every construction site you see should have erosion and sediment control measures in place to keep soil from washing away. Silt fences and mulch socks slow down runoff so the sediment can drop out, and rock entrances knock most of the mud off the tires of vehicles before they leave the site.

Like it or not, roads are part of the fabric of society. Travel is a fundamental part of life for nearly everyone. Unfortunately, that means road construction is too. But, I hope this post gives you a little more appreciation for what’s going on behind the orange cones. You know that metaphorically significant planar surface where the rubber meets the road? Well, it couldn’t even exist without the engineers and construction workers designing and building that planar surface just below, where the road meets the earthwork. Thank you, and let me know what you think!

June 02, 2020 /Grady Hillhouse

What is a Culvert?

May 05, 2020 by Grady Hillhouse

A surprising amount of engineering is just avoiding conflicts. I’m not talking about arguments in the office; I mean conflicts where two or more things need to be in the same place. There are a lot of challenges in getting facilities over, under, around, or between each other, and there’s a specific structure, ubiquitous in the constructed environment, whose sole purpose is to deal with the conflict between roadways and streams, canals, and ditches. Hey, I’m Grady, and this is Practical Engineering. Today, we’re talking about culverts.

Culverts are one of those things that seem so obvious that you never take the time to even consider them. They’re also so common that they practically blend into the background. But, without them, life in this world would be quite a bit more complicated. Let me explain what I mean. Imagine you’re designing a brand new roadway to connect point A to point B. It would be nice if the landscape between these points was perfectly flat, with no obstructions or topographic relief. But, that’s rarely true. More likely, on the way, you’ll encounter hills and valleys, structures and streams, and you’ll have to decide how to deal with each one. Your road can go around some obstacles, but for the most part you’ll have to work with what you’ve got. A roadway has to have gentle curves both horizontally and vertically, so you might have to take soil or rock from the high spots and build up the low spots along the way, also called cut and fill. But you’ve got to be careful about filling in low spots, because that’s where water flows.

Sometimes it’s obvious like rivers or perennial streams, but lots of watercourses are ephemeral, meaning they only flow when it rains. If you fill across any low area in the natural landscape, you run the risk of creating an impoundment. If water can’t get through your embankment, it’s going to flow over the top. Not only can this lead to damage of the roadway, it can be extremely dangerous to motorists and other travelers. One obvious solution to this obvious problem is a bridge: the classic way to drive a vehicle over a body of water. But, bridges are expensive. You have to hire a structural engineer, install supports, girders, and road decks. It’s just not feasible for most small creeks and ditches. So instead we do fill the low spots in, but we include a pipe so the water can get through. That pipe is called a culvert, and there’s actually quite a bit of engineering behind this innocuous bit of infrastructure.

I know what you’re thinking: “Just a pipe under a road? How complicated could it be?” Well, allow me to introduce you to the U.S. Federal Highway Administration’s Hydraulic Design of Highway Culverts, third edition. Yes you’re seeing that right - 323 pages of wonderful guidelines on how to get water to flow under a road. But worry not, because I have taken my favorite parts of this manual and built a demonstration in the video so you can appreciate the modern marvel that is the highway culvert as much as any red-blooded civil engineer.

A culvert really only has two jobs: it has to hold up the weight of the traffic passing over without collapsing, and it has to let enough water pass through without overtopping the roadway. Both jobs are important, but it’s the second one I want to talk about, because figuring out how much water can pass through a culvert before the roadway overtops is a surprisingly complicated question. In fact, there are eight factors that can influence the hydraulics of a culvert: (1) the headwater, or the depth of flow upstream of the culvert, (2) the cross-sectional area of the culvert barrel, (3) the cross-sectional shape of the culvert barrel, (4) the configuration of the culvert inlet, (5) the roughness of the culvert barrel, (6) the length of the culvert, (7) the slope of the culvert, and (8) the tailwater, or depth of flow downstream. We don’t have time to demonstrate how all these parameters affect the culvert flow, but the Federal Highway Administration actually has a pretty comprehensive video on YouTube (with a much nicer flume than mine) if you want to see more [https://www.youtube.com/watch?v=vnXmGyb_hKQ].

One thing I do want to show is the two primary flow regimes for culverts: outlet control and inlet control. And these are pretty much exactly what they sound like. Outlet control happens when water can flow into the culvert faster than it can flow out. That means flow is limited by either the roughness and friction in the culvert barrel or the tailwater depth at the outlet. The entire cross-sectional area of the barrel is being used for flow. In outlet control flow, conditions downstream of the culvert can affect the flow rate. For example, if a tree falls across a ditch downstream, that can back up water, reducing flow through the culvert and causing the roadway to overtop.

Inlet control happens when the culvert inlet is constricting the flow more than any of those other factors. Everything that affects the amount of water passing below the road is happening at the inlet. That means changing the roughness of the inside of the barrel or anything downstream won’t change how much flow makes it through. It’s easy to show this in my model because you can see inside the culvert barrel. You can tell that the flow depth in the culvert is shallow and the full flow area of the barrel is not being used. There are a wide variety of configurations that the inlet to a culvert can have. If you pay attention, you’ll see all kinds of culvert inlets. Some common types include projecting, where the pipe protrudes from the embankment; mitered, where the pipe is cut flush to the embankment; and headwall, where the culvert begins at a vertical concrete wall, sometimes accompanied by concrete wing walls to further direct flow into the barrel. Unsurprisingly, each of the multitude of different inlet configurations has a different effect on the culvert hydraulics.

In my demo, I can do a test of two of these inlet configurations to show the difference. First I’m testing the projecting inlet. This is one of the least efficient configurations because there’s nothing to help train the flow into the culvert. You can see that the headwater elevation is quite high, even close to overtopping the headwall in my flume. And, even with all that pressure upstream, there’s not that much water coming through the culvert. It’s only flowing about half full.

Next I reconfigured the demo to make the culvert flush with the headwall. And I also rounded over the inside edge of the pipe, giving the flow a smoother entrance. I didn’t change how much flow the pump is creating, but you can see that the headwater is much, much lower. That means the inlet is more efficient, because it takes less driving headwater to get the same amount of flow through the barrel. In fact, as I cranked the flow rate higher and higher, I realized that - even with as much headwater as I could create - this configuration was still acting as an outlet-controlled culvert. The smooth and flush inlet was allowing as much flow as possible through.

Of course, there are really elaborate culvert inlets that can be extremely efficient, but like all infrastructure, culvert design is an exercise in balancing cost with other factors. You can spend a lot of money on a fancy culvert inlet that has perfectly smooth edges to guide the water gently into the barrel, or you could just bump up to the next pipe size. Calculating flow through a culvert can be quite complicated, because culverts can transition between inlet and outlet control depending on flow rate. And, even within these two major flow regimes of inlet and outlet control, there are a whole host of subregimes - each of which has its own hydraulic equations. Of course we have software now, but back in the 1960s and 70s the Federal Highway Administration came up with a whole group of cool nomographs to simplify the hydraulic design of culverts. The way this works is you first find the right chart for your situation [7A]. The one in the video is a culvert with a submerged outlet flowing full. Each one is a little different, but in this one you draw a line connecting the culvert length to its diameter. Then draw a line connecting the headwater depth to the intersection of your other line with the turning line. Extend this line to the discharge scale to find out the flow rate passing through the culvert. I love little tricks like this that boil down all that hydraulic complexity into a quick calculation you can do with a straightedge in less than a minute.
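For the outlet-control, full-barrel case, the nomograph is really just summing a few losses: the entrance loss, the velocity head, and the Manning friction loss along the barrel. Here’s a minimal sketch of that calculation in Python (SI units; the pipe size, length, Manning roughness, and entrance loss coefficient below are illustrative assumptions on my part, not numbers from the manual or the video):

```python
import math

def outlet_control_head(Q, D, L, n=0.013, ke=0.5, g=9.81):
    """Head (m) required to push flow Q (m^3/s) through a circular culvert
    of diameter D (m) and length L (m), flowing full under outlet control.
    Sums entrance loss, exit (velocity) head, and Manning friction loss."""
    A = math.pi * D**2 / 4        # flow area of the full barrel (m^2)
    R = D / 4                     # hydraulic radius of a full circular pipe (m)
    V = Q / A                     # mean velocity in the barrel (m/s)
    velocity_head = V**2 / (2 * g)
    entrance_and_exit = (1 + ke) * velocity_head
    friction = (n**2 * V**2 * L) / R**(4 / 3)  # Manning friction slope times L
    return entrance_and_exit + friction

# Illustrative numbers: a 1 m concrete pipe, 20 m long, passing 1.5 m^3/s
H = outlet_control_head(Q=1.5, D=1.0, L=20.0)
print(f"required head: {H:.2f} m")
```

A rougher barrel (higher n), a longer culvert, or a blunter inlet (higher ke) all raise the required headwater for the same flow, which is exactly the trade-off between a fancy inlet and a bigger pipe described above.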

Next time you’re driving or walking along a street keep an eye out for culverts. And, if it’s raining, take a look at the flow. See if you can identify whether the culvert is outlet or inlet controlled, and be thankful that we have this ordinary, but remarkable, bit of infrastructure to let you safely walk or drive right over.

May 05, 2020 /Grady Hillhouse

How Do Canal Locks Work?

April 07, 2020 by Wesley Crump

Freight transportation is an absolutely essential part of modern life. Maintaining the complex supply chains of raw materials to finished goods requires a seemingly endless amount of hustle and bustle. Millions of tons of freight are moved each day, mainly on trucks and trains. But, “shipping” got its name for a reason, and we still use ships to move a lot of our stuff. One of the main reasons is that it’s efficient. In fact, moving a ton of goods the same distance on a boat takes roughly half the energy it would by train and roughly a fifth of the energy it would take on a truck. You can prove this to yourself pretty easily. Even heavy stuff is practically effortless to move around once it’s floating on water. Of course, shipping by waterway also has its limitations. It’s slow (for one) and not every place that needs goods is accessible by boat. We’ve overcome this obstacle somewhat through the use of constructed waterways, or canals. Canals and shipping are described in the earliest works of written history. But there’s another limitation more difficult to surmount. Water is self-leveling. Unlike roads or rail, you can’t lay a waterway on a slope to get up or down a hill. Luckily, we have a solution to this problem. It may seem simple at first glance, but there is a lot of fascinating complexity to getting boats up and down within a river or canal. Hey I’m Grady and this is Practical Engineering. Today, we’re talking about locks for navigation.

The efficiency of water transportation has a surprising amount to do with how the world looks today. Nearly every major city across the globe is located on a waterway accessible by shipping traffic. Waterway transportation is woven into the history of just about everything. So, it’s no surprise that, for thousands of years, humans have sought to bring access by boat to areas otherwise inaccessible. But, creating waterways navigable by boats isn’t as simple as digging a ditch. Unlike the open sea, with its endless and uncluttered surface, land has obstructions and obstacles. The topography dips and rises, rivers and ponds get in the way, and manmade infrastructure like cities, roads, and utilities impede otherwise unhindered paths from point A to point B. The quintessential example of this is the Panama Canal: the famous cut through that narrow isthmus saving ships the lengthy and dangerous trip around Cape Horn. On a map, this seems pretty straightforward - just cut a ditch from the Atlantic to the Pacific. But the details of what is one of the largest civil engineering projects of the modern world are more complex. One of the most important of those details is that the majority of the Panama Canal isn’t at sea level, but actually 26 meters or 85 feet higher.

This is due to sheer practicality. Construction of the Canal was already one of the largest excavation projects in history. Keeping boats at sea level would have required cutting, at a minimum, an 85-foot-deep canyon through the isthmus, involving millions and millions of tons of extra earthwork that would have been completely infeasible. So, rather than cutting the channel deeper, we instead raise the boats up from sea level on one side and lower them back down on the other. And we do this using locks, an ingenious and ancient technology that has made possible navigation on canals and waterways that otherwise could never have existed.

The way a lock works is dead simple. And of course I have a little demonstration here to make this more intuitive. For a boat going up, it first enters the empty lock. The lower gate is closed. Then water from above is allowed to fill the lock. This is usually done through a smaller gate or a dedicated plumbing system, but I’m just cracking the upper gate open. Once the level in the lock reaches the correct height, the upper gate can be fully opened, and the boat can continue on its way. Going down follows the same steps in reverse. The boat enters the full lock. The upper gate is closed, and the water in the lock is allowed to drain. Again, I’m just cracking the gate in the demo, but this is often done in a slightly more sophisticated way in the real world. Once the lock is drained, the lower gate can be fully opened, and the boat can continue on. I hope you see the genius of this system. It’s a completely reversible lift system that, in its simplest form, requires no external source of power to work… except for the water itself.

One thing to notice about a lock is that even though boats can move through in both directions, water only moves through in one direction. The lock always fills from the upper canal and always drains to the lower canal. This is because… gravity. Hopefully that’s obvious. But, it’s important to realize that even though we’re not using pumps, the energy required to raise and lower boats through a lock isn’t necessarily “free”. Each time the lock is operated, you lose a “lockful” of water downstream. And sometimes that matters. Canals aren’t full of limitless water, and if there is a lot of traffic or the locks are particularly large, this could mean losing millions of liters of water per day. On large rivers, it’s usually not enough to worry about, but in some cases this could cause a canal or reservoir to go completely dry. So, canals that use locks need some way to replenish the lost water or at least limit how much water is lost each cycle. What if there was a way to save the water used to fill the lock and reuse it?
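To get a feel for the scale of that loss, here’s some back-of-the-envelope arithmetic in Python (the lock dimensions and traffic count are made-up illustrative numbers, not from any particular canal):

```python
# Rough arithmetic: water lost downstream per lock cycle and per day.
# All dimensions below are illustrative, not any specific canal.
width_m, length_m, lift_m = 12.0, 100.0, 5.0
cycles_per_day = 20

lockful_m3 = width_m * length_m * lift_m                # one "lockful" lost per cycle
daily_loss_liters = lockful_m3 * cycles_per_day * 1000  # 1 m^3 = 1000 L

print(f"{daily_loss_liters:,.0f} liters per day")
```

Even this modest canal-sized lock loses over a hundred million liters a day at that traffic level, which is why the source of replenishment water matters.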

On the Panama Canal, the locks use water from Gatun Lake, a critical source of drinking water for the country. During periods of drought, water supply becomes a serious issue. That’s why, when the canal was expanded in 2016, the new locks included water saving basins. Like the locks themselves, these basins are an extremely simple and yet ingenious way to limit the amount of water lost each time the locks are filled. Let me show you how this works. On my demo, instead of draining the lock into the downstream canal, I can drain it partially into a nearby reservoir. Then, when the time comes to fill the lock, I can recycle the water from the basin, also called a side pond, to partially raise the level. Of course, I still need to use water from the upper canal to fully fill the lock, but it’s still less water than I would have otherwise used.

In fact, if the water saving basin is the same area as the lock, you can save exactly one third of the water. The reason, again, is gravity. Water doesn’t flow uphill - it always has to be going down. To save water, you need a volume within the lock for it to come from, a lower volume for it to drain to and wait in the side pond, and finally an even lower volume for the saved water to go within the lock. That means the best you can do with a single basin is to save a third of the water that would otherwise be lost. But, it’s possible to do better than this. One option is to give the water saving basin a larger area. Imagine an infinitely large basin such that no matter how much water drains into it, its level never rises. In this case, you could drain the upper half of the volume of the lock into the side pond, and then use that water to fill the lower half of the lock on the way up. So, the area of the basin is important, with a larger area providing a greater water-saving benefit. The other way we can do better is to have more basins.

Notice on the diagram that the bottom two volume divisions are lost each cycle. When the lock drains, each volume division moves from the lock to the side pond one division below, except for the bottom two divisions, which are lost downstream. That water can’t be stored in a side pond because the pond would have to be at or below the bottom of the lock. And when the lock is filled, each side pond fills the volume of the lock again one division lower. The top two divisions can’t be filled from a side pond, so they are filled from the upper canal. It’s pretty easy to see why more basins means smaller divisions and why that means less water lost for each cycle. Of course, for both the number of ponds and their area, there are practical limitations to how much land is available and the expense of all that plumbing, etc. So, you have to balance the value of saving the water in the locks versus the capital and ongoing expenses of constructing and operating these basins. That’s made a lot easier with a pretty simple formula to calculate the ratio of how much water is used with side ponds versus without them.

The new locks at the Panama Canal each use three basins which are about the same area as the locks themselves. Plugging in 3 for the number of basins, and 1 for the lock to basin area ratio, you can see that the new locks use only 40% of the water that would be required to operate without the basins. That’s pretty impressive and definitely seems worth the cost of the basins. But, it’s not the only example of this. Another lock in Hannover, Germany has ten basins, reducing the lost water by about three-fourths, although the tanks are underground so they’re harder to see. I’ve been talking about freight transportation in this video, but people use boats for all kinds of different reasons, and in the same way, there are all kinds, shapes, sizes, and ages of locks across the world. In fact there are a lot of canals where you can operate the lock yourself. They’re also not the only way of moving boats up or down, but that’s a topic for another video. Next time you see a lock, consider where that water comes from, and keep an eye out for side ponds that help save a little or a lot of it for the next time. As always, thanks so much, and let me know what you think!
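That formula can be sketched in a few lines. With n basins, each with a plan area of r times the lock’s area, a commonly cited result (my paraphrase of the standard side-pond geometry, not a formula shown in the video) is that the fraction of a lockful saved is n·r / (1 + r·(n + 1)):

```python
def fraction_from_upper_canal(n_basins: int, area_ratio: float = 1.0) -> float:
    """Fraction of a lockful still drawn from the upper canal each cycle,
    given n_basins side ponds, each with (basin area / lock area) = area_ratio.
    The saved fraction is n*r / (1 + r*(n + 1)); this returns the remainder."""
    n, r = n_basins, area_ratio
    return 1 - (n * r) / (1 + r * (n + 1))

print(fraction_from_upper_canal(1))  # ~0.667: a single equal-area basin saves a third
print(fraction_from_upper_canal(3))  # 0.4: the Panama figure, using only 40% of the water
```

The formula also captures the infinite-basin thought experiment above: as area_ratio grows without bound, a single basin approaches saving exactly half the water.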

April 07, 2020 /Wesley Crump