Practical Engineering


Why Do Beaches Disappear?

February 02, 2021 by Wesley Crump

We humans are fascinated with the coast. It’s not just that the sea facilitates commerce and travel. It’s not only because it’s fun to swim in the water and lie in the sun on the beach. There’s something inherently interesting about seeing the place where two things meet; where the vast expanse of ocean touches the land on which we live. Just as with campfires, we are naturally drawn to the coast, even simply to watch and hear the waves crash ashore. It might not seem like it, but there’s an endless battle going on between land and sea along every coastline in the world (and just a hint: the sea is almost always winning). They may look static and unmoving on a map, but coastlines are some of the most dynamic areas in the world. Hey, I’m Grady and today, we’re talking about coastal erosion and the ways we fight against it.


The position of the coastline over time is highly variable. Tides create fluctuations in the level of the sea, moving the shore in and out, sometimes hundreds of meters, over the course of a day. But it’s not just the level of the ocean that influences the shape and topography of the shore, that infinitesimal line between land and sea. The material that makes up the land, soil and rock, is in constant flux, largely due to the interminable power exerted by seawater over time. Although the currents sometimes deposit more sediment than was there already, usually things work the other way around. Rock and sediment are carried out to sea in a process we all know as erosion. The big difference between coastal erosion and other types is the timescale. The sea steals away land so much more quickly than other forces acting on inland areas, for many reasons.


Ocean currents move beaches constantly, but the biggest component of coastal erosion is waves. If you’ve ever played in the ocean or even in a wave pool, you’ve probably been surprised at the power behind them. Just like waves wash around swimmers with no hesitation, they can also wash away the coastline. Simply put: waves are destructive because water is heavy. This isn’t exactly a precise law of physics, but it is a good rule of thumb in engineering: when you bash heavy stuff against something, it’s liable to break. When you combine this helpful hint with the fact that a good proportion of coastlines are made of not-very-erosion-resistant loose sandy beaches, you get a recipe for serious erosion.


What happens along coastlines across the world is mostly a physical process where the relentless crashing of water exerts pressure that can separate soil particles and even splinter and remove pieces of rock. A single wave can smash tons of force into a small area, easily washing away loose sediment or wearing away at rocks. Waves also carry sand and sediment from the seabed which gets bashed against the rocks, grinding, scraping, and chipping them over time. In some cases, the seawater can actually dissolve the rocks themselves, a process called chemical weathering. This destructive environment certainly creates some serious erosion, but it gets even worse. All of these processes are amplified during storm events like hurricanes and typhoons which produce some of the fastest sustained winds on earth. That high wind leads to high waves, which accelerate erosion way beyond normal levels. 
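
To put rough numbers on that destructive power, here’s a quick sketch using a standard result from linear wave theory: the energy carried by a wave grows with the square of its height. The wave heights below are illustrative assumptions, not measurements.

```python
# Mean wave energy per unit area of sea surface, from linear (Airy) wave theory:
#   E = (1/8) * rho * g * H^2
# rho = seawater density, g = gravity, H = wave height.

RHO_SEAWATER = 1025.0  # kg/m^3
G = 9.81               # m/s^2

def wave_energy_density(height_m: float) -> float:
    """Mean energy per square meter of sea surface (J/m^2) for a given wave height."""
    return RHO_SEAWATER * G * height_m ** 2 / 8.0

calm = wave_energy_density(0.5)   # a gentle half-meter swell
storm = wave_energy_density(5.0)  # storm waves

print(f"0.5 m wave: {calm:.0f} J/m^2")
print(f"5.0 m wave: {storm:.0f} J/m^2")
print(f"energy ratio: {storm / calm:.0f}x")
```

A tenfold increase in wave height means a hundredfold increase in energy, which is part of why storm events accelerate erosion so dramatically.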


That would be fine if the coast wasn’t such a popular place to put stuff - and by stuff I mean houses, commercial buildings, apartments, condos, etc. - basically cities and all the expensive infrastructure that comes with them. Erosion literally steals land away from the shore, carrying it piece by piece out to sea or depositing it somewhere else along the shore. That means development nearest to the coast is constantly at risk of being claimed by the sea. In addition, beaches support massive local economies, providing millions of jobs and billions of dollars of economic activity. As I mentioned before, people love the beach, and they’ll spend lots of money to see and hear and swim in those waves. So, just by adding humans to the mix, what was a perfectly natural geologic process, coastal erosion, is now a certified hazard in many places, threatening structures along the shore and the livelihoods of huge portions of coastal populations. That’s bad and we don’t want it to happen. So, over time, we’ve developed some solutions to try to mitigate these adverse impacts.


A lot of engineers' solutions to coastal erosion involve armoring the shore with structures like seawalls, bulkheads, and revetments. These involve building some kind of hardened structure that can withstand the continued impacts from waves. Some seawalls even include a recurve to make sure waves don’t crash over the top and erode the area beyond the wall. Another protective structure, called a groin, protrudes into the sea to reduce the currents directly along the shore and retain the soil and sand. Finally, breakwaters are structures built parallel to the shoreline to break up waves before they make it to the shore. Hard armoring often provides a more long-term solution to erosion, but it also creates a lot of unintended consequences. Smooth seawalls, like those made of concrete, reflect waves rather than absorbing them. This is not ideal because waves can be sent toward other parts of the coast, worsening erosion at the edges of walls or further downshore. Improperly designed groins can also worsen erosion on the downdrift side. These structures can also affect the quality of habitat in the sea, creating environmental challenges.


So, when possible, we look toward softer solutions to erosion. These might not last as long, but they have fewer unintended consequences. One of those solutions is planting mangrove forests. These are trees and shrubs that grow in tidal zones along coasts. They can’t grow everywhere, but where they can, they provide natural stabilization of the coastline, reducing erosion from tides, waves, and storm surge. The other soft solution is simply to reverse the process of erosion by replacing the material that has been lost. This is commonly known as beach nourishment. Beaches are not only important recreation areas and economic drivers, they also serve as buffers between development and the sea. Replenishing lost sand by dredging it from the seafloor and pumping it back to the shore protects coastal structures and creates important areas for recreation. It’s not without its own environmental impacts, and it’s certainly not a permanent solution, but beach nourishment is one of the primary tools for addressing coastal erosion.


Just like with riverine flooding, sometimes the cheapest option to protect development from erosion is for it not to be there in the first place. For coastal structures, this strategy is called “retreat”: either purchase property and condemn it to serve as a buffer or relocate housing and infrastructure further from the shore. The National Oceanic and Atmospheric Administration projects that, in 50 years, the global mean sea level will be at least a foot higher than it was in the year 2000 and potentially more than 3 feet higher (that’s about a meter). Higher sea levels mean more inundation, more exposure to tides, waves, and storm surge, and ultimately more erosion. This is a real threat that is already affecting coastal areas and will only continue to worsen over time. It’s not necessarily something to panic over, but it is an ongoing challenge for property owners, government officials, politicians, and in some cases, even for engineers. We have to be thoughtful about our relationship to the sea and what solutions are appropriate to manage its constant battle with the land. In many cases, the best option is simply to let nature do what it does best, maintaining the coastline as the vibrant and dynamic place that draws humans to it in the first place.


How Do Flood Control Structures Work?

January 05, 2021 by Wesley Crump

Every year floods make their way through populated areas, claiming lives, causing millions of dollars in damages, devastating communities, and grinding local economies to a halt. If you’ve ever experienced one yourself, you know how powerless it feels to be up against mother nature. And if you haven’t, be careful in thinking it can’t happen to you. Nearly every major city across the world is susceptible to extreme rainfall and has areas that are vulnerable to flood risk. Luckily, we’ve developed strategies and structures over the years to reduce our vulnerability and mitigate our risk. We still can’t change how much it rains (at least in the short term), but we’ve found lots of ways to manage that water once it reaches the earth to limit the danger it poses to lives and property. Today, we’re talking about how large-scale flood control structures work on rivers.


We all know generally what a flood is: too much water in one place at one time. But, I think there’s still uncertainty in how floods actually occur. Part of the reason for that confusion, I think, is the huge variety of scales we have when talking about flooding. Most river systems are dendritic. The topography of the land and the long-term geologic processes mean that streams join and concentrate the further you move downstream just like the branches of a tree. A watershed is the entire area of land where precipitation collects and drains into a common outlet; it’s a funnel. And as you move downstream, those funnels start to combine. The further you go, the larger the watershed becomes as more and more streams contribute to the drainage. So watersheds can be tiny or gigantic.


Your front yard is a watershed to the gutter on the street. If it happens to be raining hard directly on your house, the gutter will flood, maybe even overtop the road onto the sidewalk. At the complete opposite end of the spectrum, more than a million square miles (or three million square kilometers) make up the drainage area of the Mississippi River in the U.S. A big rainstorm in one city is not going to make a dent in the total flow of this river. But, if everywhere in the basin is having an unseasonably wet year, that can add up into major flooding as all that water concentrates into a single waterway. This seems simple, but it is a real conceptual challenge in understanding flooding, not to mention trying to control it. Smaller watersheds only flood during single intense storm events, called flash floods. Usually, this water is already long gone by the time the next storm comes. In contrast, large watersheds flood in response to widespread and sustained wet weather. They aren’t really affected by single storm events. Of course, in a dendritic system, there’s everything in between which means a flood can be a local event affecting a few houses and streets for a couple of hours during an intense thunderstorm or a months-long ordeal impacting huge swaths of land and multiple communities.


Riverine flooding is also a challenge because it’s not linear. In a cross section through a river, you have the main channel where most normal flows occur. Within the banks, every unit of rise in the river adds relatively little extra width of inundation. Plus, there’s not much development within the banks of a river: maybe some low bridges and a few docks. But, above the channel banks, things change. The slopes aren’t so steep and you end up with wide, flat areas of land. And you know what we humans like to do with wide, flat areas near waterways - we build stuff, like entire cities. That, or use it as farmland. The problem is that, once a channel overbanks, every unit of rise in the river equals much wider extents of inundation. You can see now why this is called the floodplain. And looking at a cross-sectional view, it’s easy to see one of the most common structural solutions to flooding: levees. If overtopping the banks of the river creates the problem, we can just make the banks of the river higher by building earthen embankments or concrete walls. Levees protect developed areas by confining rivers within artificial banks. That means areas outside the levees flood less frequently. It doesn’t mean they have zero flood risk, since it’s always possible to have an extreme event that overwhelms the levees. For earthen structures, overtopping of a levee can cause erosion and even failure (or breach) of the berm. That can make the flooding even worse than it would have been otherwise, especially if people weren’t evacuated from the area ahead of time. So, even though they are a pretty simple solution to the problem of flooding, levees aren’t perfect.
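
That nonlinearity is easy to see with a toy cross section. The geometry below (a 10-meter-wide channel with sloped banks, opening onto nearly flat floodplains) is entirely made up; it’s just a sketch of why a little extra rise above the banks inundates so much extra land.

```python
# Toy river cross section: trapezoidal channel, then nearly flat floodplains.
# All dimensions are illustrative assumptions.

BOTTOM = 10.0             # channel bottom width, m
SIDE_SLOPE = 2.0          # bank slope, horizontal per vertical
BANK_HEIGHT = 3.0         # depth of the channel, m
FLOODPLAIN_SLOPE = 0.005  # floodplain rise per unit of horizontal distance

def inundation_width(stage_m: float) -> float:
    """Total wetted width (m) for a water surface at stage_m above the channel bed."""
    if stage_m <= BANK_HEIGHT:
        return BOTTOM + 2 * SIDE_SLOPE * stage_m
    # Above the banks, each unit of rise spreads way out across the flat floodplain
    over = stage_m - BANK_HEIGHT
    bankfull_width = BOTTOM + 2 * SIDE_SLOPE * BANK_HEIGHT
    return bankfull_width + 2 * over / FLOODPLAIN_SLOPE

for stage in (1.0, 2.0, 3.0, 3.5, 4.0):
    print(f"stage {stage:.1f} m -> width {inundation_width(stage):,.0f} m")
```

Within the banks, a meter of rise adds just a few meters of width; the first meter above the banks adds hundreds.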


Sometimes getting that water out of the channel is exactly what you want, though. Another tried and true flood control technique is diversion canals. These are human-made channels used to divert flood waters to undeveloped areas where they won’t be as damaging. Often it’s not possible to widen an existing river because there’s already too much development or for environmental reasons. So instead, we create a separate channel to divert floodwater around developed areas and back into the natural waterway downstream. In most cases there will be some kind of structure at the head of the diversion channel to help control which route the water takes. Under normal conditions, water will flow through the natural river, but when a flood comes, most of that water will be diverted, reducing the flood risk to the developed areas.


But, it would be nice if all that water didn’t make it into the river in the first place. That’s only possible with the other major type of flood control infrastructure: dams. These are structures meant to impound or store large volumes of water, creating reservoirs. Dams meant for flood control are kept partially or completely empty so that, when a major flood event occurs, all that water can be stored and released slowly over time. The theory here isn’t too complicated. We can’t change the volume of water that comes from a flood, but with enough storage, we can change the time period over which it gets released into the river. Think of the reservoir as a bucket with a small hole: big sloshes of water into the bucket come out slowly over time. As long as the sloshes are far enough apart and the bucket is big enough, you almost never see significant flooding on the other side. But, not all dams are built specifically for flood control. Many reservoirs are intended to stay as full as possible so the water can be used for hydropower, supplying cities, or irrigation of crops. If a water supply reservoir happens to be empty at the time of a big flood, it will work just like a flood control reservoir, storing the water for later use. But, if the reservoir is already full, operators have to open the floodgates to let the water through. This can be frustrating for the residents downstream who may have thought they had protection from the dam.
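
The bucket idea takes only a few lines of code to sketch. This is a bare-bones illustration, not how real reservoir routing is done, and every number in it is invented.

```python
# Minimal "bucket" model of a flood-control reservoir: a spike of inflow is
# stored and released slowly through an outlet with limited capacity.

def route(inflows, outlet_capacity):
    """Pass an inflow series through the bucket; return the outflow series."""
    storage = 0.0
    outflows = []
    for q_in in inflows:
        storage += q_in
        q_out = min(storage, outlet_capacity)
        storage -= q_out
        outflows.append(q_out)
    return outflows

# A sudden flood: 500 units of water arriving in just two time steps
inflow = [0, 250, 250, 0, 0, 0, 0, 0]
outflow = route(inflow, outlet_capacity=80)

print("inflow peak: ", max(inflow))
print("outflow peak:", max(outflow))
print("same total volume:", sum(inflow) == sum(outflow))
```

The same volume of water passes through either way; the reservoir just caps the peak and stretches the release out over time.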


In many cases, a dam can serve multiple purposes at the same time. Different zones, called pools, are established for the different uses. One pool is kept full to be used for hydropower or water supply, and one is kept empty to be used for storage in the event of a flood. Finding the right balance point between how much storage to keep full versus empty is a complicated challenge that considers climate, weather, and the maximum flow that can be released without damaging property downstream. Some dams vary the size of these pools over the course of a year depending on the seasonality of flooding, and some even use risk indicators like the depth of the snowpack within the watershed to dynamically adjust the volume available to store a potential flood.


I’ve been using the term “flood control”, but the truth is that term is falling out of favor. Now if you ask an engineer or hydrologist, they’re more likely to talk about “flood risk management.” Our ability to quote-unquote “control” mother nature is tenuous at best, and the more we try, the more we realize this: even if expensive infrastructure is helpful in a lot of circumstances, at best it is an incomplete strategy to reduce the impacts of flooding over the long term. For one, flood control structures (especially levees) can protect some areas while exacerbating flooding in other places. For two, overbanking flows are actually beneficial in a lot of ways. Just like wildfires, flooding is a natural phenomenon that has positive effects on the floodplain like improving habitat, ecology, soils, and groundwater recharge. And for three, we are understanding more and more the true value of resiliency - that is, instead of reducing the probability of flooding, reducing its consequences. This is normally accomplished with strategic development like reserving (or converting) the floodplain for natural wetlands, parks, trails, and other purposes that aren’t as easily damaged by flooding. In fact, flood buyouts, where high-risk property is purchased and converted to green space, are often the most cost-effective way to reduce flood damages in the long term (even if not the most politically popular strategy).

It’s not likely we’ll ever have the ability to reduce the volume of rainfall during major storms, and in fact, many locations are already experiencing more extreme rainfall events than they ever have due to climate change. But, we will continue to develop strategies, both structural and non-, to reduce the risk to lives and property posed by flooding.



Why Do Engineers Invent Floods?

December 01, 2020 by Wesley Crump

Although it’s an entirely normal and natural process on earth, flooding represents a huge problem for people. Every year we collectively throw billions of dollars essentially into the trash because of flood damage to property, buildings, vehicles, and equipment. But, it’s not just private property that is affected. Nearly every part of the constructed environment is vulnerable in some way to heavy rainfall. Culverts, bridges, sewers, canals, dams, and drainage infrastructure: they all have to be designed to withstand at least some amount of flooding. But how do we decide how much is enough, and how do we estimate the magnitude of any particular storm event? Hey I’m Grady and this is Practical Engineering. Today, we’re talking about synthetic floods for designing infrastructure.


A big portion of the constructed environment has at least something to do with drainage. If it’s exposed to the outdoors, and almost all infrastructure is, it’s going to get wet or deal with some water. Designers and engineers have to be thoughtful about how and where that water will go during a storm. This might seem self-evident, but someone had to decide how long to make this storm drain inlet, how high above the river to build this bridge, how wide to make this spillway, and how big to build this culvert. And these types of decisions aren’t arbitrary, because infrastructure is expensive, and it’s always built on a budget. You can’t waste dollars installing pipes that are too big, bridges that are too high, or spillways too wide because then that money can’t be used to fund other projects or improvements. But how much is too much? After all, if you can imagine a flood that meets the capacity of a given structure, you can probably imagine a bigger one that exceeds it. On one hand you have the structure’s cost and on the other, you have its capacity, in other words, its ability to withstand flooding. Finding a balance point between the two is a really important job, and it usually has to do with statistics.


Weather is sporadic; it’s noisy data. Some days it rains, some days it doesn’t. Some years it rains nearly every day, some years not at all. But, behind all that noise, there is a hidden beauty to weather data, which is the relationship between a storm’s magnitude and its probability. Small storms happen all the time, multiple times a year. Big storms happen rarely, only every few years. Massive floods occur only once every tens or hundreds of years. Their probability of occurring in a given year is low. This is all relative of course (especially depending on location), but I hope you’re seeing why this matters. Because, if you know the probability that a particular storm will occur, you also know the average number of times it will happen over a given period of time.
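
That relationship is simple arithmetic. For a storm with an annual exceedance probability p, the expected number of occurrences in n years is just n times p, and the chance of seeing it at least once is 1 - (1 - p)^n.

```python
# Turning a storm's annual probability into expectations over a span of years.

def expected_count(p_annual: float, years: int) -> float:
    """Average number of occurrences over the given number of years."""
    return p_annual * years

def chance_of_at_least_one(p_annual: float, years: int) -> float:
    """Probability of at least one occurrence over the given number of years."""
    return 1 - (1 - p_annual) ** years

# A storm with a 1% annual probability, over a 30-year span:
print(expected_count(0.01, 30))                    # 0.3 occurrences on average
print(round(chance_of_at_least_one(0.01, 30), 2))  # about 1 chance in 4
```

That second number surprises a lot of people: over a typical 30-year mortgage, a so-called rare 1%-per-year storm has roughly a one-in-four chance of showing up.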


And why does that matter? Let’s use a simple case as an example. Say you have a roadway crossing a stream and you want to install a culvert. By the way, if you want to learn a lot more about culverts, check out my blog post on that topic after this! Say you choose a tiny pipe for your culvert to save some money. That’s fiscal responsibility, right? But every time even a small amount of rain comes along, the culvert’s capacity will be exceeded and the roadway will overtop and wash out. Your cheap pipe actually ends up being pretty expensive when you have to replace it every year. On the other hand, you can go for broke on a massive pipe that never gets full, even during huge rainstorms. You’ll never have to replace it, but you wasted money by building a much bigger structure than was necessary. That might not seem like a big deal for a single culvert, but if it’s your policy to do it every time you have to cross a stream, you’ll run out of money in a hurry. We can’t just overbuild all our infrastructure to avoid any exposure to flood risk. Usually the most cost-effective solution is somewhere in the middle where you’re willing to accept some risk of being overwhelmed, maybe on average once every 10 years or once every 50 years, to save the cost of overbuilding every single piece of drainage infrastructure.
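
You can sketch that balance point by comparing expected annual costs. Every dollar figure and return period below is invented; only the shape of the tradeoff is the point.

```python
# Expected annual cost of a culvert = what you pay to build it (annualized)
# plus the chance of a washout times the cost of repairing one.

def expected_annual_cost(return_period_yr, annual_cost, washout_cost):
    # Chance of being overwhelmed in any given year is 1 / return period
    return annual_cost + washout_cost / return_period_yr

# (design return period in years, annualized construction cost, washout repair cost)
options = {
    "tiny pipe":   (2,   1_000, 20_000),   # overwhelmed every couple of years
    "modest pipe": (25,  3_000, 20_000),
    "huge pipe":   (500, 12_000, 20_000),  # practically never overwhelmed
}

for name, (T, cost, damage) in options.items():
    print(f"{name:11s} -> ${expected_annual_cost(T, cost, damage):,.0f} per year")
```

The cheap pipe ends up costing the most over the long run, the huge pipe wastes money up front, and the middle option comes out ahead.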


This works the same way as the floodplain - the area along rivers and coasts most likely to be impacted by flooding. In the U.S. at least, we arbitrarily decided to use 1% as the dividing line between at-risk for flooding and not. If the land has a 1% probability or greater of being inundated by a flood in a given year, it’s inside the “floodplain,” and the storm that would completely flood this floodplain is colloquially called the 100-year flood. That’s a confusing name, and I made a blog post on that topic quite a while back so I won’t rehash it here. This binary approach of drawing a line in the sand is also a little misleading because it implies that one side of the line is safe and the other isn’t, when the reality is that there’s a continuum of flood risk. Those considerations aside, the concept of the floodplain is still really valuable. Knowing our vulnerability to flooding helps us make good decisions about how to manage or mitigate it. But, actually figuring out that vulnerability is pretty challenging.


The truth is that the only way we have to estimate how vulnerable different areas are to flooding is to look at how they’ve flooded in the past. In the U.S., we do have a network of stream gages dutifully recording the level of creeks and rivers, and some of them have been doing so for over a hundred years now. These instruments record the magnitude of floods through history so we can try to understand the relationship between the size of a flood and its frequency of recurrence. But, these stream gages are relatively expensive, time-consuming to maintain, and their data is only applicable to the watershed in which they are installed, which means not every location where you might want to build something has a historical flood record to review. However, there is a type of instrument that does exist practically everywhere with long-duration historical records: a rain gauge.
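
As a sketch of how a gage record turns into recurrence estimates, here’s the classic Weibull plotting position: rank the annual peak flows from largest to smallest, and the flow at rank r in n years of record gets an estimated return period of (n + 1) / r years. The flow values below are invented for illustration.

```python
# Estimating recurrence intervals from a (made-up) record of annual peak flows.

annual_peaks = [120, 450, 80, 300, 95, 700, 150, 220, 60, 180]  # m^3/s, one per year

n = len(annual_peaks)
ranked = sorted(annual_peaks, reverse=True)

for rank, flow in enumerate(ranked, start=1):
    return_period = (n + 1) / rank  # Weibull plotting position, in years
    print(f"{flow:4d} m^3/s ~ roughly a {return_period:4.1f}-year flood")
```

Notice the limitation: with only ten years of record, the biggest observed flood gets pegged at about an 11-year event. Estimating the 100-year flood means fitting a probability distribution and extrapolating, which is where the real statistics come in.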


Rain gauges are simple and cheap, and luckily, in the U.S., our government has seen fit to collect huge volumes of rainfall data, synthesize it, and provide the information back to us citizens for our practical application or just our curiosity. The latest version of this is called Atlas 14, and you can use the online web map to get statistical relationships between rainfall volume, duration, and probability for nearly everywhere in the U.S. But, estimating the magnitude of a flood doesn’t stop with knowing how much rain is falling from the sky. It may not surprise you to know that the 100-year storm doesn’t really exist. It’s a synthetic storm event invented by engineers and hydrologists. We fabricate it by taking that statistical amount of rain for a given watershed and using models to estimate how much flooding will result and where that flooding will occur within the landscape. These simulations allow us to understand flood risk so we know where not to build our buildings, how big to make our culverts, how tall to make our bridges, and how wide to make our drainage channels.
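
The simplest of these rainfall-to-runoff models, used for small sites rather than full floodplain studies, is the rational method: Q = C × i × A, where C is a runoff coefficient, i is the design rainfall intensity (the kind of number you’d pull from Atlas 14), and A is the drainage area. The values below are assumptions for illustration.

```python
# Rational method peak flow estimate. In U.S. customary units the formula is
# tidy because 1 acre-inch/hour is almost exactly 1 cubic foot per second.

def rational_peak_flow(c: float, intensity_in_per_hr: float, area_acres: float) -> float:
    """Peak runoff in cubic feet per second."""
    return c * intensity_in_per_hr * area_acres

# A 50-acre site in a 4 in/hr design storm, before and after development
pasture = rational_peak_flow(0.25, 4.0, 50)    # most rain infiltrates
developed = rational_peak_flow(0.85, 4.0, 50)  # rooftops and pavement
print(f"pasture: {pasture:.0f} cfs, developed: {developed:.0f} cfs")
```

Real floodplain studies use far more sophisticated rainfall-runoff and hydraulic models, but the idea is the same: a statistical storm goes in, and an estimate of flow and inundation comes out.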


But, flooding doesn’t just cost money. It also affects public safety. In fact, some of the worst floods in history, like the Johnstown Flood in Pennsylvania, actually occurred because a storm overwhelmed a dam, causing it to fail and release a sudden wave of water downstream. In that case, over 2,000 people lost their lives. With critical infrastructure like this, the calculus changes because it’s not just dollars on the other side of the balance, it’s also human lives. We are much less willing to accept the risk of overwhelming a dam if there are people who could be affected downstream. So how do we know how big spillways should be? Turns out there’s another type of synthetic flood in the toolbox: the probable maximum precipitation. This is the most extreme rainstorm that could ever occur given our knowledge of meteorology and atmospheric science. If all the factors perfectly aligned to carry and drop the maximum amount of rainfall in the shortest period of time, could our infrastructure withstand it? In the case of dams, the answer is usually yes. That’s because they’re required to. We’ve spent lots of time, money, and effort researching storms to estimate this probable maximum precipitation across the U.S. for this exact reason: so we can build spillways big enough to safely discharge it without being overwhelmed.


The field of engineering hydrology is huge. Many engineers focus their entire careers on this one topic that we’ve just dipped our toes into. Flooding is one of the biggest challenges of building and developing the modern world. The ways we deal with it are constantly evolving, hopefully in a direction that puts a greater emphasis on natural watershed processes and ecosystem services. But no matter how we deal with it, the first step will always be to understand our vulnerability to it. I hope I gave you a little peek into the world of water resources engineering and how we make good decisions about infrastructure’s ability to handle flooding.


How Do Cities Manage Stormwater?

November 03, 2020 by Wesley Crump

Cities, those dense congregations of people and buildings, have made possible economies and lifestyles our early ancestors could never have imagined. Whether you thrive in or despise the concrete jungle, there’s no denying its benefits. Putting all the people, houses, jobs, stores, offices, and diversions in one place gives us humans opportunities that wouldn’t be possible if we all lived agrarian lifestyles spread out across the countryside. But, there are some negative consequences that come from cramming so much into such a small area. At no time is this more clear than when it rains. Managing the flow of runoff through a city is an immensely complex challenge that affects us in so many ways, from public safety to property rights, from the environment to the health and welfare of citizens. Hey, I’m Grady, and this is Practical Engineering. Today, we’re talking about urban stormwater management.


The water cycle is one of the most basic science lessons we learn. So basic, in fact, that it’s easy to forget how relevant and important it is to our lives. Take a look out your window when it’s raining, even when it’s raining hard, and it doesn’t seem that significant. Some of the rain soaks into the ground, some gets taken up by plants, some gets caught in puddles, and some runs off downhill, usually into the street. One of the biggest challenges in a city is the proportions of all these different paths the water can take. All those streets, sidewalks, buildings, and parking lots cover the ground with impervious surfaces, which means that instead of infiltrating, water runs off toward creeks and rivers, swelling them faster and higher and filling them with more pollution. One of the biggest environmental impacts of building anything is its effect on how water moves above and below the ground during storms. Multiply that across the scale of a city and you can see how remarkably we modify our landscape. Instead of acting like a sponge to absorb rainwater as it falls, urban watersheds act like funnels, gathering and concentrating rainwater runoff. I want to walk you through some of the infrastructure cities use to manage this massive challenge and a few new ideas in stormwater management that are slowly taking hold in urban areas.
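
A back-of-envelope calculation shows the scale of the change. The runoff fractions below are rough assumptions for illustration, not measured values.

```python
# Runoff volume from one hectare in a 50 mm storm, for different land covers.

def runoff_volume_m3(depth_mm: float, area_m2: float, runoff_fraction: float) -> float:
    """Volume of runoff (m^3) given storm depth, area, and the fraction that runs off."""
    return depth_mm / 1000 * area_m2 * runoff_fraction

STORM_DEPTH_MM = 50
AREA_M2 = 10_000  # one hectare

surfaces = {
    "forest":      0.10,  # most rain soaks in
    "suburban":    0.40,
    "parking lot": 0.95,  # almost everything runs off
}

for name, fraction in surfaces.items():
    print(f"{name:11s}: {runoff_volume_m3(STORM_DEPTH_MM, AREA_M2, fraction):5.0f} m^3")
```

Pave over a forest and, by these rough numbers, you send nearly ten times as much water rushing toward the nearest creek from the very same storm.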


Like most of the biggest challenges of building and maintaining a civilization, the negative impacts from adding impervious cover don’t befall the property owner doing the adding, but rather the people downstream. Just like pollution dumped into a river gets carried away to the next guy downstream, it’s easy to turn bad drainage decisions into someone else’s problem. That’s why most large cities have rules about how to manage runoff and flooding when new buildings or neighborhoods get built. Drainage reviews are just a normal part of the process of obtaining a building permit these days. If you live in a major city, just do a search for your local drainage manual to see the kinds of things that are required. Increased runoff has been a problem since people started living in cities in the first place, and the first way we handled it was simply to get the water out and away as quickly as possible. That’s because runoff creates flooding, and flooding causes billions of dollars of property damage and claims many lives each year. This solution is in the name we still use for how cities manage storms: “drainage.” When it rains or when it pours, we try to give that runoff somewhere to go.


Most cities are organized so the streets serve as the first path of flow for rainfall. Individual lots are graded with a slope toward the street so that water flows away from buildings where it would otherwise cause problems. The standard city street has a crown in the center with gutters on either side for water to flow. This keeps the road mainly dry and safe for vehicle travel while providing a channel to convey runoff. But the streets aren’t the end of the line. Eventually, the road will reach a natural low point and start back uphill or will have collected so much runoff that it can’t hold it all in the gutter.


At this point, the water needs a dedicated system to carry it away. In the past, it was common to simply put all the runoff from the streets directly into the sewage system. It’s a well-developed network of pipes flowing by gravity out of the city… why not use it for stormwater too? Well, actually there’s a really good reason not to do that. At the end of each sanitary sewer system is a wastewater treatment plant that was almost certainly not designed to process a massive influx of combined sewage and stormwater runoff at the whims of mother nature. In the worst cases, these plants have to release untreated wastewater directly into waterways when there is more than can be stored or processed. That’s why most cities now use municipal separate storm sewer systems, usually abbreviated as MS4s. These are networks of ditches, curbs, gutters, sewer pipes, and outfalls solely dedicated to moving runoff from everywhere in the city to the natural waterways that eventually carry it away. These inlets aren’t just places for clowns to hang out; they usually represent a direct path between the street and the nearest creek or river. Just to be clear, there’s not usually any type of treatment happening along the way. These sewers are not for waste. Whatever you put into the storm sewer system goes directly into a waterway, so please don’t dump stuff in there.


It’s easy to see why cities try so hard to get stormwater out as fast as possible if you look at the floodplain. This is just the area most likely to be inundated during a major flood. Land is one of the most valuable things within a city, but its value goes way down if it is exposed to flood risk. No one wants to build something on land that could be flooded. That being said, humans are notoriously bad at assessing risk, and no matter where you look, you’re likely to find development near creeks and rivers. Getting the water out quickly reduces the depth of flooding and thus shrinks the floodplain. That’s a big reason why you see natural waterways in cities enlarged, straightened, and lined with concrete. You can see that, for the same amount of flow, a channel with lots of vegetation moves water more slowly and thus at a greater depth. A channel with smooth sides gets the water moving faster and thus reduces the depth of flooding. But channelization isn’t all it’s cracked up to be. It’s ugly, for one. No one wants a big, dirty concrete channel as a part of their surroundings. But channelization also worsens flooding downstream for the next guy and degrades the habitat of the original waterway. It didn’t take long for cities to realize you can’t just keep widening and lining channels to keep up with the increased runoff from more and more development.
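The effect of channel roughness on flow depth can be sketched with Manning's equation, the standard open-channel flow formula. This toy example solves for the depth needed to pass the same flow through a rough vegetated channel versus a smooth concrete one; the channel dimensions and roughness coefficients are typical textbook values, not from a real project.

```python
# Manning's equation (US units): Q = (1.49/n) * A * R^(2/3) * sqrt(S)
# for a wide rectangular channel. Roughness n ~ 0.035 for a vegetated
# channel and ~ 0.013 for finished concrete are common handbook values.

def flow_cfs(depth_ft, width_ft, n, slope):
    area = depth_ft * width_ft                  # flow area
    radius = area / (width_ft + 2 * depth_ft)   # hydraulic radius
    return (1.49 / n) * area * radius ** (2 / 3) * slope ** 0.5

def depth_for_flow(q_cfs, width_ft, n, slope):
    """Bisect on depth until the channel passes the target flow."""
    lo, hi = 0.01, 50.0
    for _ in range(60):
        mid = (lo + hi) / 2
        if flow_cfs(mid, width_ft, n, slope) < q_cfs:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

q, width, slope = 500.0, 30.0, 0.001
d_vegetated = depth_for_flow(q, width, 0.035, slope)
d_concrete = depth_for_flow(q, width, 0.013, slope)
print(f"Vegetated channel depth: {d_vegetated:.1f} ft")
print(f"Concrete channel depth:  {d_concrete:.1f} ft")
```

The smooth concrete channel carries the same 500 cfs at roughly half the depth, which is the whole argument for channelization (and, since the water moves faster, also the source of the downstream problems).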


That’s why most cities now require developers to take responsibility for their own increase in runoff. By and large, that means on-site storage for stormwater. Retention and detention ponds act like mini-sponges, absorbing all the rain that rushes off the buildings, streets, and parking lots and releasing it slowly back into waterways. This shaves off the peak of the runoff, with the goal of reducing it back down to, or below, what it was before all those buildings and parking lots got built. They also help reduce pollution by slowing down the water so suspended particles can settle out.
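That peak-shaving behavior is easy to see in a toy routing calculation: the inflow hydrograph spikes, the outlet can only release so much, and the pond stores the difference. All the numbers here are made up for illustration.

```python
# Toy detention-pond routing on an hourly timestep. Inflow spikes to 40 cfs,
# but the outlet structure is assumed to pass at most 10 cfs, so the pond
# stores the excess and releases it gradually. Units are cfs and cfs-hours.

inflow = [0, 5, 20, 40, 30, 15, 5, 0, 0, 0, 0, 0]  # hourly hydrograph, cfs
max_release = 10.0                                  # outlet capacity, cfs

storage = 0.0
peak_out = 0.0
for q_in in inflow:
    q_out = min(max_release, q_in + storage)  # release what the outlet allows
    storage += q_in - q_out                   # the rest goes into storage
    peak_out = max(peak_out, q_out)

print(f"Peak inflow:  {max(inflow):.0f} cfs")
print(f"Peak outflow: {peak_out:.0f} cfs")
```

The downstream waterway sees a long, gentle 10 cfs release instead of a 40 cfs spike, which is exactly the "mini-sponge" effect described above.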


Onsite storage is a pretty effective solution, and one you’ll see everywhere if you’re paying attention. But it still treats stormwater as a waste product, something to be gotten rid of. The reality is that rain is a resource, and natural watersheds do a lot more than just get rid of it. They serve as habitat for wildlife, they naturally clean runoff with vegetation, they divert rain into the ground to recharge aquifers, and they reduce flooding by slowing down the water at the source rather than letting it quickly wash away and concentrate. That’s why many cities are moving toward ways to replicate natural watershed functions within developed areas. In the U.S., this is called low-impact development and it includes strategies like rain gardens, vegetated rooftops, rain barrels, and other ways to bring more harmony between the built environment and its original hydrologic and ecological functions. It can also include better management of the floodplain by using it for purposes less vulnerable to flooding like parks and trails. One low-impact strategy is permeable pavement, and I have a post just on that topic if you want to check it out after this one.


One thing I have to mention when talking about flooding is vehicle crossings. Any location where a waterway and a road cross paths, whether it’s a bridge, a culvert, or a low water crossing, there’s always a chance of flooding getting so bad that it overtops the road. If you ever see water over the top of a roadway, just turn around. Half of all flood-related deaths happen when someone tries to drive a car or truck through water over a road. If you can’t see the road you have no idea how deep the water is, and even if you can, it only takes a small amount of swift water to push a vehicle down into a river or creek. Water is heavy. Even when it’s flowing slowly, floodwaters can impart a massive force on a vehicle. Even if it didn’t, most cars will float once the water reaches the floorboard anyway. Some cities have warning systems to help block roads when they’re overtopped by floods, but it’s not something you should count on. It just isn’t worth the risk. Find another way. As they say: Turn around, don’t drown.
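The physics behind "turn around, don't drown" can be roughed out with two simple formulas: hydrodynamic drag pushing the car sideways, and buoyancy lifting it. Everything about the vehicle here (dimensions, drag coefficient, weight) is an illustrative assumption, not real crash data.

```python
# Rough sketch of the forces on a car in moving floodwater.
# Drag: F = 1/2 * rho * Cd * A * v^2.  Buoyancy: F = rho * g * V_displaced.
# Vehicle dimensions, Cd, and weight are made-up illustrative values.

RHO = 1000.0  # kg/m^3, density of water
G = 9.81      # m/s^2

def drag_force_n(velocity_ms, frontal_area_m2, cd=1.1):
    """Sideways push from flowing water against the submerged car body."""
    return 0.5 * RHO * cd * frontal_area_m2 * velocity_ms ** 2

def buoyant_force_n(submerged_volume_m3):
    """Upward force from the water the car body displaces."""
    return RHO * G * submerged_volume_m3

# Water 0.5 m deep moving at a brisk walking pace (2 m/s) against a car
# 1.8 m wide and 4.5 m long:
push = drag_force_n(2.0, 1.8 * 0.5)
lift = buoyant_force_n(1.8 * 4.5 * 0.5)  # body acts like a hull at floorboard depth

print(f"Sideways push: {push:,.0f} N")
print(f"Buoyant lift:  {lift:,.0f} N  (a 1,500 kg car weighs ~14,700 N)")
```

Even this slow, knee-deep water shoves the car with roughly 2,000 newtons, and once the body starts displacing water like a hull, the buoyant force can exceed the car's weight entirely, so the tires lose the grip that was resisting the push.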


Just like cities represent a colossal alteration of the landscape and thus the natural water cycle, we’re also going through a colossal shift in how we think about rainfall and stormwater and how we value the processes of natural watersheds. Look carefully as you travel through your city and you’ll notice all the different pieces and parts of infrastructure that help manage water during storm events. You’ll see plenty of ways to get water out and away from buildings and streets, but you’ll hopefully also notice elements of low-impact development - ways of harnessing and benefitting from stormwater on-site, treating it like the resource it truly is.


November 03, 2020 /Wesley Crump

How Does Permeable Pavement Work?

October 06, 2020 by Wesley Crump

As much as I love infrastructure and the urban environment, it definitely has its downsides. Cities represent a remarkable transformation of the landscape from natural to human-made. We change almost everything: cut down trees, level the ground, and slice and dice the land into individual plots. But one of the most significant changes to the landscape that comes with urbanization is impervious cover. I’m talking about anything that prevents rain from soaking into the subsurface: buildings, sidewalks, driveways, and the biggest culprits - streets and parking lots. Impervious cover is a big issue. When it rains, that water has to go somewhere. If it can’t soak into the ground, it washes off into creeks and rivers. That means bigger floods and more pollution in waterways. It also means less water goes to recharge groundwater resources. When you pave paradise to put up a parking lot, you cause a pretty significant disruption to some really important natural processes in a watershed. But, not all cover has to be impervious. Today, we’re talking about permeable pavement.

Management of stormwater in urban areas is a vast field of study. Pretty much as soon as humanity started building stuff, we started building ways to keep that stuff dry. Traditional engineering had a single goal in mind - get stormwater off of the streets and property and into a creek, ditch, or river as quickly as possible. It’s not hard to see the problem with this strategy. Every new road and building means a higher volume of runoff in the waterways during a storm event. As cities grew, flooding problems became more severe and more frequent, streams were eroded, and receiving waterways were polluted. So, over time, municipalities adopted rules to try and curb these problems, focusing primarily on flooding. Now, in nearly every large city (at least in the U.S.), land developers are required in some way or another to make sure their projects won’t worsen downstream flooding. The traditional solution to this is control of flood peaks through onsite detention: essentially having a small pond to store runoff during a storm, allowing gradual release to mitigate flooding.

Detention and retention ponds have a lot of complexity and deserve their own separate video. They definitely help reduce flooding, but they don’t really replace the other functions of the natural landscape: the filtration and reduction of runoff volume that comes from water infiltrating into the ground. Also, these basins are usually pretty ugly and kind of gross, since they concentrate polluted runoff in one mucky area, and beauty is already in short supply in many urban areas. For all these reasons, cities are encouraging (and sometimes requiring) developers to take even greater responsibility for impacts on the natural landscape through a process called Low Impact Development, or just LID. LID practices are ways to integrate stormwater management as a part of land development and mimic natural hydrologic processes. There is a considerable variety of LID strategies that help manage urban stormwater, reduce erosion, minimize pollution, and help with flooding. These are things like rain gardens, green roofs, and vegetated filter strips. If you live in a big city, there’s a good chance your municipality has a manual describing the strategies that work best for your area. One of my favorites of these addresses the problem at its root: just make the cover less impervious.

Pavement serves a vital role in a city. A quick glance at the condition of dirt roads after a good rain is all you need to understand this. Pavement equals accessibility. In most places, the soil making up the ground isn’t a stable, durable surface for people to walk, roll, scoot, or drive. Particularly when the earth gets wet, it loses strength and turns to muck. You can see why we normally prefer pavement to be impermeable to water. Pavements protect against erosion and weakening of the soil. A poorly designed pervious pavement works about as well as if it wasn’t paved at all, since it doesn’t provide any protection against water. If you watched my previous video on potholes, you know the cruel fate of pavement that inadvertently lets water through. So, how is it possible to achieve the good parts without the bad, to allow water to infiltrate into the subsurface through a pavement without softening and weakening its foundation?

Luckily we have a pretty good example to help understand how this works. Some might even call it the OG permeable pavement. I’m talking about steel grating. You’ve almost certainly seen grating used on roads, sidewalks, or other surfaces to allow water in while keeping most everything else out. We can do precisely the same thing with traditional pavement as well. Concrete is a mix of cement, rocks, sand, and water. If you leave out the sand, you get something really cool: a material that behaves almost exactly like regular concrete, but that is full of voids and holes that can let water pass through.  

This is a really cool effect that is almost an optical illusion. Our brains are so used to seeing water run off a paved surface, they almost can’t make sense of it when it flows straight through. This has led to quite a few viral clips of water disappearing into parking lots or roadways. And this isn’t just possible with concrete. Asphalt can be made similarly porous, along with different kinds of pavers. The permeability of the pavement isn’t the end of the story, though. Going back to our permeable pavement proxy, steel grates don’t just sit directly on the ground. Look beneath one and you’ll see that the water passing through has to have somewhere to go. Soil usually can’t absorb 100 percent of the water when it rains. If it could, we’d never have any runoff and hardly ever any floods. That means, even if we can get rain to percolate through pavement, it needs somewhere to go after that.

The pavement itself gets all the glory, but the real workhorse of a permeable pavement system is the reservoir below. This is generally made from a layer of stones of uniform size to create voids that temporarily store water coming through the surface pavement. The design of the stone reservoir is just as crucial as the pavement above: its size depends on how much water must be stored and how quickly that water can infiltrate into the ground. Both of these require careful engineering. For certain types of impermeable soils, like clay, it may not be feasible to try and get all that water to infiltrate, so some permeable pavements work like detention ponds, where the water is stored temporarily and released gradually over time through drains. Whether it soaks into the ground or is discharged into a waterway little by little, the permeable pavement has made a considerable improvement over the alternative of having rainwater wash right off the surface.
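The two design questions above (how much water to store, and how fast it drains away) can be roughed out with simple arithmetic. This sketch assumes uniform crushed stone with about 40% void space and handbook-style soil infiltration rates; real designs come from site-specific testing.

```python
# Stone reservoir sizing sketch. Assumptions: ~40% voids in uniform crushed
# stone; infiltration rates are rough handbook figures, not site measurements.

def reservoir_capacity_mm(stone_depth_mm, porosity=0.40):
    """Depth of rain the reservoir can hold before it overflows."""
    return stone_depth_mm * porosity

def drawdown_hours(stored_mm, soil_infiltration_mm_hr):
    """How long a full reservoir takes to soak into the subgrade."""
    return stored_mm / soil_infiltration_mm_hr

depth = 300.0  # mm of stone under the pavement
capacity = reservoir_capacity_mm(depth)
print(f"Storage capacity: {capacity:.0f} mm of rain")
print(f"Drawdown, sandy loam (~13 mm/hr): {drawdown_hours(capacity, 13.0):.0f} h")
print(f"Drawdown, clay (~1 mm/hr):        {drawdown_hours(capacity, 1.0):.0f} h")
```

On sandy soil the reservoir empties overnight and is ready for the next storm; on clay it would sit full for days, which is exactly why those installations get underdrains and behave more like detention ponds.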

This is a really helpful strategy to address stormwater in urban areas, but it’s not without challenges. Most importantly, permeable pavement isn’t that strong. If you make concrete or asphalt with a bunch of holes and voids, it makes sense that it probably can’t hold up to the loads that traditional mixes can. That’s why we really don’t use these systems in areas with heavy traffic. Permeable pavements are mainly relegated to parking lots and road shoulders. But we also need to keep them away from buildings where you don’t really want a lot of water soaking into the foundation soils. And we can’t use them on slopes either, because the stored water would just flow along the slope through the reservoir and eventually back out, rather than staying in storage. The pavement itself can be clogged by dirt and leaves over time, so it has to be swept or washed regularly to remain permeable. Finally, although they help snow and ice melt faster naturally, using porous pavements in colder climates requires special consideration to avoid damage from freezing water and deicing salts. Even given its simplicity and use over the past few decades, permeable pavement is still a fairly new and innovative way to manage urban stormwater. There’s still a lot to learn about how to implement it effectively and efficiently. It’s a great example of using engineering to try and bring more harmony between constructed and natural environments.

October 06, 2020 /Wesley Crump

How Do Potholes Work?

September 01, 2020 by Wesley Crump

If you consider it, having paved roadways is somewhat of a luxury. Streets have always been around, but they haven’t always been safe, comfortable, or able to accommodate the enormous number and weight of vehicles that use our present system of roadways every day. Whether or not you love how much roads dominate the landscape, you have to marvel at the fact that, in most parts of the modern world, anyone can get in a bus, car, bike, truck, motorcycle, or scooter, and go almost anywhere else with relative ease and comfort. In fact, roads make travel so convenient that not having them - or having them be in poor condition - is a significant source of frustration. There are definitely times when driving does not feel that luxurious, and one of them is something we’ve all experienced once or twice. Hey, I’m Grady, and this is Practical Engineering. Today, we’re talking about potholes in paved roadways.

I remember the excitement of getting my first car as a teenager and finally being able to drive. Sad to say, that was probably the most joy that driving a vehicle will ever give me. Now, it’s kind of a chore. And I hope I’m not out of line by saying this, but I think for most people, driving is a little dull. It’s the thing we do in between where we are and where we’re trying to be. I don’t know about you, but I don’t wake up in the morning excited to jump in the car for my morning commute. Driving is something that most of us take for granted. But, the only reason we’re able to do that - to regard vehicle travel so indifferently - is because roadways are so well designed and constructed. 

There are lots of ways to build a road. From yellow bricks to rainbows to simple dirt and water, the combinations of materials and construction techniques are practically endless. And yet, across the world, there’s really one design that makes up the vast majority of our roadways. It consists of one or more layers of angular rock called a base course and then a layer of asphalt concrete (also called blacktop or tarmac). It turns out that this design strikes the perfect balance between being cost-effective while creating a smooth and durable road surface. But, asphalt roadways aren’t invincible, and they do suffer from a few common problems, one of those being potholes.

The formation of a pothole happens in steps. And the first of those steps is the deterioration of the surface pavement. Asphalt stands up to a lot of abuse. Exposure to the constant barrage of traffic in addition to harsh sunlight, rain, snow, sleet, and freezing weather will eventually wear down any material, no matter how strong. When that happens to asphalt, the first sign is cracking. They might seem innocuous, but cracks are the Achilles heel of pavement systems. Why? Because they let in water. And not just let it in, but let it come back out as well. A hole is a lack of substance or material. It’s the only thing that gets bigger the more you take away. If you started without a hole and now you have one, that material had to go somewhere. In the case of a pothole, the material is the soil below the road (called the subgrade), and where it goes has everything to do with water.

As water finds its way into cracks and below the pavement, it can get trapped above the subgrade. Eventually, these soils get waterlogged, softening and weakening, and then the traffic shows up. Cars and trucks are heavy, and they pass over the road at rapid speeds. Because of this, traffic is just a generally destructive environment. It’s a lot for any road to stand up to, let alone one that’s waterlogged and weakened. Asphalt is called a “flexible pavement” because it doesn’t distribute these loads across a large area like something more rigid would. So, every time a tire hits this soft area, it pushes some of the water back out of the pavement. That water carries particles of soil with it. 

This is a slow process at first, but every little bit of subgrade eroded from beneath the pavement means less support, and less support means more free volume below the pavement for water to be pumped in and out by traffic. This, in turn, creates more erosion in a positive feedback loop. Eventually, the pavement loses enough support that it fails, breaking off and crumbling, and you’ve got a pothole.

Of course, this whole process is made even worse in climates with freezing weather. Water expands when it freezes, and it does so with tremendous force. Thin layers of water between pavement and base freeze and grow into formations called lenses. When those lenses thaw out, all the ice that was supporting the pavement goes away, creating voids. In addition, the lower layers of soil stay frozen, trapping that meltwater between the pavement and the subgrade and accelerating the erosion. Potholes exist everywhere you have asphalt concrete roadways, but they’re worse in areas with cold climates and much worse in the spring as the ground begins to thaw.

They’re annoying, yes, but they’re not just that. Potholes cause billions of dollars of damage to tires, shocks, and wheels of vehicles. Even worse, they’re dangerous. Cars swerve to miss them, sometimes at high speeds, and if a bike, motorcycle, or scooter hits one, it can be bad news. So, roadway owners spend a lot of time and money fixing them. There is a large variety of pothole fixes depending on the materials, cost, and climate conditions. But, they all mostly do the same thing: replace the soil and pavement that was lost and (hopefully) seal the area off from further intrusion of water. That second part is obviously critical but much harder to do. A pothole repair is a bandage after all, and it doesn’t always create a perfect connection to the rest of the roadway. This is why, even after they’re repaired, potholes seem to recur in the same location over and over again.

After understanding how these annoying and sometimes damaging defects occur, the next logical question is, how do we prevent them in the first place? Obviously, we could build our roadways out of more robust and more durable materials. Many highways are paved with concrete for this exact reason. But, roads are unusual in that even a tiny change in design has a significant overall impact on cost. Choosing a pavement that’s even just a centimeter thicker could mean millions of tons of additional asphalt because that centimeter gets multiplied by a vast area. So, we balance the cost of the original pavement with the expense of maintaining it over its lifetime. In the case of asphalt pavement, that maintenance primarily means sealing cracks to prevent intrusion of water. If you can do that and do it regularly, you can extend the life of asphalt pavement for many years.
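The "centimeter multiplied by a vast area" point is worth putting numbers on. This back-of-the-envelope sketch uses a typical compacted density for asphalt concrete and a very rough placed cost; both figures are assumptions for illustration.

```python
# What one extra centimeter of asphalt costs over a long stretch of highway.
# Density (~2.4 t/m^3 compacted) is typical; the price per tonne is a very
# rough illustrative assumption.

DENSITY_T_PER_M3 = 2.4
PRICE_PER_TONNE = 100.0

def extra_asphalt(length_km, width_m, extra_thickness_cm):
    """Tonnes of additional mix and rough cost for a thicker pavement."""
    volume_m3 = length_km * 1000 * width_m * extra_thickness_cm / 100
    tonnes = volume_m3 * DENSITY_T_PER_M3
    return tonnes, tonnes * PRICE_PER_TONNE

tonnes, dollars = extra_asphalt(length_km=100, width_m=12, extra_thickness_cm=1)
print(f"{tonnes:,.0f} extra tonnes, roughly ${dollars:,.0f}")
```

One centimeter across 100 km of two-lane road is tens of thousands of tonnes of mix and millions of dollars, so the trade-off against a lifetime of crack sealing is a genuine engineering-economics decision, not a rounding error.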

Since roadways are mostly public infrastructure, their condition (at least to a certain extent) reflects the importance we all place on vehicle travel. In the broadest and most general sense, we choose potholes by choosing how much tax we pay, how much of those taxes we’re willing to budget toward streets, and how large and how many vehicles we drive over them. Pavement is one of the highest value assets owned by a City, County, or DOT. It’s essential, and it’s expensive, which means there’s an entire industry surrounding how to design, build, and maintain roadways as safely and cost-effectively as possible. Politicians, government officials, engineers, and contractors drive on the same roads as everyone else, so they all have a vested interest in keeping those roads as pothole-free as possible so that we all can enjoy the luxury of driving on paved streets in safety and comfort. Thank you for reading, and let me know what you think!

September 01, 2020 /Wesley Crump

The World’s Most Recycled Material

August 04, 2020 by Wesley Crump

Of all the ubiquitous things in our environment, roads are probably one of the least noticed. They’re pretty hard to get away from, and yet, most of us don’t give much consideration to how they’re made. Turns out, there are a lot of ways to make a road. Not to get too philosophical, but there’s really no right answer to what a road even is. How much improvement of the ground is needed before it stops being just the ground and becomes a road? Depending on the capabilities of your vehicle, sometimes not much. Over the years, the demands on roadways have increased as more people and goods are on the move. So, the designs have evolved alongside. The Romans were famous for their stone-paved roads, many of which still exist a couple of thousand years later. In modern times, the design of pavement has converged significantly. The vast majority of roadways worldwide, if they’re paved at all, are paved with one material. Today, we’re talking about asphalt concrete for roadways.

When you hear the word concrete, asphalt isn’t the first thing you think of. In fact, in some ways, it’s the opposite of what we traditionally know as concrete. But we engineers can be pedantic, especially when our designs can affect public safety. When the cost of making a mistake is severe, it’s super important that communication is crystal clear. The strict definition of concrete is essentially rocks plus a binder material. For the hard grey concrete we’re all familiar with, that binder is portland cement. And in fact, we do use cement concrete as pavement for roadways. It is really hard and really durable, akin to those Roman roads I mentioned in the intro. You’ll mostly see concrete used for pavement on highways with lots of truck traffic because it can withstand these forces much better, and it lasts a lot longer than other types of pavements.

But, concrete isn’t the ultimate solution for roadway surfaces. It’s harder to repair because it takes a long time to cure, extending the duration of road and lane closures. It’s not as grippy, so it has to be grooved for traction with tires. It’s not flexible, so it cracks if the ground settles or shifts. And most importantly, it’s expensive. Even when you compare lifecycle costs, which include the fact that concrete lasts longer and requires less maintenance over time, it often still comes out less cost-effective. So, luckily other materials can bind rocks together, the most prevalent by far of those being asphalt.

Asphalt concrete just ticks so many of the boxes needed for modern roadways: It’s easy to construct. The materials are readily available. It provides excellent traction with tires without needing grooves. That means it’s relatively quiet, which can matter a lot depending on the location. It’s flexible, so it can accommodate some movement of the subgrade without failure. It’s also easy to fix and ready to drive on almost right after it’s placed. This is why so many of our roadways use asphalt concrete for pavement. But what is it? On the one hand, it’s a straightforward question to answer because asphalt concrete really only has two ingredients: rocks (known as aggregate in the industry) and asphalt, also sometimes called bitumen. The asphalt is a thick, sticky binder material that is occasionally found naturally occurring but most often comes from the refinement of crude oil.

On the other hand, a complete answer to that question is much more complicated. The science of pavement is huge because the pavement industry is huge. The average person makes several trips to various places on a given day by car, bike, or public transportation, and all those vehicles need roads. We collectively spend tremendous amounts of money on building and maintaining roadways each year. It might not seem like it, but we ask a lot of our roads: we want them to be stable and durable, resistant to skidding, impermeable to water intrusion, and we’d like it if they were quiet to boot. Accomplishing all this in various geographic regions with different material availability, varied climates and weather patterns, and different types of traffic is next to impossible. That’s why, just like cement concrete, the mix design of asphalt can be pretty complicated.

You might think rock is rock, and asphalt is the same as any other residue of the crude oil refining process. But you’d be wrong, and if you just mix any old aggregate with any old bitumen, you could end up with a pavement that doesn’t work very well as a roadway surface. The only way to know for sure is either to mix the same materials in the same proportions as some previous mixture that you know was successful or by testing a bunch of small batches with different blends of materials. In the U.S., we’ve combined both of those processes into a system called Superpave, which provides guidelines for the qualities of materials and various testing needed to mix up a successful and high-performance batch of asphalt concrete.

But, even once you get the rocks and binder right, there’s more to the mix. We include a wide variety of additives that can extend the life of pavement by improving various properties of the asphalt. Polymers, hydrocarbons, and even recycled tires get added to the mix to help with fatigue resistance, reduce sensitivity to moisture, and, most importantly, help a pavement perform better at extreme temperatures. This is because, unlike cement concrete that goes through a chemical process to cure and harden, asphalt is the same stuff when you’re installing it as it is when you’re driving over it. The only difference is its temperature. When viewing a graph of the viscosity (or stiffness) of asphalt over a range of temperatures, you can see that the hotter it gets, the less stiffness it has. Most asphalts used in roadways are known as “hot mix” because you have to get it hot for it to be workable enough to mix, transport, place, and compact. As it cools down, the asphalt gains stiffness that makes it strong and durable against traffic.
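That viscosity-temperature graph is often modeled with the classic A-VTS relationship, where the double logarithm of viscosity falls linearly with the logarithm of absolute temperature. The A and VTS constants below are plausible textbook-style values, not any real binder grade, so treat the outputs as a sketch of the trend.

```python
import math

# Illustrative A-VTS model for asphalt binder:
#   log10(log10(viscosity_cP)) = A + VTS * log10(T_Rankine)
# A and VTS here are made-up but plausible values for demonstration only.
A, VTS = 10.6, -3.5

def viscosity_cp(temp_c):
    """Binder viscosity (centipoise) at a given temperature in Celsius."""
    t_rankine = (temp_c * 9 / 5 + 32) + 459.67
    loglog = A + VTS * math.log10(t_rankine)
    return 10 ** (10 ** loglog)

print(f"At 135 C (mixing):   {viscosity_cp(135):.3g} cP")  # pourable, like honey
print(f"At 25 C (in service): {viscosity_cp(25):.3g} cP")  # effectively solid
```

The same material spans many orders of magnitude of stiffness between the plant and the finished road, purely as a function of temperature, which is why no curing chemistry is needed.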

But, when it gets too cold, asphalt can also get too stiff. Without the ability to flex under the weight of traffic, it can begin to crack apart. Those cracks reduce the life of the pavement, but they can cause worse problems by letting in water that can soften and weaken the base and subgrade materials beneath. In that same vein, on warm sunny days, the asphalt can get too soft, leading to ruts and deformation of the pavement. Ideally, the road surface would maintain a single stiffness across all expected temperatures and only become soft and workable at the temperatures used to place it. Additives and mix design help get us closer to that ideal performance.

The other way we have to improve the serviceability of pavement is to make it thicker. Asphalt is considered a flexible pavement, which means exactly what it sounds like. Instead of distributing loads over a large area as a concrete slab would, it relies on the strength of the base course below it, which is usually a layer of crushed rock that sits on top of the subgrade. Choosing the thickness of the base course and surface pavement is mostly a question of economics. You can estimate how long a pavement will last based on the strength of the subgrade soils and how much traffic you expect. Then it’s just a matter of balancing the initial cost of installation vs. the costs associated with maintenance and, ultimately, replacement. Of course, there’s a lot more that goes into it, which is why we have transportation engineers.

It’s also why we have weight limits. Roadways have to be designed to withstand the heaviest traffic that passes through. It’s not worth all the extra cost to build our highways for the occasional gigantic truck that might come along. So, instead, we say “sorry” and cap the maximum weight at something that can accommodate most truck traffic without breaking the bank to construct. It’s just like a weight limit on a bridge, but if you break the rules, it doesn’t lead to spectacular failure, only accelerated deterioration of the roadway. But what do we do when the road does start to break down? There are lots of ways to rejuvenate asphalt pavement without full-depth replacement. One option, called a chip seal, involves spreading a thin layer of tar or asphalt onto the roadway and then rolling gravel into it. This helps seal cracks and fill in gaps for a very low cost, but it does make the road rough and loud and can leave a mess of loose rocks and tar if not applied well.

Most pavement rehabilitation takes advantage of asphalt’s most interesting property: it is nearly 100% recyclable. In fact, asphalt concrete is the world’s most recycled material. As I mentioned, asphalt doesn’t go through a chemical reaction to cure. We only use temperature as a way to transform it from a workable mix to a stable driving surface, and that process is entirely reversible and repeatable. Many of the roads you drive on every day probably came, at least in part, from other nearby streets or highways that reached the end of their life. We even have equipment that can recycle pavement in place, minimizing interruptions of traffic and the costs of hauling all that material to the job site. 

We don’t usually recognize the incredible feat that roadway engineering is. We notice the ruts, potholes, cracks, and endless orange cones. We see an ancient Roman roadway that lasted over a thousand years and think “They just don’t build things like they used to.” But we also drive heavier trucks than we used to. Our roads see tremendous volumes of traffic and withstand considerable variations in weather and climate, and they do it on a pretty tight budget. That’s really only possible because of all the scientists, engineers, contractors, and public works crews keeping up with this simple but incredible material called asphalt.

August 04, 2020 /Wesley Crump

How Are Highway Speed Limits Set?

July 07, 2020 by Wesley Crump

Laying out a new roadway seems like a simple endeavor. You have two points to connect, and you’re trying to create a simple, efficient path between them. But, there are lots of small decisions that make up a roadway design, nearly every one of which is made to keep motorists safe and comfortable. Although many of us are regular drivers, we rarely put much thought into roads. That’s on purpose. If you’re thinking about the roadway itself at all while you’re driving, it’s probably because it was poorly designed. Either that or you, like me, are just innately curious about the constructed environment. If you put it in the context of human history and evolution, it’s a remarkable thing that we’re able to put ourselves in metal boxes that hurtle from place to place at incredible speeds. It’s not entirely safe, but it’s safe enough that most of the world chooses to do it on a regular basis. And the place that level of safety and comfort starts isn’t immediately evident to the casual observer. Hey, I’m Grady, and this is Practical Engineering. Today, we’re talking about roadway geometrics and the shape of highways.

Designing a road is like designing anything complicated. There are a multitude of conflicting constraints to balance and hundreds of decisions to make. In an ideal world, every road would be a straight, flat path with no intersections, driveways, or other vehicles at all. We could race along at whatever speed we wanted. But reality dictates that engineers choose the maximum speed of a roadway based on a careful balancing act of terrain, traffic, existing obstacles, and of course, safety. If you’re going to sign your name on a roadway design, and especially if you’re going to choose a speed motorists are allowed to travel, you have to be confident that vehicles can traverse the road at that speed safely. That confidence has everything to do with the roadway’s geometry. You would never put a 60 mile per hour (100 kph) speed limit on a city street. Why? Because hardly any competent driver could navigate a turn that fast, let alone avoid a hazard, maneuver through traffic, or survive a speed bump. So how do we know what kinds of road features are manageable for a given speed?

There are three main features of roadway geometry that are decided as a part of the design: the cross-section, the alignment, and the profile, and there are fascinating details involved in each one. The first one, cross-section, is the shape of the road if you were to cut across it. The roadway cross-section conveys a wealth of information: the number of lanes, their widths and slopes, and whether there’s a median, shoulders, sidewalks, or curbs. One thing you might notice looking at roadway cross-sections is that they’re almost never flat. The reason is that a flat surface doesn’t shed water quickly. Water accumulating on the road is dangerous to vehicles: it makes the surface slippery and can freeze into ice in the winter. So, nearly all roads are crowned, which means they have a cross slope away from the center. This accelerates the drainage of precipitation and keeps the surface of the road dry.

But, not all roadways are crowned. There’s another type of cross slope that helps make roads safer. In curved sections, engineers make the outside edge higher, or superelevated, above the centerline. This, too, has to do with friction. Any object going around a curve needs a centripetal force toward the center of the turn. Otherwise, it will just continue in a straight line. For a vehicle, this centripetal force comes from the friction between the tires and the road. Without this friction - on a flat surface - there would be no way to make a turn at all. For example, if I roll a ball down a flat roadway, it’s not going to go around the corner of the road because there’s no traction. Rubber tires provide this traction against a road surface, but it’s not entirely reliable. Rain, snow, and ice significantly reduce friction. Different weights of vehicles and conditions of tires also create variability. Rather than design every curve for the worst-case scenario, it would be nice not to have to count on tire friction for this needed centripetal force.

Superelevating a roadway around a curve reduces the need for tire friction by utilizing the normal, or perpendicular, force from the pavement instead. If I roll the ball again and get the bank angle just right, the ball goes around the corner perfectly even without any lateral friction with the track. Banking roadways also makes them more comfortable, because the centrifugal force pushes passengers into their seats rather than out of them. If the superelevation angle is just right, and you’re traveling at precisely the design speed of the roadway, your cup of coffee won’t spill at all around the bend. Superelevation also helps reduce rollover risk, because less of the cornering force has to come from side friction at the tires, reducing the overturning moment on the vehicle. If you pay attention on a highway, you’ll notice that the cross slope changes direction on the outside of curves, and you go from a crown to a superelevation. The faster the design speed of the road, the higher the bank around the bend.
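The physics here can be sketched in a few lines of code. This is an illustrative calculation, not a design procedure: it finds the bank angle at which the horizontal component of the pavement’s normal force alone supplies the needed centripetal force, so no tire friction is required at all (tan θ = v²/gR).

```python
import math

def ideal_bank_angle(speed_kph: float, radius_m: float) -> float:
    """Bank angle (degrees) at which no tire friction is needed
    for a vehicle to hold a curve: tan(theta) = v^2 / (g * R)."""
    v = speed_kph / 3.6          # convert km/h to m/s
    g = 9.81                     # gravitational acceleration, m/s^2
    return math.degrees(math.atan(v**2 / (g * radius_m)))

# A 100 km/h design speed on a 500 m radius curve:
print(round(ideal_bank_angle(100, 500), 1))
```

Real highways cap superelevation at only a few percent of cross slope and let side friction carry the rest, so actual curves are banked far less steeply than this frictionless ideal.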

The shape of curves themselves is the second aspect of roadway geometry I want to discuss. Just like superelevation, the radius of a curve has a significant impact on safety—the tighter the turn, the more centripetal force needed to keep a vehicle in its lane. Crashes are most likely when radii are small, so engineers follow guidelines based on the design speed to make sure curves are sufficiently gentle. It’s not only the curves that need to be gentle but also the transitions between straight sections. At first glance, connecting circular curves to straight sections of roadway looks like a perfectly smooth ride. But forces experienced by vehicles and passengers are a function of the radius of curvature. So if you go directly from a straight section (which has an infinite radius) to a circular curve, the centrifugal force comes on abruptly. Another way to think about this is by using the steering wheel. Every position of your wheel corresponds to a certain radius of turn. If straight sections of roadway were connected directly to circular curves, you would have to turn the steering wheel at the transition instantaneously. That’s not really a feasible or safe thing to ask drivers to do. So instead, we use spiral easements that gradually transition between straight and curved sections of roadway. Spirals use variable radii to smooth out the centrifugal force that comes from going around a bend, and they allow the driver to steer gradually into and out of each curve without having to make sudden adjustments. 
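The relationship between speed, superelevation, friction, and curve radius can be captured with the standard point-mass formula. The superelevation rate and side-friction factor below are illustrative assumptions, not any particular agency’s published limits:

```python
def minimum_radius(speed_kph: float, e: float = 0.06, f: float = 0.12) -> float:
    """Smallest curve radius (m) for a design speed, from the
    point-mass formula R = v^2 / (g * (e + f)), where e is the
    superelevation rate and f is the side-friction factor."""
    v = speed_kph / 3.6          # convert km/h to m/s
    return v**2 / (9.81 * (e + f))

# Roughly how gentle must a curve be at 100 km/h?
print(round(minimum_radius(100)))
```

Doubling the design speed roughly quadruples the required radius, which is why high-speed highways sweep across the landscape in such long, gentle arcs.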

Even with all those measures to make curves safe and easy to navigate, drivers still usually have a little bit of trouble staying centered in a lane around a bend. This is partly because tires don’t track perfectly in line with each other when turning (especially for large vehicles like trucks), but also because the forces are changing, and drivers have to compensate. Because of this, engineers often widen the lanes around curves to provide a little more wiggle room for vehicles. This happens gradually, so it’s relatively imperceptible. But if you pay attention on a highway around a curve, you may notice your lane feeling a little more spacious.

One other important aspect when designing a curve comes from the simple but crucial fact that drivers need to see what’s coming up to be able to react accordingly. Sight distance is the length of roadway a driver needs to recognize and respond to changes ahead, and it varies with reaction time and vehicle speed. The slower you react and the faster you’re going, the more distance you need to observe turns or obstacles and decide how to respond. Sight distance also varies by what is required of the driver. The amount of roadway necessary to bring the vehicle to a stop is different than the amount needed to safely pass another vehicle or avoid a hazard in the lane. Even if a curve is gentle enough for a car to traverse, it may not have enough sight distance for safety due to an obstacle like a wooded area. In this case, sight distance will require the engineer to make the curve even gentler.
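Stopping sight distance is the classic example, and the arithmetic is straightforward: the distance covered during the driver’s reaction time plus the braking distance. The 2.5-second reaction time and 3.4 m/s² deceleration below are commonly cited design values, used here as illustrative assumptions:

```python
def stopping_sight_distance(speed_kph: float,
                            reaction_s: float = 2.5,
                            decel: float = 3.4) -> float:
    """Reaction distance plus braking distance, in meters.
    reaction_s: assumed perception-reaction time (s).
    decel: assumed comfortable braking deceleration (m/s^2)."""
    v = speed_kph / 3.6                      # convert km/h to m/s
    return v * reaction_s + v**2 / (2 * decel)

# How far ahead must a driver at 100 km/h be able to see to stop safely?
print(round(stopping_sight_distance(100)))
```

Notice that the reaction term grows linearly with speed but the braking term grows with its square, which is why sight distance requirements climb so quickly on fast roads.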

The final aspect of roadway geometry is the profile - or vertical alignment. Roads rarely traverse areas that are perfectly flat. Instead, they go up and over hills and down into valleys. Engineers have to be thoughtful about how that happens as well. The slope, or grade, of a roadway is obviously essential. You don’t want roads that are too steep, mainly because it would be hard for trucks to go up and down. You also want smooth transitions between grades for the comfort of drivers. But, on top of all that, vertical curves have the same issue with sight distance.

Crest curves - the ones that are convex upwards - cause the roadway to hide itself beyond the top. If you’re traveling quickly up a hill, a stalled vehicle or animal on the other side could take you by surprise. If that curve is too tight, you may not have enough distance to recognize and react to the obstacle. So, crest curves must be gentle so that you can still see enough of the roadway as you go up and over. Sag curves - the ones that are concave upwards - don’t have this same issue. You can see all of the roadway on both sides of the curve. Or at least you can during the day. At night things change. Vehicles rely on headlights to illuminate the road ahead, and sometimes this can be the limiting factor for sight distance. If a sag curve is too tight, your lights won’t throw as far. That has the effect of obscuring some of your sight distance, potentially making it difficult to react to obstacles at night. So, sag curves also need to be gentle enough to maintain headlight sight distance.
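As a sketch of how sight distance governs crest curves, the usual geometric derivation assumes a driver eye height and a small object height and finds the parabola length at which the line of sight just grazes the hilltop (for the case where the sight distance is shorter than the curve). The 1.08 m eye height and 0.60 m object height below are commonly cited metric design values; treat the whole thing as illustrative rather than authoritative:

```python
import math

def crest_curve_length(grade_change_pct: float, sight_dist_m: float,
                       eye_h: float = 1.08, obj_h: float = 0.60) -> float:
    """Minimum crest vertical curve length (m) so a driver's line of
    sight clears the hilltop, for the case sight_dist_m < length.
    grade_change_pct: algebraic difference in grades, in percent.
    eye_h, obj_h: assumed driver eye and object heights (m)."""
    denom = 100 * (math.sqrt(2 * eye_h) + math.sqrt(2 * obj_h)) ** 2
    return grade_change_pct * sight_dist_m**2 / denom

# A 4% total change in grade with 185 m of stopping sight distance:
print(round(crest_curve_length(4, 185)))
```

A bigger grade change or a faster road (more sight distance) both demand a longer, flatter hilltop, which is exactly why highway crests feel so drawn out compared to the hills around them.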

Of course, there are equations for all of these different parts of roadway geometry that can tell you, based on the design speed and other factors, how much crown is required, or how high to superelevate, or the allowable radius of a curve, etcetera. Different countries and even different states, counties, and cities often have their own guidelines for how roadway design is done. And even then, the speed used by the engineers to design the roadway isn’t always the one that gets posted as the speed limit. There are just so many factors that go into highway safety, many of which are more philosophical or psychological than pure physics and engineering. It may seem like you can just plug your criteria into some software that could spit out a roadway project wrapped up in a nice neat bow. But to a certain extent, highway design is an art form. Designers even consider how the driver’s view will unfold as they travel along. If you pay attention, you’ll notice newer roadways are less of a series of straight lines connected by short curves and more of a continuous flow of gradual turns. This is not only more enjoyable, but it also helps keep drivers more alert. There are so many factors and criteria that go into the design of a roadway, and it takes significant judgment to keep them in balance and make sure the final product is as safe and comfortable for drivers as possible. Thank you for reading and let me know what you think.

July 07, 2020 /Wesley Crump

Why Does Road Construction Take So Long?

June 02, 2020 by Grady Hillhouse

From rugged dirt paths to modern superhighways, roads are one of those consistent background characters in nearly every person’s story. And, if you’ve ever been a driver, I know another similar character in your life: road construction. Most of us love having wide, smooth roadways to take us to work, to home, and everywhere else we travel. But, we’re hardly ever excited to see a construction project starting on our favorite roadway. I’m here to change that - or at least to try. I love construction - always have - and when it happens along my commute, I love it even more because I get to see the slow but steady progress each day. And, I think - or at least I hope - that if you can know a little bit more about what’s going on behind those orange cones, you might appreciate it a little more as well. So, I’ll start with step one, and if people are interested, I’ll keep this series going. Hey, I’m Grady, and this is Practical Engineering. Today, we’re talking about earthwork for roadways.

The first roads in history were probably formed as people or animals followed the same trail long enough to tamp down the vegetation and establish a route between two points. But that’s not enough for the roads of today. Why? Because the earth is full of irregularities that aren’t conducive to safe, efficient, and convenient travel. There’s a reason we have the distinction of off-road vehicles. ATVs and dirt bikes are fun, but most of us don’t want to wear a protective bodysuit for our daily commute. Safe and efficient travel means smooth curves, both horizontally and vertically. It means grades that aren’t too steep, and it means paths that are relatively direct between points of interest. In a very general sense, that means to build a roadway, we need a way to smooth out the surface of the earth.

A lot of people use words and writing to communicate. But, roadway engineers and contractors use the cross-section. This is a special kind of drawing that shows a slice through a particular location, and it’s the literal language of road building. On it, you can see the level of the earth before construction, and the proposed surface afterward. Any difference in these two lines means some earthwork is going to be required. Areas above the proposed roadway need to be excavated away, also called cut. And, areas below the proposed road need to be filled in. Cut and fill are the most fundamental concepts in any earthwork project. And, keeping cut and fill in balance with one another is a critical part of roadway engineering.

After all, if you need to fill in some areas, that soil is going to have to come from somewhere. Rather than importing soil to a project, it makes a lot more sense to take it from somewhere that already needs it removed. And if you’re going to have to excavate tons of soil from some part of your project, it sure would be nice if rather than having to dispose of it, you could take it to some other part of your project that needed additional material. If the amount of cut and fill on a project is balanced, every shovelful of dirt is doing two jobs: taking soil away from where it’s not needed, and gathering soil for where it is. So, engineers designing roadways keep track of these quantities between each cross-section.

Of course, earthwork may seem simple when you’re just looking at a drawing, but here are a couple of things to keep in mind: soil is heavy, and roads are long. Just because you have the same volume of excavation as you have fill doesn’t necessarily lead to efficiency. Because if all the cut is miles away from all the fill, you’re going to have to make a lot of trips. So, roadway design not only needs to balance cut and fill but also try to minimize the haul distance. Mass haul diagrams show the net change in earthwork volume over the length of the roadway. This gives the pros a quick understanding of the amount and distance of earthwork for an entire roadway project.
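At its core, a mass haul diagram is just a running total of those net volumes along the alignment. Here’s a minimal sketch with made-up numbers: positive values are cut, negative values are fill, and wherever the running total returns to zero, the earthwork up to that point balances.

```python
def mass_haul(net_volumes):
    """Running earthwork balance along the road. Each entry is the
    net volume between successive cross-sections (m^3):
    positive = cut (excess soil), negative = fill (soil needed).
    Returns the cumulative curve; a value of zero means everything
    up to that station balances."""
    total, curve = 0.0, []
    for net in net_volumes:
        total += net
        curve.append(total)
    return curve

# Hypothetical net volumes between five pairs of cross-sections:
print(mass_haul([500, 300, -200, -400, -200]))
```

The slope of the curve shows where material accumulates or is consumed, and the distance between balance points hints at how far the contractor has to haul it.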

But we’re still not there yet. Because, once you get all the soil in the right place, you can’t just build a road on top. I’ve said it before, and I’ll say it again: Soil’s not that strong, especially in loose piles fresh from the bed of a dump truck or scraper. We have to compact it down. But, even that’s not so simple. There may be no other material more tested than soil - maybe blood, but if you measure by weight, I don’t know. In testing labs all over the world, probably at this very moment, there are people looking at soil, taking pictures of it, shaping and rolling it, inserting it into equipment, taking measurements, and writing those measurements down on clipboards. Why? Because soil is really important. The cost of building roads varies from place to place, but very roughly, it’s about $3M for a mile of 2-lane roadway. That’s about $2M for a kilometer. Roads might be the most expensive thing you touch in a typical day because they take a lot of work and a lot of material to build. So if we’re going to go to all that expense just to make it easier to drive our cars from place to place, we need to make sure that the roads we build have a good foundation.

That mainly means proper compaction. Soil settles and compresses over time, and if this happens with something on top (like a road or any other structure) it can lead to damage and deterioration. Compaction speeds up that settlement process so it all happens during construction instead of afterwards. If soil is compacted to its maximum density, that means it can’t settle further over time. But how do we know whether it’s compacted enough? That’s where the testing comes in. Soil labs do a ubiquitous analysis called a Proctor test. If you add different amounts of water to soil and try to compact it, you’ll see that you get different densities. With low moisture content, it’s nearly impossible to do any compaction—same thing with high moisture content. But, somewhere in the middle, you’ll get the maximum density. This estimate of the maximum density is one of the most crucial measurements in earthwork. There are a few ways to test density, but we mostly use nuclear gauges that measure the radiation passing through the soil to estimate its degree of compaction.
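The acceptance check itself is simple arithmetic: compare the measured in-place dry density against the Proctor maximum from the lab. The 95 percent threshold below is a typical specification value, assumed here for illustration, not a universal rule:

```python
def relative_compaction(field_density: float, proctor_max: float) -> float:
    """Field dry density as a percentage of the lab (Proctor) maximum.
    Both densities must be in the same units (e.g. g/cm^3)."""
    return 100 * field_density / proctor_max

def passes(field_density: float, proctor_max: float,
           spec_pct: float = 95.0) -> bool:
    """Common acceptance criterion: field density is at least
    spec_pct of the Proctor maximum (95% is a typical spec)."""
    return relative_compaction(field_density, proctor_max) >= spec_pct

# Field gauge reads 1.85 g/cm^3 against a lab maximum of 1.92 g/cm^3:
print(passes(1.85, 1.92))
```

If the lift fails, the fix is usually more roller passes, or adjusting the moisture content toward the optimum found in the Proctor test and trying again.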

Soil used for filling areas is first placed in roughly the correct locations by a dump truck or scraper. Then it’s smoothed into a consistent layer, called a lift, by a bulldozer or motor grader. Finally, each lift is compacted using a compactor. This is at the heart of why earthwork takes so long to complete. You can’t compact soil more than around a foot at a time (that’s 30 centimeters). Rolling over thicker layers will only compact the surface, leaving the rest loose and free to settle over time. So areas of fill, and especially tall embankments (like the approaches to a bridge), need a lot of individual layers. By necessity, they come up slowly, little by little, lift by lift. Every so often along the way, someone does a test to check the density of the compacted soil. We compare that measurement with the maximum density measured in the lab. If it’s close, it’s okay. If not, we keep compacting until it is. That gives engineers and contractors the confidence that when the roadway surface is placed, it’s going to be there to stay. But, it’s one of the biggest reasons that roadway projects take so long to complete. We can move a lot of earth in a short period of time, but to place and densify it into a foundation that will stand the test of time is a process, and it takes time.
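A little arithmetic shows why this adds up. Using the roughly one-foot (30 cm) lift thickness described above, a tall embankment becomes a surprising number of separate place-spread-compact-test cycles:

```python
import math

def lift_count(fill_height_m: float, lift_thickness_m: float = 0.3) -> int:
    """Number of compacted layers needed to build an embankment,
    assuming roughly 30 cm of soil can be compacted per lift."""
    return math.ceil(fill_height_m / lift_thickness_m)

# A 6 m tall bridge approach embankment:
print(lift_count(6.0))
```

Twenty lifts means twenty rounds of hauling, spreading, rolling, and density testing for that one embankment, which goes a long way toward explaining the orange cones.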

One last thing I want to point out: during the construction of a roadway (or really construction of just about anything), this earthwork causes a lot of disturbance. What used to be grass, plants, or some other type of covering over the ground is now just bare soil. That may not seem like a big deal, but to all the aquatic wildlife in nearby creeks and rivers, it is. That’s because any time it rains, all that unprotected soil gets quickly washed away from the construction site into waterways where it reduces the quality and quantity of habitat. So, pretty much every construction site you see should have erosion and sediment control measures in place to keep soil from washing away. Silt fences and mulch socks slow down runoff so the sediment can drop out, and rock entrances knock most of the mud off the tires of vehicles before they leave the site.

Like it or not, roads are part of the fabric of society. Travel is a fundamental part of life for nearly everyone. Unfortunately, that means road construction is too. But, I hope this video gives you a little more appreciation for what’s going on behind the orange cones. You know that metaphorically significant planar surface where the rubber meets the road? Well, it couldn’t even exist without the engineers and construction workers designing and building that planar surface just below where the road meets the earthwork. Thank you, and let me know what you think!

June 02, 2020 /Grady Hillhouse

What is a Culvert?

May 05, 2020 by Grady Hillhouse

A surprising amount of engineering is just avoiding conflicts. I’m not talking about arguments in the office; I mean conflicts when two or more things need to be in the same place. There are a lot of challenges in getting facilities over, under, around, or between each other, and there’s a specific structure, ubiquitous in the constructed environment, whose sole purpose is to deal with the conflict between roadways and streams, canals, and ditches. Hey I’m Grady and this is Practical Engineering. Today, we’re talking about culverts.

Culverts are one of those things that seem so obvious that you never take the time to even consider them. They’re also so common that they practically blend into the background. But, without them, life in this world would be quite a bit more complicated. Let me explain what I mean. Imagine you’re designing a brand new roadway to connect point A to point B. It would be nice if the landscape between these points was perfectly flat, with no obstructions or topographic relief. But, that’s rarely true. More likely, on the way, you’ll encounter hills and valleys, structures and streams, and you’ll have to decide how to deal with each one. Your road can go around some obstacles, but for the most part you’ll have to work with what you’ve got. A roadway has to have gentle curves both horizontally and vertically, so you might have to take soil or rock from the high spots and build up the low spots along the way, also called cut and fill. But you’ve got to be careful about filling in low spots, because that’s where water flows.

Sometimes it’s obvious, like rivers or perennial streams, but lots of watercourses are ephemeral, meaning they only flow when it rains. If you fill across any low area in the natural landscape, you run the risk of creating an impoundment. If water can’t get through your embankment, it’s going to flow over the top. Not only can this lead to damage to the roadway, it can be extremely dangerous to motorists. One obvious solution to this obvious problem is a bridge: the classic way to drive a vehicle over a body of water. But, bridges are expensive. You have to hire a structural engineer, and install supports, girders, and road decks. It’s just not feasible for most small creeks and ditches. So instead we do fill the low spots in, but we include a pipe so the water can get through. That pipe is called a culvert, and there’s actually quite a bit of engineering behind this innocuous bit of infrastructure.

I know what you’re thinking: “Just a pipe under a road? How complicated could it be?” Well, allow me to introduce you to the U.S. Federal Highway Administration’s Hydraulic Design of Highway Culverts, third edition. Yes you’re seeing that right - 323 pages of wonderful guidelines on how to get water to flow under a road. But worry not, because I have taken my favorite parts of this manual and built a demonstration in the video so you can appreciate the modern marvel that is the highway culvert as much as any red-blooded civil engineer.

A culvert really only has two jobs: it has to be able to hold up the weight of the traffic passing over without collapsing, and it has to be able to let enough water pass through without overtopping the roadway. Both jobs are pretty complicated, but it’s the second one I want to talk about, and it turns out that figuring out how much water can pass through a culvert before the roadway overtops is a pretty complicated question. In fact, there are eight factors that can influence the hydraulics of a culvert: (1) the headwater, or depth of flow upstream of the culvert, (2) the cross-sectional area of the culvert barrel, (3) the cross-sectional shape of the culvert barrel, (4) the configuration of the culvert inlet, (5) the roughness of the culvert barrel, (6) the length of the culvert, (7) the slope of the culvert, and (8) the tailwater, or depth of flow downstream. We don’t have time to demonstrate how all these parameters affect the culvert flow, but the Federal Highway Administration actually has a pretty comprehensive video on YouTube (with a much nicer flume than mine) if you want to see more [https://www.youtube.com/watch?v=vnXmGyb_hKQ].

One thing I do want to show is the two primary flow regimes for culverts which are outlet control and inlet control. And these are pretty much exactly what they sound like. Outlet control happens when water can flow into the culvert faster than it can flow out. That means flow is limited by either the roughness and friction in the culvert barrel or the tailwater depth at the outlet. The entire area of the barrel is being taken advantage of for flow. In outlet control flow, conditions downstream of the culvert can affect the flow rate. For example, if a tree falls across a ditch downstream, that can back up water reducing flow through the culvert and causing the roadway to overtop.

Inlet control happens when the culvert inlet constricts the flow more than any of those other factors. Everything that affects the amount of water passing below the road is happening at the inlet. That means changing the roughness of the inside of the barrel or anything downstream won’t change how much flow makes it through. It’s easy to show this in my model because you can see inside the culvert barrel. You can tell that the flow depth in the culvert is shallow and the full flow area of the barrel is not being taken advantage of. There are a wide variety of configurations that the inlet to a culvert can have. If you pay attention, you’ll see all kinds of culvert inlets. Some common types include projecting, where the pipe protrudes from the embankment; mitered, where the pipe is cut flush to the embankment; and headwall, where the culvert begins at a vertical concrete wall, sometimes accompanied by concrete wing walls to further direct flow into the barrel. Unsurprisingly, each of the multitude of different inlet configurations has a different effect on the culvert hydraulics.

In my demo, I can do a test of two of these inlet configurations to show the difference. First I’m testing the projecting inlet. This is one of the least efficient configurations because there’s nothing to help train the flow into the culvert. You can see that the headwater elevation is quite high, even close to overtopping the headwall in my flume. And, even with all that pressure upstream, there’s not that much water coming through the culvert. It’s only flowing about half full.

Next I reconfigured the demo to make the culvert flush with the headwall. And I also rounded over the inside edge of the pipe, giving the flow a smoother entrance. I didn’t change how much flow the pump is creating, but you can see that the headwater is much, much lower. That means the inlet is more efficient, because it takes less driving headwater to get the same amount of flow through the barrel. In fact, as I cranked up the flowrate higher and higher, I realized that - even with as much headwater as I could create - this configuration was still acting as an outlet-controlled culvert. The smooth and flush inlet was allowing as much flow as possible through.

Of course, there are really elaborate culvert inlets that can be extremely efficient, but like all infrastructure, culvert design is an exercise in balancing cost with other factors. You can spend a lot of money on a fancy culvert inlet that has perfectly smooth edges to guide the water gently into the barrel, or you could just bump up to the next pipe size. Calculating flow through a culvert can be quite complicated, because culverts can transition between inlet and outlet control depending on flow rate. And, even within these two major flow regimes of inlet and outlet control, there are a whole host of subregimes - each of which has its own hydraulic equations. Of course we have software now, but back in the 1960s and 70s the Federal Highway Administration came up with a whole group of cool nomographs to simplify the hydraulic design of culverts. The way this works is you first find the right chart for your situation [7A]. The one in the video is a culvert with a submerged outlet flowing full. Each one is a little different, but in this one you draw a line connecting the culvert length to its diameter. Then draw a line connecting the headwater depth to the intersection of your other line with the turning line. Extend this line to the discharge scale to find out the flow rate passing through the culvert. I love little tricks like this that boil down all that hydraulic complexity into a quick calculation you can do with a straightedge in less than a minute.
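To give a taste of what those nomographs encode, here’s a sketch of the FHWA full-flow outlet-control equation for a circular barrel, in SI units. The Manning roughness and entrance loss coefficient below are typical textbook values for a smooth concrete pipe with a square-edged inlet, assumed here for illustration:

```python
import math

def outlet_control_head(flow_m3s: float, dia_m: float, length_m: float,
                        n: float = 0.012, ke: float = 0.5) -> float:
    """Head (m) needed to drive a given flow through a circular culvert
    flowing full under outlet control, following the FHWA HDS-5
    full-flow form: H = (1 + ke + 19.63*n^2*L / R^1.33) * V^2 / (2g).
    n is Manning roughness; ke is the entrance loss coefficient."""
    area = math.pi * dia_m**2 / 4      # barrel flow area, m^2
    radius = dia_m / 4                 # hydraulic radius of a full circular pipe
    v = flow_m3s / area                # barrel velocity, m/s
    friction = 19.63 * n**2 * length_m / radius**1.33
    return (1 + ke + friction) * v**2 / (2 * 9.81)

# 1 m^3/s through a 900 mm concrete pipe, 20 m long:
print(round(outlet_control_head(1.0, 0.9, 20.0), 2))
```

Notice how the entrance loss term ke sits right alongside the friction term: that’s the smoother, flush inlet from my demo paying off as a smaller ke and therefore less headwater for the same flow.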

Next time you’re driving or walking along a street keep an eye out for culverts. And, if it’s raining, take a look at the flow. See if you can identify whether the culvert is outlet or inlet controlled, and be thankful that we have this ordinary, but remarkable, bit of infrastructure to let you safely walk or drive right over.

May 05, 2020 /Grady Hillhouse

How Do Canal Locks Work?

April 07, 2020 by Wesley Crump

Freight transportation is an absolutely essential part of modern life. Maintaining the complex supply chains of raw materials to finished goods requires a seemingly endless amount of hustle and bustle. Millions of tons of freight are moved each day, mainly on trucks and trains. But, “shipping” got its name for a reason, and we still use ships to move a lot of our stuff. One of the main reasons is that it’s efficient. In fact, moving a ton of goods the same distance on a boat takes roughly half the energy it would by train and roughly a fifth of the energy it would take on a truck. You can prove this to yourself pretty easily. Even heavy stuff is practically effortless to move around once it’s floating on water. Of course, shipping by waterway also has its limitations. It’s slow (for one), and not every place that needs goods is accessible by boat. We’ve overcome this obstacle somewhat through the use of constructed waterways, or canals. Canals and shipping are described in the earliest works of written history. But there’s another limitation more difficult to surmount. Water is self-leveling. Unlike roads or rail, you can’t lay water on a slope to get up or down a hill. Luckily, we have a solution to this problem. It may seem simple at first glance, but there is a lot of fascinating complexity to getting boats up and down within a river or canal. Hey I’m Grady and this is Practical Engineering. Today, we’re talking about locks for navigation.

The efficiency of water transportation has a surprising amount to do with how the world looks today. Nearly every major city across the globe is located on a waterway accessible by shipping traffic. Waterway transportation is woven into the history of just about everything. So, it’s no surprise that, for thousands of years, humans have sought to bring access by boat to areas otherwise inaccessible. But, creating waterways navigable by boats isn’t as simple as digging a ditch. Unlike the open sea, the endless and uncluttered surface of water, land has obstructions and obstacles. The topography dips and rises, rivers and ponds get in the way, and manmade infrastructure like cities, roads, and utilities impede otherwise unhindered paths from point A to point B. The quintessential example of this is the Panama Canal: the famous cut through that narrow isthmus saving ships the lengthy and dangerous trip around Cape Horn. On a map, this seems pretty straightforward - just cut a ditch from the Atlantic to the Pacific. But the details of what is one of the largest civil engineering projects of the modern world are more complex. One of the most important of those details is that the majority of the Panama Canal isn’t at sea level, but actually 26 meters or 85 feet higher.

This is due to sheer practicality. Construction of the Canal was already one of the largest excavation projects in history. Keeping boats at sea level would require cutting, at a minimum, an 85-foot-deep canyon through the isthmus, involving millions and millions of tons of extra earthwork that would be completely infeasible. So, rather than cutting the channel deeper, we instead raise the boats up from sea level on one side and lower them back down on the other. And we do this using locks, an ingenious and ancient technology that has made possible navigation on canals and waterways that otherwise could never have existed.

The way a lock works is dead simple. And of course I have a little demonstration here to make this more intuitive. For a boat going up, it first enters the empty lock. The lower gate is closed. Then water from above is allowed to fill the lock. This is usually done through a smaller gate or a dedicated plumbing system, but I’m just cracking the upper gate open. Once the level in the lock reaches the correct height, the upper gate can be fully opened, and the boat can continue on its way. Going down follows the same steps in reverse. The boat enters the full lock. The upper gate is closed, and the water in the lock is allowed to drain. Again, I’m just cracking the gate in the demo, but this is often done in a slightly more sophisticated way in the real world. Once the lock is drained, the lower gate can be fully opened, and the boat can continue on. I hope you see the genius of this system. It’s a completely reversible lift system that, in its simplest form, requires no external source of power to work… except for the water itself.

One thing to notice about a lock is that even though boats can move through in both directions, water only moves through in one direction. The lock always fills from the upper canal and always drains to the lower canal. This is because… gravity. Hopefully that’s obvious. But, it’s important to realize that even though we’re not using pumps, the energy required to raise and lower boats through a lock isn’t necessarily “free”. Each time the lock is operated, you lose a “lockful” of water downstream. And sometimes that matters. Canals aren’t full of limitless water, and if there is a lot of traffic or the locks are particularly large, this could mean losing millions of liters of water per day. On large rivers, it’s usually not enough to worry about, but in some cases this could cause a canal or reservoir to go completely dry. So, canals that use locks need some way to replenish the lost water or at least limit how much water is lost each cycle. What if there was a way to save the water used to fill the lock and reuse it?
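To put rough numbers on that lost "lockful", here's a quick water budget. The chamber dimensions and traffic level are illustrative assumptions, not figures from any real canal.

```python
# Water lost downstream per lockage = chamber plan area x lift (one "lockful").
# All figures below are assumed for illustration.
LENGTH_M, WIDTH_M, LIFT_M = 100.0, 12.0, 5.0
lockful_m3 = LENGTH_M * WIDTH_M * LIFT_M  # 6,000 cubic meters per cycle

CYCLES_PER_DAY = 20
daily_loss_liters = lockful_m3 * CYCLES_PER_DAY * 1000  # m^3 -> liters

print(f"One lockful: {lockful_m3:,.0f} m^3")
print(f"Daily loss: {daily_loss_liters:,.0f} liters")  # 120,000,000
```

Even this modest hypothetical lock sends over a hundred million liters downstream per busy day, which is why water supply can become the limiting factor.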

On the Panama Canal, the locks use water from Gatun Lake, a critical source of drinking water for the country. During periods of drought, water supply becomes a serious issue. That’s why, when the canal was expanded in 2016, the new locks included water saving basins. Like the locks themselves, these basins are an extremely simple and yet ingenious way to limit the amount of water lost each time the locks are filled. Let me show you how this works. On my demo, instead of draining the lock into the downstream canal, I can drain it partially into a nearby reservoir. Then, when the time comes to fill the lock, I can recycle the water from the basin, also called a side pond, to partially raise the level. Of course, I still need to use water from the upper canal to fully fill the lock, but it’s still less water than I would have otherwise used.

In fact, if the water saving basin is the same area as the lock, you can save exactly one third of the water. The reason, again, is gravity. Water doesn’t flow uphill - it always has to be moving down. To save water, you need a volume within the lock for it to come from, a lower volume for it to drain to and wait in the side pond, and finally an even lower volume within the lock for the saved water to return to. That means the best you can do with a single basin is to save a third of the water that would otherwise be lost. But, it’s possible to do better than this. One option is to give the water saving basin a larger area. Imagine an infinitely large basin such that no matter how much water drains into it, its level never rises. In this case, you could drain the upper half of the volume of the lock into the side pond, and then use that water to fill the lower half of the lock on the way up. So, the area of the basin is important, with a larger area providing a greater water-saving benefit. The other way we can do better is to have more basins.

Notice on the diagram that the bottom two volume divisions are lost each cycle. When the lock drains, each volume division moves from the lock to the side pond one division below, except for the bottom two divisions which are lost downstream. That water can’t be stored in a side pond because the pond would have to be at or below the bottom of the lock. And when the lock is filled, each side pond fills the volume of the lock again one division lower. The top two divisions can’t be filled from a side pond, so they are filled from the upper canal. It’s pretty easy to see why more basins mean smaller divisions, and why smaller divisions mean less water lost each cycle. Of course, for both the number of ponds and their area, there are practical limitations to how much land is available and the expense of all that plumbing, etc. So, you have to balance the value of saving the water in the locks versus the capital and ongoing expenses of constructing and operating these basins. That’s made a lot easier with a pretty simple formula to calculate the ratio of how much water is used with side ponds versus without them.
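The formula isn't written out here, so take this as my reconstruction from the level-equalization argument above: with n basins, each alpha times the lock's plan area, the fraction of a full lockful drawn from the upper canal each cycle works out to (1 + alpha) / (1 + (n + 1) * alpha). It reproduces the special cases described above, but treat it as a sketch rather than the exact formula from the video.

```python
# Reconstructed water-use ratio for a lock with n water-saving basins,
# each with plan area alpha times the lock's area.

def water_use_ratio(n_basins: int, alpha: float) -> float:
    """Fraction of a lockful drawn from the upper canal per cycle."""
    return (1 + alpha) / (1 + (n_basins + 1) * alpha)

# One equal-area basin: 2/3 of the water used, 1/3 saved
print(water_use_ratio(1, 1.0))
# Three equal-area basins: 0.4, i.e. only 40% of the water used
print(water_use_ratio(3, 1.0))
```

In the limit of an infinitely large single basin (alpha approaching infinity), the ratio tends to 1/2, matching the "drain the upper half, refill the lower half" thought experiment.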

The new locks at the Panama Canal each use three basins which are about the same area as the locks themselves. Plugging in 3 for the number of basins, and 1 for the lock to basin area ratio, you can see that the new locks use only 40% of the water that would be required to operate without the basins. That’s pretty impressive and definitely seems worth the cost of the basins. But, it’s not the only example of this. Another lock in Hannover, Germany, has ten basins, reducing the lost water by about three-fourths, although the tanks are underground so they’re harder to see. I’ve been talking about freight transportation in this video, but people use boats for all kinds of different reasons, and in the same way, there are all kinds, shapes, sizes, and ages of locks across the world. In fact there are a lot of canals where you can operate the lock yourself. They’re also not the only way of moving boats up or down, but that’s a topic for another video. Next time you see a lock, consider where that water comes from, and keep an eye out for side ponds that help save a little or a lot of it for the next time. As always, thanks so much, and let me know what you think!

April 07, 2020 /Wesley Crump

Why Do Pipes Move Underground?

March 10, 2020 by Wesley Crump

We use pipes to carry all kinds of fluids. Pretty much anyone can tell you how they work. You put a liquid or a gas in one side and it comes out the other. But, designing pipe systems is not always as simple as it seems. Pipes don’t float in the air on their own; they have to be held in some way. We often bury pipes to protect them and keep them out of the way, but the ground isn’t always that good at holding pipes together. Hey I’m Grady and this is Practical Engineering. Today, we’re talking about thrust forces in pipe systems.

Designing systems of piping might seem intuitive. I think most people have a general understanding about how pipes work because most of us have them in our home delivering fresh water to the taps and carrying our waste away. But, the bigger a pipe gets and the more pressure it contains, the more complicated it becomes. Engineers design systems of pipes that can be enormous - sometimes big enough to drive a car through - and that can hold many times the pressure of your typical household plumbing. Those larger diameters and higher pressures create greater forces, and those forces need to be accounted for in design. There are two types of forces in pipelines that engineers need to consider: hydrostatic and hydrodynamic.

Hydrostatic forces are the ones that don’t require any fluid to be moving. They result just from the pressure within a pipe. A fluid’s pressure is its force applied over an area. Pressure works in every direction at the same time. So, within a section of pressurized pipe, you have forces acting on the walls of the pipe. This force is resisted by the hoop of pipe material. But, you also have forces acting along the axis of the pipe. This force is equal to the pressure times the area of the pipe, and it’s resisted by the fluid in the adjacent section of pipe. I can demonstrate this with clear tubing in my video. Even though the tube slides into this straight coupler fairly easily, I can pressurize it without too much issue. If you ignore the small leaks from the imprecision of my demo, you’d hardly know anything was happening at all if you weren’t paying attention to the pressure gauge. That’s because, in this example, all the hydrostatic forces are balanced. But, there’s not always an adjacent section of pipe to resist this longitudinal force. Eventually, you get to the end of the pipe where you need a cap, or you get to a place where you need to make a bend, a tee, or a wye. These are places where you end up with an imbalance in hydrostatic forces within the pipe. Let’s try pressurizing this demo for a couple of cases where the hydrostatic forces aren’t balanced to see what happens.
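Before the demo, it's worth a quick sense of scale for that longitudinal force. The force on a cap or dead end is just pressure times cross-sectional area; the pipe size and pressure below are illustrative assumptions, not from any particular project.

```python
import math

# Hydrostatic end thrust on a pipe: force = pressure x cross-sectional area.
# Illustrative numbers: a 600 mm water main at 700 kPa (about 100 psi).
diameter_m = 0.6
pressure_pa = 700e3

area_m2 = math.pi * diameter_m**2 / 4      # cross-sectional flow area
thrust_n = pressure_pa * area_m2           # axial force on a cap or dead end
print(f"End thrust: {thrust_n / 1000:.0f} kN")  # ~198 kN, about 20 tonnes
```

That's why a fitting popping off a small pressurized tube scales up to a genuinely dangerous force in a real water main.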

With a tee, you have two thrust forces that do balance each other out, and one that doesn’t. Can you guess what happens when I pressurize the tubing? The force from the top tube has nothing to resist it, so it easily separates the fitting from the tube. With an elbow, there are unbalanced forces in both directions. It doesn’t take much pressure for the fitting to pop right off. Now, this is a pretty cool demo if I do say so myself, but maybe it’s a little simplistic and perhaps even a bit self-evident. Plus, it only shows the hydrostatic forces that occur within pipes. Actually, there’s a pretty cool demonstration of both hydrostatic and hydrodynamic forces: a water rocket. I can explain this concept well enough myself, but I’ve asked for some help from the team behind the water rocket altitude world record and awesome YouTube channel, Air Command Rockets, to show how these two types of force work in an entirely different setting than pipelines.

Thanks, Grady. Let’s have a look at how water rockets produce thrust. Now, it doesn’t matter if you’re a conventional rocket or water rocket, your life is governed by the thrust equation, which is derived from Newton’s second law. And here it is in its simplified form. Over here you’ve got the thrust, or the force, that the rocket produces to propel it upwards. And that’s made out of two terms. This one is the momentum thrust, and that’s just the mass flow rate, in other words the rate at which the water or air flows through the nozzle, times the velocity at which it exits. And, over here is the pressure thrust, and that relates to the exit pressure versus the ambient pressure. So, while the rocket is sitting on the pad pressurized, the momentum term is zero because there’s no flow out of the nozzle. So, we end up with the pressure inside versus the outside times the nozzle’s cross-sectional area. That’s the actual force of the rocket trying to get off the pad. When you release the rocket, the momentum thrust comes back into play. The compressed air is pushing the water out through the nozzle. And, the water comes out at probably about one tenth of the speed of sound for regular types of rockets, which is quite low. But, the mass flow rate is high because the water is so heavy. Now, when the water runs out and the compressed air starts coming out, the mass flow rate really drops because air is so much lighter than water. But, the exit velocity gets very high because the air comes out at the speed of sound. So as it turns out, during the air phase, you only get about two thirds the amount of thrust as you get with the water phase. And this is in fact why water rockets use water for improved performance. Now all of this is an oversimplification, and in real life it’s a little bit more complicated than that, simply because you have a finite volume inside of the rocket, and as soon as you release it, the pressure starts dropping and so does the force that it generates.
So you end up with a decaying thrust curve like this one. Now, let’s have a look at a couple of examples of the water rocket. This one is a low pressure one. This one would be a typical one that you’d launch, and it produces about 100 N peak thrust. And, this one over here is a higher pressure one (if you really crank up the pressure), and this one generates about 2,500 N peak thrust, so that’s a lot more. And, here’s what happens when you crank up the pressure too much. Okay, back to you, Grady, before we blow something else up.
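The simplified thrust equation Air Command describes can be sketched numerically. The nozzle size, pressures, and flow rates here are illustrative guesses on my part, not measurements from their rockets.

```python
# Simplified rocket thrust, two terms:
#   F = mdot * v_exit + (p_exit - p_ambient) * A_nozzle
# (momentum thrust)    (pressure thrust)

def thrust_force(mdot_kg_s, v_exit_m_s, p_exit_pa, p_ambient_pa, a_nozzle_m2):
    momentum = mdot_kg_s * v_exit_m_s                 # mass flow x exit velocity
    pressure = (p_exit_pa - p_ambient_pa) * a_nozzle_m2
    return momentum + pressure

A = 3.14159e-4  # ~20 mm diameter nozzle, m^2 (assumed)

# Sitting on the pad: no flow, so only the pressure term acts
print(thrust_force(0.0, 0.0, 500e3, 101e3, A))    # ~125 N
# Water phase: heavy mass flow at modest velocity, jet exits at ambient pressure
print(thrust_force(10.0, 34.0, 101e3, 101e3, A))  # 340 N
```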

Just like in rockets, engineers call these forces in pipelines “thrusts.” But unlike those aerospace guys and gals, civil engineers don’t want the things they design to go flying through the air. We want our pipelines to stay put, which means in this case thrust is a bad thing and must be resisted. I know what you’re probably thinking after seeing all these demonstrations. “Just glue the joints.” And I promise we’re getting there, but the reality is that a lot of the pressure piping we use underground, particularly in municipal settings - such as water mains for drinking water and force mains for sewers - use push-on fittings. These joints use gaskets and tight tolerances to achieve a watertight seal, but they don’t provide longitudinal restraint. The pipes can still slide fairly freely in and out of the joint. We use these types of push-on fittings because they are inexpensive, reliable, and most importantly, they are easy to install, speeding up construction time, which benefits everyone from the contractor to the owner to the citizens waiting on a road to reopen after a main break. In plumbing we use glue or threaded connections for pipes, but those options are a lot less feasible for certain types of large diameter pipes. But, because push-on fittings don’t offer any longitudinal restraint, we have to provide that restraint somewhere else. In most cases, that comes from burying the pipe. Encasing the line in compacted soil holds it in place to prevent the pieces from slipping apart.

But, it’s not that simple. These pipelines can be under enormous pressure, sometimes two or three times the pressure at the tap in your house, and in some industrial settings many times higher. Also - and this is straight from geotechnical engineering 101 - soil isn’t that strong. Anyone who’s ever tried to walk through the mud knows this. So, we rarely trust soil on its own to hold our pipelines together underground. Relying on soil for restraint is essentially asking the soil to be as strong as the pipe material. If it doesn’t hold the pipe still against hydrostatic and hydrodynamic forces, you can get separation of joints and leakage from the pipes. Fixing this can be a huge endeavor, leading to loss of service and creating significant expenses. For water mains, it takes a maintenance crew closing traffic, excavating the line, repairing the damaged section, backfilling, and restoring the pavement. And, although public works crews are awesome at this job, most people would agree that it would be better to avoid the need in the first place if possible.

So, what do we do? The classic solution to this problem is thrust blocks: masses of concrete that distribute thrust forces over a larger area against the soil. If you could make the subsurface invisible so you could see all the water mains below your city, it’s a fairly sure bet that at each and every bend, tee, wye, or reduction there is an adjacent block of concrete transferring thrust forces to the soil through the larger bearing area of the block so that the strength of the soil isn’t exceeded. In fact, one very important job of a pipeline engineer is sizing the thrust blocks based on the type of fitting, test pressure of the pipe, and soil conditions at the site. But, thrust blocks aren’t a panacea for thrust forces in pipes. They’re big and bulky, they get in the way of other subsurface utilities, they make it difficult to excavate and repair lines when needed, and because they’re made of concrete, they often take several days to cure before you can pressurize and test the line, delaying backfilling. So, the other way we deal with thrusts in pipelines is to take a cue from the plumbers and provide longitudinal restraint at the joints themselves.
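The sizing calculation mentioned above can be sketched for a horizontal bend. A common form of the resultant thrust at a bend of deflection angle theta is 2 * P * A * sin(theta / 2); the allowable soil bearing pressure and safety factor below are hypothetical placeholders, since real values come from a geotechnical investigation.

```python
import math

def bend_thrust_n(pressure_pa, diameter_m, bend_deg):
    """Resultant thrust at a horizontal bend: T = 2 * P * A * sin(theta/2)."""
    area = math.pi * diameter_m**2 / 4
    return 2 * pressure_pa * area * math.sin(math.radians(bend_deg) / 2)

def block_bearing_area_m2(thrust_n, soil_bearing_pa, safety_factor=1.5):
    """Bearing area the thrust block needs against undisturbed soil."""
    return safety_factor * thrust_n / soil_bearing_pa

# Illustrative: 600 mm main, 700 kPa test pressure, 90-degree bend,
# stiff soil with an assumed 100 kPa allowable bearing pressure.
t = bend_thrust_n(700e3, 0.6, 90)
a = block_bearing_area_m2(t, 100e3)
print(f"Thrust: {t / 1000:.0f} kN, required bearing area: {a:.1f} m^2")
```

A result on the order of a few square meters of bearing face explains why these blocks are such bulky obstacles underground.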

If you restrain the joints for some distance on either side of a location that creates a thrust force, like a bend in the pipe, you essentially convert that entire section of pipe into its own thrust block. This allows you to distribute the thrust force over the length of the restrained section. A wide variety of pipe fittings that can provide longitudinal restraint are becoming more popular. They’re still usually more expensive than using concrete reaction blocks, but they have a lot of other benefits as described. Of course, in certain situations, it makes more sense to fully restrain a subsurface pipe. Most petroleum pipelines are fully welded at every joint, and you can fuse polyethylene pipe at the joints as well. It’s the engineer’s job to decide what type of restraint is needed based on all the considerations involved. Next time you see a crew working on a pipeline, try to sneak a peek into the trench and see which type of restraint system they’re using, or ask one of the workers if they’re installing thrust blocks or restrained fittings (or both) to make sure the pipe stays put. Thank you, and let me know what you think!

March 10, 2020 /Wesley Crump

What is Air Lock?

February 11, 2020 by Wesley Crump

Engineering nearly always involves assumptions and simplifications. There are just too many variables in the real world to keep track of them all, so we simplify. We neglect the variables that don’t matter and make assumptions about the variables we can’t measure or predict. But what happens when one of those assumptions is wrong? One of the most basic assumptions made by engineers who design pipelines is that those pipelines carry only the fluid that’s intended. But, that’s not always the case. Hey I’m Grady and this is Practical Engineering. Today, we’re talking about air lock in pipe systems.

Put simply, air lock is a constriction in flow that happens when a gas gets trapped in a pipe. That’s the answer to the title, but it’s not very satisfying on its own. In fact, if you’re as curious as I am, it just leads to more questions. The first three that come to mind are: (1) Where does the gas come from? (2) How does it get trapped? And (3) Why should I care? We don’t normally get to see inside pipelines and observe how they work, so I built a little model here in my garage that we can use to talk about airlock, how it happens, and why it matters.

The first question is: Where does the gas come from? It might surprise you to learn that getting gasses, like air, in liquid pipelines is somewhat inevitable. Sometimes they sneak in by being dissolved into the liquid just like carbon dioxide is dissolved in a Coke. Most liquids have at least some dissolved gasses. Even the water coming out of the tap often has a certain amount of dissolved air. This gas can come out of solution when the fluid is warmed or agitated or if it goes through a chemical reaction. Another potential source of gasses in liquid pipelines is leaks through damaged areas or loose fitting joints. If these occur in an area of the pipe with a pressure below the ambient air pressure, air can leak from outside the pipe into the line. But, I haven’t mentioned the most obvious source of air. After all, when you buy pipe from a manufacturer or supplier, it doesn’t come pre-filled with liquid. It starts out empty, or more accurately, it starts out full of air. When you add liquid to a pipe that’s full of air, whether it’s for the first time or after the pipe was drained for maintenance, that’s a perfect opportunity for air to get trapped, which leads me to the second question: How does gas get trapped?

This one’s a little easier to answer. Because gasses are so much less dense than liquids, they almost always float. That means any high spot in a pipe is susceptible to trapping bubbles. And unfortunately, avoiding these high spots is often easier said than done. Take the example of an irrigation line on a farm. These lines can’t be buried because they need to be moved from time to time. So they sit on the surface of the ground and, as such, follow the natural contours, with low spots in valleys and high spots over hills and embankments. These high spots are perfect traps for air. Even if the pipes can be buried, like water or petroleum pipelines, it’s not always feasible to avoid undulations. After all, the deeper you dig, the higher the cost. Often it just makes sense to follow a hill or ridge up and back down rather than going straight through. In buildings and houses, fresh water and heating lines have to avoid all sorts of obstacles which often means routing them in ways that create high spots which can trap air bubbles. The same is true in industrial settings for a wide variety of types of pipelines.

You might be thinking, “Big Deal - air gets trapped where it’s not supposed to all the time. That’s why we have burps and farts and bleed valves on brake lines.” But the thing you have to remember is that air takes up space. It doesn’t necessarily seem like it out in the open, but when it’s trapped in a pipe, it’s taking up cross sectional area that could otherwise be used for flow. It’s a constriction, just like a kink in a rubber hose, which means it can cause a serious reduction in the flow rate. Pipes can be expensive, and the bigger they are, the more they cost. So, engineers try to use the smallest pipe possible to meet the specific need. If you’ve got a bunch of air trapped in your pipe, that’s taking up valuable space without any contribution to the flow rate.

Designing pipes is an exercise in managing energy. The fluid starts at one end with a certain amount of it, and the flow rate depends on how much energy gets lost as it makes its way to the other end. Engineers use a graphical tool called the hydraulic grade line to show this visually. The line represents the potential energy available in the fluid at any point along the pipe. It’s also the level that the liquid would reach if you were to tap in a vertical standpipe at any location along the pipe. The hydraulic grade line slopes downward along pipes as the fluid loses energy to friction. It also drops steeply at sharp bends and valves which cause turbulence in the flow. And, you know what also causes a loss of energy? Air lock. In fact, as the bubble grows and grows in the pipe, you end up with a condition called waterfall flow. You can see why it’s called that in the demonstration. In this case, you lose the energy equivalent to the height of the waterfall which is easy to see on the hydraulic grade line. Unlike friction or turbulence in the pipe, this doesn’t depend on flow. And it adds up. Every undulation in a pipe with a trapped bubble of air is going to rob the fluid of this energy. And if the hydraulic grade line drops below the outlet of the pipe, you won’t get any flow at all. That’s the definition of airlock or vapor lock.
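The hydraulic grade line bookkeeping above can be sketched as a toy calculation: start with the inlet head, subtract friction, then subtract a fixed "waterfall" loss for each trapped air pocket (equal to that pocket's height). All numbers are illustrative assumptions.

```python
# Toy hydraulic grade line check for a pipe with trapped air pockets.

def outlet_head(inlet_head_m, friction_loss_m, air_pocket_heights_m):
    """Head remaining at the outlet after friction and air-lock losses."""
    hgl = inlet_head_m - friction_loss_m
    for h in air_pocket_heights_m:
        hgl -= h  # each bubble robs head regardless of flow rate
    return hgl

# 12 m of head at the inlet, 3 m lost to friction, three air pockets
remaining = outlet_head(12.0, 3.0, [2.5, 2.0, 1.5])
print(f"Head left at outlet: {remaining} m")  # 3.0 m: flow, but much reduced
if remaining <= 0:
    print("Hydraulic grade line below the outlet: air lock, no flow at all")
```

Add one more tall air pocket and the remaining head goes negative: that's the air-locked, no-flow condition described above.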

A pipe that doesn’t flow is not very useful, so we’ve come up with a bunch of ways of dealing with this problem. The simplest, but not necessarily the cheapest, is to just overpower the airlock with a bigger pump. You can be okay knowing that you’ll always have trapped gasses in your pipe if you can just use more pressure to overcome the energy losses associated with airlock. That’s not always feasible, though. Consider a long pipeline with lots of undulations. If you use a single pump to overcome all that airlock, the pressure rating of the pipe near the pump will have to be enormous. The second option is just to design pipelines that don’t trap air. If the flow of the fluid in your pipeline is fast enough, trapped air will just be blown out. And, if there aren’t any high spots in your pipes, there won’t be anywhere for it to be trapped in the first place. Again, this is not always feasible. Consider a pipeline moving water from one end of a hill to the other. Drawing a straight line between points A and B is easy, but digging a trench this deep to install the pipe, or worse, tunneling it, is not an inexpensive endeavor.

The other option is to bleed the gas through a valve. Pretty simple in some cases, but not necessarily in all cases. Cities don’t want to send out technicians to bleed the air out of their pipelines every day. So, many pipelines are equipped with automatic air release valves. These are a simple but clever solution for releasing air from high points without any human intervention. I built an example of this by gluing a float to a check valve. When there’s no air in the pipe, the float holds the valve closed. But when a big enough bubble grows in the pipe, the float acts like a weight and pulls the valve open, venting the air from the pipe. Keep an eye out for these types of valves when you’re perusing the constructed environment and now you’ll know how they work.

The job of an engineer is to take the science and knowledge we have and apply that to design completely new and sometimes untested systems. It almost always involves making assumptions. And if you make bad assumptions, you get bad answers and ultimately bad designs. That’s certainly true for air lock, where, if you assume that gasses don’t get into pipes or that they can’t constrict the flow, you might design a pipeline that doesn’t work. Luckily for engineers, this is a well-known phenomenon in pipe systems. It’s just one of the complexities that come with the job and we’ve come up with a lot of creative ways to overcome it. Thank you, and let me know what you think!

February 11, 2020 /Wesley Crump

What is a Trompe?

January 14, 2020 by Wesley Crump

There is a hydropower plant on the Montreal River in eastern Ontario, Canada called Ragged Chute. It doesn’t look like much from an aerial photo, but that’s because the most interesting parts of this facility are underground: two massive vertical shafts and a large tunnel connecting the two. Before it was converted to generate electricity, Ragged Chute was one of the world’s only water-powered compressed air plants. Starting around 1910, this plant sold compressed air to be used in the silver mines around Cobalt, Ontario. The way this ingenious facility harnessed the power of water to generate compressed air with no moving parts is fascinating, and its use is seeing a small revival in modern times. On today’s blog we’re talking about the trompe.

Compressed air is an excellent way to store and transport energy. It’s not quite as convenient as electricity for homes and businesses, which is why you don’t see air lines strung on poles throughout cities, but in certain situations it makes a lot of sense. This is particularly true in mines, where a variety of tools and equipment need a consistent and safe source of power. But it’s not just pneumatic tools; pretty much every step of the mining process - including exploration, blasting, ventilation, smelting, and refining - makes use of compressed air as a source of power. It’s reliable, simple, easy to transport, and often safer than the other options because it doesn’t have the risk of sparks or explosions that come with electricity or diesel.

We normally get compressed air from... a compressor, a device that does exactly what you’d expect: uses a mechanism to take outside air and squish it into a tank. But, air compressors had a major disadvantage for the mining professionals working in the early 20th century: they didn’t exist (at least not ones that were commercially available). Also, a compressor is just an energy converter. It takes one type of energy (usually rotational kinetic energy from a diesel or electric motor) and converts it into potential energy stored in pressurized air. You still need a source of power. So, to be able to operate a mine using compressed air back in the day would have required both maintaining a separate source of power and a complicated and custom piece of machinery just to keep the tools and equipment running.

You can imagine how valuable it would be to be able to take advantage of a natural source of power - falling water - and avoid the need for complicated machinery and moving parts. That’s exactly what a trompe provided, and I’ve built a miniature version of one so I can show you how it works. And of course it’s made of clear pipe so we can see exactly what’s going on inside. The first step is the water supply. Just like hydroelectric facilities, the amount of hydraulic energy you can convert to compressed air is based on both the height and flow rate available. In my case, I’m using a garden hose, but most trompes built for mines or forges took advantage of small streams or rivers.
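That relationship between height and flow rate is the standard hydraulic power equation, P = rho * g * Q * H. The stream figures below are illustrative assumptions for a small trompe site, not data from Ragged Chute.

```python
# Available hydraulic power: P = rho * g * Q * H
rho, g = 1000.0, 9.81   # water density (kg/m^3), gravity (m/s^2)
Q = 0.5                 # flow rate, m^3/s (a small stream, assumed)
H = 10.0                # available head, m (assumed)

power_w = rho * g * Q * H
print(f"Available hydraulic power: {power_w / 1000:.1f} kW")  # 49.1 kW
```

Only part of that power ends up stored in the compressed air, since the process isn't perfectly efficient, but it sets the upper bound on what the trompe can deliver.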

As the water enters the first vertical shaft it passes by a series of air inlets. Because of the water’s velocity as it travels down the shaft, the pressure at these inlets goes below atmospheric. So, the trompe “sucks” air from outside into this vertical shaft to join the water. The turbulence and surface tension of the flowing water entrains these bubbles of air and carries them to the bottom of the shaft. This type of interaction between flowing water and air is fairly complicated to characterize, and there are lots of situations in engineering where air-water interaction can cause major problems like in spillways, control gates, and pipelines. But, in a trompe, this is absolutely essential.

Once the air-water mixture reaches the bottom of the shaft, it enters a horizontal chamber. The purpose of this chamber is to separate the air and water. The turbulence and velocity are reduced, allowing the entrained bubbles to rise upwards. This air gets trapped in the collection system while the water continues out the other side of the chamber and upwards into the second vertical shaft. The purpose of this shaft is to give the water a way out while leaving the air behind. The height of this shaft also determines the pressure of the trapped air. I have a video on this topic if you want more detail, but the summary is that the pressure in a body of fluid doesn’t depend on the volume, just the depth. So a simple riser like my second pipe here is enough to hold pressure on the air in the collection system, compressing it just like a mechanical compressor would. It’s pretty satisfying to see it work. I could watch this all day.
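Since pressure depends only on depth, the pressure held on the trapped air follows directly from the height of the second shaft: p = rho * g * h. The riser height below is an illustrative round number, not the actual dimension at Ragged Chute.

```python
# Gauge pressure held on the trapped air by the outlet riser: p = rho * g * h
rho, g = 1000.0, 9.81   # water density (kg/m^3), gravity (m/s^2)
h = 100.0               # height of the second shaft above the chamber, m (assumed)

p_pa = rho * g * h
print(f"Air pressure: {p_pa / 1000:.0f} kPa, about {p_pa / 6894.76:.0f} psi")
```

A tall enough riser holds industrially useful pressures on the collection system with no moving parts at all.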

Once enough air has collected in the system, I can open the valve to use it. I should say that this is a scale demonstration, so it doesn’t do anything of significant value unless you have a really tiny nail gun or air drill.

One of the benefits of a trompe over a more traditional air compressor is related to temperature. In technical terms, a compressor uses an adiabatic process whereas a trompe compresses air isothermally. But there’s no need to get caught up in the vocabulary. If you’re familiar with the behavior of gases, you know that (all other things staying the same) if you compress a gas, it gets hot. And the hotter the air, the more moisture it can hold. If you’re familiar with air tools, or just corrosion in general, you know that moisture is one of a tool’s worst enemies. In a trompe, however, that heat of compression gets absorbed by the water. So you end up with a much cooler and drier source of compressed air, which by the way, is the definition of conditioning air, something I pay dearly for here in San Antonio, and I’m sure those miners in Canada appreciated as well.
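To see how much heat is at stake, here's the ideal adiabatic temperature rise, T2 = T1 * (p2/p1)^((gamma-1)/gamma). The intake temperature and pressure ratio are illustrative; a trompe, compressing isothermally, would deliver air at essentially the water temperature instead.

```python
# Ideal adiabatic compression of air: T2 = T1 * (p2/p1)^((gamma-1)/gamma)
gamma = 1.4        # ratio of specific heats for air
T1 = 293.15        # intake air at 20 C, in kelvin
p_ratio = 8.0      # compress to 8 atmospheres (illustrative)

T2 = T1 * p_ratio ** ((gamma - 1) / gamma)
print(f"Adiabatic outlet temperature: {T2 - 273.15:.0f} C")  # ~258 C
```

Real compressors fall between the adiabatic and isothermal ideals, but the gap shows why mechanically compressed air comes out hot and moisture-laden while trompe air comes out cool and dry.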

I’m definitely not going to be powering any of my shop tools with my little demonstration here, and it wouldn’t be a very efficient way to do it, even if I could. If you’ve got grid power available, it makes sense to use a compressor designed to take advantage of that. But sometimes you don’t. A trompe can be useful in off-grid aquaponics and hydroponic systems that need aeration of the water. And, in fact, the design of my demonstration here came from the late Bruce Leavitt, a mining engineer who pioneered the use of small trompes for aeration and treatment of mining water in remote locations without access to electricity. I love to see examples of ancient technology finding new uses in our modern world. Especially in an age where renewable sources of energy are at the top of our minds, the trompe is a really cool way to harvest the power of water for beneficial use. Thank you, and let me know what you think!

January 14, 2020 /Wesley Crump

How Does a Hydraulic Ram Pump Work?

December 17, 2019 by Wesley Crump

A while back I wrote about water hammer, a hydraulic phenomenon that can lead to major problems in pipelines. Then I wrote about steam hammer, a somewhat related phenomenon associated with steam piping systems that can be extremely dangerous. And then, I did a follow-up to the water hammer talking about transient vacuum phenomena that can collapse pipes if they’re not designed and operated correctly. But even after those posts, it turns out I haven’t told the full story. Because even though water hammer is generally a problem for engineers, there is a way to take advantage of this normally troublesome effect for a beneficial use. Hey I’m Grady and this is Practical Engineering. On today’s episode we’re talking about hydraulic ram pumps.

A hydraulic ram is a clever device invented over 200 years ago that can pump water uphill with no external source of power except the water flowing into it. No, it’s not a free energy device, but if you search around, you’ll find lots of great implementations of this style of pump on YouTube, mainly from people doing homesteading and off-grid lifestyle vlogs. And, it’s easy to see why ram pumps are so popular among these groups. If you’ve got a piece of land with an abundant source of water, a ram pump lets you get that water to a tank or location at a higher elevation with a really elegant design that requires no electricity or fuel and has only two moving parts. So of course, I built my own so you can see how it works, but first we need to build just a little bit of foundational knowledge on the behavior of fluids. And this is something anyone can understand.

There are three types of energy that a fluid can have, and in civil engineering, we usually convert them to their equivalents as the height of a static column. This distance is called the head. Understanding the energy in a fluid is how we solve a lot of engineering problems, because in most scenarios, the amount of energy stays the same, and the only thing that changes is what form it takes. The first type is head from gravitational potential. It doesn’t have an equivalent static column because it is a static column. The head is just the distance from an arbitrary datum. This one is easy to demonstrate with a tank and tube. I can move this tube around wherever I want, but the level in the tube and tank are always going to be the same. They’re both exposed to atmospheric pressure at their surface and they’re not moving so there’s no velocity. It’s just pure gravitational potential.

The second type of energy is pressure head. In this case, the head is the pressure divided by gravity and the density of the fluid. So, if I close off the top of my tank and add some air pressure, the level in the tube goes up. The new height is the pressure head, the equivalent static column related to the pressure in the tank. For a given pressure, a denser fluid like mercury will have a lower head compared to a lighter fluid like water because they have different unit weights. A good example of measuring pressure head is a barometer. We live at the bottom of an ocean of air, and we like to keep track of the air pressure down here. One of the easiest ways to do that is to measure how high the pressure can push a static column of a fluid, in most cases mercury.

The final type of energy is velocity head, which relates to a fluid’s kinetic energy. I can demonstrate the equivalent column of water using a tool called a pitot tube. The conversion for velocity head is velocity squared divided by 2 times gravitational acceleration. That’s a lot of background, but it’s important in understanding the function of a ram pump. Because without an external source of power, even though you can go from one type of energy to another, you can’t get more energy out than you had at the start. For example, I can convert a static column of water to one with some velocity, but I’m never going to get the fluid to a higher elevation than where it started… with one exception. An exception that the hydraulic ram pump takes advantage of beautifully.
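To make those conversions concrete, here’s a minimal sketch in Python of the three head terms just described, summed into a total head. The flow values are made up purely for illustration:

```python
# Total head = elevation head + pressure head + velocity head,
# the three energy terms described above. Example values are arbitrary.

G = 9.81      # gravitational acceleration, m/s^2
RHO = 1000.0  # density of water, kg/m^3

def total_head(elevation_m, pressure_pa, velocity_ms):
    """Return the total head in meters of water column."""
    elevation_head = elevation_m                 # z
    pressure_head = pressure_pa / (RHO * G)      # p / (rho * g)
    velocity_head = velocity_ms**2 / (2 * G)     # v^2 / (2 * g)
    return elevation_head + pressure_head + velocity_head

# 2 m above the datum, 50 kPa of gauge pressure, moving at 3 m/s:
print(round(total_head(2.0, 50_000.0, 3.0), 2))  # 7.56 m of head
```

Notice that a denser fluid gives a smaller pressure head for the same pressure, which is exactly why barometers use mercury instead of water.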

A ram pump is essentially just two one-way check valves, one called the waste valve and the other called the delivery valve. To get it started, you just momentarily open the waste valve to allow water to flow. After that, it works on its own to pump the water uphill above the elevation of the source. Pretty amazing, I think. Let’s walk through the path of the water to understand how it works. First, as the waste valve opens, water flows into the pump and immediately out of the valve. But, as it picks up speed, the flowing water eventually forces the waste valve to slam shut. Now the water is stopped in the pump. It had kinetic energy… but now it doesn’t. That means the kinetic energy was converted to something else, in this case pressure. This is the definition of water hammer. Slamming a valve shut converts all that kinetic energy nearly instantly, creating a huge spike in pressure that can lead to stress and damage in pipe systems and connected equipment.

In the case of the ram pump though, that spike in pressure has a different effect. It opens the second check valve and forces water entering the pump into the delivery line. As you can see from my digital pressure gauge, this process is cyclical, pumping some of the water and wasting the rest each time the valve slams shut. You can see what’s happening here in real time: the pump is robbing some of the kinetic energy from the flow and imparting it to a smaller volume of water. It’s a redistribution of energy, converting low head and high flow into high head and low flow. And this type of pump can really create a lot of head. I ran my discharge line up to well above the roof of my shed, and my pump is still able to get the water up there. Sometimes an air chamber is included in the pump to smooth out those sharp spikes in pressure and provide a more even flow rate out of the delivery pipe, reducing wear and tear on the pump components.
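To get a feel for the size of that pressure spike, here’s a rough sketch using the Joukowsky relation, a standard water-hammer estimate for an instantaneous valve closure (it isn’t derived in this post, and the wave speed here is an assumed, typical value):

```python
# Rough estimate of the water-hammer pressure spike that drives a ram pump,
# using the Joukowsky relation dp = rho * c * dv. This is an upper bound for
# an instantaneous valve closure; the numbers are illustrative assumptions.

RHO = 1000.0   # water density, kg/m^3
C = 1200.0     # pressure wave speed in the drive pipe, m/s (typically 1000-1400)
G = 9.81       # gravitational acceleration, m/s^2

def hammer_head(delta_v):
    """Equivalent head (m of water) of the spike when flow moving at
    delta_v (m/s) is stopped by the waste valve slamming shut."""
    delta_p = RHO * C * delta_v    # pressure spike in pascals
    return delta_p / (RHO * G)     # convert pressure to head

# Water moving at just 1 m/s before the valve closes:
print(round(hammer_head(1.0)))  # ~122 m of head, far above the source
```

Real ram pumps see smaller spikes because the valve closure isn’t truly instantaneous, but this shows why such a simple device can lift water so far above its source.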

If you like to think in terms of modern electrical devices, imagine we installed a hydropower turbine on a pipe to spin a generator and then used that electricity to power a pump to move the water coming out of the turbine. Obviously you wouldn’t be able to pump all the water, and anyway, that would be a pretty complicated setup for something the ram pump can do with a few very simple off-the-shelf plumbing parts. In fact, there is a type of pump that works from a water-powered turbine. Maybe I’ll build one of those next. For now though, I think the ram pump is an ingenious way to take advantage of the properties of fluids. We all need water for a variety of reasons, so being able to move it where we need it without any fancy equipment or external sources of power is a pretty nice tool to have in your toolbox. Thank you, and let me know what you think!

December 17, 2019 /Wesley Crump

How Power Blackouts Work

November 19, 2019 by Wesley Crump

We usually think of the power grid in terms of its visible parts: power plants, high-voltage lines, and substations. But, much of the complexity of the power grid comes in how we protect it when things go wrong. Because of the importance of electricity in our modern world, it’s critical that we be able to prevent damage to equipment and perform repairs quickly when they’re needed. The grid got its name for a reason: it’s an interconnected system, which means that, if we’re not careful, small problems can sometimes ripple out and impact much larger areas. So its protective systems are thoughtfully designed to work together and minimize the number of people affected when faults happen. Hey I’m Grady and this is Practical Engineering. Today we’re talking about power system protection and how blackouts work.

Things go wrong on the grid all the time. Just like a car or the device you’re watching this video on right now, the grid is a machine. It’s a big machine that sits out in all kinds of weather, exposed to a variety of meddling and destructive animal species and just the general wear and tear that comes from providing humanity with an absolutely essential yet extremely dangerous amenity: electricity. It shouldn’t come as a surprise that faults happen from time to time. One common type of fault on transmission lines comes from sagging. During peak demands, these lines move tremendous amounts of energy as electrical current. Well, no wire is a perfect conductor; they all have some resistance. So, the more current you try to pass through a wire, the less efficiently it works. That energy that doesn’t make it to the end of the line is instead lost as heat. And what does heat do to metal? It causes it to expand. So the lines get longer, which means they sag lower, and occasionally that brings them into contact with tree limbs, creating a path to ground and shorting out the line. 
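The chain of effects above — current, resistive heat, thermal expansion, sag — can be sketched with some back-of-the-envelope arithmetic. The current, resistance, span, and temperature values here are illustrative assumptions, not real conductor ratings:

```python
# Back-of-the-envelope look at why heavily loaded lines sag: resistive
# heating expands the conductor. All numbers are illustrative assumptions.

ALPHA_AL = 23e-6   # thermal expansion coefficient of aluminum, 1/degC

def heat_loss_per_km(current_a, resistance_ohm_per_km):
    """Power dissipated as heat in one kilometer of conductor (watts)."""
    return current_a**2 * resistance_ohm_per_km

def elongation(length_m, delta_temp_c):
    """Extra conductor length from a temperature rise (meters)."""
    return length_m * ALPHA_AL * delta_temp_c

# 1000 A through a line with 0.05 ohm per km of resistance:
print(heat_loss_per_km(1000, 0.05))         # 50000.0 W, i.e. 50 kW lost per km
# A 300 m span heating up by 40 degC:
print(round(elongation(300, 40) * 100, 1))  # ~27.6 cm longer -> deeper sag
```

A quarter meter of extra length over one span doesn’t sound like much, but on a slack catenary it translates into a substantially lower midpoint, which is how lines end up in the trees.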

So what happens during a short circuit? Electricity will take any path to ground that it can find. And the lower the resistance of the path, the more current will flow. A short circuit is when a low-resistance path to ground happens where it’s not supposed to, bypassing the customers and literally shortening the circuit. This has a number of unwanted consequences. All that energy is being wasted, for one. Arcs created by short circuits can start fires, for two. But more importantly, faults create massive spikes in current that can overload and damage equipment on the grid. I probably don’t need to mention that most pieces of the power grid are expensive, they take a long time to install and repair, and they’re important (they’re providing an essential utility), so we don’t want them to get damaged.

“Easy enough,” you might be thinking. “Just make them strong.” Put all the power lines underground where they’re protected from weather and animals. Make them as big as bridge suspension cables and use indestructible alloys. Put the substations in big concrete buildings. Hide the solar panels under the ocean. You see what I’m getting at. I don't know how much a car that never breaks down would cost, but I’m sure I wouldn’t want to pay for it, and the same is generally true for the power grid. Resiliency doesn’t just mean durability. It’s a balancing act between making our infrastructure strong enough to resist threats, keeping faults from creating further damage, and making it easy to diagnose and repair problems so that equipment can be brought back online with minimal downtime.

Those last two items are the job of power system protection engineers and can be summed up pretty easily in one word: isolate. Engineers establish zones of protection around each major piece of the power grid to isolate faults and make them easy to find and repair. You can trace these zones of protection from your house all the way to the power plant. A short circuit in your coffee maker isn’t going to overload the service transformer because there’s a fuse or breaker in between. If a car knocks down a pole and grounds out a line, it’s not going to take out the entire substation, again because it’s isolated with a fuse or breaker. If a transformer has a fault in a substation, it’s not going to melt the transmission lines feeding it because it can be isolated using breakers. And if a transmission line sags into a tree limb, the resulting surge in current is not going to destroy the generator at the power plant because it has its own zone of protection. Of course, this is a super simplified explanation. These zones of protection are thoughtfully considered to balance the complexity and resiliency of the grid. But, how do they actually work?

There’s a wide variety of electrical fault types, and identifying and differentiating them can be a major challenge. The fundamentals of electrical devices can be boiled down pretty easily: electrical current travels from a source, through a series of components, and back through a return path that is referenced to ground. There really isn’t that much information that protective devices can use to identify problems. For example, there’s very little difference between what’s happening in your toaster and what happens when you take the live and neutral lines from a socket and short them together. The circuit breakers in your house identify faults primarily based on electrical current. If you get too many amps moving through the breaker, it assumes that something is wrong and shuts off the circuit. That makes sense for a lot of cases, since high current can seriously damage equipment and conductors, leading to all sorts of issues. But, it’s not the only kind of electrical fault.

On the grid, protection is primarily done through relays that can measure all kinds of parameters to identify faults, activate circuit breakers to isolate equipment, and notify utilities of the problem. These relays measure voltage, current, and power on the lines, like you’d expect. They also measure differential current. Even if the current isn’t too high, you want to make sure that as much current is going out as is coming in; otherwise you’re losing it somewhere else, which can be a signal of a fault. This is the same principle that GFCI outlets in your house use. Relays also keep an eye on the frequency of the grid to make sure different components don’t lose synchronization. Certain breakers can also be manually activated, like during rolling blackouts, where utilities are forced to shed non-critical electrical loads due to lack of generation capacity. These are all types of “managed failures” where you accept some loss of service in exchange for protecting the rest of the system. Isolating equipment when things go wrong speeds up repairs and reduces their cost, getting customers back online sooner.
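A toy version of those relay checks might look like the sketch below. The thresholds are made-up illustrative values, not real relay settings:

```python
# Toy relay logic: a fault can show up as overcurrent, as a current
# imbalance (differential, the GFCI principle), or as an off-nominal
# frequency. All thresholds here are made-up illustrative values.

def check_line(current_in, current_out, freq_hz,
               max_current=400.0, max_imbalance=5.0,
               freq_band=(59.5, 60.5)):
    """Return a list of detected fault conditions (empty list = healthy)."""
    faults = []
    if current_in > max_current:
        faults.append("overcurrent")
    if abs(current_in - current_out) > max_imbalance:
        faults.append("differential")  # current leaking somewhere it shouldn't
    if not (freq_band[0] <= freq_hz <= freq_band[1]):
        faults.append("frequency")
    return faults

print(check_line(350.0, 349.8, 60.0))  # [] -> breaker stays closed
print(check_line(350.0, 300.0, 60.0))  # ['differential'] -> trip and isolate
```

A real protective relay is far more sophisticated (time-graded trips, impedance measurement, and so on), but the principle of comparing measurements against limits and tripping a breaker is the same.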

But, there are cases when isolating equipment can actually make things worse. See my demo in the video for how this works. Imagine a series of interconnected transmission lines, all feeding their own service areas, represented by the power resistor and LED light in the model. During peak demand, these lines might be operating at nearly their maximum capacity. If one line experiences a fault, for example shorting out against a tree branch, protective relays will isolate the line. In my case, when I short out a line, the fuse blows. But, if not handled correctly, that can mean the entire electrical load gets automatically redistributed among the remaining transmission lines, pushing them beyond their limits. All of a sudden, you have a cascading failure. Much of our grid is designed to avoid this type of failure, but occasionally you get the perfect alignment of faults, communication errors, and human factors that leads to massive outages, like the one in 2003 that took out much of the U.S. Northeast and Ontario.
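The cascading-failure scenario can be captured in a few lines of code: trip one line, split its load among the survivors, and see whether they hold. The capacities and loads below are arbitrary illustration values, and the even load split is a big simplification of real power flow:

```python
# Minimal sketch of a cascading failure: when one line trips, its share of
# the load shifts to the survivors, which may then overload in turn.
# Capacities and loads are arbitrary; real grids don't split load evenly.

def cascade(capacities, total_load, first_failure):
    """Trip lines one by one as redistributed load exceeds capacity.
    Returns the set of surviving line indices."""
    alive = set(range(len(capacities))) - {first_failure}
    while alive:
        share = total_load / len(alive)             # load split evenly
        overloaded = {i for i in alive if share > capacities[i]}
        if not overloaded:
            return alive                            # system stabilizes
        alive -= overloaded                         # those lines trip too
    return alive                                    # total blackout

# Three lines of capacity 100 each. Lose one line while carrying 240 total:
print(cascade([100, 100, 100], 240, first_failure=0))  # set() -> blackout
# The same fault at a lighter load of 180:
print(cascade([100, 100, 100], 180, first_failure=0))  # {1, 2} survive
```

The same single fault is harmless at light load and catastrophic at heavy load, which is why cascades tend to happen during peak demand.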

Starting back up from a major blackout like this can be really complicated. Even just choosing which equipment to re-energize, and in what order, takes a lot of consideration and engineering. There’s a chicken-and-egg situation because most large power plants actually need some power to operate, so it can be difficult to start back up during a wide-area outage, a process called a black start. But, it’s still better than the alternative of having to perform major equipment replacements because things spiraled out of control. When your power goes out, it’s easy to be frustrated at the inconvenience, but consider also being thankful: it probably means things are working as designed to protect the grid as a whole and ensure a speedy and cost-effective repair to the fault. Thank you, and let me know what you think!

November 19, 2019 /Wesley Crump

World's Largest Batteries - (Pumped Storage)

November 12, 2019 by Wesley Crump

Electricity faces a fundamental problem that comes with pretty much any product that’s provided on-demand: our ability to generate large amounts of it doesn’t match up that closely with when we need it. Wind and solar power are becoming more cost effective, but they’ll always be unreliable and intermittent sources of energy. Retailers use warehouses to store goods between manufacturer and sale. Water utilities use tanks and reservoirs. But the storage of electricity for later use, especially on a large scale, is quite a bit more challenging. That’s why power grids are mostly real time systems with generation ramped up or down to meet fluctuating demands instantaneously. That’s not to say that we don’t store energy at grid scale though, and there’s one type of storage that makes up the vast majority of our current capacity.

Although it’s a very convenient form of energy to produce, transmit, and use, electricity has some disadvantages. We’ve talked a little bit about variability in demand and generation capacity in previous videos of this series, but I’ll summarize again here. The fundamental problem is that we use electricity in a daily pattern, with peaks in the morning and evening, but we generate electricity differently. Fossil fuel and nuclear plants generally have a single capacity at which they run most efficiently, with occasional need to go offline for maintenance. Solar, of course, follows the amount of sunlight with some variability due to clouds. And wind follows weather patterns with potential for lots of variability. You may have heard of the duck curve, which is the name given to our electricity demand minus the contribution from solar. You end up with this funky curve representing the need for other sources of electricity. This creates a challenge because not only does solar power start to die away right when we need it most during peak demands in the evening, it also creates a much steeper demand curve, requiring grid operators to spin up other types of generation more quickly. So, solar power is meeting some of our electricity needs, but it’s not necessarily eliminating the need for other sources of electricity. And in some cases, it may actually be making the grid less efficient by contributing to instability and requiring the use of peaking plants that are generally heavier polluters.
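The duck curve itself is just subtraction, hour by hour. Here’s a small sketch with invented hourly numbers (not real grid data) showing the steep evening ramp that other generators have to cover:

```python
# The "duck curve" is just demand minus solar output, hour by hour.
# These 24 hourly values (in GW) are invented for illustration only.

demand = [30, 28, 27, 27, 29, 34, 40, 45, 46, 45, 44, 44,
          44, 44, 45, 47, 50, 55, 58, 57, 52, 45, 38, 33]
solar  = [0, 0, 0, 0, 0, 1, 4, 8, 12, 15, 17, 18,
          18, 17, 15, 12, 8, 4, 1, 0, 0, 0, 0, 0]

# Net load: what all the other generators must supply.
net_load = [d - s for d, s in zip(demand, solar)]

# The biggest hour-to-hour jump in net load is the ramp grid
# operators must cover as the sun sets and demand peaks together.
ramps = [net_load[h + 1] - net_load[h] for h in range(23)]
print(max(ramps))  # 9 (GW in a single hour, in the early evening)
```

Notice that the steepest ramp in net load is bigger than the steepest ramp in raw demand: the solar output falling away adds to the demand rising, which is exactly the problem described above.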

In fact, peaking plants are the go-to solution for load following on the grid. These are smaller, more expensive sources of electricity that only run for a few hours per day to make up the difference between the base power load and the evening peaks. Another interesting solution to this problem is called demand management, which is influencing the demand for electricity to reduce or shift peaks and better match generation capacity. This can range from simple marketing campaigns encouraging you to set your thermostat a few degrees higher, to sophisticated systems that can tell your electric car when to start charging. But, the holy grail in grid-scale power delivery is simply to let the demand and generation curves be what they’ll be, storing energy when generation exceeds demand and using that stored energy during demand peaks.

There are a wide variety of fascinating ideas for storing large amounts of energy, from molten salt to pressurizing the air in old mines, but most of the current grid-scale storage relies on gravitational potential. That is: use excess energy to lift something up, then use that thing to generate electricity as it falls back down, essentially treating earth’s gravity as a spring. And the vast majority of current grid-scale storage does this using water, in a scheme called pumped storage hydroelectricity. And I’ve built a little mini-scale version of this as a demonstration. In most cases, the way this works is to have two reservoirs nearby but separated by a large difference in elevation, in this case two buckets separated by a ladder. At night, when electricity prices are low, you use that cheap power and pumps to fill the upper reservoir. During the day, when energy prices are high, you use the water in the upper reservoir to spin turbines and generate hydropower. It’s essentially a giant water battery, and storing energy in this way has a lot of benefits, besides just shaving off the peaks of the demand curve. Hydropower is one of the most responsive ways to generate electricity, so pumped storage allows grid operators to handle fluctuation in demands quickly. Pumped storage is also valuable in an emergency, providing quick access to power when other sources may be out of commission. Finally, these systems can provide a lot of benefit on small, insular power grids (like on islands) where you don’t have as much diversification in the generation portfolio. But, pumped storage has several major challenges as well, and I’ll use a demo in my video to illustrate the big ones.

First is energy density, which is the term to describe how much energy can fit into a unit volume. And this is not a pumped storage facility’s finest feature. For some reference, see the video for the energy density of gasoline, a lithium-ion battery, and the water in a typical pumped storage reservoir. I say typical because the total energy storage is a function of both height and volume: the greater the head above the turbines, the greater the generating capacity for a given volume of water. I’m using a little aquarium pump to fill up my upper reservoir on top of this ladder. It’s pretty easy to see the difference in energy density between a battery and the stored water. The water in the bucket has about the same gravitational potential energy as the battery in your car’s key fob. In fact, to reach the same density as a typical lithium-ion battery, you’d have to store the water at a height near the edge of earth’s atmosphere, which wouldn’t be very convenient for an electric vehicle. One of the main disadvantages of pumped storage facilities is that they require a very specific type of site, where you can locate two pools near each other while also separating them by as much vertical distance as possible. And even then, because of the low energy density, these are often massive reservoirs that are major civil engineering projects, as compared to something like a battery that can be manufactured in a factory.
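The energy density comparison comes straight from E = mgh. Here’s a quick sketch; the 100 m head and the 200 Wh/kg battery figure are assumed, typical values rather than specifics from the post:

```python
# How much energy does elevated water actually store? E = m * g * h.
# The 100 m head and 200 Wh/kg battery figure are assumed typical values.

G = 9.81  # gravitational acceleration, m/s^2

def stored_wh(mass_kg, height_m):
    """Gravitational potential energy in watt-hours."""
    return mass_kg * G * height_m / 3600.0

# One tonne of water (one cubic meter) raised 100 m:
print(round(stored_wh(1000, 100), 1))  # 272.5 Wh, a few laptop batteries

# Height needed to match a ~200 Wh/kg lithium-ion cell:
height = 200 * 3600.0 / G              # from E/m = g*h
print(round(height / 1000))            # ~73 km up, near the edge of space
```

That’s why pumped storage only makes sense at enormous scale: a whole cubic meter of water a hundred meters up stores less energy than a small e-bike battery, but reservoirs can hold millions of cubic meters.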

The other major challenge of pumped storage is getting the energy back out once you’ve stored it. Efficiency is the ratio of how much energy you can get out to how much you put in. You never get it all. That’s the second law of thermodynamics. But you hope to get most of it; otherwise you’ve built a very big and very expensive battery that doesn’t work. My model reservoir is holding about a tenth of a watt-hour, but that’s not how much energy it took to get it there. I kept an eye on the power supply while the bucket filled, and it took about 0.7 watt-hours of electricity. That means my pump’s efficiency was about 15%. So, the most energy I can even hope to recover is a lot less than I’ve put in.

Some pumped storage facilities can use reversible pumps that act as turbines, but in my case I’m using a dedicated unit. I’ve got a power resistor as a dummy load, and I’m measuring the voltage and current produced by the turbine to estimate the total recovery of energy. And… the numbers don’t look good. In fact, with the small amount of pressure, my little mini hydro turbine could barely spin at all. My best estimate is that I was able to generate 2 milliwatt-hours from the full bucket. That’s a whopping 0.3% efficiency, and this is the other reason we’re not hooking up tanks of water to our portable electronic devices. At a small scale, this just isn’t a feasible way to store power. Little pumps and turbines just aren’t very efficient.
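The efficiency arithmetic from the demonstration is simple enough to write out. The energy figures are the approximate measurements quoted above:

```python
# Efficiency arithmetic for the demonstration, using the approximate
# measurements quoted above: ~0.7 Wh in at the pump, ~0.1 Wh stored in
# the bucket, and ~0.002 Wh (2 mWh) recovered at the turbine.

energy_in_wh = 0.7      # electricity consumed by the pump
energy_stored_wh = 0.1  # gravitational potential in the full bucket
energy_out_wh = 0.002   # electricity recovered from the turbine

pump_eff = energy_stored_wh / energy_in_wh
round_trip_eff = energy_out_wh / energy_in_wh

print(f"{pump_eff:.0%}")        # 14% -- the "about 15%" pump efficiency
print(f"{round_trip_eff:.1%}")  # 0.3% round trip -- tiny machines are lossy
```

Each stage multiplies its losses onto the last, which is why the round-trip number is so much worse than the pump efficiency alone.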

But things look a little better at a larger scale. Even considering all the potential losses of energy, from evaporation or leakage of water to friction and turbulence within the machinery, many pumped storage facilities achieve efficiencies of 70 percent and higher. Of course, that means they are net energy consumers, since (as we mentioned) you can’t recover all the power used to pump the water to the top. But if the cost of the energy they consume off-peak is lower than the price they earn selling that energy back (minus inefficiencies) during peak demand, they can still turn a profit.
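That profitability argument reduces to one inequality: the facility wins when efficiency times the peak price exceeds the off-peak price. Here’s a sketch with made-up prices, not real market data:

```python
# A pumped storage facility profits when efficiency * peak_price exceeds
# the off-peak price it pays to pump. All prices here are made up.

def daily_profit(energy_in_mwh, efficiency, off_peak_price, peak_price):
    """Profit per pump/generate cycle in dollars."""
    cost = energy_in_mwh * off_peak_price               # buy low at night
    revenue = energy_in_mwh * efficiency * peak_price   # sell (most of it) high
    return revenue - cost

# 1000 MWh pumped at $20/MWh overnight, sold at $60/MWh at 75% efficiency:
print(daily_profit(1000, 0.75, 20, 60))  # 25000.0 -> profitable despite losses
```

With a 75% round trip, the break-even peak price is the off-peak price divided by 0.75, so even a modest day/night price spread leaves room for profit.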

In fact, you might be surprised how many pumped storage facilities already exist. In the U.S., the Energy Information Administration has a nice online map where you can look around and see if there’s one near you that you can go visit. Of course, I’ve only had time to go into the basics of pumped storage, and there are a lot of interesting advancements on the horizon, like using abundantly available seawater instead of sometimes limited sources of freshwater. Like demand management, storage is just one part of improving the efficiency and stability of the power grid as we work to implement more renewable and sustainable sources of electricity. Thank you for reading, and let me know what you think!

November 12, 2019 /Wesley Crump

How do Electric Transmission Lines Work?

September 24, 2019 by Wesley Crump

In the past, power generating plants were only able to serve their local areas. Electricity didn’t have far to travel between where it was created and where it was used. Since then, things have changed, and most of us get our electricity from the grid: huge interconnected networks of power producers and users. As power plants have grown larger and moved further from populated areas, efficiently moving electricity over long distances has become more and more important. Stringing power lines across the landscape to connect cities to power plants may seem as simple as connecting an extension cord to an outlet, but the engineering behind these electric superhighways is more complicated and fascinating than you might think. Hey I’m Grady and this is Practical Engineering. On today’s episode we’re talking about electrical transmission lines.

Generating electricity is a major endeavor, often a complex industrial process that requires huge capital investments and ongoing costs for operation, maintenance, and fuel. Electric utilities only earn revenue on the power that makes it to your meter. They aren’t compensated for energy lost on the grid. So if we’re going to go to the trouble of producing electricity, we want to make sure that as much of it as possible actually reaches the customers for whom it’s intended. The problem is that most power plants are located far away from populated areas for a variety of reasons: land is cheaper in rural areas, many plants require large cooling ponds, and most people don’t like to live near large industrial facilities. That means that massive amounts of electricity need to be transported long distances from where it’s created to where it’s used.

Power lines are the obvious solution to this problem, and sure enough, stringing wires (normally called conductors by power professionals) over vast expanses of rural countryside is, in general, how bulk transport of electricity is carried out. But, if we want this transport to be efficient, there’s more to consider. Even good conductors like aluminum and copper have some resistance to the flow of electric current. You can even see this at home. We can measure a small drop in voltage when a hair dryer is plugged directly into an outlet and turned on. Trying this again at the end of a long extension cord, the drop in voltage is much more significant. That extra voltage drop represents energy lost as heat in the resistance of the extension cord. In fact, this lost power is pretty easy to calculate if you’re willing to do a little bit of algebra (which I always am).

Electrical power is the product of the current (that’s the flow rate of electric charge) and the voltage (that’s the difference in electric potential). For a simple conductor, we can use Ohm’s law to show that the drop in voltage from one end of a wire to the other is equal to the current times the resistance of the wire measured in ohms. Substituting this relationship in, we find that the power loss is equal to the product of current squared and resistance. So if we want to reduce the losses in a power line, we have two variables to play with. We can reduce the resistance of the conductor by increasing its size or using a more conductive material, but look what matters even more: the i-squared term. Reducing the current by half will cut the lost power to one-fourth, and so on. Going back to the power equation, we can see that the only way to reduce the current and still deliver the same amount of power is to increase the voltage. So, that’s just what we do. Transformers at power plants boost the voltage up to 100,000 volts and sometimes much higher before sending electricity on its way over transmission lines. This lowers the current in the lines, reducing the wasted energy and making sure that as much power as possible makes it to customers at the other end.
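Here’s the i-squared effect worked out numerically: for a fixed power delivery over a fixed line resistance (both assumed, illustrative values), each step up in voltage slashes the fraction of power lost:

```python
# The i-squared effect from the derivation above: for a fixed power
# delivery, higher voltage means lower current, and resistive losses fall
# with the square of the current. Power and resistance are assumed values.

def line_loss_w(power_w, voltage_v, resistance_ohm):
    """Resistive loss in watts: I = P / V, loss = I^2 * R."""
    current = power_w / voltage_v
    return current**2 * resistance_ohm

P = 100e6  # 100 MW delivered
R = 10.0   # total line resistance, ohms

for kv in (100, 230, 500):
    loss = line_loss_w(P, kv * 1000, R)
    print(f"{kv} kV: {loss / P:.2%} of the power lost")
# 100 kV: 10.00% of the power lost
# 230 kV: 1.89% of the power lost
# 500 kV: 0.40% of the power lost
```

Five times the voltage means one twenty-fifth of the loss, which is exactly why bulk transmission runs at hundreds of kilovolts.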

This simple demonstration illustrates the concept. If I try to power a hair dryer using these thin wires, it is not going to work. The current required to power the dryer is just too high. It creates so much heat that the wires completely melt. That heat represents wasted energy. But, if I first boost the voltage up using this transformer and step it back down on the other side of the thin conductors, they have no problem carrying the power required to run the dryer. We’ve swapped high current for high voltage, making the conductors more efficient at carrying power. What we’ve also done is make things much more dangerous. You can think of voltage as electricity’s desire to flow. High voltages mean the power really wants to move and will even find a way to flow through materials we normally consider non-conductive, like the air. The engineers designing high voltage transmission lines have to make sure that these lines are safe from arcing and other dangers that come with high voltage.

Most long distance power lines don’t use insulation around the conductors themselves. Insulation in this way would have to be so thick that it wouldn’t be cost effective. Instead, most of the insulation comes from air gaps, or simply spacing everything far enough apart. Transmission towers and pylons are really tall to prevent anyone or any vehicle on the ground from inadvertently getting close enough to the conductors to create an arc. Bulk electricity is transmitted in three phases, which is why you’ll see most transmission conductors in groups of three. Each phase is spaced far enough from the other two to avoid arcing between the phases. The conductors are connected to each tower through long insulators to keep enough distance between energized lines and grounded pylons. These insulators are normally made from ceramic discs so that if they get wet, electricity leakage has to take a much longer path to ground. These discs are somewhat standardized, so this is an easy way to get a rough guess of a transmission line’s voltage: just multiply the number of discs by 15. For example, this line near my house has 9 discs on each insulator, and I know it’s a 138 kilovolt line. You’ll also often see smaller conductors running along the top of transmission lines. These static or shield wires aren’t carrying any current. They’re there to protect the main conductors against lightning strikes.

High voltage isn’t the only design challenge associated with electric transmission lines. Just the selection of conductors is a careful balancing act of strength, resistance, and other factors. Transmission lines are so long that even a tiny change in the conductor size or material can have a major impact on the overall cost. Conductors are rated by how much current they can pass for a given rise in temperature. These lines can get very hot and sag during peak electricity demands, which can cause problems if tree branches are too close. Wind can also affect the conductors, causing oscillations that lead to damage or failure of the material. You’ll often see small devices called Stockbridge dampers installed to absorb some of this wind energy. High voltage transmission lines also generate magnetic fields that can induce currents in parallel conductors like fences and interfere with magnetic devices, so the height of the towers is sometimes set to minimize EMF at the edge of the right-of-way. In certain cases, engineers even need to consider the audible noise of the transmission lines to avoid disturbing nearby residents.

Even with all those considerations, the classic model of the power grid with centralized generation far from populated areas is changing. The cost of solar panels continues to drop, making it easier and easier to produce some or all of the electricity you use at your own house or business and even export excess energy back into the grid. This type of local generation happens on the distribution side of the grid, often completely skipping large transmission lines. On the other side of that coin, the energy marketplace is changing as well, and grid operators are buying and selling electricity across great distances. Electrical transmission lines may seem simple - the equivalent of an extension cord stretched across the sky. But, I hope this video helped show the fascinating complexity that comes with even this seemingly innocuous part of our electrical grid. Thank you, and let me know what you think!


September 24, 2019 /Wesley Crump

How Do Substations Work?

August 27, 2019 by Wesley Crump

When you plug in an electric device, it’s easy not to even consider where the electricity actually comes from. The simple answer is a power generating station, also known as a power plant, usually someplace far away. But the reality is much more complicated than that. Generation is only the first of many steps our power takes on its nearly instantaneous journey from production to consumption. The behavior of electricity doesn’t always follow our intuitions, which means the challenges associated with constructing, operating, and maintaining the power grid are often complicated and sometimes unexpected. Many of those challenges are overcome at a facility which, at first glance, often looks like a chaotic and dangerous mess of wires and equipment, but which actually serves a number of essential roles in our electrical grid: the substation.

As simple as it is to imagine, the power grid isn’t just an interconnected series of wires to which all power producers and users collectively connect. In reality, electricity normally makes its way through a series of discrete steps on a grid divided into three parts: generation, or production of electricity; transmission, or moving that electricity from centralized plants to populated areas; and distribution, or delivering the electricity to every individual customer. If you consider the power grid a gigantic machine (and many do), substations are the linkages that connect the various components together. One of the cool parts about our electrical infrastructure is that most of it is out in the open so anyone can have a look. I’m somewhat of an infrastructure tourist, a regular beholder of the constructed environment, and my goal is for you, too, to be able to mentally untangle this maze of modern electrical engineering so that the next time you feast your eyes on a substation, you’ll be able to appreciate it as much as I do. Originally named for smaller power plants that were converted for other purposes, “substation” is now a general term for a facility that can serve a wide variety of critical roles on the power grid. Those roles depend on which parts of the electrical grid are being connected together and the types, number, and reliability requirements of the eventual customers downstream. And the first and often simplest of these roles is switching.

The general layout of a substation consists of some number of electric lines (called conductors if you want to fit in with the electrical engineers) coming into the facility. These high voltage conductors connect to a series of some or many pieces of equipment before heading out to their next step in the power grid. As a junction point in the grid, a substation often serves as the termination of many individual power lines. This creates redundancy, making sure that the substation stays energized even if one transmission line goes down. But, it also creates complexity. The connections to these various devices are called buses, often rigid, overhead conductors that run along the entire substation. The arrangement of the bus is a critical part of the design of any substation because it can have a major impact on the overall reliability.

Like all equipment, substations occasionally have malfunctions or things that simply require regular maintenance. To avoid shutting down the entire substation, we need switches that can isolate equipment, transfer load, and control the flow of electricity along the bus. This may seem obvious, but turning on and off high voltage lines isn’t as simple as flipping a light switch. At high voltages, even air can act like a conductor, which means even if you create a break in a line, electricity can continue flowing in a phenomenon known as an arc. Not only does arcing defeat the purpose of a switch, it is incredibly dangerous and damaging to equipment. So, switching in a substation is a carefully-controlled procedure with specially-designed equipment to handle high voltages. Disconnect switches are often grouped under the term switchgear, together with the equipment that serves another important role in a substation: protection.

I mentioned earlier that much of our electrical infrastructure is exposed and out in the open. That’s nice for people like me who enjoy having a look, but it also means being vulnerable to an endless number of things that can go wrong. From lightning strikes to rogue tree limbs, windstorms to squirrels, grid operators contend with countless threats to their infrastructure on a day-to-day basis. When something causes a short circuit on the power grid, also called a fault, it can severely damage power lines and other equipment. Not only that, because of the overwhelming complexity of the power grid, faults can and do cascade in unexpected and sometimes uncontrollable ways, leaving huge populations without power for hours or days. Many of the ways we protect equipment from faults are handled at a substation. One of the most common types of electrical fault is a short circuit to ground. This type of fault creates a low-resistance path for current to flow and leads to an overload of power lines and equipment. The simplest way to protect against this type of fault is with a fuse, a device that physically burns out at a certain current threshold. Fuses are dead simple and don’t require much maintenance, but they have some disadvantages too. They’re one-time use and can’t be used to interrupt current for other types of faults. On the other hand, circuit breakers are a class of devices that serve similar roles as fuses, but provide more sophistication for dealing with a wide variety of faults.

Like disconnect switches, circuit breakers need to be carefully designed to interrupt huge voltages and currents without damage. As soon as contacts within a circuit breaker are moved apart from one another, an electrical arc forms. This arc needs to be extinguished as quickly as possible to prevent damage to the breaker or unsafe conditions for workers. Extinguishing the arc is accomplished by a material called a dielectric that doesn’t conduct electricity. For lower voltages, the circuit breaker contacts can be located in a sealed container under vacuum to avoid electricity conducting in the air between them. For higher voltages, breakers are often submerged in tanks filled with non-conductive oil or dense dielectric gas. These breakers give grid operators more control over how and when current gets interrupted. Not every fault is the same, and sometimes operators even know about a disturbance ahead of time and can trigger breakers early to prevent cascading failures. Many faults are temporary, like lightning strikes or swaying tree branches. A special kind of circuit breaker called a recloser can interrupt current for a short period of time and re-energize the line to test if the fault has cleared. Reclosers usually trip and reclose a few times, depending on their programming, before deciding that a fault is permanent and locking out. If electricity demand on the grid gets so high that it can’t be met by the utility, substations may also be used to shed load. Rolling blackouts are used to lower the total electrical demand to avoid bigger failures on the grid.
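That trip-and-retest behavior can be sketched as a simple state machine. Here's a toy illustration in Python; the attempt count, method names, and reset logic are hypothetical simplifications, not any manufacturer's actual control scheme:

```python
# Toy sketch of recloser logic: trip on a fault, reclose to test whether
# the fault has cleared, and lock out after too many consecutive attempts.

class Recloser:
    def __init__(self, max_attempts=3):
        self.max_attempts = max_attempts
        self.attempts = 0
        self.closed = True
        self.locked_out = False

    def on_fault(self):
        """Overcurrent detected on the line: trip open."""
        if self.locked_out:
            return
        self.closed = False
        self.attempts += 1
        if self.attempts >= self.max_attempts:
            self.locked_out = True   # treat the fault as permanent

    def reclose(self):
        """After the programmed dead time, re-energize to test the line."""
        if not self.locked_out:
            self.closed = True

    def on_good_current(self):
        """Line carried load normally: the fault was temporary, so reset."""
        self.attempts = 0

r = Recloser()
r.on_fault()         # a swaying branch causes a momentary fault
r.reclose()          # line re-energized
r.on_good_current()  # fault cleared; attempt counter resets
```

A real recloser also sequences through different time-current curves on each attempt, but the lockout idea is the same: temporary faults clear themselves, and anything that persists gets treated as permanent.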

One of the most important aspects of the power grid is that different segments operate at different voltages. Voltage is a measure of electrical potential, somewhat equivalent to the pressure of a fluid in a pipe. At large power plants, electricity is produced at a somewhat low voltage of around 10-30 kilovolts or kV. From there, the voltage is increased much higher using transformers so that it can travel along transmission lines. Using a higher voltage reduces the losses along the way, making the lines more efficient but also much more dangerous. This is why overhead transmission lines are so tall - to keep them out of the way of trees and human activities. But, when transmission lines reach the populated areas which they serve, it’s not feasible to keep them so high in the air. So, prior to distribution, the voltage of the grid needs to be brought back down, again using transformers located within a substation.
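To put rough numbers on why higher voltage means lower losses, here's a quick back-of-the-envelope calculation in Python. The power level and line resistance are made-up illustrative figures, not data from any real line; the physics is just that for the same delivered power, a higher voltage means less current, and resistive heating scales with the square of the current:

```python
# For a fixed power P delivered at voltage V, the current is I = P / V,
# and the resistive loss in a line of resistance R is P_loss = I^2 * R.

def line_loss(power_w, voltage_v, resistance_ohm):
    current = power_w / voltage_v
    return current ** 2 * resistance_ohm

P = 100e6   # 100 MW delivered (illustrative)
R = 1.0     # assumed total line resistance in ohms (illustrative)

for kv in (25, 345):
    loss = line_loss(P, kv * 1e3, R)
    print(f"{kv} kV: {loss / 1e6:.2f} MW lost ({100 * loss / P:.2f}%)")
```

With these assumed numbers, stepping up from 25 kV to 345 kV cuts the current from thousands of amps to a few hundred, and the loss drops by a factor of nearly 200 - which is exactly why the grid pays the cost of all those tall towers and big transformers.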

A transformer is an extremely simple device that relies on the alternating current of the grid to function. It consists of two adjacent coils of wire. As the current in one coil alternates, it creates a changing magnetic field. This field couples with the other coil, inducing a voltage. The incredible part of a transformer has to do with the number of loops in each coil. The induced voltage will be proportional to the ratio of loops. For example, if the transmission side of a transformer has 1000 loops while the distribution side has 100, the voltage on the distribution side will be one tenth of the transmission voltage. This simple but incredible fact makes it possible for us to step up or down voltage as necessary to balance the safety and efficiency along each part of the power grid.
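The turns-ratio relationship is simple enough to check directly. Here's a tiny Python sketch of the ideal-transformer formula; the 138 kV input is just an illustrative transmission voltage, not a figure from the article:

```python
# Ideal transformer: the secondary (output) voltage scales with the ratio
# of turns (loops) in the two coils.

def secondary_voltage(v_primary, n_primary, n_secondary):
    return v_primary * n_secondary / n_primary

# The 1000-loop / 100-loop example from above: voltage steps down 10x.
print(secondary_voltage(138_000, 1000, 100))  # 13800.0
```

Real transformers lose a little energy to heat and magnetics, but large power transformers come remarkably close to this ideal.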

The simplicity of transformers is great in a lot of ways, but it also means that it can be difficult to make fine adjustments to the power leaving the substation. Because of this, many substations include equipment for monitoring and controlling the power on the grid. Instrument transformers are small transformers used to measure the voltage or current on the grid or provide power to system monitoring devices. Depending on varying transmission and distribution losses, the voltage on the grid can swing outside an acceptable range. Regulators are devices with multiple taps that can make small adjustments - up or down - to the distribution voltage on feeder lines leaving the substation toward customers. If you look closely you can sometimes see the regulator dial indicating the tap position.
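To illustrate what a regulator's tap changer is doing, here's a hypothetical Python sketch that picks whichever tap brings the feeder voltage closest to its target. A range of ±16 taps of 5/8 percent each is a common arrangement for step-voltage regulators, but the function name and the example voltages here are my own assumptions for illustration:

```python
# Each tap boosts or bucks the output by a small fixed step; the controller
# picks the tap that lands the regulated voltage closest to the target.

def choose_tap(measured_v, target_v=120.0, step=0.00625, max_tap=16):
    # Candidate taps run from -max_tap (buck) to +max_tap (boost); each
    # scales the output by (1 + tap * step).
    return min(range(-max_tap, max_tap + 1),
               key=lambda tap: abs(measured_v * (1 + tap * step) - target_v))

print(choose_tap(117.0))   # sagging feeder: a few taps of boost
print(choose_tap(123.5))   # high feeder: a few taps of buck
```

A real regulator controller adds bandwidth and time delays so it isn't constantly hunting between adjacent taps, but the core decision is this simple comparison.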

All that different equipment requires lots of maintenance. The final and most important role of a substation is keeping electricians and linemen safe while they inspect, repair, and replace equipment. Substations are usually the only locations where extra-high voltage power lines get close to the ground, so safety is absolutely critical. The buswork running along the substation is supported on large insulators to prevent short circuits and arcs to ground. Even the connections into each piece of equipment are made through a device called a bushing, which maintains a safe distance between energized lines and the grounded metal housings. Some substations have large concrete walls to serve as fire barriers between equipment. All substations are built with a grid of grounding rods and conductors buried below the surface. In the event of a fault, the substation needs to be able to sink lots of current into the ground to trip the breakers as quickly as possible. This grounding grid also makes sure that the entire substation and all its equipment are kept at the same voltage level, called an equipotential, so that touching any piece of equipment doesn’t create a flow of electricity through a person. Finally, substations are surrounded by large fences and warning signs to make absolutely sure that any wayward citizens know to stay out.

In many ways, the grid is a one-size-fits-all system - a gigantic machine to which we all connect spinning in perfect synchrony across, in some cases, an entire continent. On the other hand, our electricity needs, including when we need it, how much we need, and how reliably it should be delivered vary widely. Power requirements are vastly different between a sensitive research facility and a suburban residential neighborhood, between a military base and country club golf course, and between a steel mill and a bowling alley. Likewise, every electrical substation is customized to meet the needs of the infrastructure it links together. As the grid gets smarter, as demand patterns change, and as we (hopefully!) continue to replace fossil fuel generation with sources of renewable energy to curb global warming, managing our electrical infrastructure will only get more challenging. So, substations will continue to play a critical role in controlling and protecting the power grid.

August 27, 2019 /Wesley Crump

How Electricity Generation Really Works

July 23, 2019 by Wesley Crump

The importance of electricity in our modern world can hardly be overstated. What was a luxury a hundred years ago is now a critical component to the safety, prosperity, and well-being of nearly everyone. And yet, electricity is so unlike our other physical necessities. We can’t hold it in our hands; we can’t see it directly; and we usually only have a vague understanding of where it comes from.

Generation is the first step electricity takes on its journey through the power grid, the gigantic machine that delivers energy to millions of people day in and day out. We talked about the power grid in a previous video, but there’s one crucial point that’s worth repeating: it is a real-time energy delivery system. Electricity moves at nearly the speed of light, and current availability of large-scale energy storage is negligible. That means that power is generated, transported, supplied, and used all in the exact same moment. The energy coursing through the wires of your home or office was a ray of sunshine on a solar panel, an atom of uranium, or most likely, a bit of coal or natural gas in a steam boiler only milliseconds ago. Because of the laws of thermodynamics, all our electricity starts as some other kind of energy, which means all of our ways to generate electricity are just fancy ways of converting one type of energy to another. And in most cases, the type of energy being converted to electricity is heat.

Take a look at any of the various pie charts showing the breakdown of global energy production. You’ll see that the vast majority of methods we use to generate power are essentially just different ways of getting water really hot. Many thermal power plants (as they’re called) use fossil fuels like coal or natural gas in a furnace to generate steam. These types of plants have the obvious disadvantage of producing tremendous amounts of carbon dioxide as a by-product, a greenhouse gas that’s largely responsible for the ongoing rise in the average temperature of the Earth's climate, also known as global warming. In fact, electricity production makes up about a third of total greenhouse gas emissions. Luckily, there are other ways to generate large quantities of steam that don’t rely on fossil fuels.

Some plants use the fission of radioactive elements in a nuclear reactor as a source of heat. Some parts of the world can use geothermal energy, heat from inside the earth’s crust. We can even use arrays of mirrors to concentrate sunlight and create enough heat to run a boiler. But beyond that first step, thermal power stations are pretty much all the same. Once the steam is created, it passes through a turbine which converts the thermal energy into rotational energy. The shaft of the turbine is coupled to a rotor (that’s the part that rotates) of an AC generator that spins a set of magnets. The stator (that’s the part that’s stationary) has a set of coils of wire called windings. As the magnets on the rotor pass each winding, they generate a voltage across each coil.

In most places in the world, the number of coils in the stator is three, because our grid is built for three-phase, alternating current. The benefit of having the current alternate directions is that it makes it easy to step up or down the voltage using a dead simple device called a transformer. The benefit of generating power in three individual phases is getting a fairly smooth supply of electricity that overlaps so there’s never a moment when all phases are zero. A three-phase supply can also carry three times as much power on three wires as a single-phase supply can carry on two. This is why steam turbine generators almost always have coils grouped in three.
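You can verify that smoothness claim numerically. This short Python check samples three sine waves offset by 120 degrees over a full cycle: they're never all near zero at the same moment, and the sum of their squares (proportional to the instantaneous power into a balanced resistive load) is perfectly constant:

```python
import math

def phases(t, f=60):
    """The three phase voltages (normalized) at time t, offset by 120 deg."""
    return [math.sin(2 * math.pi * f * t - k * 2 * math.pi / 3)
            for k in range(3)]

for i in range(1000):
    t = i / 1000 / 60                 # sample one full 60 Hz cycle
    a, b, c = phases(t)
    assert max(abs(a), abs(b), abs(c)) > 0.5   # never all near zero
    total = a * a + b * b + c * c              # ~ instantaneous power
    assert abs(total - 1.5) < 1e-9             # constant at exactly 3/2

print("three-phase power is perfectly smooth")
```

That constant 3/2 is the trigonometric identity behind the smooth supply: the dips in one phase are always exactly filled in by the other two.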

But, steam isn’t the only way to spin a turbine. Hydroelectricity uses flowing water, and wind energy production has seen massive growth in the past 10 years. The other renewable source of electricity that is seeing explosive growth is solar photovoltaic or PV. The cost of solar cells which convert light directly into electricity has plummeted, making it feasible even for individual homeowners and businesses to install them on rooftops and supply some or all of their own power needs. Large-scale solar farms are also popping up in sunny climates to meet the growing demand for renewable electricity. Being able to power the grid directly from sunlight without harmful by-products is awesome, but it does come at a cost. Besides the obvious disadvantage of only working during daylight hours, solar PV has another disadvantage on the grid: it doesn’t have any inertia.

One of the biggest benefits of connecting lots of power plants together is the tendency of power to remain in motion on the grid, even during localized faults and disturbances. This inertia keeps our electricity stable and reliable. But electricity doesn’t have inertia on its own. You can’t give the electrons a kick and hope they continue down the wires without any help. The inertia comes from the physical rotation of all those massive interconnected generators. You can imagine the power grid as a train going up a hill. The locomotives work together to carry the load. To maintain speed, the throttle or number of locomotives needs to be adjusted to match the load of the train (which represents the total power demand that is constantly changing throughout the day).

The power grid works in a very similar way. Electrical demand is felt immediately by all the connected generators. Each additional demand puts a little more load on every generator together, slowing the rotation by just a tiny amount and thus decreasing the frequency of the alternating current. Similarly, if electricity generation exceeds the demand, the generators will speed up. You can see this demonstrated in a typical brushless motor, which is wired exactly like a three-phase generator. Under no load, the motor spins freely. But, if I short the contacts together to simulate a high electrical load, it takes much more energy to turn. Power consumers turn on and off electrical devices at will, with no notification to the utilities at all. So, to avoid fluctuations in frequency, generation has to be constantly adjusted up or down to match electrical demands on the grid. This process is called load following. As demand on the grid increases or decreases throughout the day, grid operators dispatch generation capacity to match it.
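Here's a toy numerical sketch of that balancing act. This is not a real grid model - the inertia constant and power levels are made up - but it captures the direction of the effect: when generation falls short of demand, the simulated frequency dips, and dispatching more generation brings it back up.

```python
# Toy model: the change in grid frequency is proportional to the power
# imbalance, damped by the combined rotational inertia of the generators.

def step_frequency(freq_hz, generation_mw, demand_mw,
                   inertia=5000.0, dt=1.0):
    return freq_hz + (generation_mw - demand_mw) / inertia * dt

f = 60.0
f = step_frequency(f, generation_mw=900, demand_mw=1000)   # shortfall
print(f)   # dips below 60 Hz
f = step_frequency(f, generation_mw=1100, demand_mw=1000)  # dispatch more
print(f)   # recovers back toward 60 Hz
```

Real grid operators watch exactly this kind of frequency deviation as their signal: a falling frequency means the whole interconnection needs more generation online, right now.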

Going back to our analogy, the speed of our train represents the grid frequency, 50 or 60 hertz depending on where you live. Every locomotive and every train car is designed to travel at exactly the same speed, and the stability of the entire system depends on perfect synchrony. If one part of the train starts moving faster or slower than the rest of the cars, things go haywire in a hurry. This is why inertia is so important. If any problem occurs, for example if one of the locomotives dies, the train has enough inertia to keep things moving while the problem can be addressed. It’s also why grid operators maintain spinning reserves, generators that are ready to connect to the grid at a moment’s notice. And before a generator can be connected to the rest of the grid, it needs to be synchronized as well. That means its frequency, phase, and voltage need to be matched with grid power by adjusting the speed and excitation of the electromagnets in the rotor. A special instrument called a synchroscope helps with this process. Once the synchroscope gives the all clear, plant operators can close the breaker to connect to the grid.

This is a simplification of load following and generator dispatch, but it highlights one of the key differences between wind and solar and the rest of our generation capacity. If we want our lights to turn on right when we flip the switch, we have to understand that the grid operators need the same thing: the expectation that generation capacity will be available on demand. Reliability is the overarching purpose of having an interconnected power grid in the first place, and incorporating unreliable sources of power - like wind that depends on weather and solar that’s only available during half the day - is one of the biggest challenges we face with electrical infrastructure. Because of global warming, transitioning to renewable sources of electricity is one of the most important challenges of our lifetime, and I think that overcoming it starts with all of us being interested, informed, and excited about understanding where our power comes from. Thank you for reading, and let me know what you think!

July 23, 2019 /Wesley Crump