Practical Engineering


What is a Trompe?

January 14, 2020 by Wesley Crump

There is a hydropower plant on the Montreal River in eastern Ontario, Canada called Ragged Chute. It doesn’t look like much from an aerial photo, but that’s because the most interesting parts of this facility are underground: two massive vertical shafts and a large tunnel connecting the two. Before it was converted to generate electricity, Ragged Chute was one of the world’s only water-powered compressed air plants. Starting around 1910, this plant sold compressed air to be used in the silver mines around Cobalt, Ontario. The way this ingenious facility harnessed the power of water to generate compressed air with no moving parts is fascinating, and its use is seeing a small revival today. On today’s blog we’re talking about the trompe.

Compressed air is an excellent way to store and transport energy. It’s not quite as convenient as electricity for homes and businesses, which is why you don’t see air lines strung on poles throughout cities, but in certain situations it makes a lot of sense. This is particularly true in mines, where a variety of tools and equipment need a consistent and safe source of power. But it’s not just pneumatic tools; pretty much every step of the mining process - including exploration, blasting, ventilation, smelting, and refining - makes use of compressed air as a source of power. It’s reliable, simple, easy to transport, and often safer than the other options because it doesn’t have the risk of sparks or explosions that come with electricity or diesel.

We normally get compressed air from... a compressor, a device that does exactly what you’d expect: uses a mechanism to take outside air and squish it into a tank. But, air compressors had a major disadvantage to the mining professionals working in the early 20th century: they didn’t exist (at least not ones that were commercially available). Also, a compressor is just an energy converter. It takes one type of energy (usually rotational kinetic energy from a diesel or electric motor) and converts it into potential energy stored in pressurized air. You still need a source of power. So, to be able to operate a mine using compressed air back in the day would have required both maintaining a separate source of power and a complicated and custom piece of machinery just to keep the tools and equipment running.

You can imagine how valuable it would be to be able to take advantage of a natural source of power - falling water - and avoid the need for complicated machinery and moving parts. That’s exactly what a trompe provided, and I’ve built a miniature version of one so I can show you how it works. And of course it’s made of clear pipe so we can see exactly what’s going on inside. The first step is the water supply. Just like hydroelectric facilities, the amount of hydraulic energy you can convert to compressed air is based on both the height and flow rate available. In my case, I’m using a garden hose, but most trompes built for mines or forges took advantage of small streams or rivers.

As the water enters the first vertical shaft it passes by a series of air inlets. Because of the water’s velocity as it travels down the shaft, the pressure at these inlets goes below atmospheric. So, the trompe “sucks” air from outside into this vertical shaft to join the water. The turbulence and surface tension of the flowing water entrains these bubbles of air and carries them to the bottom of the shaft. This type of interaction between flowing water and air is fairly complicated to characterize, and there are lots of situations in engineering where air-water interaction can cause major problems like in spillways, control gates, and pipelines. But, in a trompe, this is absolutely essential.

Once the air-water mixture reaches the bottom of the shaft, it enters a horizontal chamber. The purpose of this chamber is to separate the air and water. The turbulence and velocity are reduced, allowing the entrained bubbles to rise upwards. This air gets trapped in the collection system while the water continues out the other side of the chamber and upwards into the second vertical shaft. The purpose of this shaft is to give the water a way out while leaving the air behind. The height of this shaft also determines the pressure of the trapped air. I have a video on this topic if you want more detail, but the summary is that the pressure in a body of fluid doesn’t depend on the volume, just the depth. So a simple riser like my second pipe here is enough to hold pressure on the air in the collection system, compressing it just like a mechanical compressor would. It’s pretty satisfying to see it work. I could watch this all day.
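That riser-height relationship is easy to put into numbers. Here’s a minimal sketch of my own (not from the post) of the hydrostatic rule it describes: the gauge pressure held on the trapped air is just the weight of the water column in the second shaft, regardless of the column’s volume. The 90 m riser height below is an illustrative figure, not the real Ragged Chute dimension.

```python
# Hydrostatic pressure held by a trompe's riser column: p = rho * g * h.
# The pressure depends only on the height of the column, not its volume.
RHO_WATER = 1000.0  # kg/m^3
G = 9.81            # m/s^2

def trompe_air_pressure_kpa(riser_height_m: float) -> float:
    """Gauge pressure (kPa) held on the air collection system by a riser
    of the given height."""
    return RHO_WATER * G * riser_height_m / 1000.0

# An illustrative 90 m riser (hypothetical, not Ragged Chute's actual figure):
print(trompe_air_pressure_kpa(90.0))  # ≈ 883 kPa, roughly 128 psi
```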

Once enough air has collected in the system, I can open the valve to use it. I should say that this is a scale demonstration, so it doesn’t do anything of significant value unless you have a really tiny nail gun or air drill.

One of the benefits of a trompe over a more traditional air compressor is related to temperature. In technical terms, a compressor uses an adiabatic process, whereas a trompe compresses air isothermally. But there’s no need to get caught up in the vocabulary. If you’re familiar with the behavior of gases, you know that (all other things staying the same) if you compress a gas, it gets hot. And the hotter the air, the more moisture it can hold. If you’re familiar with air tools, or just corrosion in general, you know that moisture is one of a tool’s worst enemies. In a trompe, however, that heat of compression gets absorbed by the water. So you end up with a much cooler and drier source of compressed air, which, by the way, is the definition of conditioning air, something I pay dearly for here in San Antonio, and I’m sure those miners in Canada appreciated as well.

I’m definitely not going to be powering any of my shop tools with my little demonstration here, and it wouldn’t be a very efficient way to do it, even if I could. If you’ve got grid power available, it makes sense to use a compressor designed to take advantage of that. But sometimes you don’t. A trompe can be useful in off-grid aquaponic and hydroponic systems that need aeration of the water. And, in fact, the design of my demonstration here came from the late Bruce Leavitt, a mining engineer who pioneered the use of small trompes for aeration and treatment of mining water in remote locations without access to electricity. I love to see examples of ancient technology finding new uses in our modern world. Especially in an age where renewable sources of energy are at the top of our minds, the trompe is a really cool way to harvest the power of water for beneficial use. Thank you, and let me know what you think!


How Does a Hydraulic Ram Pump Work?

December 17, 2019 by Wesley Crump

A while back I wrote about water hammer, a hydraulic phenomenon that can lead to major problems in pipelines. Then I wrote about steam hammer, a somewhat related phenomenon associated with steam piping systems that can be extremely dangerous. And then, I did a follow-up to the water hammer talking about transient vacuum phenomena that can collapse pipes if they’re not designed and operated correctly. But even after those posts, it turns out I haven’t told the full story. Because even though water hammer is generally a problem for engineers, there is a way to take advantage of this normally inauspicious effect for a beneficial use. Hey I’m Grady and this is Practical Engineering. On today’s episode we’re talking about hydraulic ram pumps.

A hydraulic ram is a clever device invented over 200 years ago that can pump water uphill with no other external source of power except for the water flowing into it. No, it’s not a free energy device, but if you search around, you’ll find lots of great implementations of this style of pump on YouTube, mainly from people doing homesteading and off-grid lifestyle vlogs. And, it’s easy to see why ram pumps have such popularity among these groups. Because if you’ve got a piece of land with an abundant source of water, a ram pump lets you get that water to a tank or location at a higher elevation with a really elegant design that requires no electricity or fuel and only two moving parts. So of course, I built my own so you can see how it works, but first we need to build just a little bit of foundational knowledge on the behavior of fluids. And this is something anyone can understand.

There are three types of energy that a fluid can have, and in civil engineering, we usually convert them to their equivalents as the height of a static column. This distance is called the head. Understanding the energy in a fluid is how we solve a lot of engineering problems, because in most scenarios, the amount of energy stays the same, and the only thing that changes is what form it takes. The first type is head from gravitational potential. It doesn’t have an equivalent static column because it is a static column. The head is just the distance from an arbitrary datum. This one is easy to demonstrate with a tank and tube. I can move this tube around wherever I want, but the level in the tube and tank are always going to be the same. They’re both exposed to atmospheric pressure at their surface and they’re not moving so there’s no velocity. It’s just pure gravitational potential.

The second type of energy is pressure head. In this case, the head is the pressure divided by gravity and the density of the fluid. So, if I close off the top of my tank and add some air pressure, the level in the tube goes up. The new height is the pressure head, the equivalent static column related to the pressure in the tank. For a given pressure, a denser fluid like mercury will have a lower head compared to a lighter fluid like water because they have different unit weights. A good example of measuring pressure head is a barometer. We live at the bottom of an ocean of air, and we like to keep track of the air pressure down here. One of the easiest ways to do that is to measure how high the pressure can push a static column of a fluid, in most cases mercury.
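That conversion is simple enough to sketch in a few lines of code (my own illustration, not from the post), and the barometer example falls right out of it: atmospheric pressure supports about 760 mm of mercury but over 10 meters of water, because of the difference in density.

```python
# Pressure head: the equivalent static column for a given pressure,
# h = p / (rho * g). Denser fluids give shorter columns.
G = 9.81  # m/s^2

def pressure_head_m(pressure_pa: float, density_kg_m3: float) -> float:
    """Height of a static fluid column equivalent to the given pressure."""
    return pressure_pa / (density_kg_m3 * G)

ATM = 101_325.0  # standard atmospheric pressure, Pa

print(pressure_head_m(ATM, 13_560.0))  # mercury: ≈ 0.76 m, the familiar 760 mm
print(pressure_head_m(ATM, 1_000.0))   # water:   ≈ 10.3 m
```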

The final type of energy is velocity head, which relates to a fluid’s kinetic energy. I can demonstrate the equivalent column of water using a tool called a pitot tube. The conversion for velocity head is velocity squared divided by 2 times gravitational acceleration. That’s a lot of background, but it’s important in understanding the function of a ram pump. Because without an external source of power, even though you can go from one type of energy to another, you can’t get more energy out than you had at the start. For example, I can convert a static column of water to one with some velocity, but I’m never going to get the fluid to a higher elevation than where it started… with one exception. An exception that the hydraulic ram pump takes advantage of beautifully.
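As a quick numerical illustration of these conversions (a sketch of mine, not from the post), here is velocity head alongside Torricelli’s classic result, which is just elevation head converted entirely to velocity head with no losses. Running it both directions shows the energy changing form while the total head stays the same.

```python
# Velocity head: h = v^2 / (2g). Torricelli's result is the inverse:
# a column of height h, converted losslessly to motion, gives v = sqrt(2gh).
import math

G = 9.81  # m/s^2

def velocity_head_m(velocity_m_s: float) -> float:
    """Equivalent static-column height for a fluid's kinetic energy."""
    return velocity_m_s ** 2 / (2 * G)

def jet_velocity_m_s(elevation_head_m: float) -> float:
    """Velocity from converting elevation head entirely to velocity head."""
    return math.sqrt(2 * G * elevation_head_m)

v = jet_velocity_m_s(2.0)   # water falling through 2 m of head
print(v)                    # ≈ 6.26 m/s
print(velocity_head_m(v))   # ≈ 2.0 m — the same energy, back in column form
```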

A ram pump is essentially just two one-way check valves, one called the waste valve and the other called the delivery valve. To get it started, you just momentarily open the waste valve to allow water to flow. After that it’s working on its own to pump the water uphill above the elevation of the source. Pretty amazing, I think. Let’s walk through the path of the water to understand how it works. First, as the waste valve opens, water flows into the pump and immediately out of the valve. But, as it picks up speed, the flowing water eventually forces the waste valve to slam shut. Now the water is stopped in the pump. It had kinetic energy… but now it doesn’t. That means the kinetic energy was converted to something else, in this case pressure. This is the definition of water hammer. Slamming a valve shut converts all that kinetic energy nearly instantly, creating a huge spike in pressure that can lead to stress and damage in pipe systems and connected equipment.
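The size of that pressure spike can be estimated with the Joukowsky equation, which the post doesn’t mention by name but which describes exactly this effect: the spike is proportional to the change in velocity and the speed at which the pressure wave travels through the pipe. The wave speed below is an assumed typical value, not a measured one.

```python
# Joukowsky's equation for an instantaneous valve closure:
# delta_p = rho * c * delta_v, where c is the pressure wave speed in the pipe.
RHO = 1000.0  # kg/m^3, water
C = 1200.0    # m/s, assumed typical wave speed for water in a stiff pipe

def hammer_spike_kpa(delta_v_m_s: float) -> float:
    """Pressure spike (kPa) from instantly stopping flow moving at delta_v."""
    return RHO * C * delta_v_m_s / 1000.0

# Stopping a modest 1.5 m/s of flow:
print(hammer_spike_kpa(1.5))  # 1800 kPa, roughly 260 psi — a serious spike
```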

In the case of the ram pump though, that spike in pressure has a different effect. It opens the second check valve and forces water entering the pump into the delivery line. As you can see from my digital pressure gauge, this process is cyclical, pumping some of the water and wasting the rest each time the valve slams shut. You can see what’s happening here in real time: the pump is robbing some of the kinetic energy from the flow and imparting it to a smaller volume of water. It’s a redistribution of energy, converting low head and high flow into high head and low flow. And this type of pump can really create a lot of head. I ran my discharge line up to well above the roof of my shed, and my pump is still able to get the water up there. Sometimes an air chamber is included in the pump to smooth out those sharp spikes in pressure and provide a more even flow rate out of the delivery pipe, reducing wear and tear on the pump components.

If you like to think in terms of modern electrical devices, imagine we installed a hydropower turbine on a pipe to spin a generator and then used that electricity to power a pump to move the water coming out of the turbine. Obviously you wouldn’t be able to pump all the water, and anyway that would be a pretty complicated setup for something the ram pump can do with a few very simple off-the-shelf plumbing parts. In fact there is a type of pump that works from a water-powered turbine. Maybe I’ll build one of those next. For now though, I think the ram pump is an ingenious way to take advantage of the properties of fluids. We all need water for a variety of reasons, so being able to move it where we need it without any fancy equipment or external sources of power is a pretty nice tool to have in your toolbox. Thank you, and let me know what you think!


How Power Blackouts Work

November 19, 2019 by Wesley Crump

We usually think of the power grid in terms of its visible parts: power plants, high-voltage lines, and substations. But, much of the complexity of the power grid comes in how we protect it when things go wrong. Because of the importance of electricity in our modern world, it’s critical that we be able to prevent damage to equipment and perform repairs quickly when they’re needed. The grid got its name for a reason: it’s an interconnected system, which means that, if we’re not careful, small problems can sometimes ripple out and impact much larger areas. So its protective systems are thoughtfully designed to work together and minimize the number of people affected when faults happen. Hey I’m Grady and this is Practical Engineering. Today we’re talking about power system protection and how blackouts work.


Things go wrong on the grid all the time. Just like a car or the device you’re watching this video on right now, the grid is a machine. It’s a big machine that sits out in all kinds of weather, exposed to a variety of meddling and destructive animal species and just the general wear and tear that comes from providing humanity with an absolutely essential yet extremely dangerous amenity: electricity. It shouldn’t come as a surprise that faults happen from time to time. One common type of fault on transmission lines comes from sagging. During peak demands, these lines move tremendous amounts of energy as electrical current. Well, no wire is a perfect conductor; they all have some resistance. So, the more current you try to pass through a wire, the less efficiently it works. That energy that doesn’t make it to the end of the line is instead lost as heat. And what does heat do to metal? It causes it to expand. So the lines get longer, which means they sag lower, and occasionally that brings them into contact with tree limbs, creating a path to ground and shorting out the line. 


So what happens during a short circuit? Electricity will take any path to ground that it can find. And the lower the resistance of the path, the more current will flow. A short circuit is when a low resistance path to ground happens where it’s not supposed to, bypassing the customers and literally shortening the circuit. This has a number of unwanted consequences. All that energy is being wasted, for one. Arcs created by short circuits can start fires, for two. But more importantly, faults create massive spikes in current that can overload and damage equipment on the grid. I probably don’t need to mention that most pieces of the power grid are expensive, they take a long time to install and repair, and they’re important (they’re providing an essential utility), so we don’t want them to get damaged.


Easy enough, you might be thinking: “Just make them strong.” Put all the power lines underground where they’re protected from weather and animals. Make them as big as bridge suspension cables and use indestructible alloys. Put the substations in big concrete buildings. Hide the solar panels under the ocean. You see what I’m getting at. I don't know how much a car that never breaks down would cost, but I’m sure I wouldn’t want to pay for it, and the same is generally true for the power grid. Resiliency doesn’t just mean durability. It’s a balancing act between making our infrastructure strong enough to resist threats, keeping faults from creating further damage, and making it easy to diagnose and repair problems so that equipment can be brought back online with minimal downtime.

Those last two items are the job of power system protection engineers and can be summed up pretty easily in one word: isolate. Engineers establish zones of protection around each major piece of the power grid to isolate faults and make them easy to find and repair. You can trace these zones of protection from your house all the way to the power plant. A short circuit in your coffee maker isn’t going to overload the service transformer because there’s a fuse or breaker in between. If a car knocks down a pole and grounds out a line, it’s not going to take out the entire substation, again because it’s isolated with a fuse or breaker. If a transformer has a fault in a substation, it’s not going to melt the transmission lines feeding it because it can be isolated using breakers. And if a transmission line sags into a tree limb, the resulting surge in current is not going to destroy the generator at the power plant because it has its own zone of protection. Of course, this is a super simplified explanation. These zones of protection are thoughtfully considered to balance the complexity and resiliency of the grid. But, how do they actually work?


There are a wide variety of types of electrical faults. Identifying and differentiating them can be a major challenge. The fundamentals of electrical devices can be boiled down pretty easily. Electrical current travels from a source, through a series of components, and back through a return path that is referenced to ground. There really isn’t that much information that protective devices can use to identify problems. For example, there’s very little difference between what’s happening in your toaster and what happens when you take the live and neutral lines from a socket and short them together. The circuit breakers in your house identify faults primarily based on electrical current. If you get too many amps moving through the breaker, it assumes that something is wrong and shuts off the circuit. That makes sense for a lot of cases, since high current can seriously damage equipment and conductors, leading to all sorts of issues. But, it’s not the only kind of electrical fault.


On the grid, protection is primarily done through relays that can measure all kinds of parameters to identify faults and activate circuit breakers to isolate equipment and notify utilities of the problem. These relays are measuring voltage, current, and power on the lines, like you’d expect. They also measure differential current. Even if the current isn’t too high, you want to make sure that as much current is going out as is coming in, otherwise you’re losing it somewhere else, which can be a signal of a fault. This is the same principle that GFCI outlets in your house use. Relays also keep an eye on the frequency of the grid to make sure different components don’t lose synchronization. Certain breakers can also be manually activated, like during rolling blackouts, where utilities are forced to shed non-critical electrical loads due to lack of generation capacity. These are all types of “managed failures” where you have some loss of service at the cost of protecting the rest of the system. The goal is that isolating equipment when things go wrong speeds up the process and reduces the cost of making repairs to get customers back online.
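The differential-current idea is simple enough to sketch in a few lines (an illustration of mine, and real relays are far more sophisticated): trip whenever the current flowing out stops matching the current flowing in, even if neither value is high on its own. The 5 mA threshold is a typical residential GFCI setting; grid relays use very different pickups.

```python
# Toy differential-current check in the spirit of a protective relay or GFCI:
# an imbalance means current is taking an unintended path to ground.

def differential_trip(i_in_amps: float, i_out_amps: float,
                      threshold_amps: float = 0.005) -> bool:
    """Return True if the in/out current imbalance exceeds the trip
    threshold (5 mA default, a common GFCI setting)."""
    return abs(i_in_amps - i_out_amps) > threshold_amps

print(differential_trip(10.000, 9.999))  # 1 mA imbalance: stays closed (False)
print(differential_trip(10.000, 9.990))  # 10 mA leaking to ground: trips (True)
```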


But, there are cases when isolation of equipment can actually make things worse. Please see my demo in the video to see how this works. Imagine a series of interconnected transmission lines, all feeding their own service areas, represented by the power resistor and LED light in the model. During peak demand, these lines might be operating at nearly their maximum capacity. If one line experiences a fault, for example shorting out against a tree branch, protective relays will isolate the line. In my case, when I short out a line, the fuse blows. But, if not handled correctly, that can mean that the entire electrical load gets automatically distributed between the remaining transmission lines, pushing them beyond their limit. All of a sudden, you have a cascading failure. Much of our grid is designed to avoid this type of failure, but occasionally you get the perfect alignment of faults, communication errors, and human factors that lead to massive outages, like the one in 2003 that took out much of the U.S. northeast and Ontario.
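The cascade described here can be captured with a toy simulation (my own illustration; real grid studies use power-flow models, not simple equal shares): lines split a load evenly, and when one trips, its share lands on the survivors, which may push them past their own limits in turn.

```python
# Toy cascading-failure model: a load shared equally by parallel lines.
# When a line's share exceeds its capacity, it trips, and the load is
# redistributed among whatever lines remain, possibly tripping them too.

def cascade(load_mw: float, capacities_mw: list[float]) -> list[bool]:
    """Return the in-service status of each line after the cascade settles."""
    in_service = [True] * len(capacities_mw)
    tripped = True
    while tripped:
        tripped = False
        live = [i for i, ok in enumerate(in_service) if ok]
        if not live:
            break
        share = load_mw / len(live)
        for i in live:
            if share > capacities_mw[i]:
                in_service[i] = False
                tripped = True
    return in_service

# Two surviving 400 MW lines after a third line faults, carrying 1000 MW:
print(cascade(1000.0, [400.0, 400.0]))  # 500 MW each exceeds 400 MW: both trip
print(cascade(600.0, [400.0, 400.0]))   # 300 MW each is fine: both survive
```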


Starting back up from a major blackout like this can be really complicated. Even just choosing which equipment to unisolate and in what order takes a lot of consideration and engineering. There’s a chicken and egg situation because most large power plants actually need some power to operate, so it can be difficult to start back up during a wide area outage, also called a black start. But, it’s still better than the alternative of having to perform major equipment replacements because things spiraled out of control. When your power goes out, it’s easy to be frustrated at the inconvenience, but consider also being thankful that it probably means things are working as designed to protect the grid as a whole and ensure a speedy and cost-effective repair to the fault. Thank you, and let me know what you think!



World's Largest Batteries - (Pumped Storage)

November 12, 2019 by Wesley Crump

Electricity faces a fundamental problem that comes with pretty much any product that’s provided on-demand: our ability to generate large amounts of it doesn’t match up that closely with when we need it. Wind and solar power are becoming more cost effective, but they are inherently intermittent sources of energy. Retailers use warehouses to store goods between manufacturer and sale. Water utilities use tanks and reservoirs. But the storage of electricity for later use, especially on a large scale, is quite a bit more challenging. That’s why power grids are mostly real time systems with generation ramped up or down to meet fluctuating demands instantaneously. That’s not to say that we don’t store energy at grid scale though, and there’s one type of storage that makes up the vast majority of our current capacity.



Although it’s a very convenient form of energy to produce, transmit, and use, electricity has some disadvantages. We’ve talked a little bit about variability in demand and generation capacity in previous videos of this series, but I’ll summarize again here. The fundamental problem is that we use electricity like this, with peaks in the morning and evening. But, we generate electricity differently. Fossil fuel and nuclear plants generally have a single capacity at which they run most efficiently with occasional need to go offline for maintenance. Solar, of course, follows the amount of sunlight with some variability due to clouds. And wind follows weather patterns with potential for lots of variability. You may have heard of the duck curve, which is the name given to our electricity demand minus the contribution from solar. You end up with this funky curve representing the need for other sources of electricity. This creates a challenge because not only does solar power start to die away right when we need it most during peak demands in the evening, it also creates a much steeper demand curve, requiring grid operators to spin up other types of generation more quickly. So, solar power is meeting some of our electricity needs, but it’s not necessarily eliminating the need for other sources of electricity. And in some cases, it may actually be making the grid less efficient by contributing to instability and requiring the use of peaking plants that are generally heavier polluters.



In fact, peaking plants are the go-to solution for load following on the grid. These are smaller, more expensive sources of electricity that only run for a few hours per day to make up the difference between the base power load and the evening peaks. Another interesting solution to this problem is called demand management, which is influencing the demand for electricity to reduce or shift peaks and match generation capacity better. This can range from marketing campaigns encouraging you to set your thermostat a few degrees higher to sophisticated systems that can tell your electric car when to start charging. But, the holy grail in grid-scale power delivery is simply to let the demand and generation curves be what they’ll be, storing energy when generation exceeds demand and using that stored energy during demand peaks.



There are a wide variety of fascinating ideas for storing large amounts of energy, from molten salt to pressurizing the air in old mines, but most of the current grid-scale storage relies on gravitational potential. That is: use excess energy to lift something up, then use that thing to generate electricity as it falls back down, essentially treating earth’s gravity as a spring. And the vast majority of current grid-scale storage does this using water, in a scheme called pumped storage hydroelectricity. And I’ve built a little mini-scale version of this as a demonstration. In most cases, the way this works is to have two reservoirs nearby but separated by a large difference in elevation, in this case two buckets separated by a ladder. At night, when electricity prices are low, you use that cheap power and pumps to fill the upper reservoir. During the day, when energy prices are high, you use the water in the upper reservoir to spin turbines and generate hydropower. It’s essentially a giant water battery, and storing energy in this way has a lot of benefits, besides just shaving off the peaks of the demand curve. Hydropower is one of the most responsive ways to generate electricity, so pumped storage allows grid operators to handle fluctuation in demands quickly. Pumped storage is also valuable in an emergency, providing quick access to power when other sources may be out of commission. Finally, these systems can provide a lot of benefit on small, insular power grids (like on islands) where you don’t have as much diversification in the generation portfolio. But, pumped storage has several major challenges as well, and I’ll use a demo in my video to illustrate the big ones.



First is energy density, which is the term to describe how much energy can fit into a unit volume. And this is not a pumped storage facility’s finest feature. Just for some reference, see the video for the energy density of gasoline, a lithium ion battery, and the water in a typical pumped storage reservoir. I say typical because the total energy storage is a function of both height and volume. The greater the head above the turbines, the more energy you can generate from a given volume of water. I’m using a little aquarium pump to fill up my upper reservoir on top of this ladder. It’s pretty easy to see the difference in energy density between a battery and the stored water. The water in the bucket has about the same gravitational potential energy as the battery in your car’s key fob. In fact, to reach the same density as a typical lithium-ion battery, you’d have to store the water tens of kilometers up, near the edge of earth’s atmosphere, which wouldn’t be very convenient for an electric vehicle. This is one of the main disadvantages of pumped storage facilities: they require a very specific type of site where you can locate two pools near each other while also separating them by as much vertical distance as possible. And even then, because of the low energy density, these are often massive reservoirs that are major civil engineering projects as compared to something like a battery that can be manufactured in a factory.
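The comparison is simple arithmetic with E = mgh, so here’s a quick sketch of mine (the lithium-ion figure is an assumed round number for a typical cell, not a quoted spec):

```python
# Gravitational energy density of stored water versus a battery.
# E = m * g * h, so energy per kilogram is just g * h.
G = 9.81  # m/s^2

def water_energy_density_wh_per_kg(height_m: float) -> float:
    """Gravitational potential energy per kilogram of water at a given
    height, converted from J/kg to Wh/kg."""
    return G * height_m / 3600.0

LI_ION_WH_PER_KG = 150.0  # assumed round figure for a lithium-ion cell

# Even a generous 500 m of head stores under 1.4 Wh per kilogram of water:
print(water_energy_density_wh_per_kg(500.0))  # ≈ 1.36 Wh/kg

# Height needed for water to match the battery's density:
print(LI_ION_WH_PER_KG * 3600.0 / G)          # ≈ 55,000 m — some 55 km up
```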



The other major challenge of pumped storage is getting the energy back out once you’ve stored it. Efficiency is the ratio of how much energy you get back out to how much you put in. You never get it all. That’s the second law of thermodynamics. But you hope to get most of it, otherwise you’ve built a very big and very expensive battery that doesn’t work. As I mentioned, my model reservoir is holding about a tenth of a watt-hour, but that’s not how much energy it took to get it there. I kept an eye on the power supply while the bucket filled, and it took about 0.7 watt-hours of electricity. That means my pump’s efficiency was about 15%. So, the most energy I can even hope to recover is a lot less than I’ve put in.



Some pumped storage facilities can use reversible pumps that act as turbines, but in my case I’m using a dedicated unit. I’ve got a power resistor as a dummy load, and I’m measuring the voltage and current produced by the turbine to estimate the total recovery of energy. And… the numbers don’t look good. In fact, with the small amount of pressure, my little mini hydro turbine could barely even spin at all. My best estimate is that I was able to generate 2 milliwatt-hours from the full bucket. That’s a whopping 0.3% efficiency, and this is the other reason we’re not hooking up tanks of water to our portable electronic devices. At a small scale, this just isn’t a feasible way to store power. Little pumps and turbines just aren’t very efficient.
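The efficiency arithmetic from the demo can be written out explicitly, using the figures quoted above:

```python
# Efficiency: energy recovered divided by energy invested.

def efficiency(energy_out_wh: float, energy_in_wh: float) -> float:
    return energy_out_wh / energy_in_wh

pump_input_wh = 0.7  # electricity the aquarium pump consumed (from the demo)
stored_wh = 0.1      # gravitational potential energy in the full bucket

# Pumping leg alone: the ~15% figure quoted above.
print(efficiency(stored_wh, pump_input_wh))  # ≈ 0.14

# Full round trip, using the ~2 mWh the little turbine recovered:
print(efficiency(0.002, pump_input_wh))      # ≈ 0.003, i.e. about 0.3%
```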



But things look a little better on a larger scale. Even considering all the potential losses of energy from evaporation or leakage of water to friction and turbulence within the machinery, many pumped storage facilities achieve efficiencies of 70 percent and higher. Of course that means they are net energy consumers, since (as we mentioned) you can’t recover all the power used to pump the water to the top, but if the cost of the energy consumed is lower than the price they can get out of that energy (minus inefficiencies) during peak demand, they can still turn a profit.
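That profit logic amounts to simple energy arbitrage, sketched below with made-up prices (none of these numbers come from a real facility):

```python
# Energy arbitrage: buy cheap off-peak energy to pump, sell the recovered
# energy at the peak price, losing (1 - efficiency) of it along the way.

def arbitrage_profit(mwh_pumped: float, round_trip_eff: float,
                     offpeak_price_per_mwh: float,
                     peak_price_per_mwh: float) -> float:
    """Profit ($) from one pump/generate cycle."""
    cost = mwh_pumped * offpeak_price_per_mwh
    revenue = mwh_pumped * round_trip_eff * peak_price_per_mwh
    return revenue - cost

# 1000 MWh pumped at $20/MWh overnight, recovered at 75% efficiency and
# sold at $60/MWh during the evening peak:
print(arbitrage_profit(1000.0, 0.75, 20.0, 60.0))  # $25,000 profit
```

The spread between off-peak and peak prices has to be wide enough to cover the round-trip losses, which is why these facilities only pencil out on grids with pronounced daily price swings.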



In fact, you might be surprised how many pumped storage facilities already exist. In the U.S. the Energy Information Administration has a nice online map where you can look around and see if there’s one near you that you can go visit. Of course, I’ve only had time to go into the basics of pumped storage, and there are a lot of interesting advancements on the horizon, like using abundantly available seawater instead of sometimes limited sources of freshwater. Like demand management, storage is just one part of improving the efficiency and stability of the power grid as we work to implement more renewable and sustainable sources of electricity. Thank you for reading, and let me know what you think!




How do Electric Transmission Lines Work?

September 24, 2019 by Wesley Crump

In the past, power generating plants were only able to serve their local areas. Electricity didn’t have far to travel between where it was created and where it was used. Since then, things have changed, and most of us get our electricity from the grid, huge interconnected areas of power producers and users. As power plants grew larger and farther from populated areas, the need for ways to efficiently move electricity over long distances became more and more important. Stringing power lines across the landscape to connect cities to power plants may seem as simple as connecting an extension cord to an outlet, but the engineering behind these electric superhighways is more complicated and fascinating than you might think. Hey I’m Grady and this is Practical Engineering. On today’s episode we’re talking about electrical transmission lines.

Generating electricity is a major endeavor, often a complex industrial process that requires huge capital investments and ongoing costs for operation, maintenance, and fuel. Electric utilities only earn revenue on the power that makes it to your meter. They aren’t compensated for energy lost on the grid. So if we’re going to go to the trouble of producing electricity, we want to make sure that as much of it as possible actually reaches the customers for whom it’s intended. The problem is that most power plants are located far away from populated areas for a variety of reasons: land is cheaper in rural areas, many plants require large cooling ponds, and most people don’t like to live near large industrial facilities. That means massive amounts of electricity need to be transported long distances from where the power is created to where it’s used.

Power lines are the obvious solution to this problem, and sure enough, stringing wires (normally called conductors by power professionals) over vast expanses of rural countryside is, in general, how bulk transport of electricity is carried out. But, if we want this transport to be efficient, there’s more to consider. Even good conductors like aluminum and copper have some resistance to the flow of electric current. You can even see this at home. We can measure a small drop in voltage when a hair dryer is plugged directly into an outlet and turned on. If we try the same thing at the end of a long extension cord, the drop in voltage is much more significant. This difference in power represents energy lost as heat from the resistance of the extension cord. In fact, this lost power is pretty easy to calculate if you’re willing to do a little bit of algebra (which I always am).

Electrical power is the product of the current (that’s the flow rate of electric charge) and the voltage (that’s the difference in electric potential). For a simple conductor, we can use Ohm’s law to show that the drop in voltage from one end of a wire to the other is equal to the current times the resistance of the wire measured in ohms. Substituting this relationship in, we find that the power loss is equal to the product of current squared and resistance. So if we want to reduce the losses in a power line, we have two variables to play with. We can reduce the resistance of the conductor by increasing its size or using a more conductive material, but look what matters even more: the i-squared term. Reducing the current by half will cut the lost power to one-fourth and so on. Going back to Ohm’s law, we can see that the only way to reduce the current and still get the same amount of power is to increase the voltage. So, that’s just what we do. Transformers at power plants boost the voltage up to 100,000 volts and sometimes much higher before sending electricity on its way over transmission lines. This lowers the current in the lines, reducing the wasted energy and making sure that as much power as possible makes it to customers at the other end.
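The algebra above is easy to check numerically. Here’s a small sketch in Python; the 10 MW load and the 2-ohm line resistance are made-up example numbers, chosen just to show the effect of the i-squared term:

```python
def line_loss_watts(power_w, voltage_v, resistance_ohm):
    """Power lost to heat in a conductor: P_loss = I^2 * R, with I = P / V."""
    current = power_w / voltage_v          # I = P / V
    return current ** 2 * resistance_ohm   # P_loss = I^2 * R

p, r = 10e6, 2.0  # deliver 10 MW over a line with 2 ohms of resistance

print(line_loss_watts(p, 10_000, r))   # → 2000000.0 (2 MW lost at 10 kV)
print(line_loss_watts(p, 100_000, r))  # → 20000.0   (20 kW lost at 100 kV)
```

Ten times the voltage means one-tenth the current, and because the loss goes with current squared, one-hundredth the wasted power.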

This simple demonstration illustrates the concept. If I try to power a hair dryer using these thin wires, it is not going to work. The current required to power the dryer is just too high. It creates so much heat that the wires completely melt. That heat represents wasted energy. But, if I first boost the voltage up using this transformer and step it back down on the other side of the thin conductors, they have no problem carrying the power required to run the dryer. We’ve swapped high current for high voltage, making the conductors more efficient at carrying power. What we’ve also done is make things much more dangerous. You can think of voltage as electricity’s desire to flow. High voltages mean the power really wants to move and will even find a way to flow through materials we normally consider non-conductive, like the air. The engineers designing high voltage transmission lines have to make sure that these lines are safe from arcing and other dangers that come with high voltage.

Most long distance power lines don’t use insulation around the conductors themselves. The insulation would have to be so thick that it wouldn’t be cost-effective. Instead, most of the insulation comes from air gaps, or simply spacing everything far enough apart. Transmission towers and pylons are really tall to prevent anyone or any vehicle on the ground from inadvertently getting close enough to conductors to create an arc. Bulk electricity is transmitted in three phases, which is why you’ll see most transmission conductors in groups of three. Each phase is spaced far enough from the other two to avoid arcing between the phases. The conductors are connected to each tower through long insulators to keep enough distance between energized lines and grounded pylons. These insulators are normally made from ceramic discs so that if they get wet, electricity leakage has to take a much longer path to ground. These discs are somewhat standardized, so this is an easy way to get a rough guess of a transmission line’s voltage. Just multiply the number of discs by 15. For example, this line near my house has 9 discs on each insulator, and I know it’s a 138-kilovolt line. You’ll also often see smaller conductors running along the top of transmission lines. These static or shield wires aren’t carrying any current. They’re there to protect the main conductors against lightning strikes.
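That disc-counting trick is just multiplication, but it’s a fun one to keep in your back pocket. A one-liner makes the rule of thumb explicit (the 15 kV-per-disc figure is the rough heuristic from the paragraph, not an exact standard):

```python
def estimate_line_kv(disc_count, kv_per_disc=15):
    """Rough field estimate of a transmission line's voltage:
    about 15 kV per insulator disc."""
    return disc_count * kv_per_disc

print(estimate_line_kv(9))  # → 135, a close match for a 138 kV line
```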

High voltage isn’t the only design challenge associated with electric transmission lines. Just the selection of the conductors alone is a careful balancing act of strength, resistance, and other factors. Transmission lines are so long that even a tiny change in the conductor size or material can have a major impact on the overall cost. Conductors are rated by how much current they can pass for a given rise in temperature. These lines can get very hot and sag during peak electricity demands, which can cause problems if tree branches are too close. Wind can also affect the conductors, causing oscillations that lead to damage or failure of the material. You’ll often see small devices called Stockbridge dampers installed to absorb some of the wind energy. High voltage transmission lines also generate magnetic fields that can induce currents in parallel conductors like fences and interfere with magnetic devices, so the height of the towers is sometimes set to minimize EMF at the edge of the right-of-way. In certain cases, engineers even need to consider the audible noise of the transmission lines to avoid disturbing nearby residents.

Even with all those considerations, the classic model of the power grid with centralized generation away from populated areas is changing. The cost of solar panels continues to drop, making it easier and easier to produce some or all of the electricity you use at your own house or business and even export excess energy back into the grid. This type of local generation happens on the distribution side of the grid, often completely skipping large transmission lines. On the other side of that coin, the energy marketplace is changing as well, and grid operators are buying and selling electricity across great distances. Electrical transmission lines may seem simple - the equivalent of an extension cord stretched across the sky. But, I hope this video helped show the fascinating complexity that comes with even this seemingly innocuous part of our electrical grid. Thank you, and let me know what you think!


September 24, 2019 /Wesley Crump

How Do Substations Work?

August 27, 2019 by Wesley Crump

When you plug in an electric device, it’s easy not to even consider where the electricity actually comes from. The simple answer is a power generating station, also known as a power plant, usually someplace far away. But the reality is much more complicated than that. Generation is only the first of many steps our power takes on its nearly instantaneous journey from production to consumption. The behavior of electricity doesn’t always follow our intuitions, which means the challenges associated with constructing, operating, and maintaining the power grid are often complicated and sometimes unexpected. Many of those challenges are overcome at a facility which, at first glance, often looks like a chaotic and dangerous mess of wires and equipment, but which actually serves a number of essential roles in our electrical grid: the substation.

As simple as it is to imagine, the power grid isn’t just an interconnected series of wires to which all power producers and users collectively connect. In reality, the electricity normally makes its way through a series of discrete steps on the grid normally divided into three parts: generation, or production of electricity; transmission, or moving that electricity from centralized plants to populated areas; and distribution, or delivering the electricity to every individual customer. If you consider the power grid a gigantic machine (and many do), substations are the linkages that connect the various components together. One of the cool parts about our electrical infrastructure is that most of it is out in the open so anyone can have a look. I’m somewhat of an infrastructure tourist, a regular beholder of the constructed environment, and my goal is for you too to be able to mentally untangle this maze of modern electrical engineering so that the next time you feast your eyes on a substation, you’ll be able to appreciate it as much as I do. Originally named for smaller power plants that were converted for other purposes, “substation” is now a general term for a facility that can serve a wide variety of critical roles on the power grid. Those roles depend on which parts of the electrical grid are being connected together and the types, number, and reliability requirements of the eventual customers downstream. And the first and often simplest of these roles is switching.

The general layout of a substation consists of some number of electric lines (called conductors if you want to fit in with the electrical engineers) coming into the facility. These high voltage conductors connect to a series of some or many pieces of equipment before heading out to their next step in the power grid. As a junction point in the grid, a substation often serves as the termination of many individual power lines. This creates redundancy, making sure that the substation stays energized even if one transmission line goes down. But, it also creates complexity. The connections to these various devices are called buses, often rigid, overhead conductors that run along the entire substation. The arrangement of the bus is a critical part of the design of any substation because it can have a major impact on the overall reliability.

Like all equipment, substations occasionally have malfunctions or things that simply require regular maintenance. To avoid shutting down the entire substation, we need switches that can isolate equipment, transfer load, and control the flow of electricity along the bus. This may seem obvious, but turning on and off high voltage lines isn’t as simple as flipping a light switch. At high voltages, even air can act like a conductor, which means even if you create a break in a line, electricity can continue flowing in a phenomenon known as an arc. Not only does arcing defeat the purpose of a switch, it is incredibly dangerous and damaging to equipment. So, switching in a substation is a carefully-controlled procedure with specially-designed equipment to handle high voltages. Disconnect switches, together with the equipment that serves another important role in a substation (protection), are often collectively called switchgear.

I mentioned earlier that much of our electrical infrastructure is exposed and out in the open. That’s nice for people like me who enjoy having a look, but it also means being vulnerable to an endless number of things that can go wrong. From lightning strikes to rogue tree limbs, windstorms to squirrels, grid operators contend with countless threats to their infrastructure on a day-to-day basis. When something causes a short circuit on the power grid, also called a fault, it can severely damage power lines and other equipment. Not only that, because of the overwhelming complexity of the power grid, faults can and do cascade in unexpected and sometimes uncontrollable ways, leaving huge populations without power for hours or days. Many of the ways we protect equipment from faults are handled at a substation. One of the most common types of electrical fault is a short circuit to ground. This type of fault creates a low-resistance path for current to flow and leads to an overload of power lines and equipment. The simplest way to protect against this type of fault is with a fuse, a device that physically burns out at a certain current threshold. Fuses are dead simple and don’t require much maintenance, but they have some disadvantages too. They’re one-time use and can’t be used to interrupt current for other types of faults. On the other hand, circuit breakers are a class of devices that serve similar roles as fuses, but provide more sophistication for dealing with a wide variety of faults.

Like disconnect switches, circuit breakers need to be carefully designed to interrupt huge voltages and currents without damage. As soon as contacts within a circuit breaker are moved apart from one another, an electrical arc forms. This arc needs to be extinguished as quickly as possible to prevent damage to the breaker or unsafe conditions for workers. Extinguishing the arc is accomplished by a material called a dielectric that doesn’t conduct electricity. For lower voltages, the circuit breakers can be located in a sealed container under vacuum to avoid electricity conducting in the air between the contacts. For higher voltages, breakers are often submerged in tanks filled with non-conductive oil or dense dielectric gas. These breakers give grid operators more control over how and when current gets interrupted. Not every fault is the same, and sometimes operators even know about a disturbance ahead of time and can trigger breakers early to prevent cascading failures. Many faults are temporary, like lightning strikes or swaying tree branches. A special kind of circuit breaker called a recloser can interrupt current for a short period of time and re-energize the line to test if the fault has cleared. Reclosers usually trip and reclose a few times, depending on their programming, before deciding that a fault is permanent and locking out. If electricity demand on the grid gets so high that it can’t be met by the utility, substations may also be used to shed load. Rolling blackouts are used to lower the total electrical demand to avoid bigger failures on the grid.
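The trip-test-lockout behavior of a recloser can be sketched as a tiny control loop. This is a simplification for illustration only; the attempt count and the yes/no fault test stand in for the timing curves real reclosers are programmed with:

```python
def recloser(fault_persists, max_attempts=3):
    """Sketch of recloser logic: trip, then re-energize to test the line.
    fault_persists is a callable returning True while the fault remains.
    Returns 'cleared' if a reclose succeeds, or 'lockout' after the
    programmed number of attempts."""
    for _attempt in range(max_attempts):
        if not fault_persists():  # re-energize and check the line
            return "cleared"
    return "lockout"

# A temporary fault (say, a lightning strike) clears after one trip:
events = iter([True, False])
print(recloser(lambda: next(events)))  # → cleared

# A permanent fault (a downed line) never clears:
print(recloser(lambda: True))          # → lockout
```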

One of the most important features of the power grid is that different segments operate at different voltages. Voltage is a measure of electrical potential, somewhat equivalent to the pressure of a fluid in a pipe. At large power plants, electricity is produced at a somewhat low voltage of around 10-30 kilovolts or kV. From there, the voltage is increased much higher using transformers so that it can travel along transmission lines. Using a higher voltage reduces the losses along the way, making the lines more efficient but also much more dangerous. This is why overhead transmission lines are so tall - to keep them out of the way of trees and human activities. But, when transmission lines reach the populated areas which they serve, it’s not feasible to keep them so high in the air. So, prior to distribution, the voltage of the grid needs to be brought back down, again using transformers located within a substation.

A transformer is an extremely simple device that relies on the alternating current of the grid to function. It consists of two adjacent coils of wire. As the voltage in one coil changes, it creates a magnetic field. This field couples with the other coil, inducing a voltage. The incredible part of a transformer has to do with the number of loops in each coil. The induced voltage will be proportional to the ratio of loops. For example, if the transmission side of a transformer has 1000 loops while the distribution side has 100, the voltage on the distribution side will be one-tenth of the transmission voltage. This simple but incredible fact makes it possible for us to step up or down voltage as necessary to balance the safety and efficiency along each part of the power grid.
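For an ideal transformer, that ratio rule is one line of arithmetic. Here it is applied to the paragraph’s 1000-to-100 example, with a 138 kV transmission voltage thrown in as an illustrative input:

```python
def secondary_voltage(primary_v, primary_turns, secondary_turns):
    """Ideal transformer relationship: V_s = V_p * (N_s / N_p)."""
    return primary_v * secondary_turns / primary_turns

# 1000 loops on the transmission side, 100 on the distribution side
# steps the voltage down by a factor of 10:
print(secondary_voltage(138_000, 1000, 100))  # → 13800.0
```

Swap the turn counts and the same formula steps voltage up instead of down, which is exactly what happens at the power plant end of a transmission line.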

The simplicity of transformers is great in a lot of ways, but it also means that it can be difficult to make fine adjustments to the power leaving the substation. Because of this, many substations include equipment for monitoring and controlling the power on the grid. Instrument transformers are small transformers used to measure the voltage or current on the grid or provide power to system monitoring devices. Depending on varying transmission and distribution losses, the voltage on the grid can swing outside an acceptable range. Regulators are devices with multiple taps that can make small adjustments - up or down - to the distribution voltage on feeder lines leaving the substation toward customers. If you look closely you can sometimes see the regulator dial indicating the tap position.

All that different equipment requires lots of maintenance. The final and most important role of a substation is that it be safe for electricians and linemen to inspect, repair, and replace equipment. Substations are usually the only locations where extra-high voltage power lines get close to the ground, so safety is absolutely critical. The buswork running along the substation is protected from short-circuit by large insulators to avoid arcs to ground. Even the connections into each piece of equipment are done through a device called a bushing which maintains a safe distance between energized lines and the grounded metal housings. Some substations have large concrete walls to serve as fire barriers between equipment. All substations are built with a grid of grounding rods and conductors buried below the surface. In the event of a fault, the substation needs to be able to sink lots of current into the ground to trip the breakers as quickly as possible. This grounding grid also makes sure that the entire substation and all its equipment are kept at the same voltage level, called an equipotential, so that touching any piece of equipment doesn’t create a flow of electricity through a person. Finally, substations are surrounded by large fences and warning signs to make absolutely sure that any wayward citizens know to stay out.

In many ways, the grid is a one-size-fits-all system - a gigantic machine to which we all connect spinning in perfect synchrony across, in some cases, an entire continent. On the other hand, our electricity needs, including when we need it, how much we need, and how reliably it should be delivered vary widely. Power requirements are vastly different between a sensitive research facility and a suburban residential neighborhood, between a military base and country club golf course, and between a steel mill and a bowling alley. Likewise, every electrical substation is customized to meet the needs of the infrastructure it links together. As the grid gets smarter, as demand patterns change, and as we (hopefully!) continue to replace fossil fuel generation with sources of renewable energy to curb global warming, managing our electrical infrastructure will only get more challenging. So, substations will continue to play a critical role in controlling and protecting the power grid.

August 27, 2019 /Wesley Crump

How Electricity Generation Really Works

July 23, 2019 by Wesley Crump

The importance of electricity in our modern world can hardly be overstated. What was a luxury a hundred years ago is now a critical component to the safety, prosperity, and well-being of nearly everyone. And yet, electricity is so unlike our other physical necessities. We can’t hold it in our hand; we can’t see it directly; and we usually only have a vague understanding of where it comes from.


Generation is the first step electricity takes on its journey through the power grid, the gigantic machine that delivers energy to millions of people day in and day out. We talked about the power grid in a previous video, but there’s one crucial point that’s worth repeating: it is a real-time energy delivery system. Electricity moves at nearly the speed of light, and current availability of large-scale energy storage is negligible. That means that power is generated, transported, supplied, and used all in the exact same moment. The energy coursing through the wires of your home or office was a ray of sunshine on a solar panel, an atom of uranium, or most likely, a bit of coal or natural gas in a steam boiler only milliseconds ago. Because of the laws of thermodynamics, all our electricity starts as some other kind of energy, which means all of our ways to generate electricity are just fancy ways of converting one type of energy to another. And in most cases, the type of energy being converted to electricity is heat.

Take a look at any of the various pie charts showing the breakdown of global energy production. You’ll see that the vast majority of methods we use to generate power are essentially just different ways of getting water really hot. Many thermal power plants (as they’re called) use fossil fuels like coal or natural gas in a furnace to generate steam. These types of plants have the obvious disadvantage of producing tremendous amounts of carbon dioxide as a by-product, a greenhouse gas that’s largely responsible for the ongoing rise in the average temperature of the Earth's climate, also known as global warming. In fact, electricity production makes up about a third of total greenhouse gas emissions. Luckily, there are other ways to generate large quantities of steam that don’t rely on fossil fuels.

Some plants use the fission of radioactive elements in a nuclear reactor as a source of heat. Some parts of the world can use geothermal energy, heat from inside the earth’s crust. We can even use arrays of mirrors to concentrate sunlight and create enough heat to run a boiler. But beyond that first step, thermal power stations are pretty much all the same. Once the steam is created, it passes through a turbine which converts the thermal energy into rotational energy. The shaft of the turbine is coupled to a rotor (that’s the part that rotates) of an AC generator that spins a set of magnets. The stator (that’s the part that’s stationary) has a set of coils of wire called windings. As the magnets on the rotor pass each winding, they generate a voltage across each coil.

In most places in the world, the number of coils in the stator is three, because our grid is built for three-phase alternating current. The benefit of having the current alternate directions is that it makes it easy to step the voltage up or down using a dead simple device called a transformer. The benefit of generating power in three individual phases is a fairly smooth supply of electricity that overlaps so there’s never a moment when all phases are zero. A three-phase supply can also carry three times as much power on three wires as a single-phase supply can carry on two. This is why steam turbine generators almost always have coils grouped in threes.
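That never-all-zero claim is easy to verify numerically. This sketch samples three sine waves spaced 120 degrees apart over a full second and finds the worst moment, the instant when the largest phase magnitude is smallest:

```python
import math

def three_phases(t, freq=60.0):
    """Instantaneous values of three sine phases, 120 degrees apart."""
    w = 2 * math.pi * freq * t
    return [math.sin(w + k * 2 * math.pi / 3) for k in range(3)]

# Over a dense sample of one second, find the minimum (over time) of the
# maximum phase magnitude (over the three phases):
floor = min(max(abs(v) for v in three_phases(n / 6000)) for n in range(6000))
print(round(floor, 3))  # → 0.866
```

Even at the worst instant, at least one phase sits at about 87 percent of its peak (sin 60 degrees), so the three phases together never leave a dead spot the way a single phase crossing zero does.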

But, steam isn’t the only way to spin a turbine. Hydroelectricity uses flowing water, and wind energy production has seen massive growth in the past 10 years. The other renewable source of electricity that is seeing explosive growth is solar photovoltaic or PV. The cost of solar cells which convert light directly into electricity has plummeted, making it feasible even for individual homeowners and businesses to install them on rooftops and supply some or all of their own power needs. Large-scale solar farms are also popping up in sunny climates to meet the growing demand for renewable electricity. Being able to power the grid directly from sunlight without harmful by-products is awesome, but it does come at a cost. Besides the obvious disadvantage of only working during daylight hours, solar PV has another disadvantage on the grid: it doesn’t have any inertia.

One of the biggest benefits of connecting lots of power plants together is the tendency of power to remain in motion on the grid, even during localized faults and disturbances. This inertia keeps our electricity stable and reliable. But electricity doesn’t have inertia on its own. You can’t give the electrons a kick and hope they continue down the wires without any help. The inertia comes from the physical rotation of all those massive interconnected generators. You can imagine the power grid as a train going up a hill. The locomotives work together to carry the load. To maintain speed, the throttle or number of locomotives needs to be adjusted to match the load of the train (which represents the total power demand that is constantly changing throughout the day).

The power grid works in a very similar way. Electrical demand is felt immediately by all the connected generators. Each additional demand puts a little more load on every generator, slowing their rotation by just a tiny amount and thus decreasing the frequency of the alternating current. Similarly, if electricity generation exceeds the demand, the generators will speed up. You can see this demonstrated in a typical brushless motor which is wired exactly like a three-phase generator. Under no load, the motor spins freely. But, if I short the contacts together to simulate a high electrical load, it takes much more energy to turn. Power consumers turn on and off electrical devices at will, with no notification to the utilities at all. So, to avoid fluctuations in frequency, generation has to be constantly adjusted up or down to match electrical demands on the grid. This process is called load following. As demand on the grid increases or decreases throughout the day, grid operators dispatch generation capacity to match it.
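The relationship between a supply-demand imbalance and grid frequency can be captured in a toy model. Everything here is an illustrative assumption (the 1,000 MW load, the inertia constant, the time step); it’s a cartoon of the physics, not a real grid simulation:

```python
def frequency_step(freq_hz, gen_mw, load_mw, inertia_mw_s_per_hz, dt_s):
    """Toy model: a generation-load imbalance nudges system frequency,
    scaled down by the inertia of all the spinning machines."""
    return freq_hz + (gen_mw - load_mw) / inertia_mw_s_per_hz * dt_s

f = 60.0
for _ in range(10):  # one simulated second of a 20 MW generation shortfall
    f = frequency_step(f, gen_mw=980, load_mw=1000,
                       inertia_mw_s_per_hz=2000, dt_s=0.1)
print(round(f, 3))  # → 59.99
```

The frequency sags only a hundredth of a hertz despite the shortfall, which is the point: lots of inertia buys operators time to dispatch more generation before the deviation becomes a problem.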

Going back to our analogy, the speed of our train represents the grid frequency, 50 or 60 hertz depending on where you live. Every locomotive and every train car is designed to travel at exactly the same speed, and the stability of the entire system depends on perfect synchrony. If one part of the train starts moving faster or slower than the rest of the cars, things go haywire in a hurry. This is why inertia is so important. If any problem occurs, for example if one of the locomotives dies, the train has enough inertia to keep things moving while the problem can be addressed. It’s also why grid operators maintain spinning reserves, generators that are ready to connect to the grid at a moment’s notice. And before a generator can be connected to the rest of the grid, it needs to be synchronized as well. That means its frequency, phase, and voltage need to be matched with grid power by adjusting the speed and excitation of the electromagnets in the rotor. A special instrument called a synchroscope helps with this process. Once the synchroscope gives the all clear, plant operators can close the breaker to connect to the grid.

This is a simplification of load following and generator dispatch, but it highlights one of the key differences between wind and solar and the rest of our generation capacity. If we want our lights to turn on right when we flip the switch, we have to understand that the grid operators need the same thing: the expectation that generation capacity will be available on demand. Reliability is the overarching purpose of having an interconnected power grid in the first place, and incorporating unreliable sources of power - like wind that depends on weather and solar that’s only available during half the day - is one of the biggest challenges we face with electrical infrastructure. Because of global warming, transitioning to renewable sources of electricity is one of the most important challenges of our lifetime, and I think that overcoming it starts with all of us being interested, informed, and excited about understanding where our power comes from. Thank you for reading, and let me know what you think!

July 23, 2019 /Wesley Crump

How Does the Power Grid Work?

June 25, 2019 by Wesley Crump

The modern world depends on electricity. It’s not just a luxury we use to power our devices and enjoy our free time. It’s not even just a convenience of having light, heating, and cooling in our buildings. Electricity is a crucial resource, especially in urban areas, providing public security, safety, and health and making possible everything from emergency response to modern medical care in hospitals to even the other utilities we require like fresh water and sanitation systems. But unlike those other utilities, electricity can’t be created, stored, and provided at a later time. The instant it’s produced, it’s used no matter how far apart the producer is from the user. And the infrastructure that makes all this possible is one of humanity’s most important and fascinating engineering achievements. Hey I’m Grady and this is Practical Engineering. Today we’re talking about the power grid.


Like most people, you probably take the grid for granted. Electrical infrastructure is so ubiquitous, it’s easy not to notice that the majority of our power grid is out in the open for anyone who wants to have a look. I happen to be one of those people who does want to have a look, and hopefully by the end of this series on electrical infrastructure, you will be too. This is geared toward North America, but most of the concepts will apply to any other part of the world. And just to give you a sense of scale, there are only four distinct electrical grids that service essentially all of North America. You have the two big ones, Western and Eastern, and the two electrical separatists: Quebec and Texas. Depending on your definition, an electrical grid can be considered one of the world’s largest machines. So how does this machine work?


The basic function of generating electricity and delivering it to those who need it may seem simple. I can hook up a small generator to a light, and boom: electrical grid. With the cost of solar panels reaching record lows, many are exploring the possibility of generating all the power they need at home and forgoing the grid altogether. But, a wide area interconnection (that’s the technical term for a power grid) offers some serious advantages in exchange for increased complexity. Here’s a simplified diagram showing the major components of a typical power grid, and we’ll follow the flow of electrical current as it makes its way through each one. We start with generation, where the electricity is produced. There are many types of power plants, each with their own distinct advantages and disadvantages, but they all have one thing in common: they take one kind of energy and convert it into electrical energy. Most power plants are located away from populated areas, so the electricity they create needs to be efficiently transported. That’s handled by high-voltage transmission lines. At the plant, transformers boost the voltage to minimize losses within the lines as the electricity makes its way to the areas that need it.


Once it reaches populated areas, transformers step the voltage back down to a safer and more practical level. This is done at a substation, which also has equipment to regulate the quality of the electricity and breakers to isolate potential faults. Some energy customers draw power directly from transmission lines, but most are served from feeder lines that carry power from the substation. This part of the system is called distribution. From the feeders, smaller transformers step down voltage to its final level for industrial, commercial, or residential uses before the electricity reaches its final destination.


Rather than a constant flow of current in a single direction (called direct current or DC), the vast majority of the power grid uses alternating current or AC, where the direction of voltage and current are constantly switching, 60 times per second in North America. The major advantage of AC power is that it’s easy to step up and down voltages, a critical part of efficiently and safely moving electricity from producer to consumer. The device that performs this important role, called a transformer, is as simple as a pair of coils next to each other. A varying voltage in one coil induces a voltage in the other coil proportional to the number of turns in each one. If the current doesn’t vary, like in direct current, the transformer can’t do any transforming.


It’s helpful to think about the grid as a marketplace. Power producers bring their electricity to the market by connecting to the grid, and power consumers purchase that electricity for use in their home or business. The economics and politics of the grid are so much more complicated than this, but the important part of the analogy is that, in many ways, the power grid is a shared resource. Because of that, it needs organizations to oversee and establish rules about how each participant in the producing, transmitting, and consuming of power may use it. And there are three overarching technical goals that engineers use to design and maintain the power grid.


The first one is power quality. Our electrical devices and equipment are designed assuming that the power coming from the grid has certain parameters, mainly that the voltage and frequency are correct and stable. Some devices count the oscillations in the AC grid power to keep track of time, so it’s critical that the grid frequency not deviate. Changes in the voltage can lead to brownouts or surges that damage connected equipment. One of the benefits of a large power grid is electrical inertia. All those huge spinning generators connected together provide momentum that smooths out the ripples and spikes that can occur from equipment faults or quickly changing electrical loads. The next technical goal of the grid is reliability. If, like most people, you take that constant availability of power for granted, that’s by design. Much of the grid’s complexity comes from how we manage faults and provide redundancy so that you’re rarely faced with blackout conditions. It’s another inherent benefit of a grid that electricity can be rerouted when a piece of equipment is out of service, whether it was planned or otherwise.
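To see why even a tiny frequency deviation matters, consider a clock that keeps time by counting AC cycles and assuming there are exactly 60 per second. The frequency error and duration below are assumed purely for illustration:

```python
# A cycle-counting clock drifts if the grid runs off-frequency.
# The 0.05 Hz error below is an assumed, illustrative figure.

def clock_drift_seconds(actual_hz, hours, nominal_hz=60.0):
    """Seconds gained (+) or lost (-) by a clock counting AC cycles."""
    real_seconds = hours * 3600
    counted_cycles = actual_hz * real_seconds
    displayed_seconds = counted_cycles / nominal_hz
    return displayed_seconds - real_seconds

# Running just 0.05 Hz slow for a full day:
print(f"{clock_drift_seconds(59.95, 24):+.0f} seconds")  # -72 seconds
```

A deviation of less than a tenth of a percent, sustained for a day, puts every cycle-counting clock on the grid more than a minute off.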


The final goal of the power grid is simply that the supply meet the demand. Power production and consumption happen on a real-time basis. If it’s plugged in, the light from the screen you’re reading right now was a drop of water in a turbine or a breeze across a windmill microseconds ago. And by the way, did you call your utility and let them know that you were going to turn on your computer or phone and read this article? I’m willing to bet you didn’t, which means not only did they have to adjust their production up to match the extra load, but they had to do it immediately without any warning whatsoever. Luckily, having millions of people connected to the same grid smooths out the demands created by individuals, but load following is still a major challenge. For the most part, electrical demand follows a fairly consistent pattern, but factors like extreme weather can make it difficult to forecast. Grid operators balance demand by dispatching generation capacity in real time. The cheapest sources of power are used to fulfill the base load that’s more consistent, and higher-cost sources are used for peaking when demand exceeds the base. But it’s not as simple as flipping on a switch. Large power plants can take hours, days, or even weeks to start up and shut down. Equipment needs to be taken out of service for maintenance. Fuel costs fluctuate. Renewable sources like wind and solar can have massive and unpredictable variations in capacity, providing irregular sloshes of power to the grid. You can see why balancing electricity supply and demand is a fantastically complex job of taking into account all these considerations, some of which are predictable and some of which aren’t.
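The idea of filling base load with the cheapest sources first and saving expensive peakers for the top of the demand curve is sometimes called merit-order dispatch. Here’s a toy sketch of that logic. The plant names, capacities, and costs are entirely made up, and real dispatch also has to respect ramp rates, startup times, and transmission limits that this ignores:

```python
# Toy merit-order dispatch: cheapest plants fill demand first.
# Plant names, capacities (MW), and costs ($/MWh) are invented examples.

plants = [
    ("nuclear baseload",   900, 12),
    ("hydro",              300, 15),
    ("combined-cycle gas", 500, 35),
    ("gas peaker",         200, 90),
]

def dispatch(demand_mw):
    """Fill demand from the cheapest plants upward."""
    schedule = []
    remaining = demand_mw
    for name, capacity, cost in sorted(plants, key=lambda p: p[2]):
        mw = min(capacity, remaining)
        if mw > 0:
            schedule.append((name, mw))
            remaining -= mw
    return schedule

for name, mw in dispatch(1500):
    print(f"{name}: {mw} MW")
```

At 1,500 MW of demand, the nuclear and hydro plants run flat out, the gas plant covers the rest, and the expensive peaker never turns on; push demand higher and the peaker is the last resource dispatched.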


That’s part of the reason we are trying to make the grid smarter by using software, sensors, and devices capable of communicating with each other. On the supply side, this can allow computers and software to do what they do best: take in tremendous amounts of data to help us make decisions about how to manage the grid. But a smart grid can help on the demand side as well. Unlike with most of the goods we buy, consumers don’t have a keen understanding of power, how much we’re using, or how much it should cost depending on the time of day or year. A smart grid can take away some of that obfuscation, allowing us to make better decisions about how we use electricity in our day-to-day lives. Ultimately, a smart grid can help us use and take care of this huge machine - this shared resource we call the power grid - more efficiently and effectively now and into the future.


June 25, 2019 /Wesley Crump

How Do Spillways Work?

May 28, 2019 by Wesley Crump

We normally build a dam to hold water back and store it for use in water supply, irrigation, hydropower, or flood control. But sometimes we have to let some water go. Whether we need it downstream or the impounded water behind the dam is simply too full to store any more, nearly every dam needs a spillway to safely discharge water.

To understand spillways, we have to start with hydrology. More specifically, we need to understand the tremendous variability in inflows that can affect dams and reservoirs. Designing a dam would be simple if rainfall and snowmelt were consistent throughout the year. In fact, most dams wouldn’t even be necessary, since hydrologic variability is the reason why most dams exist in the first place - to provide storage of water and smooth out the ebbs and spikes of inflows to protect us from flooding or so that water can be used to meet our needs throughout the year. But, those spikes of inflow can be enormous. It’s not unusual for a watershed to generate the majority of its entire annual volume of water in a single storm event. Those inflows can reach a reservoir with very little warning, so dams need to always be ready to handle major storm events.

As far as infrastructure goes, dams are fairly risky. Depending on the size of the structure and what’s downstream, the failure of a dam can be catastrophic. In fact, some of the worst human-caused disasters in history have been failures of dams. For this reason, they’re often required to withstand the biggest storm that we could possibly conceive, called the Probable Maximum Flood. It’s too expensive to build a dam so tall that it can store the entirety of this flood. On the other hand, we can’t just let the flood overtop the dam, because flowing water can damage and destroy the structure. So in most cases, dams are designed with at least one spillway, a structure that can safely discharge floodwaters without causing injury or deterioration to the dam.

The water stored behind a dam is called its reservoir, and the term “spillway” is usually reserved for structures that release excess inflows when the reservoir is already full (e.g. floods or heavy snowmelt). This distinguishes spillways from other structures that provide releases from reservoirs, like intakes that serve pump stations and penstocks that serve hydro turbines. Because of the variability in inflows, many large dams have two or more spillways. The smaller one is called the principal or service spillway, which passes normal inflows when the reservoir is full. The other is called the auxiliary or emergency spillway, which only engages during extreme events. Depending on the design, the auxiliary spillway may only flow for a few scary moments in a dam’s entire lifetime. Because of that, it can be as simple as an excavated channel cut around the dam. It might not last very long, but it can protect the dam from failure in an extreme situation.

No matter how often it flows, a spillway has only three main jobs, and there is a wide variety of types of structures that can accomplish these objectives. But I think if you’re going to demonstrate a spillway on the internet, there’s only one obvious choice for the model: the morning glory. This is a type of drop shaft spillway that has enchanted the internet with crazy vortex photos, and I built a model of one in my shop so we can use it to discuss the basic functions of a spillway. And the first basic function is the most obvious: to manage the water level in a reservoir.

A morning glory spillway is in a class of spillways that we call uncontrolled. In general, they are set and forget. There are no gates or moving parts to manage. They regulate the reservoir level simply by existing. If it gets too high, water flows out and the pool goes down. If the pool is below the crest, no water is released, and the level goes up as precipitation makes its way into the reservoir. Most uncontrolled spillways are weirs, which I covered more in a previous video. A weir is simply a structure that allows water to pass over its crest. The morning glory acts like a circular weir at first, but as the water level goes up, the bell mouth chokes and the behavior changes. This type of spillway is used in narrow canyons where there isn’t much room for a more conventional overflow. Uncontrolled spillways normally need to be pretty big to handle even the largest storms that a reservoir might face without any moving parts. That can get expensive quickly, so an alternative can be to use controlled spillways with different types of gates. The gates add complexity to a spillway, but they can also reduce its cost by providing flexibility in discharge capacity, allowing for a smaller overall structure. The gates can be operated to match any size of storm event, even if the spillway is relatively small.
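You can get a feel for how a weir “regulates by existing” from the standard weir equation, Q = C·L·H^1.5: discharge grows faster than linearly with the head over the crest, so the deeper the reservoir rises above the crest, the harder the spillway pushes back. The crest radius and discharge coefficient below are assumed values for illustration (US customary units), and this simple relationship only holds while the morning glory is acting as a circular weir, before the bell mouth chokes:

```python
import math

def weir_flow(crest_length_ft, head_ft, coeff=3.0):
    """Weir approximation: Q = C * L * H^1.5 (cfs, US customary units).
    coeff is an assumed discharge coefficient for illustration."""
    return coeff * crest_length_ft * head_ft ** 1.5

# A morning glory acting as a circular weir, assumed 10 ft crest radius:
crest = 2 * math.pi * 10.0   # circular crest length, ~62.8 ft

for head in (0.5, 1.0, 2.0):
    print(f"H = {head} ft -> Q ≈ {weir_flow(crest, head):.0f} cfs")
```

Doubling the head from 1 ft to 2 ft nearly triples the discharge, which is why a rising pool tends to stabilize itself once water starts passing over the crest.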

The next job of a spillway is to safely convey the flow to the downstream side of the dam. In most spillways, including my model, the water has to get from the top of the reservoir to a natural watercourse downstream of the dam. That’s often a big drop in elevation, which means the water can pick up a lot of speed. This high velocity flow can cause major damage, so we need some way to contain it safely. Sometimes that’s a pipe or conduit like in my model drop shaft spillway. And for open-channel spillways, it’s called a chute. A chute also needs training walls on the sides to keep the flow contained. Both spillway conduits and chutes are often made of concrete too because it’s one of the only materials strong enough to resist the damaging forces of the high velocity flow. That leads me to the final objective of a spillway: energy dissipation.
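To see just how fast that flow can get, you can estimate an upper bound on velocity from the elevation drop alone, using conservation of energy (v = sqrt(2·g·h)). This neglects friction along the chute, so real velocities are lower, and the drop heights are assumed for illustration:

```python
import math

G = 32.2  # gravitational acceleration, ft/s^2

def chute_velocity(drop_ft):
    """Frictionless upper bound on flow velocity from an elevation drop:
    v = sqrt(2 * g * h)."""
    return math.sqrt(2 * G * drop_ft)

for drop in (50, 100, 200):   # assumed example drops, in feet
    v = chute_velocity(drop)
    print(f"{drop} ft drop -> up to {v:.0f} ft/s ({v * 0.682:.0f} mph)")
```

Even a modest 100-foot drop can produce flow approaching 80 feet per second (roughly 55 mph), which explains why ordinary earth or riprap can’t contain it and spillway chutes are lined with concrete.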

I mentioned that all that water moving so quickly can cause serious erosion downstream of a dam. If not controlled, this erosion can progress upstream, eventually leading to failure of the dam. So, all spillways need a way to dissipate hydraulic energy and slow down the flow before releasing it into a natural watercourse. For large spillways, this is often accomplished in a structure called a stilling basin that forces a hydraulic jump to occur. This is another topic I covered in a previous video, so check that out if you want to learn more. For smaller spillways, the dissipation can be simpler, like rock riprap or even just letting the flow plunge into a deep pool. Once most of the hydraulic energy is lost, the water can safely travel downstream without causing damage.
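For the curious, the hydraulic jump in a stilling basin can be quantified with a couple of textbook relationships for a rectangular channel: the depth after the jump follows from the momentum equation, and the head lost across the jump has a closed-form expression. The incoming depth and velocity below are assumed example numbers, not from any real spillway:

```python
import math

G = 32.2  # gravitational acceleration, ft/s^2

def sequent_depth(y1_ft, v1_fps):
    """Depth after a hydraulic jump in a rectangular channel:
    y2/y1 = 0.5 * (sqrt(1 + 8*Fr1^2) - 1), with Fr1 = v1 / sqrt(g*y1)."""
    fr1 = v1_fps / math.sqrt(G * y1_ft)
    return y1_ft * 0.5 * (math.sqrt(1 + 8 * fr1 ** 2) - 1)

def head_loss(y1_ft, y2_ft):
    """Energy dissipated across the jump: (y2 - y1)^3 / (4 * y1 * y2)."""
    return (y2_ft - y1_ft) ** 3 / (4 * y1_ft * y2_ft)

y1, v1 = 2.0, 40.0   # assumed: 2 ft deep, 40 ft/s entering the basin
y2 = sequent_depth(y1, v1)
print(f"Sequent depth: {y2:.1f} ft, head lost in the jump: "
      f"{head_loss(y1, y2):.1f} ft")
```

In this made-up example the flow jumps from 2 feet deep to roughly 13 feet deep, and the jump destroys about 13 feet of head, energy that would otherwise be tearing at the channel downstream.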

Like most of my videos, I’m just scratching the surface of a gigantic topic. The spillway is a critical part of any dam and often the most complex component. Designing a spillway usually requires a team of engineers performing structural, geotechnical, electrical, mechanical, hydrologic, and hydraulic analysis to get it right. All so we can safely discharge water from a reservoir during high inflow events when there’s no more room to store it. Thanks for reading this blog and please let me know what you think!


May 28, 2019 /Wesley Crump