An Engineer's Perspective on the Texas Floods

September 16, 2025 by Wesley Crump

[Note that this article is a transcript of the video embedded above.]

This is an animation of the weather radar in central Texas starting at noon on July 3, 2025. You can see there was torrential rain across the state throughout the afternoon from remnants of Tropical Storm Barry. But focus on this area northwest of San Antonio. Around midnight on July 4, a severe storm gets stuck in this area and just stays in place for several hours. When you put it in context with the rest of the system, it looks kind of insignificant, but that little storm dropped enough rain to raise the Guadalupe River higher than ever in recorded history, at least in the upper part of the basin. The water quickly rushed through summer camps, RV parks, and rural communities in the middle of the night. And the result was one of the deadliest inland flooding events in the past 50 years.

I live not too far from some of the worst-hit areas, and although my family wasn’t directly affected by the weather, it’s been a tough situation for me to wrestle with, personally. I spent the better part of my career as an engineer thinking about flooding and designing projects to cope with it. I’ve worked on and played in the Guadalupe River. And I have kids who are getting close to summer camp age. As a dad, it’s almost impossible to comprehend a tragedy like this. As an engineer, I’ve dedicated a large part of my professional career to understanding events exactly like it. So, as I’ve ruminated about this flood over the past few months, I’ve collected some thoughts that might be worth putting into the world. Let’s take a look at this event through an engineering lens, talk a little bit about how technical and regulatory decisions play out in the aftermath of tragedy, and see if any lessons become apparent. I’m Grady, and this is Practical Engineering.

One of the fundamental problems we face in engineering, and really life in general, is that we can’t predict the future. That sounds like a ridiculous thing to say, but out of that uncertainty comes the framework for how we think about so many things. Because we have to make all kinds of decisions - many of them with extremely high stakes - in the face of the unknown. In civil engineering, a lot of the loads we account for come from the most classically volatile and unpredictable aspect of the earth: the weather. Wind, ice, snow, waves, and rain - you cannot look ahead 50 or 100 years and know what forces a structure will be subjected to. You just have to guess.

And that’s a pretty hard thing to do, especially because you tend to have two opposing forces pushing your guess around. On the one hand, caution dictates overestimating forces to leave a wide margin of safety, but on the other hand, costs and budget constraints tend to push the estimate the other way. I can make this dam taller or this bridge higher, but it’s going to cost me a lot more money, and maybe it’s not necessary. So how do you draw the line? The same way we try to predict the future in so many other parts of life: we look to the past.

Surely past performance is an indicator of future results, right? I know that’s a stock line, but what else do we have? Over the years, we have gone to considerable lengths to apply historical data to predictions of future floods. Of course, this gets pretty complicated. One of the resources widely used in the United States for decades is Technical Paper 40, published in 1961. It represents a monumental effort to compile rainfall data across the contiguous United States, find probability distributions that fit the data, and map the results. It’s divided up by duration and recurrence interval, so you get this big group of separate maps. But what is a recurrence interval?

I’ve talked about the so-called 100-year flood in a few of my videos, but it’s a concept so widely misunderstood that it’s worth explaining again, especially because it’s so relevant to the Guadalupe River flood in July. We can’t really use historical data to determine when a flood might happen in the future, but we can make an estimation about how probable one might be. The bigger the flood, the lower the probability that it might occur. So there’s a relationship between probability and magnitude. In hydrology, we often express the probability as a quote-unquote “return period,” which means, on average, how many years you would expect to pass before you see that magnitude equalled or exceeded again. But that “on average” is doing some heavy lifting in the definition.

This terminology is debated endlessly in the hydrologic community because saying something like the 100-year flood has an underlying implication that storms are cyclical; that somehow if a particular magnitude of storm was to occur, we might have a period of security before it happened again, or the flipside: that if a flood hadn’t occurred in some time, we might be more “due” for it. And that’s just not how it works. Floods are statistically independent events. Every year, the atmosphere rolls the metaphorical dice to see what the biggest one is going to be. The odds of rolling a two or snake eyes in craps are 1 in 36, but if you go 35 rolls without a snake eyes, the odds of rolling it on the next one haven’t changed. The dice don’t remember what happened before. No one calls snake eyes the 36-roll throw because we understand it’s possible to do it twice in a row, and it’s possible to go a lot more than 36 rolls without getting one. So why do we call it the 100-year flood? Probably because the only good alternative is the storm with a 1% annual exceedance probability. Just doesn’t roll off the tongue. But it is the technically correct definition: the 100-year rainfall is the depth of precipitation (over a given duration) that has a one percent probability of being equalled or exceeded in a given year. It’s a tough concept to wrap your head around, but it’s fundamental to engineering hydrology.
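
To put some numbers on that, here’s a minimal sketch (in Python, assuming only that each year is an independent draw, exactly as described above) of what a 1% annual exceedance probability adds up to over longer horizons:

```python
# Minimal sketch: what a 1% annual exceedance probability implies over time,
# assuming each year is statistically independent (the dice have no memory).

def prob_at_least_one(annual_prob: float, years: int) -> float:
    """Probability of at least one exceedance over a span of years."""
    return 1.0 - (1.0 - annual_prob) ** years

p = 0.01  # the "100-year" event: 1% chance of being equalled or exceeded each year

for horizon in (1, 10, 30, 100):
    chance = prob_at_least_one(p, horizon)
    print(f"{horizon:>3} years: {chance:.1%} chance of at least one exceedance")

# Roughly: 1 year -> 1.0%, 10 years -> 9.6%, 30 years -> 26.0%, 100 years -> 63.4%
```

So over the life of a typical mortgage, the “100-year” event has about a 1-in-4 chance of showing up at least once.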

If you take a look at these maps, you can see that the 100-year rainfall over a 24-hour duration in Kerr County, Texas is around 9.5 inches (or about 240 millimeters). But again, this is from 1961. And it’s based entirely on historical data. So there are decades of rainfall not included in this analysis, not to mention limitations in the statistical methodology and data processing methods of the time. TP 40 wasn’t the only resource for precipitation frequency data in the US, but it was probably the most widely used until Atlas 14 came along, or is coming along (it’s still a work in progress). NOAA has been working to update this information with the entire historical record and more rigorous statistical methods. For most of the US, this is easy to navigate online. Just mark a spot on the map and you get this table of values and confidence intervals for a range of durations and return periods. And you can see that the 100-year, 24-hour precipitation in Kerr County is 11.5 inches (or nearly 300 millimeters). That’s a pretty big jump from the 1961 estimate - an increase of about 20 percent. What was the 100-year rainfall in 1961 is now just the 50-year storm. And look at those confidence intervals: 8 to 16 inches.
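
The actual TP 40 and Atlas 14 analyses involve regional statistics that go far beyond anything I could sketch here, but the core idea - fit a probability distribution to annual maximum rainfall and read off quantiles - looks something like this (the “record” below is randomly generated, purely for illustration):

```python
# Sketch of rainfall frequency analysis on a single gauge's annual maxima.
# The real TP 40 / Atlas 14 work is regionalized and far more rigorous;
# the record here is synthetic, purely for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
annual_max_in = rng.gumbel(loc=3.5, scale=1.4, size=70)  # ~70 years of 24-hour annual maxima (inches)

# Fit a Generalized Extreme Value distribution to the annual maxima
shape, loc, scale = stats.genextreme.fit(annual_max_in)

for T in (2, 10, 50, 100):
    aep = 1.0 / T  # annual exceedance probability for a T-year return period
    depth = stats.genextreme.isf(aep, shape, loc=loc, scale=scale)
    print(f"{T:>3}-year, 24-hour rainfall estimate: {depth:.1f} in")
```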

I know this is kind of long-winded, but the whole point I’m trying to make here is the tremendous uncertainty we have when it comes to hydrology. In some ways, this rainfall data is extremely rigorous, and I couldn’t even begin to explain some of the statistical methods used to develop it. It serves a really important purpose in the world of engineering, planning, and emergency management. But in another sense, it’s almost meaningless. And I can show you a few of the reasons through the lens of the Guadalupe River Flood.

Here’s an hourly map of the rainfall that hit central Texas on July 4, 2025. That yellow area is the watershed for the upper Guadalupe River. When I loop through it again, you can see that cell right there caused the majority of the flooding you probably read about on the news. It was there and gone in four hours. More rain came in later that morning and the next few days, but this was a classic flash flood: a relatively short burst of heavy rainfall on a small, steep, rocky basin, where most of it runs off into a river within minutes or hours. Here’s the thing: hourly rainfall records weren’t very common until the 1940s. I counted about 100 rain gauges used by Atlas 14 within a 50-mile radius of Hunt, Texas, where most of the fatalities occurred. None had hourly records before 1940, and of the group that did collect hourly data, only four had a record longer than 70 years. That might seem like enough data to understand flooding in the area, but let me show you why it’s not.

Here’s that loop of rainfall again. What do you see on this map? Because I’ll tell you what I see: enormous spatial variability. If you were to pick four random pixels on this map, how good a picture do you think it would give you of what really happened? That’s essentially what we’re doing with rainfall frequency analysis. Compared to modern data collection methods, like the radar rainfall I showed, our historical records are extremely sparse, especially for data that varies so significantly across space. Imagine trying to recreate the Mona Lisa from scratch with just a dozen random pixels. Most of the rain gauges we use to estimate flood probabilities have never even seen an event of the magnitude we’re trying to use them to predict. There’s a whole lot of extrapolation going on.
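
Here’s a toy version of that sampling problem, assuming a completely made-up rainfall field with one small, intense cell in it and just four randomly placed “gauges”:

```python
# Toy illustration of sparse gauges sampling a spatially variable storm.
# The rainfall "field" is synthetic; it just stands in for the kind of
# variability visible in the radar loop.
import numpy as np

rng = np.random.default_rng(1)
nx = ny = 100  # a 100 x 100 grid of 1 km cells

x, y = np.meshgrid(np.arange(nx), np.arange(ny))
background = rng.gamma(shape=2.0, scale=0.5, size=(ny, nx))              # light, widespread rain (~1 inch)
cell = 12.0 * np.exp(-((x - 70) ** 2 + (y - 40) ** 2) / (2 * 5.0 ** 2))  # one intense, ~10-km-wide cell
field = background + cell

# Sample the field at four random "gauge" locations
gx = rng.integers(0, nx, size=4)
gy = rng.integers(0, ny, size=4)
gauges = field[gy, gx]

print(f"True maximum over the basin: {field.max():.1f} in")
print(f"Maximum seen by the gauges:  {gauges.max():.1f} in")
print(f"Basin-average rainfall:      {field.mean():.1f} in")
```

Unless one of those four points happens to land inside the cell, the gauge network reports an unremarkable rain event while one corner of the basin is getting hammered.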

To hammer this point home: these are the 24-hour rainfall totals for the flood, and you can see that even within this single watershed, some areas saw extreme precipitation, while others just got an inch or 25 millimeters of rain. And actually, I mapped the percentage of the 100-year rainfall that this storm amounted to, and you can see, at least in the Upper Guadalupe Basin, only a small area got close to the 100-year rainfall. For most of the watershed, this was more like a 2- or a 5-year storm.

And here’s what makes this even tougher: When we’re talking about flooding, we don’t actually care too much about rainfall. We care about the outcome of rainfall, specifically the rise in a river or stream. Here’s the graph of a stream gage upstream of Hunt during the flood. You can see that, starting around 2:00 on the morning of the 4th, the river rose by 20 feet or 6 meters in three-and-a-half hours. A little further downstream, similar story. Starting at 2 AM, the river went up 35 feet or nearly 11 meters in 3 hours before the gage broke. That is a staggeringly fast increase. In a hydrologic sense, it’s practically a wall of water. And the results were devastating. In Kerr County, there just wasn’t enough time to coordinate an evacuation. More than 100 people were killed, many of them children. So a rain gauge here, or here, or here would have completely missed the fact that the watershed it was within was experiencing the flood of record.

That’s the value of measuring the thing you actually care about. Just like precipitation, you can take historical stream gage data, fit it to a probability distribution, and get a sense of the likelihood of major floods in the future. But these gages are even more sparse in coverage than rain gauges, their records often don’t go back as far, they’re a lot more expensive to install and maintain, and, as we saw in one graph, they can go offline, ironically as a result of flooding, completely missing the peak. Engineers and hydrologists often visit the affected area after a flood to map the high-water line and validate the data from stream gages (or to fill in the gaps if one breaks). So, although they serve an extremely important role, most of the time when engineers are trying to predict flooding or its effect on infrastructure and the built world, instead of using stream gages, they’re using hydrologic models to convert rainfall into runoff and flooding, a process that introduces a whole new set of uncertainties into the mix.
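
To give a flavor of what “converting rainfall into runoff” looks like at its very simplest, here’s a sketch of the SCS curve number method, one common building block in those hydrologic models. The rainfall depth and curve numbers below are purely illustrative, not values for the Guadalupe basin:

```python
# Sketch of the SCS curve number method, one simple rainfall-to-runoff step.
# Real hydrologic models add unit hydrographs, routing, and many other pieces;
# the curve numbers and rainfall depth here are purely illustrative.

def scs_runoff_depth(rain_in: float, curve_number: float) -> float:
    """Direct runoff depth (inches) from the SCS curve number equation."""
    s = 1000.0 / curve_number - 10.0   # potential maximum retention (inches)
    ia = 0.2 * s                       # initial abstraction (losses before runoff starts)
    if rain_in <= ia:
        return 0.0
    return (rain_in - ia) ** 2 / (rain_in - ia + s)

for cn in (70, 85, 92):                # lower CN = more infiltration, higher CN = more runoff
    q = scs_runoff_depth(10.0, cn)     # 10 inches of rain, just as an example
    print(f"CN {cn}: {q:.1f} in of runoff from 10 in of rain")
```

Every one of those inputs - the curve number, the losses, the rainfall itself - carries its own uncertainty, and they all stack on top of the rainfall statistics we just talked about.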

And there’s one more thing. Everything we’ve been talking about so far is predicated on a crucial underlying assumption: temporal stationarity, basically, the idea that the distribution of extreme events doesn’t change over time - or put another way - that future precipitation can be represented by past observations. But, even though those past observations are relatively sparse, in a lot of cases, we can already see that it’s probably not a great assumption. I understand this is a point of pretty strong contention in the public discourse. But within the professional community of hydrologists, engineers, and climate scientists, it’s not really a question of “is the climate changing” but more a question of how much, how quickly, and where the effects of that are most pronounced. For example, in the Texas Volume of Atlas 14, the team tested for long-term trends in the data. They found some scattered weather stations that did show an increase in extreme rainfall over time; most of them didn’t. Other studies have found more pronounced increases by looking at only the past few decades. So there are no broad statements that capture the complexity of the situation as we understand it, and importantly, this is a tough thing to figure out.
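
Just as a rough sketch of what a station-level check for non-stationarity can look like - the Atlas 14 team used more formal methods, and the record below is synthetic - a Mann-Kendall-style trend test on annual maxima boils down to something like this:

```python
# Rough sketch of a trend check on annual-maximum rainfall at one station,
# in the spirit of (but much simpler than) the stationarity testing done
# for Atlas 14. The data are synthetic, with a small drift added on purpose.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
years = np.arange(1950, 2025)
annual_max_in = rng.gumbel(loc=3.5, scale=1.2, size=years.size) + 0.01 * (years - 1950)

# Kendall's tau against time is the core of the Mann-Kendall trend test
tau, p_value = stats.kendalltau(years, annual_max_in)
print(f"Kendall tau = {tau:.2f}, p-value = {p_value:.3f}")
print("Trend detected at the 5% level" if p_value < 0.05 else "No significant trend at the 5% level")
```

Even with a real drift baked into the data, a single noisy station often fails to show a statistically significant trend, which is part of why the question is so hard to settle.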

Say you have 100 years of historical data. How many 100-year floods happened within that time? Could be a few. Could be none. So, especially for very extreme events on the 1-in-a-century scale, there’s a lot of uncertainty when it comes to teasing out any trends. That said, there is a strong consensus among the various climate models and recorded data that a warming atmosphere has already resulted in an overall increase in the intensity and frequency of rainfall, a trend that will likely continue. And you can see why that poses a problem. Particularly for infrastructure with a design life of 50 to 100 years, we need to design not just for the storms of today but for those decades in the future, and our current methods of doing that are, on average, systematically underestimating them if we assume a stationary climate.
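
That “could be a few, could be none” point is easy to put numbers on, assuming independence and a true 1% annual probability; the binomial arithmetic looks like this:

```python
# How many "100-year" events should a 100-year record contain?
# Assumes independence and a true 1% annual exceedance probability.
from math import comb

p, n = 0.01, 100
for k in range(4):
    prob = comb(n, k) * p**k * (1 - p) ** (n - k)
    print(f"P(exactly {k} in {n} years) = {prob:.1%}")

# P(exactly 0 in 100 years) = 36.6%
# P(exactly 1 in 100 years) = 37.0%
# P(exactly 2 in 100 years) = 18.5%
# P(exactly 3 in 100 years) = 6.1%
```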

Just to be clear, I’m not trying to blame a flood on climate change. Although attribution studies can estimate the contribution of extra energy in the climate system, there’s no way to ascribe any particular weather event to global warming deterministically. For many places, it might not even be a major source of uncertainty compared to all the other factors I’ve mentioned when it comes to predicting the magnitude of future floods. My point is that it’s just one more confounding aspect of estimating flood risks. And it gets to the heart of the entire issue. Because why does any of this even matter?

There’s been a lot of discourse about what should have happened before the storm and what should be done in its wake. But before you can take any action to mitigate flood impacts, you have to know what the actual risks are. On the upper Guadalupe, we’ve seen it with our eyes, but how many similar watersheds just got lucky that night, or really, any night? I think you’ll agree with me that this is complicated stuff. And humans are notoriously bad at using probabilities and risks to make decisions. Almost nothing in our biology is optimized for long-term, rational decision-making about rare and extreme events. Almost every day of everyone’s lives, there’s not a flood. That makes it really tough to consider it as a priority and devote resources toward preparations. And I think part of the problem is that we rarely talk about the uncertainties.

Even within the field of engineering, where we should know better, we have a strong tendency to treat everything deterministically. It sure makes things a lot simpler. Take the bold number in the table, plug it into your equations and computer models, and just forget that those uncertainty bands even exist. In some ways, it makes sense. Ultimately, you do have to choose a number: how high to build a bridge or how large a culvert to install, or how wide to make a spillway. But, in a lot of cases, those decisions get translated into a sort of confidence that doesn’t actually exist. The concept of the floodplain is a perfect example.

In the US, a lot of the framework for how we think about and prepare for floods comes out of the National Flood Insurance Program. And to participate in this program, communities are required to regulate what happens in the floodplain, or more specifically, what and how things get built there. And so, a fundamental part of regulating the floodplain is deciding where it actually is and isn’t. We’re not going to dive into that process, but billions of dollars have been invested in making these maps and keeping them up to date in the US.

If you take a look at one, it’s a lot to parse depending on the location. There are quite a few different hazard areas with different meanings. The simplest for riverine locations is the base flood, essentially the 100-year flood. Some maps show the 500-year flood as well. Many maps show the floodway, which is kind of the main part of the channel needed to pass floods, so it’s usually regulated more strictly. But there’s something I notice when I look at floodplain maps. All of these zones are bordered with nice crisp lines. You’re inside the floodplain here, and you’re outside of it here. And property owners often go to great lengths to refine these maps; to shift the line just slightly and reduce their regulatory responsibilities. But consider everything we’ve talked about with estimating flood risk and ask yourself, what’s the difference in the risk profile between here and here? Is it enough to have a sharp line between them? And if not - if the true situation is more nebulous - is the map doing a good job of communicating flood risk to the public?

Because, just to be clear, that is one of the stated purposes of floodplain maps. Of course you need to delineate zones clearly to be able to regulate where permits are required and where buildings can be built and so on. But, to me at least, it sends a complicated message to have this binary definition of inside the floodplain or outside of it as a way to explain to individuals, homeowners, renters, and the general public about the risks that they’re actually exposed to.

You look at these maps and there is absolutely no indication of uncertainty, despite the fact that almost every step of the process that goes into creating them has huge margins of error. And then, when we get more historical data, or land uses change, or our understanding of the floodplain evolves, and we try to change the map, that immediately sows distrust. You hear it all the time (at least if you run in similar circles as I do): “We’ve had two hundred-year floods in the past 5 years. These engineers don’t know what they’re talking about…” Part of that, of course, is just a misunderstanding about what the hundred-year flood actually means, but part of it is that we don’t do a good job of communicating risk and uncertainty. The meteorologists get the same thing. People get salty when forecasts are wrong without any acknowledgement at all that the job is essentially predicting the future. You know, it’s wizard stuff. Weather is really complicated, and I think we have a lot of room to grow in how we discuss and disseminate the things we don’t know for sure.

Because flooding is capricious. If you look back at the maps from July 4, you can see a lot of places where rainfall was more intense than in Kerr County and the Guadalupe River. Many areas of central Texas received more than the 100-year, 24-hour precipitation from Atlas 14. And there were severe storms and flooding across the region in the days that followed as well. But nearly all the fatalities happened in this one place. I don’t have a good answer for why. Maybe some combination of timing, warning systems, the rural location, differences in floodplain regulations, and plain bad luck. I think scientists, engineers, and emergency planners can probably learn a lot by simply comparing the flooding between Kerr County and some of the other areas in central Texas hit by this storm system, and why the outcomes were so drastically different.

My heart goes out to the victims and their families who were affected by this flood. I’ve been thinking so much about it in the weeks since, and why these kinds of risks can go so underappreciated that we wouldn’t bat an eye at having such a large population of people sleeping in the floodplain of a flashy watershed.

I think there are a lot of lessons to learn here, but the one that keeps coming back to me is about communication. People can’t act to reduce their risk unless they can internalize what it actually is. Professionals think about these issues every day; they have technical training, knowledge, and experience to make informed decisions about infrastructure, land use, and zoning. But most people don’t have the same cognizance of the hazards. You can’t blame them. It’s a crazy world we live in, and even individuals who live, work, and play in areas at risk of flooding might not come face-to-face with the danger in their entire lives. Like I said, weather is complicated, and we don’t all have the headspace to try and understand spatial variability, annual exceedance probabilities, climate stationarity, and so on.

So I think the professional community has a responsibility to improve how we communicate flood risks to the public, not only for accessibility but honesty. We need to have language that anyone can grasp, but we also need to be better about acknowledging uncertainty. It sounds counterintuitive, but I think facing the limitations of our understanding head-on actually instills more trust than pretending like we have all the answers. And when people understand those uncertainties, they get a deeper appreciation for how flood hazards vary across the landscape, giving them more insight, not less, to prepare for what’s ahead. Thanks for watching, and let me know what you think.
