Is it the school, or the students?

Are schools that feature strong test scores highly effective, or do they mostly enroll students who are already well-prepared for success? A study co-authored by MIT scholars concludes that widely disseminated school quality ratings reflect the preparation and family background of a school’s students as much as, or more than, the school’s contribution to learning gains.

Indeed, the study finds that many schools that receive relatively low ratings perform better than these ratings would imply. Conventional ratings, the research makes clear, are highly correlated with race. Specifically, many published school ratings are highly positively correlated with the share of the student body that is white.

“A school’s average outcomes reflect, to some extent, the demographic mix of the population it serves,” says MIT economist Josh Angrist, a Nobel Prize winner who has long analyzed education outcomes. Angrist is co-author of a newly published paper detailing the study’s results.

The study, which examines the Denver and New York City school districts, has the potential to significantly improve the way school quality is measured. Instead of raw aggregate measures like test scores, the study uses changes in test scores and a statistical adjustment for racial composition to compute more accurate measures of the causal effects that attending a particular school has on students’ learning gains. This methodologically sophisticated research builds on the fact that Denver and New York City both assign students to schools in ways that allow the researchers to mimic the conditions of a randomized trial.

In documenting a strong correlation between currently used rating systems and race, the study finds that white and Asian students tend to attend higher-rated schools, while Black and Hispanic students tend to be clustered at lower-rated schools.

“Simple measures of school quality, which are based on the average statistics for the school, are invariably highly correlated with race, and those measures tend to be a misleading guide of what you can expect by sending your child to that school,” Angrist says.

The paper, “Race and the Mismeasure of School Quality,” appears in the latest issue of the American Economic Review: Insights. The authors are Angrist, the Ford Professor of Economics at MIT; Peter Hull, a professor of economics at Brown University; Parag Pathak, the Class of 1922 Professor of Economics at MIT; and Christopher Walters PhD ’13, an associate professor of economics at the University of California at Berkeley. Angrist and Pathak are both professors in the MIT Department of Economics and co-founders of MIT’s Blueprint Labs, a research group that often examines school performance.

The study uses data provided by the Denver and New York City public school districts, where 6th-graders apply for seats at middle schools through a centralized school-assignment system. Students can opt for any school in the district, but some schools are oversubscribed; in those cases, the district uses a random lottery number to determine who gets a seat where.

By virtue of the lottery inside the seat-assignment algorithm, otherwise-similar sets of students randomly attend an array of different schools. This facilitates comparisons that reveal causal effects of school attendance on learning gains, as in a randomized clinical trial of the sort used in medical research. Using math and English test scores, the researchers evaluated student progress in Denver from the 2012-2013 through the 2018-2019 school years, and in New York City from the 2016-2017 through 2018-2019 school years.
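To make the logic concrete, here is a minimal sketch, in Python, of the kind of lottery-based comparison this design permits. The data and numbers are hypothetical and purely illustrative; the study’s actual estimation is more involved.

```python
# Illustrative sketch (not the authors' code): comparing test-score gains for
# lottery winners vs. losers at an oversubscribed school. Because lottery numbers
# are random, the difference in mean gains estimates the school's causal effect.
import pandas as pd

# Hypothetical applicant-level data: one row per student who applied to School A.
df = pd.DataFrame({
    "won_lottery": [1, 1, 1, 0, 0, 0, 1, 0],
    "score_gain":  [0.32, 0.18, 0.25, 0.10, 0.05, 0.12, 0.28, 0.08],  # growth in standard deviations
})

effect = (df.loc[df.won_lottery == 1, "score_gain"].mean()
          - df.loc[df.won_lottery == 0, "score_gain"].mean())
print(f"Estimated causal effect of winning a seat: {effect:.2f} sd")
```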

Those school-assignment systems, it happens, are mechanisms some of the researchers have helped construct, allowing them to better grasp and measure the effects of school assignment.

“An unexpected dividend of our work designing Denver and New York City’s centralized choice systems is that we see how students are rationed from [distributed among] schools,” says Pathak. “This leads to a research design that can isolate cause and effect.”

Ultimately, the study shows that much of the school-to-school variation in raw aggregate test scores stems from the types of students at any given school. This is a case of what researchers call “selection bias.” In this case, selection bias arises from the fact that more-advantaged families tend to prefer the same sets of schools.

“The fundamental problem here is selection bias,” Angrist says. “In the case of schools, selection bias is very consequential and a big part of American life. A lot of decision-makers, whether they’re families or policymakers, are being misled by a kind of naïve interpretation of the data.”

Indeed, Pathak notes, the preponderance of more simplistic school ratings today (found on many popular websites) not only creates a deceptive picture of how much value schools add for students, but also has a self-reinforcing effect, since well-prepared and better-off families bid up housing costs near highly rated schools. As the scholars write in the paper, “Biased rating schemes direct households to low-minority rather than high-quality schools, while penalizing schools that improve achievement for disadvantaged groups.”

The research team hopes their study will lead districts to examine and improve the way they measure and report on school quality. To that end, Blueprint Labs is working with the New York City Department of Education to pilot a new ratings system later this year. They also plan additional work examining the way families respond to different sorts of information about school quality.

Given that the researchers are proposing to improve ratings in what they believe is a straightforward way, by accounting for student preparation and improvement, they think more officials and districts may be interested in updating their measurement practices.

“We’re hopeful that the simple regression adjustment we propose makes it relatively easy for school districts to use our measure in practice,” Pathak says.
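The paper’s exact specification is more elaborate, but a minimal sketch of a regression adjustment of this general kind might look as follows; the data are simulated purely to show how the residual from a regression on demographic composition yields a rating uncorrelated with race.

```python
# Minimal sketch of a regression adjustment (illustrative only, not the paper's exact model):
# regress school-average score growth on the school's demographic mix, then use the
# residual as a rating that is, by construction, uncorrelated with that mix.
import numpy as np

rng = np.random.default_rng(0)
n_schools = 200
share_white = rng.uniform(0, 1, n_schools)     # hypothetical demographic composition
true_quality = rng.normal(0, 1, n_schools)     # unobserved causal effect on learning
avg_growth = 0.8 * share_white + true_quality  # raw ratings mix quality with demographics

# Fit avg_growth = a + b * share_white by least squares and keep the residual.
b, a = np.polyfit(share_white, avg_growth, 1)
adjusted_rating = avg_growth - (a + b * share_white)

print(np.corrcoef(avg_growth, share_white)[0, 1])       # raw rating: strongly correlated with race
print(np.corrcoef(adjusted_rating, share_white)[0, 1])  # adjusted rating: ~0 correlation
```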

The research received support from the Walton Foundation and the National Science Foundation.


Making the future too bright: How wishful thinking can point us in the wrong direction

Everyone indulges in wishful thinking now and again. But when is that most likely to happen and when could it actually be harmful? A new study, led by the University of Amsterdam (UvA), demonstrates unequivocally that the greater the insecurity and anxiety of a situation, the more likely people are to become overly optimistic — even to the point where it can prevent us from taking essential action. The study’s results have now been published in the journal American Economic Review.

‘People aren’t purely truth-seekers — many beliefs are influenced by emotions and driven by what is pleasant or comforting. Like belief in an afterlife or optimism about health outcomes,’ says Joël van der Weele, professor of Economic Psychology at the UvA. Working alongside professor of Neuroeconomics Jan Engelmann and an international team, Van der Weele set out to answer whether people become overly optimistic when facing potential hardships. ‘So far studies haven’t provided clear evidence for wishful thinking, with many not backing up the idea,’ explains Engelmann. ‘But these mainly focused on positive outcomes, like winning a lottery. We examined how both positive and negative outcomes influence biased beliefs.’

Choosing the most pleasant outcome

Understanding self-deception and its causes is difficult in real-world situations. The study involved a set of experiments with over 1,700 participants, conducted in a lab and online. Participants were briefly shown various patterns, such as sets of differently oriented stripes or coloured dots, and were asked what kind of pattern they saw. Some of these patterns were linked to a negative outcome to induce anxiety, either a mild and non-dangerous electrical shock (in the lab) or a loss of money (online). ‘We wanted to see if people make more mistakes in recognising patterns associated with a negative outcome, thinking it was actually a harmless pattern. That would indicate wishful thinking,’ explains Van der Weele.
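As a rough illustration of the comparison this design sets up, the sketch below contrasts error rates on threat-paired versus neutral patterns using made-up counts; it is not the study’s analysis code.

```python
# Illustrative analysis sketch (not the study's code): do participants misidentify
# threat-paired patterns more often than neutral ones? The counts below are hypothetical.
from math import sqrt

errors_threat, trials_threat = 180, 600     # mistakes on patterns paired with a shock/loss
errors_neutral, trials_neutral = 120, 600   # mistakes on patterns with nothing at stake

p1 = errors_threat / trials_threat
p2 = errors_neutral / trials_neutral
p_pool = (errors_threat + errors_neutral) / (trials_threat + trials_neutral)

# Two-proportion z-test: a higher error rate on threat-paired patterns is the
# signature of wishful thinking in this design.
z = (p1 - p2) / sqrt(p_pool * (1 - p_pool) * (1 / trials_threat + 1 / trials_neutral))
print(f"error rate (threat) = {p1:.2%}, error rate (neutral) = {p2:.2%}, z = {z:.2f}")
```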

The study consistently found that participants were less likely to correctly identify patterns associated with a shock or loss. ‘The participants tended to see a pattern that aligned with what was more desirable,’ Engelmann says. ‘Previous research looked at wishful thinking related to positive outcomes and found mixed results, with many studies not finding an effect. Our study demonstrates very clearly that the negative emotion of anxiety about an outcome leads to wishful thinking.’

Making people more realistic

The researchers also tested interventions designed to make people more realistic. The first involved making the patterns easier to recognise. ‘Reducing uncertainty did indeed turn out to reduce wishful thinking,’ says Van der Weele. The second intervention was to offer higher potential earnings for correct pattern recognition. This intervention had little effect, except when participants could gather more evidence about the exact pattern they were shown. ‘When people had more time to collect evidence and were better rewarded for a correct answer, they became more realistic,’ explains Engelmann.

Finally, in the experiments where negative outcomes were replaced by positive outcomes, participants showed no wishful thinking. According to the authors this shows that reducing negative emotions can lessen overoptimism.

Wishful thinking in the ‘real world’

The authors recognise that wishful thinking can be useful because it helps us deal with bad feelings and manage uncertainty. Engelmann: ‘Wishful thinking is important for humans in coping with anxiety about possible future events.’ For Van der Weele and Engelmann, the concern is situations in which too much optimism stops people from getting the information they need or from acting in a way that would benefit them. ‘People can get too hopeful when things are uncertain. We observe this happening with climate change, when financial markets fluctuate, and even in personal health situations when people avoid medical help because they think everything will be fine. We need to know more about when wishful thinking helps and when it hurts.’


How the Crimean-Congo hemorrhagic fever virus enters our cells

Researchers at Karolinska Institutet, in collaboration with JLP Health and others, have identified how the tick-borne Crimean-Congo haemorrhagic fever virus enters our cells. The results are published in Nature Microbiology and are an important step in the development of drugs against the deadly disease.

Crimean-Congo haemorrhagic fever virus (CCHF virus) is spread through tick bites and can cause haemorrhagic fever. The disease is serious and has a mortality rate of up to 40 per cent depending on the health status of the person infected. Common symptoms include fever, muscle pain, abdominal pain, joint pain, vomiting and haemorrhaging that can cause organ failure.

The disease has spread to Europe

The virus is present in around 40 countries, in regions including Central Asia, the Middle East and parts of Africa. In recent years, the disease has spread to new geographical areas as a result of climate change, including Spain and France. The tick species that can spread the disease has also been observed in Germany and Sweden. There are currently no effective treatments for the disease.

In a new study, researchers at Karolinska Institutet in Sweden and others have found that the virus enters our cells via a protein on the cell surface: the so-called LDL receptor, which regulates blood cholesterol levels.

To identify the protein, the researchers used human mini-organs grown in test tubes and an advanced stem cell library from JLP Health. The same platform has previously been used to identify how the Ebola virus enters cells.

The results were also confirmed in tests on mice, which showed that mice lacking the LDL receptor did not get as sick as others.

Researchers want to trick the virus

The discovery is an important step towards developing drugs for Crimean-Congo haemorrhagic fever, according to Ali Mirazimi, adjunct professor at the Department of Laboratory Medicine, Karolinska Institutet, and one of the researchers behind the study.

“Once we know which receptor the virus uses, we can produce the receptor in test tubes and administer it as a drug,” he says. “Then we can trick the virus into binding to those receptors instead of to the cells and thus stop the virus from spreading in our bodies.”

This knowledge would be essential if the disease were to become more common and spread to new areas. Normally it takes many years to develop a drug, but the COVID-19 pandemic and the development of the SARS-CoV-2 vaccine showed that it can be done much faster if everyone decides it is a priority.

Ticks are spread by migratory birds

“This is an important step in our preparedness for the disease,” says Ali Mirazimi. “Crimean-Congo haemorrhagic fever is a disease we would rather not have. The ticks are spread by migratory birds and have already been found in Sweden. If the disease starts appearing in more places, we may already have a drug that we can take into clinical trials.”

The research was conducted in collaboration with the Medical University of Vienna, Austria, Helmholtz Centre for Infection Research, Germany, the National Institutes of Health, USA, and the company JLP Health. It was financed mainly by the Swedish Research Council and the EU. No conflicts of interest have been reported.


Study links PMS with perinatal depression

Women with premenstrual disorders are much more likely to have birth-related depression, researchers say.


Could assisted dying be coming to Scotland?

MSPs will get the chance to make Scotland the first part of the UK to let people legally end their lives.


‘We crowdfunded to help pay our son’s care costs’

Like TV presenter Kate Garraway, other families are struggling with the cost of care.


Artificial reef designed by MIT engineers could protect marine life, reduce storm damage

The beautiful, gnarled, nooked-and-crannied reefs that surround tropical islands serve as a marine refuge and natural buffer against stormy seas. But as the effects of climate change bleach and break down coral reefs around the world, and extreme weather events become more common, coastal communities are left increasingly vulnerable to frequent flooding and erosion.

An MIT team is now hoping to fortify coastlines with “architected” reefs — sustainable, offshore structures engineered to mimic the wave-buffering effects of natural reefs while also providing pockets for fish and other marine life.

The team’s reef design centers on a cylindrical structure surrounded by four rudder-like slats. The engineers found that when this structure stands up against a wave, it efficiently breaks the wave into turbulent jets that ultimately dissipate most of the wave’s total energy. The team has calculated that the new design could dissipate as much wave energy as existing artificial reefs while using 10 times less material.

The researchers plan to fabricate each cylindrical structure from sustainable cement, which they would mold in a pattern of “voxels” that could be automatically assembled, and would provide pockets for fish to explore and other marine life to settle in. The cylinders could be connected to form a long, semipermeable wall, which the engineers could erect along a coastline, about half a mile from shore. Based on the team’s initial experiments with lab-scale prototypes, the architected reef could reduce the energy of incoming waves by more than 95 percent.

“This would be like a long wave-breaker,” says Michael Triantafyllou, the Henry L. and Grace Doherty Professor in Ocean Science and Engineering in the Department of Mechanical Engineering. “If waves are 6 meters high coming toward this reef structure, they would be ultimately less than a meter high on the other side. So, this kills the impact of the waves, which could prevent erosion and flooding.”
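That translation from wave height to wave energy follows from standard linear wave theory, in which energy scales with the square of wave height; the quick check below (not taken from the paper) shows that cutting a 6-meter wave to under a meter removes well over 95 percent of its energy, consistent with the team’s lab results.

```python
# Back-of-envelope check using linear wave theory, where wave energy per unit area
# scales with the square of wave height: E = (1/8) * rho * g * H**2.
rho, g = 1025.0, 9.81          # seawater density (kg/m^3), gravitational acceleration (m/s^2)

def wave_energy(height_m):
    """Energy per unit sea-surface area (J/m^2) for a wave of the given height."""
    return 0.125 * rho * g * height_m ** 2

incoming, transmitted = 6.0, 1.0   # wave heights from the example in the quote (m)
dissipated = 1 - wave_energy(transmitted) / wave_energy(incoming)
print(f"Fraction of wave energy removed: {dissipated:.1%}")   # ~97%, consistent with >95%
```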

Details of the architected reef design are reported today in a study appearing in the open-access journal PNAS Nexus. Triantafyllou’s MIT co-authors are Edvard Ronglan SM ’23; graduate students Alfonso Parra Rubio, Jose del Aguila Ferrandis, and Erik Strand; research scientists Patricia Maria Stathatou and Carolina Bastidas; and Professor Neil Gershenfeld, director of the Center for Bits and Atoms; along with Alexis Oliveira Da Silva at the Polytechnic Institute of Paris, Dixia Fan of Westlake University, and Jeffrey Gair Jr. of Scinetics, Inc.

Leveraging turbulence

Some regions have already erected artificial reefs to protect their coastlines from encroaching storms. These structures are typically sunken ships, retired oil and gas platforms, and even assembled configurations of concrete, metal, tires, and stones. However, there’s variability in the types of artificial reefs that are currently in place, and no standard for engineering such structures. What’s more, the designs that are deployed tend to have a low wave dissipation per unit volume of material used. That is, it takes a huge amount of material to break enough wave energy to adequately protect coastal communities.

The MIT team instead looked for ways to engineer an artificial reef that would efficiently dissipate wave energy with less material, while also providing a refuge for fish living along any vulnerable coast.

“Remember, natural coral reefs are only found in tropical waters,” says Triantafyllou, who is director of the MIT Sea Grant. “We cannot have these reefs, for instance, in Massachusetts. But architected reefs don’t depend on temperature, so they can be placed in any water, to protect more coastal areas.”

The new effort is the result of a collaboration between researchers in MIT Sea Grant, who developed the reef structure’s hydrodynamic design, and researchers at the Center for Bits and Atoms (CBA), who worked to make the structure modular and easy to fabricate on location. The team’s architected reef design grew out of two seemingly unrelated problems. CBA researchers were developing ultralight cellular structures for the aerospace industry, while Sea Grant researchers were assessing the performance of blowout preventers in offshore oil structures — cylindrical valves that are used to seal off oil and gas wells and prevent them from leaking.

The team’s tests showed that the structure’s cylindrical arrangement generated a high amount of drag. In other words, the structure appeared to be especially efficient in dissipating high-force flows of oil and gas. They wondered: Could the same arrangement dissipate another type of flow, in ocean waves?

The researchers began to play with the general structure in simulations of water flow, tweaking its dimensions and adding certain elements to see whether and how waves changed as they crashed against each simulated design. This iterative process ultimately landed on an optimized geometry: a vertical cylinder flanked by four long slats, each attached to the cylinder in a way that leaves space for water to flow through the resulting structure. They found this setup essentially breaks up any incoming wave energy, causing parts of the wave-induced flow to spiral to the sides rather than crashing ahead.

“We’re leveraging this turbulence and these powerful jets to ultimately dissipate wave energy,” Ferrandis says.

Standing up to storms

Once the researchers identified an optimal wave-dissipating structure, they fabricated a laboratory-scale version of an architected reef made from a series of the cylindrical structures, which they 3D-printed from plastic. Each test cylinder measured about 1 foot wide and 4 feet tall. They assembled a number of cylinders, each spaced about a foot apart, to form a fence-like structure, which they then lowered into a wave tank at MIT. They then generated waves of various heights and measured them before and after passing through the architected reef.

“We saw the waves reduce substantially, as the reef destroyed their energy,” Triantafyllou says.

The team has also looked into making the structures more porous, and friendly to fish. They found that, rather than making each structure from a solid slab of plastic, they could use a more affordable and sustainable type of cement.

“We’ve worked with biologists to test the cement we intend to use, and it’s benign to fish, and ready to go,” he adds.

They identified an ideal pattern of “voxels,” or microstructures, that cement could be molded into, in order to fabricate the reefs while creating pockets in which fish could live. This voxel geometry resembles individual egg cartons, stacked end to end, and appears to not affect the structure’s overall wave-dissipating power.

“These voxels still maintain a big drag while allowing fish to move inside,” Ferrandis says.

The team is currently fabricating cement voxel structures and assembling them into a lab-scale architected reef, which they will test under various wave conditions. They envision that the voxel design could be modular, scalable to any desired size, and easy to transport and install in various offshore locations. “Now we’re simulating actual sea patterns, and testing how these models will perform when we eventually have to deploy them,” says Anjali Sinha, a graduate student at MIT who recently joined the group.

Going forward, the team hopes to work with beach towns in Massachusetts to test the structures on a pilot scale.

“These test structures would not be small,” Triantafyllou emphasizes. “They would be about a mile long, and about 5 meters tall, and would cost something like 6 million dollars per mile. So it’s not cheap. But it could prevent billions of dollars in storm damage. And with climate change, protecting the coasts will become a big issue.”

This work was funded, in part, by the U.S. Defense Advanced Research Projects Agency.


Persistent hiccups in a far-off galaxy draw astronomers to new black hole behavior

At the heart of a far-off galaxy, a supermassive black hole appears to have had a case of the hiccups.

Astronomers from MIT, Italy, the Czech Republic, and elsewhere have found that a previously quiet black hole, which sits at the center of a galaxy about 800 million light years away, has suddenly erupted, giving off plumes of gas every 8.5 days before settling back to its normal, quiet state.

The periodic hiccups are a behavior that has not been observed in black holes until now. The scientists believe the most likely explanation for the outbursts stems from a second, smaller black hole that is zinging around the central, supermassive black hole and slinging material out from the larger black hole’s disk of gas every 8.5 days.

The team’s findings, which will be published in the journal Science Advances, challenge the conventional picture of black hole accretion disks, which scientists had assumed are relatively uniform disks of gas that rotate around a central black hole. The new results suggest that accretion disks may be more varied in their contents, possibly containing other black holes, and even entire stars.

“We thought we knew a lot about black holes, but this is telling us there are a lot more things they can do,” says study author Dheeraj “DJ” Pasham, a research scientist in MIT’s Kavli Institute for Astrophysics and Space Research. “We think there will be many more systems like this, and we just need to take more data to find them.”

The study’s MIT co-authors include postdoc Peter Kosec, graduate student Megan Masterson, Associate Professor Erin Kara, Principal Research Scientist Ronald Remillard, and former research scientist Michael Fausnaugh, along with collaborators from multiple institutions, including the Tor Vergata University of Rome, the Astronomical Institute of the Czech Academy of Sciences, and Masaryk University in the Czech Republic.

“Use it or lose it”

The team’s findings grew out of an automated detection by ASAS-SN (the All Sky Automated Survey for SuperNovae), a network of 20 robotic telescopes situated in various locations across the northern and southern hemispheres. The telescopes automatically survey the entire sky once a day for signs of supernovae and other transient phenomena.

In December of 2020, the survey spotted a burst of light in a galaxy about 800 million light years away. That particular part of the sky had been relatively quiet and dark until the telescopes’ detection, when the galaxy suddenly brightened by a factor of 1,000. Pasham, who happened to see the detection reported in a community alert, chose to focus on the flare with NASA’s NICER (the Neutron star Interior Composition Explorer), an X-ray telescope aboard the International Space Station that continuously monitors the sky for X-ray bursts that could signal activity from neutron stars, black holes, and other extreme gravitational phenomena. The timing was fortuitous, as it was getting toward the end of Pasham’s year-long period during which he had permission to point, or “trigger,” the telescope.

“It was either use it or lose it, and it turned out to be my luckiest break,” he says.

He trained NICER to observe the far-off galaxy as it continued to flare. The outburst lasted for about four months before petering out. During that time, NICER took measurements of the galaxy’s X-ray emissions on a daily, high-cadence basis. When Pasham looked closely at the data, he noticed a curious pattern within the four-month flare: subtle dips, in a very narrow band of X-rays, that seemed to reappear every 8.5 days.
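The article does not spell out the period-search method, but one standard way to look for a repeating dip in an unevenly sampled X-ray light curve is a Lomb-Scargle periodogram; the sketch below uses simulated data and is purely illustrative, not the team’s pipeline.

```python
# One standard way to search for a repeating signal in an unevenly sampled light
# curve (illustrative; not necessarily the method used in the study).
import numpy as np
from astropy.timeseries import LombScargle

rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0, 120, 300))                  # observation times over ~4 months (days)
flux = 1.0 + 0.02 * rng.normal(size=t.size)            # baseline X-ray flux with noise
flux -= 0.05 * (np.sin(2 * np.pi * t / 8.5) > 0.95)    # narrow dips recurring every 8.5 days

frequency, power = LombScargle(t, flux).autopower()
best_period = 1 / frequency[np.argmax(power)]
print(f"Strongest period in the simulated light curve: {best_period:.1f} days")
```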

It seemed that the galaxy’s burst of energy periodically dipped every 8.5 days. The signal is similar to what astronomers see when an orbiting planet crosses in front of its host star, briefly blocking the star’s light. But no star would be able to block a flare from an entire galaxy.

“I was scratching my head as to what this means because this pattern doesn’t fit anything that we know about these systems,” Pasham recalls.

Punch it

As he was looking for an explanation to the periodic dips, Pasham came across a recent paper by theoretical physicists in the Czech Republic. The theorists had separately worked out that it would be possible, in theory, for a galaxy’s central supermassive black hole to host a second, much smaller black hole. That smaller black hole could orbit at an angle from its larger companion’s accretion disk.

As the theorists proposed, the secondary would periodically punch through the primary black hole’s disk as it orbits. In the process, it would release a plume of gas, like a bee flying through a cloud of pollen. Powerful magnetic fields, to the north and south of the black hole, could then slingshot the plume up and out of the disk. Each time the smaller black hole punches through the disk, it would eject another plume, in a regular, periodic pattern. If a plume happened to point in the direction of an observing telescope, the telescope would register it as a dip in the galaxy’s overall energy, as the ejected gas briefly blocked the disk’s light.

“I was super excited by this theory, and I immediately emailed them to say, ‘I think we’re observing exactly what your theory predicted,'” Pasham says.

He and the Czech scientists teamed up to test the idea, with simulations that incorporated NICER’s observations of the original outburst, and the regular, 8.5-day dips. What they found supports the theory: The observed outburst was likely a signal of a second, smaller black hole, orbiting a central supermassive black hole, and periodically puncturing its disk.

Specifically, the team found that the galaxy was relatively quiet prior to the December 2020 detection. The team estimates the galaxy’s central supermassive black hole is as massive as 50 million suns. Prior to the outburst, the black hole may have had a faint, diffuse accretion disk rotating around it, as a second, smaller black hole, measuring 100 to 10,000 solar masses, was orbiting in relative obscurity.

The researchers suspect that, in December 2020, a third object — likely a nearby star — swung too close to the system and was shredded to pieces by the supermassive black hole’s immense gravity — an event that astronomers know as a “tidal disruption event.” The sudden influx of stellar material momentarily brightened the black hole’s accretion disk as the star’s debris swirled into the black hole. Over four months, the black hole feasted on the stellar debris as the second black hole continued orbiting. As it punched through the disk, it ejected a much larger plume than it normally would, which happened to eject straight out toward NICER’s scope.

The team carried out numerous simulations to test the periodic dips. The most likely explanation, they conclude, is a new kind of David-and-Goliath system — a tiny, intermediate-mass black hole, zipping around a supermassive black hole.

“This is a different beast,” Pasham says. “It doesn’t fit anything that we know about these systems. We’re seeing evidence of objects going in and through the disk, at different angles, which challenges the traditional picture of a simple gaseous disk around black holes. We think there is a huge population of these systems out there.”

“This is a brilliant example of how to use the debris from a disrupted star to illuminate the interior of a galactic nucleus which would otherwise remain dark. It is akin to using fluorescent dye to find a leak in a pipe,” says Richard Saxton, an X-ray astronomer from the European Space Astronomy Centre (ESAC) in Madrid, Spain, who was not involved in the study. “This result shows that very close super-massive black hole binaries could be common in galactic nuclei, which is a very exciting development for future gravitational wave detectors.”

This research was supported, in part, by NASA.


Land under water: What causes extreme flooding?

When rivers overflow their banks, the consequences can be devastating, as the catastrophic floods in North Rhine-Westphalia and Rhineland-Palatinate in 2021 showed. In order to limit flood damage and optimise flood risk assessment, we need to better understand what factors can lead to extreme forms of flooding and to what extent. Using methods of explainable machine learning, researchers at the Helmholtz Centre for Environmental Research (UFZ) have shown that floods are more extreme when several factors are involved in their development. The research was published in Science Advances.

There are several factors that play an important role in the development of floods: air temperature, soil moisture, snow depth, and the daily precipitation in the days before a flood. In order to better understand how individual factors contribute to flooding, UFZ researchers examined more than 3,500 river basins worldwide and analysed flood events between 1981 and 2020 for each of them. The result: precipitation was the sole determining factor in only around 25% of the almost 125,000 flood events. Soil moisture was the decisive factor in just over 10% of cases, and snow melt and air temperature were the sole factors in only around 3% of cases. In contrast, 51.6% of cases were caused by at least two factors. The most frequent combination, at around 23% of cases, was precipitation together with soil moisture.

However, when analysing the data, the UFZ researchers discovered that three — or even all four — factors can be jointly responsible for a flood event. For example, temperature, soil moisture, and snow depth were decisive factors in around 5,000 floods whilst all four factors were decisive in around 1,000 flood events. And not only that: “We also showed that flood events become more extreme when more factors are involved,” says Dr Jakob Zscheischler, Head of the UFZ Department “Compound Environmental Risks” and senior author of the article. In the case of one-year floods, 51.6% can be attributed to several factors; in the case of five- and ten-year floods, 70.1% and 71.3% respectively can be attributed to several factors. The more extreme a flood is, the more driving factors there are and the more likely they are to interact in the event generation. This correlation often also applies to individual river basins and is referred to as flood complexity.

According to the researchers, river basins in the northern regions of Europe and America as well as in the Alpine region have a low flood complexity. This is because snow melt is the dominant factor for most floods regardless of the flood magnitude. The same applies to the Amazon basin, where the high soil moisture resulting from the rainy season is often a major cause of floods of varying severity. In Germany, the Havel and the Zusam, a tributary of the Danube in Bavaria, are river basins that have a low flood complexity. Regions with river basins that have a high flood complexity primarily include eastern Brazil, the Andes, eastern Australia, the Rocky Mountains up to the US west coast, and the western and central European plains. In Germany, this includes the Moselle and the upper reaches of the Elbe. “River basins in these regions generally have several flooding mechanisms,” says Jakob Zscheischler. For example, river basins in the European plains can be affected by flooding caused by the combination of heavy precipitation, active snow melt, and high soil moisture.

However, the complexity of flood processes in a river basin also depends on the climate and land surface conditions in the respective river basin. This is because every river basin has its own special features. Among other things, the researchers looked at the climate moisture index, the soil texture, the forest cover, the size of the river basin, and the river gradient. “In drier regions, the mechanisms that lead to flooding tend to be more heterogeneous. For moderate floods, just a few days of heavy rainfall is usually enough. For extreme floods, it needs to rain longer on already moist soils,” says lead author Dr Shijie Jiang, who now works at the Max Planck Institute for Biogeochemistry in Jena.

The scientists used explainable machine learning for the analysis. “First, we use the potential flood drivers air temperature, soil moisture, and snow depth as well as the weekly precipitation — each day is considered as an individual driving factor — to predict the run-off magnitude and thus the size of the flood,” explains Zscheischler. The researchers then quantified which variables and combinations of variables contributed to the run-off of a particular flood and to what extent. This approach is referred to as explainable machine learning because it uncovers the predictive relationship between flood drivers and run-off during a flood in the trained model. “With this new methodology, we can quantify how many driving factors and combinations thereof are relevant for the occurrence and intensity of floods,” adds Jiang.
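The article does not name the specific model or attribution method; one common way to implement this kind of per-event driver attribution is to train a tree-based regressor and compute Shapley-value contributions, for example with the shap library. The sketch below is a generic illustration with synthetic data, not the UFZ pipeline.

```python
# Generic illustration of driver attribution with Shapley values (not the UFZ code):
# train a model that predicts run-off from candidate flood drivers, then ask which
# drivers contributed most to each individual flood prediction.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(42)
n = 2000
X = pd.DataFrame({
    "precipitation_mm": rng.gamma(2.0, 10.0, n),   # hypothetical driver values
    "soil_moisture":    rng.uniform(0.1, 0.9, n),
    "snow_depth_cm":    rng.exponential(5.0, n),
    "air_temp_c":       rng.normal(8.0, 6.0, n),
})
# Synthetic run-off: precipitation matters most when soils are already wet.
y = (0.6 * X["precipitation_mm"] * X["soil_moisture"]
     + 0.3 * X["snow_depth_cm"] + rng.normal(0, 2, n))

model = GradientBoostingRegressor().fit(X, y)
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# Per-event attributions: for each flood, how much each driver pushed the predicted
# run-off up or down relative to the average prediction.
print(pd.DataFrame(shap_values, columns=X.columns).head())
```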

The findings of the UFZ researchers are expected to help predict future flood events. “Our study will help us better estimate particularly extreme floods,” says Zscheischler. Until now, very extreme floods have been estimated by extrapolating from less extreme floods. However, this is too imprecise because the individual contributing factors could change their influence for different flood magnitudes.


Katie Price warns about ‘damaging’ plastic surgery

The model says women have cosmetic procedures younger than she did, and they all look like “aliens”.
