Public satisfaction with NHS at lowest ever level

Just 24% of Britons say they are satisfied with the NHS, citing waiting times and staff shortages, according to a long-running survey.

Council spending on special needs transport doubles

The rising cost of getting children to school is a key driver of current pressure on council budgets.

Sleeping supermassive black holes awakened briefly by shredded stars

A new investigation into an obscure class of galaxies known as Compact Symmetric Objects, or CSOs, has revealed that these objects are not entirely what they seem. CSOs are active galaxies that host supermassive black holes at their cores. Out of these monstrous black holes spring two jets traveling in opposite directions at nearly the speed of light. But in comparison to other galaxies that boast fierce jets, these jets do not extend out to great distances — they are much more compact. For many decades, astronomers suspected that CSOs were simply young and that their jets would eventually travel out to greater distances.

Now, reporting in three different papers in The Astrophysical Journal, a Caltech-led team of researchers has concluded that CSOs are not young but rather lead relatively short lives.

“These CSOs are not young,” explains Anthony (Tony) Readhead, the Robinson Professor of Astronomy, Emeritus, who led the investigation. “You wouldn’t call a 12-year-old dog young even though it has lived a shorter life than an adult human. These objects are a distinct species all of their own that live and die out in thousands of years rather than the millions of years that are common in galaxies with bigger jets.”

In the new studies, the team reviewed literature and past observations of more than 3,000 CSO candidates, verifying 64 as real and identifying an additional 15 CSOs. All these objects had been previously observed by the National Radio Astronomy Observatory’s Very Long Baseline Array (VLBA), funded by the National Science Foundation (NSF), and some had been observed by other high-resolution radio telescopes. “The VLBA observations are the most detailed in astronomy, providing images with details equivalent to measuring the width of a human hair at a distance of 100 miles,” Readhead says.
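
As a rough check on that analogy (illustrative arithmetic with assumed figures that are not in the study: a hair width of about 0.1 millimeters and 100 miles taken as roughly 161 kilometers), the implied angular scale comes out to about a tenth of a milliarcsecond, which is the sub-milliarcsecond regime the VLBA works in:

```python
# Back-of-the-envelope check of the "hair at 100 miles" analogy.
# Assumed values (not from the article): a human hair is ~0.1 mm wide.
hair_width_m = 1e-4          # ~0.1 mm
distance_m = 100 * 1609.34   # 100 miles in meters

# Small-angle approximation: angle (radians) = size / distance
angle_rad = hair_width_m / distance_m

# Convert radians to milliarcseconds (1 radian = 206,265 arcseconds)
angle_mas = angle_rad * 206265 * 1000
print(f"Implied angular scale: {angle_mas:.2f} milliarcseconds")  # ~0.13 mas
```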

The team’s analysis concludes that CSOs expel jets for 5,000 years or less and then die out. “The CSO jets are very energetic jets but they seem to shut off,” says Vikram Ravi, assistant professor of astronomy at Caltech and a co-author of one of the studies. “The jets stop flowing from the source.”

As for what is fueling the short-lived jets, the scientists believe the cause is a tidal disruption event (TDE), which occurs when a single star wanders too close to a supermassive black hole and is devoured.

“We think that a single star gets ripped apart, and then all that energy is channeled into jets along the axis the black hole is spinning around,” Readhead says. “The giant black hole starts out invisible to us, and then when it consumes a star, boom! The black hole has fuel, and we can see it.”

Readhead first suspected that CSOs might be fueled by TDEs back in the 1990s, but he says the idea went largely unnoticed by the scientific community. “The hypothesis was all but forgotten because years went by before observational evidence began to mount for TDEs,” he says. At the time of his original hypothesis, only three CSOs had been found.

Fast forward to 2020. Readhead, who had paused his studies of CSOs to delve into different problems in radio astronomy, decided it was time to revisit the topic. He gathered some of his colleagues together on Zoom, and they decided to comb through the literature and weed out objects that had been misclassified as CSOs. Over the next two years, the team investigated more than 3,000 CSO candidates, narrowing the group down to the few dozen that met the criteria for genuine CSOs.

Ultimately, a picture began to emerge of CSOs as an entirely distinct family with jets that die out much sooner than those of their gigantic brethren, such as Cygnus A, a galaxy that shoots out extremely powerful jets that glow brightly at radio wavelengths. Those jets stretch to distances of about 230,000 light-years in each direction and last tens of millions of years. In contrast, CSO jets extend to about 1,500 light-years at most and die out within about 5,000 years.
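
Putting those round numbers together gives a sense of scale (illustrative arithmetic only; 20 million years is an assumed stand-in for "tens of millions of years"): the average rate at which a CSO's radio structure grows works out to a sizable fraction of the speed of light, consistent with the near-light-speed jets described above, while Cygnus A's lobes have grown far more slowly on average.

```python
# Rough average growth rates implied by the sizes and lifetimes quoted above.
# (Illustrative only: the jet material itself moves at close to light speed;
# these figures describe how fast the overall structures expand on average.)
cso_extent_ly, cso_age_yr = 1_500, 5_000          # per-side extent, lifetime
cyg_extent_ly, cyg_age_yr = 230_000, 20_000_000   # assumed ~2e7 yr lifetime

# Light-years per year is directly the fraction of the speed of light.
print(f"CSO jets:      ~{cso_extent_ly / cso_age_yr:.2f} c on average")
print(f"Cygnus A jets: ~{cyg_extent_ly / cyg_age_yr:.3f} c on average")
```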

According to the astronomers, the CSO jets likely form when a supermassive black hole snacks on not just any star, but a substantial one.

“The TDEs we’ve previously seen only lasted for a few years,” Ravi says. “We think that the remarkable TDEs powering CSOs last far longer because the disrupted stars are very large in size, very massive, or both.”

By analyzing the varied collection of CSO radio images, the researchers say they can trace how the objects age over time, almost like looking at a photo album of a CSO’s life to observe how its jets evolve. The younger CSOs have shorter jets that are closer to the black holes, while the older objects have jets that extend farther out from their black holes. Though most of the jets die out, the scientists estimate that one in 100 will go on to become long-lived like those of Cygnus A. In those rare cases, the galaxies are likely merging with other galaxies, a turbulent process that provides a large quantity of fuel.

If the discoveries of Readhead and his team are confirmed with additional observations, the CSOs will provide a whole new avenue for studying how massive stars at the centers of galaxies interact with supermassive black holes.

“These objects are indeed a distinct population with their own distinct origin, and it is up to us now to learn more about them and how they came to be,” Readhead says. “Being able to study these objects on timescales of years to decades rather than millions of years has opened the door to a whole new laboratory for studying supermassive black holes and the many unexpected and unpredictable surprises they hold.”

Researchers find energy development and tree encroachment impact Wyoming pronghorn

While Wyoming is home to some of North America’s most abundant pronghorn populations, which have been largely stable in recent years, a new analysis shows that many herds are experiencing long-term declines in fawn production.

Those declines are primarily a result of oil and gas development and encroachment of trees, according to researchers from the University of Wyoming, the University of Florida, the University of Nebraska-Lincoln, the University of Arkansas and the Northern Plains Agricultural Research Laboratory. Their findings have been published in the journal Global Ecology and Conservation.

The study included data collected by the Wyoming Game and Fish Department for 40 pronghorn herds covering much of Wyoming — home to about half of North America’s population of the iconic animal — over a 35-year period from 1984 to 2019. In addition to analyzing the Game and Fish Department’s extensive information from annual pronghorn population surveys, the researchers looked at region-specific data regarding oil and gas development, roads, fire, invasive plants, tree encroachment and precipitation patterns.

“Long-term declines in (pronghorn) productivity were associated with increases in oil and gas development and woody encroachment,” wrote the research team, led by former University of Nebraska researcher Victoria Donovan, now with the University of Florida, and Professor Jeff Beck, of UW’s Department of Ecosystem Science and Management. They found that “both tree cover and oil and gas development have increased substantially across most herd units in Wyoming over the last 40 years.”

“Other drivers of global change viewed as threats to pronghorn — including nonnative annual grass invasions, wildfire, roads and increased winter precipitation — were not prominent drivers of long-term declines in pronghorn productivity,” the scientists concluded.

While oil and gas development already is widely recognized as impacting Wyoming’s rangelands and the species on those lands, the researchers noted that tree encroachment is not generally viewed as a threat to the state’s sagebrush ecosystems. That’s likely because average tree cover ranged from less than 1 percent to 18 percent across the 40 pronghorn herd unit areas.

But even low levels of invading trees have been shown to have drastic impacts on sagebrush-dependent wildlife, the scientists wrote. For Wyoming’s pronghorn, the increase in trees could be providing cover for predators; driving loss of forage associated with sagebrush and grassland cover; and causing pronghorn to avoid those areas.

The researchers suggest that efforts to prevent and manage tree growth amid sagebrush ecosystems could be important for Wyoming pronghorn to maintain their numbers. This could include manual removal of trees and controlled burning.

“Our results contribute to the overwhelming evidence that early management of invading trees within sagebrush habitat will help protect iconic rangeland species like pronghorn,” they wrote. “Preventative management and management applied in the early phases of encroachment is, thus, the most impactful and cost-effective approach.”

Researchers show that introduced tardigrade proteins can slow metabolism in human cells

University of Wyoming researchers have gained further insight into how tardigrades survive extreme conditions and shown that proteins from the microscopic creatures expressed in human cells can slow down molecular processes.

This makes the tardigrade proteins potential candidates in technologies centered on slowing the aging process and in long-term storage of human cells.

The new study, published in the journal Protein Science, examines the mechanisms used by tardigrades to enter and exit from suspended animation when faced by environmental stress. Led by Senior Research Scientist Silvia Sanchez-Martinez in the lab of UW Department of Molecular Biology Assistant Professor Thomas Boothby, the research provides additional evidence that tardigrade proteins eventually could be used to make life-saving treatments available to people where refrigeration is not possible — and enhance storage of cell-based therapies, such as stem cells.

Measuring less than half a millimeter long, tardigrades — also known as water bears — can survive being completely dried out; frozen to just above absolute zero (about minus 458 degrees Fahrenheit, when all molecular motion stops); heated to more than 300 degrees Fahrenheit; irradiated at doses several thousand times greater than a human could withstand; and even exposed to the vacuum of outer space.

They survive by entering a state of suspended animation called biostasis, using proteins that form gels inside of cells and slow down life processes, according to the new UW-led research. Co-authors of the study are from institutions including the University of Bristol in the United Kingdom, Washington University in St. Louis, the University of California-Merced, the University of Bologna in Italy and the University of Amsterdam in the Netherlands.

Sanchez-Martinez, who came from the Howard Hughes Medical Institute to join Boothby’s UW lab, was the lead author of the paper.

“Amazingly, when we introduce these proteins into human cells, they gel and slow down metabolism, just like in tardigrades,” Sanchez-Martinez says. “Furthermore, just like tardigrades, when you put human cells that have these proteins into biostasis, they become more resistant to stresses, conferring some of the tardigrades’ abilities to the human cells.”

Importantly, the research shows that the whole process is reversible: “When the stress is relieved, the tardigrade gels dissolve, and the human cells return to their normal metabolism,” Boothby says.

“Our findings provide an avenue for pursuing technologies centered on the induction of biostasis in cells and even whole organisms to slow aging and enhance storage and stability,” the researchers concluded.

Previous research by Boothby’s team showed that natural and engineered versions of tardigrade proteins can be used to stabilize an important pharmaceutical used to treat people with hemophilia and other conditions without the need for refrigeration.

Tardigrades’ ability to survive being dried out has puzzled scientists, as the creatures do so in a manner that appears to differ from that of a number of other organisms capable of entering suspended animation.

Leaked emails reveal child gender service concerns

BBC News has seen emails from senior NHS bosses saying some patients could be at risk.

Treatment for blindness-causing retinal detachment using viscous seaweed

In Korea, it is considered taboo to eat seaweed soup before an exam, for fear of failing it. The belief is rooted in the idea that the slippery nature of seaweed may cause a person to slip and falter during the test. The slick surface of seaweeds such as kelp is attributed to alginate, a mucilaginous substance. Notably, an intriguing study exploring the use of alginate for the treatment of retinal detachment has recently been published.

A collaborative effort between Professor Hyung Joon Cha from the Department of Chemical Engineering and the School of Convergence Science and Technology and Dr. Geunho Choi from the Department of Chemical Engineering at Pohang University of Science and Technology (POSTECH), and Professor Woo Jin Jeong, Professor Woo Chan Park, and Professor Seoung Hyun An from the Dong-A University Hospital’s Department of Ophthalmology has resulted in the creation of an artificial vitreous body for treating retinal detachment. The solution is based on a natural carbohydrate derived from algae. The research findings were recently published in Biomaterials, an international journal published by Elsevier.

The vitreous body is a gel-like substance that occupies the space between the lens and retina, contributing to the eye’s structural integrity. Retinal detachment occurs when the retina separates from the inner wall of the eye and pulls away into the vitreous cavity, potentially resulting in blindness in severe cases. While a common approach involves removing the vitreous body and substituting it with medical intraocular fillers like expandable gas or silicone oil, these fillers have been associated with various side effects.

To address these concerns, the research team employed a modified form of alginate, a natural carbohydrate sourced from algae. Alginate, also known as alginic acid, is widely utilized in various industries, including food and medicine, for its ability to create viscous products. In this research, the team crafted a medical composite hydrogel based on alginate, offering a potential alternative for vitreous replacement.

The hydrogel, which is highly biocompatible and has optical properties akin to those of the natural vitreous body, enables patients to preserve their vision post-surgery. Its distinctive viscoelasticity effectively regulates fluid dynamics within the eye, contributing to retinal stabilization and the elimination of air bubbles.

To validate the hydrogel’s stability and effectiveness, the team conducted experiments using animal models, specifically rabbit eyes, which closely resemble human eyes in structure, size, and physiological response. Implanting the hydrogel into rabbit eyes demonstrated its success in preventing the recurrence of retinal detachment, maintaining stability, and functioning well over an extended period without any adverse effects.

Professor Hyung Joon Cha of POSTECH, who led the study, remarked, “There is a correlation between retinal detachment and severe myopia and the prevalence of retinal detachment is increasing, particularly in young people. The incidence of retinal detachment cases in Korea rose by 50% in 2022 compared to 2017.” He expressed the team’s commitment by saying, “Our team will enhance and progress the technology to make the hydrogel suitable for practical use in real-world eye care through ongoing research.”

Professor Woo Jin Jeong from the Dong-A University Hospital stated, “The worldwide market for intraocular fillers is expanding at a rate of 3% per year.” He added, “We anticipate that the hydrogel we’ve created will prove beneficial in upcoming vitreoretinal surgeries.”

The research was sponsored by the Korea Medical Device Development Fund and the Mid-Career Research Program of the National Research Foundation of Korea.

Disposable bans will not work, says vape boss

British American Tobacco boss says bans overseas are not “effective” and the illegal market is large.

Engineering household robots to have a little common sense

From wiping up spills to serving up food, robots are being taught to carry out increasingly complicated household tasks. Many such home-bot trainees are learning through imitation; they are programmed to copy the motions that a human physically guides them through.

It turns out that robots are excellent mimics. But unless engineers also program them to adjust to every possible bump and nudge, robots don’t necessarily know how to handle these situations, short of starting their task from the top.

Now MIT engineers are aiming to give robots a bit of common sense when faced with situations that push them off their trained path. They’ve developed a method that connects robot motion data with the “common sense knowledge” of large language models, or LLMs.

Their approach enables a robot to logically parse many given household tasks into subtasks, and to physically adjust to disruptions within a subtask, so that the robot can move on without having to go back and start the task from scratch — and without engineers having to explicitly program fixes for every possible failure along the way.

“Imitation learning is a mainstream approach enabling household robots. But if a robot is blindly mimicking a human’s motion trajectories, tiny errors can accumulate and eventually derail the rest of the execution,” says Yanwei Wang, a graduate student in MIT’s Department of Electrical Engineering and Computer Science (EECS). “With our method, a robot can self-correct execution errors and improve overall task success.”

Wang and his colleagues detail their new approach in a study they will present at the International Conference on Learning Representations (ICLR) in May. The study’s co-authors include EECS graduate students Tsun-Hsuan Wang and Jiayuan Mao, Michael Hagenow, a postdoc in MIT’s Department of Aeronautics and Astronautics (AeroAstro), and Julie Shah, the H.N. Slater Professor in Aeronautics and Astronautics at MIT.

Language task

The researchers illustrate their new approach with a simple chore: scooping marbles from one bowl and pouring them into another. To accomplish this task, engineers would typically move a robot through the motions of scooping and pouring — all in one fluid trajectory. They might do this multiple times, to give the robot a number of human demonstrations to mimic.

“But the human demonstration is one long, continuous trajectory,” Wang says.

The team realized that, while a human might demonstrate a single task in one go, that task depends on a sequence of subtasks, or trajectories. For instance, the robot has to first reach into a bowl before it can scoop, and it must scoop up marbles before moving to the empty bowl, and so forth. If a robot is pushed or nudged into a mistake during any of these subtasks, its only recourse is to stop and start over from the beginning, unless engineers were to explicitly label every subtask and program or collect new demonstrations for recovering from each possible failure, so that the robot could self-correct in the moment.

“That level of planning is very tedious,” Wang says.

Instead, he and his colleagues found some of this work could be done automatically by LLMs. These deep learning models process immense libraries of text, which they use to establish connections between words, sentences, and paragraphs. Through these connections, an LLM can then generate new sentences based on what it has learned about the kind of word that is likely to follow the last.

For their part, the researchers found that in addition to sentences and paragraphs, an LLM can be prompted to produce a logical list of subtasks that would be involved in a given task. For instance, if queried to list the actions involved in scooping marbles from one bowl into another, an LLM might produce a sequence of verbs such as “reach,” “scoop,” “transport,” and “pour.”
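
A minimal sketch of that step might look like the following, where query_llm is a hypothetical placeholder for whatever language-model interface is available (it is not the function used by the MIT team), and the parsing assumes the model returns one short verb per line:

```python
def query_llm(prompt: str) -> str:
    """Hypothetical placeholder for a call to a large language model."""
    raise NotImplementedError

def decompose_task(task_description: str) -> list[str]:
    """Ask an LLM to break a household task into an ordered list of subtasks."""
    prompt = (
        "List, one per line and in order, the short verb-like steps a robot "
        f"needs to perform to accomplish this task: {task_description}"
    )
    response = query_llm(prompt)
    # Keep one lowercase label per non-empty line,
    # e.g. ["reach", "scoop", "transport", "pour"]
    return [line.strip().lower() for line in response.splitlines() if line.strip()]
```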

“LLMs have a way to tell you how to do each step of a task, in natural language. A human’s continuous demonstration is the embodiment of those steps, in physical space,” Wang says. “And we wanted to connect the two, so that a robot would automatically know what stage it is in a task, and be able to replan and recover on its own.”

Mapping marbles

For their new approach, the team developed an algorithm to automatically connect an LLM’s natural language label for a particular subtask with a robot’s position in physical space or an image that encodes the robot state. Mapping a robot’s physical coordinates, or an image of the robot state, to a natural language label is known as “grounding.” The team’s new algorithm is designed to learn a grounding “classifier,” meaning that it learns to automatically identify what semantic subtask a robot is in — for example, “reach” versus “scoop” — given its physical coordinates or an image view.
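
In outline, such a grounding classifier is an ordinary supervised model that maps robot-state features to subtask labels. The sketch below is an illustration rather than the paper’s implementation: it fits scikit-learn’s logistic regression to made-up end-effector coordinates, though the same idea applies to image features.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Demonstration trajectories: each sample is a robot state (here, x/y/z of the
# end effector plus a scoop-load reading), labeled with the subtask it belongs to.
# The numbers are made up purely to illustrate the interface.
X_train = np.array([
    [0.10, 0.20, 0.30, 0.0],   # reach
    [0.12, 0.22, 0.10, 0.0],   # scoop
    [0.30, 0.25, 0.25, 1.0],   # transport (marbles on the spoon)
    [0.50, 0.30, 0.20, 1.0],   # pour
])
y_train = ["reach", "scoop", "transport", "pour"]

grounding_clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

def current_subtask(robot_state: np.ndarray) -> str:
    """Return the LLM-named subtask the robot appears to be in right now."""
    return grounding_clf.predict(robot_state.reshape(1, -1))[0]
```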

“The grounding classifier facilitates this dialogue between what the robot is doing in the physical space and what the LLM knows about the subtasks, and the constraints you have to pay attention to within each subtask,” Wang explains.

The team demonstrated the approach in experiments with a robotic arm that they trained on a marble-scooping task. Experimenters trained the robot by physically guiding it through the task of first reaching into a bowl, scooping up marbles, transporting them over an empty bowl, and pouring them in. After a few demonstrations, the team then used a pretrained LLM and asked the model to list the steps involved in scooping marbles from one bowl to another. The researchers then used their new algorithm to connect the LLM’s defined subtasks with the robot’s motion trajectory data. The algorithm automatically learned to map the robot’s physical coordinates in the trajectories and the corresponding image view to a given subtask.

The team then let the robot carry out the scooping task on its own, using the newly learned grounding classifiers. As the robot moved through the steps of the task, the experimenters pushed and nudged the bot off its path, and knocked marbles off its spoon at various points. Rather than stop and start from the beginning again, or continue blindly with no marbles on its spoon, the bot was able to self-correct, and completed each subtask before moving on to the next. (For instance, it would make sure that it successfully scooped marbles before transporting them to the empty bowl.)
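
Conceptually, that recovery behavior amounts to asking the grounding classifier, after each motion, which stage the robot is actually in, and resuming from that stage rather than from the beginning. The sketch below assumes the current_subtask helper from the previous example, plus hypothetical execute and get_robot_state placeholders standing in for the robot’s motion playback and sensing routines:

```python
SUBTASKS = ["reach", "scoop", "transport", "pour"]  # from the LLM decomposition

def execute(subtask: str) -> None:
    """Hypothetical placeholder: replay the demonstrated motion for one subtask."""
    raise NotImplementedError

def get_robot_state():
    """Hypothetical placeholder: return the robot's current state features."""
    raise NotImplementedError

def run_with_self_correction() -> None:
    """Run the subtasks in order, falling back when a perturbation undoes progress."""
    i = 0
    while True:
        execute(SUBTASKS[i])
        if i == len(SUBTASKS) - 1:
            break  # final stage executed; task assumed complete in this simple sketch
        # Ask the grounding classifier which stage the robot is actually in now.
        # If a nudge knocked it back (e.g., marbles dropped during "transport"),
        # execution resumes from that earlier stage instead of restarting the task.
        i = SUBTASKS.index(current_subtask(get_robot_state()))
```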

“With our method, when the robot is making mistakes, we don’t need to ask humans to program or give extra demonstrations of how to recover from failures,” Wang says. “That’s super exciting because there’s a huge effort now toward training household robots with data collected on teleoperation systems. Our algorithm can now convert that training data into robust robot behavior that can do complex tasks, despite external perturbations.”

Novel electrochemical sensor detects dangerous bacteria

Researchers at Goethe University Frankfurt and Kiel University have developed a novel sensor for the detection of bacteria. It is based on a chip with an innovative surface coating. This ensures that only very specific microorganisms adhere to the sensor — such as certain pathogens. The larger the number of organisms, the stronger the electric signal generated by the chip. In this way, the sensor is able not only to detect dangerous bacteria with a high level of sensitivity but also to determine their concentration.

Each year, bacterial infections claim several million lives worldwide. That is why detecting harmful microorganisms is crucial — not only in the diagnosis of diseases but also, for example, in food production. However, the methods available so far are often time-consuming, require expensive equipment or can only be used by specialists. Moreover, they are often unable to distinguish between active bacteria and their decay products.

By contrast, the newly developed method detects only intact bacteria. It makes use of the fact that microorganisms only ever attack certain body cells, which they recognize from those cells’ specific sugar molecule structure. This matrix, known as the glycocalyx, differs depending on the type of cell and serves, so to speak, as an identifier for the body cells. To capture a specific bacterium, researchers therefore need only know the recognizable structure in the glycocalyx of its preferred host cell and use this as “bait.”

This is precisely what the researchers have done. “In our study, we wanted to detect a specific strain of the gut bacterium Escherichia coli — or E. coli for short,” explains Professor Andreas Terfort from the Institute of Inorganic and Analytical Chemistry at Goethe University Frankfurt. “We knew which cells the pathogen usually infects. We used this to coat our chip with an artificial glycocalyx that mimics the surface of these host cells. In this way, only bacteria from the targeted E. coli strain adhere to the sensor.”

E. coli has many short arms, known as pili, which the bacterium uses to recognize its host’s glycocalyx and cling onto it. “The bacteria use their pili to bind to the sensor in several places, which allows them to hang on particularly well,” says Terfort. In addition, the chemical structure of the artificial glycocalyx is such that microbes without the right arms slide off it — like an egg off a well-greased frying pan. This ensures that indeed only the pathogenic E. coli bacteria are retained.

But how were the scientists able to corroborate that bacteria really were attached to the artificial glycocalyx? “We bonded the sugar molecules to a conductive polymer,” explains Sebastian Balser, a doctoral researcher under Professor Terfort and the first author of the paper. “By applying an electrical voltage via these ‘wires’, we are able to read how many bacteria had bonded to the sensor.”

The study documents how effective this is: The researchers mixed pathogens from the targeted E. coli strain among harmless E. coli bacteria in various concentrations. “Our sensor was able to detect the harmful microorganisms even in very small quantities,” explains Terfort. “What’s more, the higher the concentration of the targeted bacteria, the stronger the emitted signals.”

The paper provides initial proof that the method works. As a next step, the working groups involved want to investigate whether it also holds up in practice. It is conceivable, for example, that it could be used in regions without hospitals offering sophisticated laboratory diagnostics.
