Tuesday, November 19, 2024

Living microbes discovered in Earth's driest desert

The Atacama Desert, which runs along the Pacific coast of Chile, is the driest place on the planet and, largely because of that aridity, hostile to most living things. Not to everything, though -- studies of the sandy soil have turned up diverse microbial communities. Studying the function of microorganisms in such habitats is challenging, however, because it is difficult to separate the genetic material of the living part of the community from that of the dead.

A new separation technique can help researchers focus on the living part of the community. This week in Applied and Environmental Microbiology, an international team of researchers describes a new way to separate extracellular (eDNA) from intracellular (iDNA) genetic material. The method provides insights into microbial life in low-biomass environments that were not possible with conventional DNA extraction methods, said Dirk Wagner, Ph.D., a geomicrobiologist at the GFZ German Research Centre for Geosciences in Potsdam, who led the study.

The microbiologists applied the novel approach to Atacama soil samples collected along a west-to-east swath from the ocean's edge to the foothills of the Andes. Their analyses revealed a variety of living and possibly active microbes in the most arid areas. A better understanding of eDNA and iDNA, Wagner said, can help researchers probe all microbial processes.

"Microbes are the pioneers colonizing this kind of environment and preparing the ground for the next succession of life," Wagner said. These processes, he said, aren't limited to the desert. "This could also apply to new terrain that forms after earthquakes or landslides where you have more or less the same situation, a mineral or rock-based substrate."

Most commercially available tools for extracting DNA from soils leave a mixture of living, dormant and dead cells from microorganisms, Wagner said. "If you extract all the DNA, you have DNA from living organisms and also DNA that can represent organisms that just died or that died a long time ago." Metagenomic sequencing of that DNA can reveal specific microbes and microbial processes. However, it requires sufficient good-quality DNA, Wagner added, "which is often the bottleneck in low-biomass environments."

To remedy that problem, he and his collaborators developed a process for filtering intact cells out of a mixture, leaving behind the eDNA: genetic fragments shed by dead cells into the sediment. The process involves multiple cycles of gentle rinsing, he said. In lab tests, they found that after 4 repetitions nearly all the DNA in a sample had been divided into the 2 groups.
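As a rough illustration of why a few repetitions suffice: if each gentle rinse captures a fixed fraction of the cells still unassigned to a pool, the unpartitioned remainder shrinks geometrically. A minimal Python sketch of that idea (the per-cycle capture rate below is a made-up assumption for illustration, not a figure from the paper):

```python
# Toy model of repeated-rinse partitioning. Illustrative only: the paper's
# actual protocol and per-cycle recovery rates are not specified here.
def remaining_unpartitioned(recovery_per_cycle=0.7, cycles=4):
    """Fraction of DNA still unassigned to the eDNA/iDNA pools after a
    given number of rinse cycles, assuming each cycle captures a fixed
    fraction of whatever remains."""
    remaining = 1.0
    for _ in range(cycles):
        remaining *= 1.0 - recovery_per_cycle
    return remaining

# With a hypothetical 70% capture per rinse, 4 cycles leave under 1%
# of the DNA unassigned:
print(f"{remaining_unpartitioned():.4f}")  # 0.0081
```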

When they tested soil from the Atacama Desert, they found Actinobacteria and Proteobacteria in all samples in both eDNA and iDNA groups. That's not surprising, Wagner said, because the living cells constantly replenish the store of iDNA as they die and degrade. "If a community is really active, then a constant turnover is taking place, and that means the 2 pools should be more similar to each other," he said. In samples collected from depths of less than 5 centimeters, they found that Chloroflexota bacteria dominated in the iDNA group.

In future work, Wagner said he plans to conduct metagenomic sequencing on the iDNA samples to better understand the microbes at work, and to apply the same approach to samples from other hostile environments. By studying iDNA, he said, "you can get deeper insights into the real active part of the community."

Journal Reference:

  1. Alexander Bartholomäus, Steffi Genderjahn, Kai Mangelsdorf, Beate Schneider, Pedro Zamorano, Samuel P. Kounaves, Dirk Schulze-Makuch, Dirk Wagner. Inside the Atacama Desert: uncovering the living microbiome of an extreme environment. Applied and Environmental Microbiology, 2024; DOI: 10.1128/aem.01443-24 

Courtesy:

American Society for Microbiology. "Living microbes discovered in Earth's driest desert." ScienceDaily. ScienceDaily, 14 November 2024. <www.sciencedaily.com/releases/2024/11/241114125607.htm>.


Sunday, November 17, 2024

AI headphones create a 'sound bubble,' quieting all sounds more than a few feet away

Imagine this: You're at an office job, wearing noise-canceling headphones to dampen the ambient chatter. A co-worker arrives at your desk and asks a question, but rather than needing to remove the headphones and say, "What?," you hear the question clearly. Meanwhile, the water-cooler chat across the room remains muted. Or imagine being in a busy restaurant and hearing everyone at your table while the other speakers and noise in the restaurant are quieted.

A team led by researchers at the University of Washington has created a headphone prototype that allows listeners to create just such a "sound bubble." The team's artificial intelligence algorithms, combined with a headphone prototype, allow the wearer to hear people speaking within a bubble with a programmable radius of 3 to 6 feet. Voices and sounds outside the bubble are quieted by an average of 49 decibels (approximately the difference between a vacuum cleaner and rustling leaves), even if the distant sounds are louder than those inside the bubble.
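Because decibels are logarithmic, a 49 dB reduction is dramatic in linear terms. A short Python conversion using the standard acoustics formulas (the 49 dB figure is from the paper; the rest is illustration):

```python
# Convert the reported attenuation to linear ratios.
attenuation_db = 49.0
pressure_ratio = 10 ** (attenuation_db / 20)  # sound-pressure amplitude
power_ratio = 10 ** (attenuation_db / 10)     # acoustic power/intensity
print(f"amplitude reduced ~{pressure_ratio:.0f}x, power ~{power_ratio:.0f}x")
# amplitude reduced ~282x, power ~79433x
```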

The team published its findings Nov. 14 in Nature Electronics. The code for the proof-of-concept device is available for others to build on. The researchers are creating a startup to commercialize this technology.

"Humans aren't great at perceiving distances through sound, particularly when there are multiple sound sources around them," said senior author Shyam Gollakota, a UW professor in the Paul G. Allen School of Computer Science & Engineering. "Our abilities to focus on the people in our vicinity can be limited in places like loud restaurants, so creating sound bubbles on a hearable has not been possible so far. Our AI system can actually learn the distance for each sound source in a room, and process this in real time, within 8 milliseconds, on the hearing device itself."

Researchers created the prototype with commercially available noise-canceling headphones. They affixed six small microphones across the headband. The team's neural network -- running on a small onboard embedded computer attached to the headphones -- tracks when different sounds reach each microphone. The system then suppresses the sounds coming from outside the bubble, while playing back and slightly amplifying the sounds inside the bubble (because noise-canceling headphones physically let some sound through).
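A minimal sketch of that per-frame loop, assuming a separation model that returns per-source signals with estimated distances. The interface, names, and gain values below are assumptions for illustration, not the authors' code:

```python
import numpy as np

# Hypothetical processing loop for one short audio frame. The real system
# runs a neural network on an embedded computer within ~8 ms per frame.
BUBBLE_RADIUS_FT = 4.0  # programmable between roughly 3 and 6 feet
INSIDE_GAIN = 1.2       # slight boost for in-bubble speech (assumed value)

def process_frame(mic_frames, separation_model):
    """mic_frames: (6, frame_len) array, one row per headband microphone.
    separation_model: assumed callable that splits the mixture into
    per-source signals with estimated distances in feet."""
    sources = separation_model(mic_frames)  # [(signal, distance_ft), ...]
    out = np.zeros(mic_frames.shape[1])
    for signal, distance_ft in sources:
        if distance_ft <= BUBBLE_RADIUS_FT:
            out += INSIDE_GAIN * signal  # keep and slightly amplify
        # sources outside the bubble are simply not mixed back in
    return out
```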

"We'd worked on a previous smart-speaker system where we spread the microphones across a table because we thought we needed significant distances between microphones to extract distance information about sounds," Gollakota said. "But then we started questioning our assumption. Do we need a big separation to create this 'sound bubble'? What we showed here is that we don't. We were able to do it with just the microphones on the headphones, and in real-time, which was quite surprising."

To train the system to create sound bubbles in different environments, the researchers needed a distance-based sound dataset collected in the real world, which was not available. To gather such a dataset, they put the headphones on a mannequin head. A robotic platform rotated the head while a moving speaker played noises from different distances. The team collected data with the mannequin system as well as with human users in 22 different indoor environments, including offices and living spaces.

The researchers believe the system works for a couple of reasons. First, the wearer's head reflects sounds, which helps the neural net distinguish sounds from various distances. Second, sounds (like human speech) contain multiple frequencies, each of which goes through different phases as it travels from its source. The team's AI algorithm, they believe, compares the phases of each of these frequencies to determine the distance of any sound source (a person talking, for instance).
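The raw cue itself is straightforward to compute. A toy Python example that extracts per-frequency phase differences between two microphone channels (this only exposes the cue the network is thought to exploit; it does not estimate distance itself):

```python
import numpy as np

def phase_differences(mic_a, mic_b, sample_rate=16000):
    """Per-frequency phase difference between two time-aligned microphone
    signals. How this pattern varies across frequencies carries distance
    information a network can learn from."""
    spec_a = np.fft.rfft(mic_a)
    spec_b = np.fft.rfft(mic_b)
    freqs = np.fft.rfftfreq(len(mic_a), d=1.0 / sample_rate)
    # phase difference per frequency bin, wrapped to (-pi, pi]
    dphi = np.angle(spec_a * np.conj(spec_b))
    return freqs, dphi
```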

Headphones like Apple's AirPods Pro 2 can amplify the voice of the person in front of the wearer while reducing some background noise. But these features work by tracking head position and amplifying the sound coming from a specific direction, rather than gauging distance. This means the headphones can't amplify multiple speakers at once, lose functionality if the wearer turns their head away from the target speaker, and aren't as effective at reducing loud sounds from the speaker's direction.

The system has been trained to work only indoors, because getting clean training audio is more difficult outdoors. Next, the team is working to make the technology function on hearing aids and noise-canceling earbuds, which requires a new strategy for positioning the microphones.

Additional co-authors are Malek Itani and Tuochao Chen, UW doctoral students in the Allen School; Sefik Emre Eskimez, a senior researcher at Microsoft; and Takuya Yoshioka, director of research at AssemblyAI. This research was funded by a Moore Inventor Fellow award, a UW CoMotion Innovation Gap Fund and the National Science Foundation.

Journal Reference:

  1. Tuochao Chen, Malek Itani, Sefik Emre Eskimez, Takuya Yoshioka, Shyamnath Gollakota. Hearable devices with sound bubbles. Nature Electronics, 2024; DOI: 10.1038/s41928-024-01276-z 

Courtesy:

University of Washington. "AI headphones create a 'sound bubble,' quieting all sounds more than a few feet away." ScienceDaily. ScienceDaily, 14 November 2024. <www.sciencedaily.com/releases/2024/11/241114161302.htm>.