AI that sees with sound, learns to walk and predicts seismic physics • TechCrunch

Research in machine learning and artificial intelligence, now a staple technology in practically every industry and company, is far too voluminous for anyone to read it all. This column, Perceptron, aims to collect some of the most relevant recent discoveries and papers, particularly in, but not limited to, artificial intelligence, and explain why they matter.

This month, engineers at Meta detailed two recent innovations from the depths of the company’s research labs: an AI system that compresses audio files and an algorithm that can accelerate protein-folding AI performance by 60x. Elsewhere, scientists at MIT revealed that they’re using spatial acoustic information to help machines better envision their environments, simulating how a listener would hear a sound from any point in a room.

Meta’s compression work isn’t exactly uncharted territory. Last year, Google announced Lyra, a neural audio codec trained to compress low-bitrate speech. But Meta claims that its system is the first to work for CD-quality, stereo audio, making it useful for commercial applications like voice calls.


An architectural diagram of Meta’s AI audio compression model. Image Credits: Meta

Using AI, Meta’s compression system, called EnCodec, can compress and decompress audio in real time on a single CPU core at rates of 1.5 kbps to 12 kbps. Compared to MP3, EnCodec can achieve a roughly 10x compression rate at 64 kbps without a perceptible loss in quality.
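Meta has released code and pretrained models for EnCodec. A minimal usage sketch, assuming the open-source encodec Python package and a placeholder audio file path:

```python
import torch
import torchaudio
from encodec import EncodecModel
from encodec.utils import convert_audio

# Load the pretrained 24 kHz model and pick a target bitrate in kbps
# (supported settings span the 1.5-12 kbps range cited above).
model = EncodecModel.encodec_model_24khz()
model.set_target_bandwidth(6.0)

# Load a waveform and match the model's expected sample rate and channels.
wav, sr = torchaudio.load("speech_sample.wav")  # placeholder path
wav = convert_audio(wav, sr, model.sample_rate, model.channels).unsqueeze(0)

with torch.no_grad():
    encoded_frames = model.encode(wav)            # compact discrete codes
    reconstructed = model.decode(encoded_frames)  # decode back to audio
```

The discrete codes are what would actually be stored or transmitted; decoding reconstructs a waveform that, per Meta’s claims, loses little perceptible quality.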

The researchers behind EnCodec say that human evaluators preferred the quality of audio processed by EnCodec over audio processed by Lyra, suggesting that EnCodec could eventually be used to deliver better-quality audio in situations where bandwidth is constrained or at a premium.

As for Meta’s protein-folding work, it has less immediate commercial potential. But it could lay the groundwork for important scientific research in the field of biology.


Protein structures predicted by Meta’s system. Image Credits: Meta

Meta says that its AI system, ESMFold, predicted the structures of around 600 million proteins from bacteria, viruses and other microbes that have yet to be characterized. That’s more than triple the number of structures that Alphabet-backed DeepMind was able to predict earlier this year, which covered nearly every protein from known organisms in DNA databases.


Meta’s system isn’t as accurate as DeepMind’s. Of the roughly 600 million proteins it generated, only a third were “high quality.” But it is 60 times faster at predicting structures, enabling it to scale structure prediction to much larger databases of proteins.
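ESMFold is open source as part of Meta’s fair-esm package, so the speed claims can be tested directly, hardware permitting. A minimal sketch under that assumption (the sequence below is a made-up placeholder, and a CUDA GPU is assumed):

```python
import torch
import esm

# Load the pretrained ESMFold model (a large download; GPU strongly advised).
model = esm.pretrained.esmfold_v1().eval().cuda()

# A short, made-up amino acid sequence used purely as a placeholder.
sequence = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"

with torch.no_grad():
    pdb_string = model.infer_pdb(sequence)  # predicted 3D structure as PDB text

with open("predicted_structure.pdb", "w") as f:
    f.write(pdb_string)
```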

Lest Meta receive all the attention, the company’s AI division also detailed this month a system designed for mathematical reasoning. The company’s researchers say their “neural problem solver” learned from a dataset of successful mathematical proofs to generalize to new and different kinds of problems.

Meta isn’t the first to build such a system. OpenAI developed its own theorem prover, built around the Lean proof assistant, which it announced in February. Separately, DeepMind has experimented with systems that can solve challenging mathematical problems in the study of symmetries and knots. But Meta claims that its neural problem solver was able to solve five times more International Math Olympiad (IMO) problems than any previous AI system, and that it outperformed other systems on widely used math benchmarks.
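For a sense of what such systems actually produce, here is a toy example in the Lean proof assistant (illustrative only, not drawn from Meta’s paper): a machine-checkable statement, closed once with a library lemma and once with the kind of tactic step a neural prover searches over.

```lean
-- A machine-checkable statement: addition on natural numbers commutes.
-- One proof simply reuses the existing library lemma.
theorem add_comm_toy (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b

-- A neural prover instead searches for a sequence of tactics that
-- closes the goal, such as this single rewrite.
example (a b : Nat) : a + b = b + a := by
  rw [Nat.add_comm]
```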

Meta suggests that math-solving AI could benefit the fields of software verification, cryptography and even aerospace.

Turning to MIT’s work, scientists there have developed a machine learning model that can capture how sounds in a room propagate through space. By modeling the acoustics, the system can learn a room’s geometry from audio recordings, which can then be used to build visual renderings of the room.
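As a rough illustration of the idea (a hypothetical toy in PyTorch, not MIT’s actual model), one can train a small network to map a sound source position and a listener position to an acoustic response; once trained, the network implicitly encodes the room:

```python
import torch
import torch.nn as nn

class AcousticField(nn.Module):
    # Maps (source x/y, listener x/y) to a short impulse-response spectrum.
    def __init__(self, ir_bins: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(4, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, ir_bins),
        )

    def forward(self, src_xy, lst_xy):
        return self.net(torch.cat([src_xy, lst_xy], dim=-1))

model = AcousticField()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# Stand-in training data: random positions with a synthetic response that
# decays with distance (real work would use measured or simulated room
# impulse responses, which also capture walls and obstacles).
src, lst = torch.rand(256, 2), torch.rand(256, 2)
dist = (src - lst).norm(dim=-1, keepdim=True)
target = torch.exp(-dist * torch.linspace(1.0, 8.0, 32))

for step in range(500):
    loss = ((model(src, lst) - target) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
```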

The researchers say the technology could be applied to virtual and augmented reality software, or to robots that have to navigate complex environments. In the future, they plan to enhance the system so that it can generalize to new and larger scenes, such as entire buildings or even whole towns and cities.


In Berkeley’s robotics labs, two separate teams are working to speed up the rate at which a quadrupedal robot can learn to walk and do other tricks. One team looked to combine the best-of-breed from among several other advances in reinforcement learning to allow a robot to go from blank slate to robust walking on uncertain terrain in just 20 minutes of real time.

“Perhaps surprisingly, we find that with several careful design decisions in terms of the task setup and algorithm implementation, it is possible for a quadrupedal robot to learn to walk from scratch with deep RL in under 20 minutes, across a range of different environments and surface types. Crucially, this does not require novel algorithmic components or any other unexpected innovation,” the researchers write.

Instead, they selected and combined some state-of-the-art methods and got remarkable results. You can read the paper here.

A demonstration of a robot dog from Professor Pieter Abbeel’s lab at UC Berkeley in 2022. (Photo courtesy of Philipp Wu/Berkeley Engineering)

The other locomotion-learning project, from (TechCrunch pal) Pieter Abbeel’s lab, could be described as “training an imagination.” The team gave the robot the ability to predict how its actions will play out, and though it starts out rather helpless, it quickly gains knowledge about the world and how it works. That leads to a better prediction process, which leads to better knowledge, and so on in a feedback loop until the robot is walking in under an hour. It learns just as quickly to recover from being pushed or otherwise “perturbed,” as the lingo has it. Their work is documented here.
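As a loose sketch of that predict-act-learn loop (a hypothetical toy in PyTorch, not the lab’s code), picture a one-dimensional “robot” that fits a dynamics model to real transitions, then improves its policy by backpropagating a cost through imagined rollouts of that model:

```python
import torch
import torch.nn as nn

# Ground-truth toy dynamics the "robot" must discover: a 1-D point mass
# nudged by its action, with a small constant drift.
def real_step(s, a):
    return s + 0.1 * a + 0.02

world_model = nn.Sequential(nn.Linear(2, 32), nn.Tanh(), nn.Linear(32, 1))
policy = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1), nn.Tanh())
wm_opt = torch.optim.Adam(world_model.parameters(), lr=1e-2)
pi_opt = torch.optim.Adam(policy.parameters(), lr=1e-3)
target = torch.tensor([1.0])  # the policy should drive the state to 1.0

for iteration in range(200):
    # 1) Act in the real world and record what actually happened.
    s = torch.randn(64, 1)
    with torch.no_grad():
        a = policy(s)
    s_next = real_step(s, a)

    # 2) Improve the world model's predictions from real experience.
    pred = world_model(torch.cat([s, a], dim=-1))
    wm_loss = ((pred - s_next) ** 2).mean()
    wm_opt.zero_grad(); wm_loss.backward(); wm_opt.step()

    # 3) Improve the policy "in imagination": roll the learned model
    #    forward and backpropagate the cost through the rollout.
    s = torch.randn(64, 1)
    cost = torch.zeros(())
    for _ in range(5):
        a = policy(s)
        s = world_model(torch.cat([s, a], dim=-1))
        cost = cost + ((s - target) ** 2).mean()
    pi_opt.zero_grad(); cost.backward(); pi_opt.step()
```

Each pass around the loop makes the model’s predictions better, which makes the imagined training more useful, which is exactly the feedback loop described above.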


Work with a potentially more immediate application came earlier this month out of Los Alamos National Laboratory, where researchers developed a machine learning technique to predict the friction that occurs during earthquakes, offering a possible path to forecasting them. Using a language model, the team says it was able to analyze the statistical features of seismic signals emitted from a fault in a laboratory earthquake machine to project the timing of the next quake.

“The model is not constrained by physics, but it predicts the physics, the actual behavior of the system,” said Chris Johnson, one of the research leads on the project. “Now we are making a future prediction from past data, which goes beyond describing the instantaneous state of the system.”
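The Los Alamos work applies a language-model-style sequence model to real lab data; as a toy stand-in (synthetic data, with an off-the-shelf scikit-learn regressor swapped in for the sequence model), regressing “time until the next slip event” from statistical features of the signal looks roughly like this:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

# Synthetic stick-slip signal: variance of the "acoustic emission" grows
# as the next failure approaches (a crude stand-in for real lab data).
rng = np.random.default_rng(0)
n, cycle = 200_000, 20_000
t = np.arange(n)
time_to_failure = (cycle - (t % cycle)) / cycle        # saw-tooth countdown
signal = rng.normal(0.0, 1 + 5 * (1 - time_to_failure))

# Statistical features over fixed windows, labeled with the countdown
# value at each window's end.
win = 1_000
windows = signal.reshape(-1, win)
features = np.stack([windows.std(1), np.abs(windows).mean(1),
                     np.percentile(windows, 95, axis=1)], axis=1)
labels = time_to_failure[win - 1 :: win]

model = GradientBoostingRegressor().fit(features[:150], labels[:150])
print("held-out R^2:", model.score(features[150:], labels[150:]))
```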


Image Credits: Dreamstime

The researchers say the technique is difficult to apply in the real world, because it isn’t clear whether there’s sufficient data to train the forecasting system on. Even so, they’re optimistic about applications, which could include anticipating damage to bridges and other structures.

Last week, a note of caution came from MIT researchers, who warned that neural networks used to simulate actual neural networks should be carefully examined for training bias.

Neural networks are, of course, modeled on the way our brains process and signal information, strengthening certain connections and groups of nodes. But that doesn’t mean the synthetic and the real work the same way. In fact, the MIT team found that neural network based simulations of grid cells (part of the nervous system) only produced similar activity when their creators carefully constrained them. If allowed to govern themselves, the way actual cells do, they didn’t produce the desired behavior.

That isn’t to say deep learning models are useless in this field; far from it, they’re very valuable. But, as Professor Ila Fiete said in an MIT news post: “they can be a powerful tool, but one has to be very circumspect in interpreting them and in determining whether they are truly making de novo predictions, or even shedding light on what it is that the brain is optimizing.”
