Google may have created the first general artificial intelligence, able to “compete” with the human mind


DeepMind, a company (owned by Google) that specializes in artificial intelligence, recently presented its new artificial intelligence, “Gato”. Unlike “classic” AIs, which specialize in a single task, Gato can perform more than 600 tasks, often better than humans. Controversy has erupted over whether this is really the first “artificial general intelligence” (AGI), and experts remain skeptical of DeepMind’s announcement.

Artificial intelligence is transforming many disciplines for the better. Increasingly specialized neural networks now produce results beyond human capabilities in many areas.

One of the major challenges in the field of AI is the creation of a system with artificial general intelligence (AGI), also called strong artificial intelligence. Such a system should be able to understand and carry out any task a person can do. It would therefore be able to compete with the human intellect, and perhaps even develop a certain kind of consciousness. Earlier this year, Google unveiled an AI that can write code like an average programmer. Recently, in this AI race, DeepMind announced the creation of Gato, an artificial intelligence presented by some as the world’s first AGI. The results are published on arXiv.

An unprecedented generalist-agent model

An AI system capable of handling multiple tasks is not new. Google, for example, recently began using a system in its search engine called the “Multitask Unified Model”, or MUM, which can handle text, images, and video to perform tasks ranging from researching cross-language variations in the spelling of a word to matching search queries with related images.

In fact, Senior Vice President Prabhakar Raghavan gave an impressive example of MUM in action, using the mock search query: “I’ve hiked Mount Adams and now I want to hike Mount Fuji next fall, what should I do differently to prepare?”. MUM allowed Google Search to show the differences and similarities between Mount Adams and Mount Fuji, and also surfaced articles discussing the equipment needed to climb the latter. Nothing surprising, one might say; but what is new with Gato is the diversity of the tasks tackled and the training method, all within one and the same system.

Gato’s guiding design principle is to train on the widest possible range of relevant data, covering multiple modalities such as images, text, proprioception, joint torques, button presses, and other discrete and continuous observations and actions.

To process this multimodal data, the scientists encode it into a flat sequence of “tokens”. These tokens represent the data in a form Gato can handle, allowing the system, for example, to determine which combination of words in a sentence makes grammatical sense. The sequences are batched and processed by a transformer neural network, the architecture commonly used in language processing. The same network, with the same weights, is used for all tasks, unlike traditional neural networks, where a separate set of weights would typically be trained for each task. In simple terms, the weights determine how much each piece of information contributes as it flows through the network and how the output is computed.
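To make the idea concrete, here is a minimal sketch, in Python, of how heterogeneous data can be flattened into a single token sequence, roughly in the spirit of Gato’s encoding. The helper names, the bin count, and the vocabulary offset are illustrative assumptions, not DeepMind’s actual code:

```python
# Illustrative sketch (not DeepMind's implementation): serializing text and
# continuous sensor values into one flat token sequence for a single model.

def tokenize_text(text, vocab):
    # Map each word to an integer ID (a real system would use subword units).
    return [vocab.setdefault(w, len(vocab)) for w in text.split()]

def tokenize_continuous(values, num_bins=1024, offset=32000):
    # Discretize continuous observations/actions (e.g. joint torques) into
    # bins, shifted past the text vocabulary so token ranges never collide.
    tokens = []
    for v in values:
        v = max(-1.0, min(1.0, v))                    # clip to [-1, 1]
        bin_id = int((v + 1.0) / 2.0 * (num_bins - 1))
        tokens.append(offset + bin_id)
    return tokens

vocab = {}
episode = (
    tokenize_text("stack the red block", vocab)       # instruction tokens
    + tokenize_continuous([0.12, -0.57, 0.98])        # proprioception tokens
    + tokenize_continuous([0.05, -0.33])              # action tokens
)
print(episode)  # one flat sequence that a single transformer can consume
```

Because every modality lands in one integer sequence, the same network with the same weights can be trained on all of it at once, which is the point of the design.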

With this representation, Gato can be trained and sampled like a standard large-scale language model, across a large number of datasets that include the experience of agents in both simulated and real-world environments, in addition to various natural-language and image datasets. When running, Gato uses the context of previously sampled tokens to determine the form and content of its responses.

Example of Gato in operation. The system “consumes” a sequence of previously sampled observation and action tokens to produce the next action. The new action is applied by the agent (Gato) to the environment (a game console in this illustration), a new set of observations is obtained, and the process repeats. © S. Reed et al., 2022.
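The loop described in that caption can be sketched as a few lines of Python. The `ToyEnv` and `ToyModel` classes below are hypothetical stand-ins, not DeepMind’s API; they only exist to show the observe–predict–act cycle:

```python
# Toy sketch of the agent-environment loop (hypothetical names, not Gato's
# real interface): the model autoregressively predicts the next action from
# the flat history of observation and action tokens.

class ToyEnv:
    """Stand-in environment (e.g. a game console) emitting token observations."""
    def __init__(self, target=3):
        self.steps = 0
        self.target = target
    def reset(self):
        self.steps = 0
        return [0]                        # initial observation token
    def step(self, action):
        self.steps += 1
        return [self.steps], self.steps >= self.target  # (obs, done)

class ToyModel:
    """Stand-in policy: 'predicts' by echoing the last token it saw."""
    def predict(self, context):
        return [context[-1]]

def run_episode(env, model, max_steps=100):
    context = []                          # flat token history
    obs = env.reset()
    for _ in range(max_steps):
        context.extend(obs)               # append observation tokens
        action = model.predict(context)   # sample the next action tokens
        context.extend(action)            # action joins the history too
        obs, done = env.step(action)      # apply action, observe the result
        if done:
            break
    return context

print(run_episode(ToyEnv(), ToyModel()))
```

Each turn of the loop grows the same token sequence that training used, which is why one model can drive an Atari game, caption an image, or chat, depending on what the context contains.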

The results are somewhat mixed. When it comes to dialogue, Gato falls well short of GPT-3, OpenAI’s text-generation model, and can give wrong answers during conversations; it replied, for example, that Marseille was the capital of France. The authors point out that this could be improved with further scaling.

It has nevertheless proven more capable in other areas: its designers claim that Gato outperforms human experts at least half the time in 450 of the 604 tasks listed in the research paper.

Examples of tasks performed by Gato, as sequences of tokens. © S. Reed et al., 2022.

“The Game Is Over”, really?

Some AI researchers view AGI as an existential threat to humanity: in the worst-case scenario, a “superintelligent” system surpassing human intelligence would supplant it. Other experts believe it is simply not possible for such an AGI to emerge in our lifetime. This is the skeptical position Tristan Greene defends in his editorial on TheNextWeb. He explains that Gato could easily be mistaken for a real AGI; the difference, however, is that a general intelligence can learn to do new things without prior training.

The response to this article did not take long. On Twitter, Nando de Freitas, DeepMind researcher and professor of machine learning at Oxford University, declared that the game is over (“The Game Is Over”) in the long quest for artificial general intelligence. He added: “It’s about making these models bigger, safer, more compute-efficient, faster at sampling, with smarter memory, more modalities, new data, online/offline. Solving these challenges is what will deliver AGI.”

The authors themselves, however, warn against the development of such AGIs: “While generalist agents are still an emerging field of research, their potential impact on society calls for a thorough interdisciplinary analysis of their risks and benefits. […] Harm-mitigation tools for generalist agents are relatively underdeveloped and require further research before these agents are deployed.”

In addition, generalist agents capable of acting in the physical world pose new challenges that require new mitigation methods. For example, physical embodiment may lead users to anthropomorphize the agent, resulting in misplaced trust in a malfunctioning system.

Beyond the risk of an AGI turning against human interests, no data currently demonstrates an ability to produce robust results consistently. This is mainly because human problems are often hard, do not always have a solution, and allow for no prior training.

Despite Nando de Freitas’ response, Tristan Greene stood by his harsh assessment on TheNextWeb: “It is amazing to watch a machine pull off feats of misdirection and conjuring à la Copperfield, especially when you realize that said machine is no smarter than a toaster (and demonstrably dumber than the dumbest mouse).”

Whether or not one agrees with these statements, or takes a more optimistic view of the progress of AGI, it seems that the advent of such intelligences, capable of competing with our human minds, has not yet arrived.

Source: arXiv
