
  Earlier, DeepMind released a new "generalist" AI model called Gato. DeepMind, the artificial intelligence lab of parent company Alphabet, says the model can perform tasks as varied as playing Atari video games, captioning images, chatting, and stacking blocks with a physical robotic arm. All told, Gato can perform 604 different tasks.

  There is no doubt that Gato is a very capable model, and just a week after its release, many researchers are already somewhat obsessed with it.

  Nando de Freitas, one of DeepMind's top researchers and a co-author of the Gato paper, is one of them, and he couldn't hide his excitement. "The game is over!" he tweeted, meaning that with Gato, people have now found the path to artificial general intelligence -- AGI, the idea of an AI reaching human-like or superhuman intelligence. The main problem that remains to be solved on the way to AGI, he said, is one of scale: making models like Gato bigger and more capable.

  De Freitas' announcement unsurprisingly sparked widespread media coverage claiming that DeepMind was "nearing" human-level AI. And this isn't the first time the media has hyped beyond reality.

  In fact, similar grand announcements have accompanied many other powerful newly released AI models, such as OpenAI's text generator GPT-3 and its image generator DALL-E. Unfortunately, these eye-catching claims have led many in the field to overlook other important areas of AI research. The same is true for Gato.

  Mixing different skills is already present in some current AI models: the DALL-E model, for example, can generate images from textual descriptions, and other models can recognize both images and sentences using a single training technique. In addition, DeepMind's AlphaZero model learned to play Go as well as chess and shogi.

  But the key difference between AlphaZero and Gato is that AlphaZero could only learn one task at a time. After learning to play Go, if it wanted to learn a new game, it had to first forget everything it had learned before. That is, it could not learn to play two games at once.

  And here lies the power of Gato: it can learn multiple different tasks at the same time, which means it can switch between training for different skills without forgetting previously learned ones. Although this may seem like a small improvement, it is significant.

  However, Gato also has a limitation: it cannot perform different kinds of tasks simultaneously. Robots still need to learn "common sense knowledge" about the world from text, said Jacob Andreas, an assistant professor at MIT who works on artificial intelligence, natural language, and speech processing.

  If a robot acquired those abilities, it could be helpful in some situations. "That way, when you bring a robot into the kitchen and ask it to make a cup of tea for the first time, it would already know the steps involved in making tea and could figure out where the tea bags are located," Andreas said.

  But some outside researchers strongly disagree with de Freitas' claims. Artificial intelligence researcher Gary Marcus, for example, who has long been critical of deep learning, believes these systems are "far from 'intelligent.'" He said the hype surrounding Gato shows that the AI field is being undermined by a senseless "culture of triumphalism."

  A big problem with deep learning models that grab people's attention and excitement, he said, is that people expect them to reach near-human intelligence. When a model makes mistakes, people then blame the model itself, just as "if a person makes a mistake, people think it's the person that's the problem."



  He added: "And the truth is that nature keeps telling us this isn't going to work. But unfortunately, researchers in this field believe the headlines so strongly that they completely ignore it."

  Even Jackie Kay and Scott Reed, colleagues who worked with de Freitas on the Gato study, were cautious when asked for their views. On the prospect that Gato might lead toward AGI, Kay said, without showing much interest: "I don't think predictions like this are really credible or convincing. Predictions like this are the same as predicting the stock market, and I try to avoid them."

  Reed said the question is difficult to answer: "I think most people who work in machine learning will deliberately avoid answering these kinds of questions. It's hard to predict, but I do hope that one day it can be realized."

  In one sense, by hyping Gato as an almighty "generalist" and a step toward AGI, the ultimate victim in the artificial intelligence industry may be DeepMind itself. The reality is that the capabilities of current AI systems are still "narrow": today's AI can only perform a specific, restricted set of tasks, such as generating text.

  Some technologists, including some at DeepMind, believe that one day humans will develop "broader" AI systems that work as well as humans, or even better. While some call this artificial general intelligence, others deride it as a "belief in magic." Many top researchers, including Yann LeCun, chief AI scientist at Meta, are skeptical that it is even possible.

  Gato is a "generalist" in the sense that it can do many different things. But MIT's Andreas says that is not yet "general" AI, which would be able to flexibly perform new kinds of tasks on demand that the model has never been trained on -- and we are still far from that point.

  Even scaling up the model won't solve its inability to "learn lifelong," he said. Lifelong learning means that when a model is taught something, it understands all the implications and applies that newly learned ability in all subsequent tasks.

  Emmanuel Kahembwe, an artificial intelligence and robotics researcher and a member of the Black in AI group co-founded by Timnit Gebru, also believes that the hype surrounding the Gato model is harmful to the development of AI in general.

  "There are a lot of interesting topics in AI that deserve more attention but lack funding. Unfortunately, many of the big tech companies, and many of the researchers at those companies, aren't very interested in them," he said.

  Vilas Dhar, president of the Patrick J. McGovern Foundation, a charity that funds artificial intelligence projects, said tech companies often need to step back and reflect on why they are building what they are building.

  "The concept of AGI speaks to something deeply human -- the idea of becoming stronger by creating tools that make us stronger," he said. "And that's a really nice concept, but be aware that people can be distracted by it and forget to work on the real problems we face today that should be solved with AI."


