(Image credit: imaginima via Getty Images)
Artificial general intelligence (AGI) could be around the corner if Meta CEO Mark Zuckerberg has any say in it. The Facebook founder announced on Instagram that he is pouring more than $10 billion into computing infrastructure to develop AGI — AI that can match or surpass humans across a wide range of cognitively demanding tasks.
“Today I’m bringing Meta’s two AI research efforts closer together to support our long-term goals of building general intelligence, open-sourcing it responsibly, and making it available and useful to everyone in all of our daily lives,” Zuckerberg said Jan. 18 in a recorded message. “It’s clear that the next generation of services requires building full general intelligence — building the best AI assistants, AIs for creators, AIs for businesses and more. That needs advances in every area of AI.”
Unlike artificial intelligence (AI) systems today, which are highly specific and can’t comprehend nuance and context as well as humans, an AGI system would be able to solve problems in a wide range of environments, according to a 2019 essay published in the journal EMBO Reports. It would therefore mimic the key features of human intelligence, in particular learning and flexibility.
Achieving AGI may also feel like a point of no return for the human race — with Google CEO Sundar Pichai saying as far back as 2018 that the field of AI research is “more profound than electricity or fire.” Last year, dozens of experts and prominent figures — including OpenAI CEO Sam Altman and Microsoft founder Bill Gates — signed a statement stressing the collective need for humanity to mitigate “the risk of extinction from AI” alongside other societal-scale risks such as pandemics and nuclear war. That said, many scientists think humanity can never build AGI.
But Zuckerberg announced in an Instagram reel that the company is buying 350,000 Nvidia H100 graphics processing units (GPUs) — some of the most powerful graphics cards in the world — which are key to training today’s best AI models. This will more than double Meta’s total computing power for AI training, with Meta aiming to wield computing power equivalent to 600,000 H100 GPUs in total.
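The announced figures can be sanity-checked with some quick arithmetic, under the assumption that the numbers are additive H100-equivalents (Meta's existing fleet mixes GPU types, so "equivalent" here is an approximation):

```python
# Back-of-the-envelope check of the figures in the announcement.
new_h100s = 350_000       # H100 GPUs Meta says it is buying
target_equiv = 600_000    # total H100-equivalent compute Meta is aiming for

# Implied size of Meta's existing AI training fleet, in H100 equivalents
existing_equiv = target_equiv - new_h100s
print(existing_equiv)               # 250000

# The purchase exceeds the implied existing fleet, so it more than
# doubles Meta's AI training compute, consistent with the claim above.
print(new_h100s > existing_equiv)   # True
```

The check confirms internal consistency: 350,000 new GPUs on top of an implied ~250,000-H100-equivalent base yields the stated 600,000 total, which is indeed more than double.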
Nvidia’s H100 is the successor to the A100, the graphics card OpenAI used to train ChatGPT. The best available figures, based on unverified leaks, suggest OpenAI used roughly 25,000 Nvidia A100 GPUs for the chatbot’s training, although other estimates put the number lower.
Zuckerberg said this “absolutely massive amount of infrastructure” will be in place by the end of the year. Meta is currently training “Llama 3,” its answer to ChatGPT and Google’s Gemini, and Zuckerberg teased a roadmap that ultimately includes an AGI system.
Keumars is the technology editor at Live Science. He has written for a variety of publications including ITPro, The Week Digital, ComputerActive and TechRadar Pro. He holds a BSc in Biomedical Sciences, and has worked as a technology journalist for more than five years.