September 24, 2023 11:40 AM
Historically, periods of rapid advancement and change have ushered in times of great uncertainty. Harvard economist John Kenneth Galbraith wrote about such a time in his 1977 book The Age of Uncertainty, in which he discussed the successes of market economics but also predicted a period of instability, inefficiency and social inequity.
Today, as we navigate the transformative waves of AI, we find ourselves on the cusp of a new era marked by similar uncertainties. However, this time the driving force isn’t merely economics — it’s the relentless march of technology, particularly the rise and evolution of AI.
Already, the impact of AI is becoming more discernible in daily life. From AI-generated songs and haikus written in the style of Shakespeare to self-driving vehicles, chatbots that can imitate lost loved ones and AI assistants that help us with work, the technology is becoming pervasive.
AI will soon become much more prevalent as the approaching AI tsunami builds. Wharton School professor Ethan Mollick recently wrote about the results of an experiment on the future of professional work. The experiment centered on two groups of consultants working for the Boston Consulting Group. Each group was given various common tasks; one group was able to use currently available AI to augment its efforts while the other was not.
Mollick reported: “Consultants using AI finished 12.2% more tasks on average, completed tasks 25.1% more quickly, and produced 40% higher quality results than those without.”
Of course, it is possible that problems inherent in large language models (LLMs), such as confabulation and bias, may cause this wave to simply dissipate, although that now appears unlikely. While the technology is already demonstrating its disruptive potential, it will take several years before the full power of the tsunami is felt. Here is a look at what is coming.
The next wave of AI models
The next generation of LLMs will be more sophisticated and more generalized than the current crop that includes GPT-4 (OpenAI), PaLM 2 (Google), LLaMA (Meta) and Claude 2 (Anthropic). It’s likely that there will also be a new and possibly very capable model entrant from xAI, Elon Musk’s new start-up. Capabilities like reasoning, common sense and judgment remain big challenges for these models. We can expect to see progress in each of these areas, however.
Among the next generation, The Wall Street Journal reported that Meta is working on an LLM that will be at least as capable as GPT-4. According to the report, this is expected sometime in 2024. It is reasonable to expect that OpenAI is also working on its next generation, although the company has been quiet about its plans. That silence likely will not last long.
Based on currently available information, the most substantive new model is “Gemini” from the combined Google Brain and DeepMind AI team. Gemini could far surpass anything available today. Alphabet CEO Sundar Pichai announced last May that training of the model was already underway.
Pichai said in a blog at that time: “While still early, we’re already seeing impressive multimodal capabilities not seen in prior models.”
Multimodal means the model can process and understand multiple types of data inputs (such as text and images), serving as the foundation for both text-based and image-based applications. The reference to capabilities not seen in prior models suggests there could be greater emergent, or unanticipated, qualities and behaviors. An emergent example from the current generation is the ability to write computer code, which was not an expected capability.
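To make the idea concrete, here is a minimal sketch of what a single multimodal request could look like, with text and an image handled by one model. Every name in it (TextPart, ImagePart, the generate function) is invented for illustration and does not reflect Gemini's or any vendor's actual API.

```python
# A minimal, illustrative sketch of a multimodal request. All names here
# (TextPart, ImagePart, generate) are hypothetical, invented for this
# example; they are not Gemini's or any vendor's actual interface.
from dataclasses import dataclass
from typing import List, Union

@dataclass
class TextPart:
    text: str

@dataclass
class ImagePart:
    image_bytes: bytes  # raw image data, e.g. the contents of a PNG file

Part = Union[TextPart, ImagePart]

def generate(parts: List[Part]) -> str:
    """Stub standing in for a multimodal model call: a single request can
    interleave text and images, and the model reasons over both at once."""
    modalities = ", ".join(
        "text" if isinstance(p, TextPart) else "image" for p in parts
    )
    return f"[model response conditioned on: {modalities}]"

# One prompt mixing modalities: a question about an attached image.
reply = generate([
    TextPart("What is unusual about this chart?"),
    ImagePart(b"<binary image data>"),  # placeholder bytes for the sketch
])
print(reply)
```

The point of the sketch is the shape of the interface: instead of separate text-only and image-only models, one request carries both kinds of input and the model conditions its answer on all of them together.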
A Swiss Army knife of AI models?
There have been reports that Google has given a small group of companies access to an early version of Gemini. One of those might be SemiAnalysis, a well-regarded semiconductor research company. According to a new post from the firm, Gemini could be 5 to 20X more advanced than the GPT-4 models now on the market.
Gemini’s design will likely be based on DeepMind’s Gato, disclosed in 2022. A VentureBeat article last year reported: “The deep learning [Gato] transformer model is described as a ‘generalist agent’ and purports to perform 604 distinct and mostly mundane tasks with varying modalities, observations and action specifications. It has been referred to as the Swiss Army Knife of AI models. It is clearly much more general than other AI systems developed thus far and in that regard appears to be a step towards AGI [artificial general intelligence].”
Towards artificial general intelligence (AGI)
Already, GPT-4 is thought to show “sparks of AGI,” according to Microsoft, able to “solve novel and difficult tasks that span mathematics, coding, vision, medicine, law, psychology and more, without needing any special prompting.” By leapfrogging all existing models, Gemini could indeed be a large step towards AGI. The speculation is that Gemini will be released in several levels of model capability, possibly over some months and perhaps beginning before the end of this year.
As impressive as Gemini is likely to be, even larger and more sophisticated models are expected. Mustafa Suleyman, the CEO and cofounder of Inflection AI and a cofounder of DeepMind, predicted during an Economist conversation that “in the next five years, the frontier model companies — those of us at the very cutting edge who are training the very largest AI models — are going to train models that are over a thousand times larger than what you see today in GPT-4.”
The applications and influence these models could have on our daily lives may be unparalleled, bringing great benefits as well as heightened dangers. Vanity Fair quotes David Chalmers, a professor of philosophy and neural science at NYU: “The upsides for this are enormous, maybe these systems find cures for diseases and solutions to problems like poverty and climate change, and those are enormous upsides.” The article also discusses the risks, citing expert predictions of horrific outcomes, including the possibility of human extinction, with probability estimates ranging from 1% to 50%.
The end of human-dominated history?
In the Economist conversation, historian Yuval Noah Harari said these coming advances in AI development will not mark the end of history, but “the end of human-dominated history. History will continue, with somebody else in control. I’m thinking of it as more an alien invasion.”
Suleyman countered that AI tools will not have agency; they cannot do anything beyond what humans empower them to do. Harari responded that this future AI could be “more intelligent than us. How do you prevent something more intelligent than you from developing agency?” An AI with agency could pursue actions that are not aligned with human needs and values.
These next-generation models represent the next step towards AGI and a future where AI becomes even more capable, integrated and indispensable for modern life. While there is ample reason to be hopeful, these expected new developments add even more impetus to calls for oversight and regulation.
The regulatory conundrum
Even the leaders of the companies that make frontier models agree that regulation is necessary. After many of them appeared jointly before the U.S. Senate on September 13, Fortune reported that they “loosely endorsed the idea of government regulations” but that “there is little consensus on what regulation would look like.”
The session was organized by Senator Chuck Schumer, who afterward discussed the challenges faced in developing appropriate regulations. He pointed out that AI is technically complicated, keeps changing and “has such a wide, broad effect across the whole world.”
It might not even be realistically possible to regulate AI. For one thing, much of the technology has been released as open-source software, meaning it is effectively out in the wild for anyone to use. This alone could make many regulatory efforts problematic.
Precaution logical and sensical
Some see the public statements by AI leaders in support of regulation as theatrics. MarketWatch reported the views of Tom Siebel, a long-time Silicon Valley executive and current CEO of C3 AI: “AI execs are playing rope-a-dope with lawmakers, asking them to please regulate us. But there is not enough money and intellectual capital to ensure millions of algorithms are safe. They know it is impossible.”
It may indeed be impossible, but we must make the attempt. As Suleyman noted in his Economist conversation: “This is the moment when we have to adopt a precautionary principle, not through any fear monger but just as a logical, sensical way to proceed.”
As AI rapidly progresses from narrow capabilities towards AGI, the promise is vast but the perils profound. This age of uncertainty demands our deepest conscience, wisdom and caution to develop these AI technologies for the benefit of humanity while averting extreme potential dangers.
Gary Grossman is senior VP of the technology practice at Edelman and global lead of the Edelman AI Center of Excellence.