Today is the day.
According to posts on X/Twitter on Friday from OpenAI co-founder and CEO Sam Altman, the company that ushered in the generative AI age is set to announce big updates to its signature chatbot ChatGPT and the underlying GPT-4 large language model (LLM) that powers it.
When and where will OpenAI’s Spring Update take place?
The event is set to kick off at 10 am Pacific / 1 pm Eastern Time and will be livestreamed on OpenAI’s website (openai.com) and its YouTube channel. As of this article’s posting, more than 5,000 viewers are already waiting for the event to start.
The YouTube stream itself opens earlier, at 9 am PT/12 pm ET, though the actual presentations and content are not expected to begin until an hour later.
What to expect from OpenAI’s Spring Updates event
Reports and rumors had swirled for weeks, including one from Reuters on May 10 suggesting OpenAI would release a search engine or product to rival Google (which is hosting its own I/O conference this week) and upstart AI unicorn Perplexity. But Altman put the kibosh on those on Friday, stating on X: “not gpt-5, not a search engine, but we’ve been hard at work on some new stuff we think people will love! feels like magic to me.”
not gpt-5, not a search engine, but we’ve been hard at work on some new stuff we think people will love! feels like magic to me.
monday 10am PT. https://t.co/nqftf6lRL1
— Sam Altman (@sama) May 10, 2024
The sentiment was echoed by OpenAI co-founder and president Greg Brockman, who stated vaguely in his own post on X that OpenAI would give a “live demo of some new work.”
If not a search engine product, what could OpenAI launch instead?
Activity by OpenAI employees on X has hinted at a conversational audio/voice assistant reminiscent of the character Samantha, voiced by Scarlett Johansson, in the 2013 sci-fi movie “Her.”
Altman “Liked” a post on X from self-described “College student, anarcho-capitalist, techno-optimist” Spencer Schiff that said: “Currently watching Her to prepare for Monday.”
Currently watching Her to prepare for Monday
— Spencer Schiff (@SpencerKSchiff) May 12, 2024
Meanwhile, at least 10 different OpenAI researchers and engineers have posted cryptic updates on X stating they are excited about what their colleagues are presenting, as shown in a handy video montage from AI influencer “@SmokeAwayyy.”
Among those OpenAI employees who have posted about their excitement for the event are: Aidan Clark, a researcher and former Google DeepMind employee; Mo Bavarian, a research scientist whose X account bio says they are working on optimization and architecture of LLMs at OpenAI; researcher Rapha Gontijo Lopes; technical staff member Chelsea Sierra Voss; Steven Heidel, described as working on fine-tuning at OpenAI; technical staff member Bogo Giertler; and technical staff member Javier “Javi” Soto.
Altman also used his verified Reddit account on Friday to answer questions from the public about OpenAI’s “Model Spec,” a new document published last week showing how the company wants its AI products to behave.
During that discussion, Altman responded with a side eye/looking emoji to a question from user “ankle_biter50” asking, “Will you making this new model mean that we will have chatGPT 4 and the current DALL-E free?”, indicating there may be some truth to the idea.
Adding to speculation that OpenAI will release a voice assistant of some kind, X user “@ananayarora” pointed out that the source code on OpenAI’s website shows “webRTC servers in place” to allow “phone calls inside of chatGPT” (thanks to Eluna.AI for spotting it).
OpenAI seems to be working on having phone calls inside of chatGPT. This is probably going to be a small part of the event announced on Monday.
(1/n) pic.twitter.com/KT8Hb54DwA
— Ananay (@ananayarora) May 11, 2024
In addition, X user “@testingcatalog” posted that the ChatGPT iOS app appears to have been updated recently with a new conversational user interface.
ChatGPT iOS app is getting its conversational mode prepared for the upcoming event on May 13. Some recent experiments:
– No model Selector
– Lite mode for the conversational UI
– Mute button on the Conversational UI pic.twitter.com/llOIAmBQDC
— TestingCatalog News (@testingcatalog) May 11, 2024
OpenAI has offered a ChatGPT with Voice interface on its iOS and Android apps since December 2023, allowing the chatbot to converse with users back and forth and take their voice queries through their smartphones, automatically attempting to detect when the user pauses or finishes speaking. It’s accessible in the mobile apps by tapping the headphones icon beside the text input bar.
OpenAI also gave ChatGPT the ability to “speak” responses to user prompts through AI-generated audio voices back in March 2024, adding a new feature called “Read Aloud” to its iOS, Android, and web apps that can be accessed by clicking the small “speaker” icon below a response.
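For developers curious how this kind of AI-generated speech works outside the app, here is a minimal sketch using OpenAI’s publicly documented text-to-speech API. It assumes the official openai Python package and an OPENAI_API_KEY environment variable; whether Read Aloud uses these exact models internally is not confirmed.

```python
from openai import OpenAI  # official openai Python SDK (v1+)

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Turn a text response into spoken audio with OpenAI's TTS endpoint.
# "tts-1" and the "alloy" voice are documented API options; they may or
# may not match what ChatGPT's Read Aloud feature uses under the hood.
speech = client.audio.speech.create(
    model="tts-1",
    voice="alloy",
    input="Hello! This is a spoken version of a ChatGPT response.",
)

# Save the returned MP3 audio to disk.
speech.write_to_file("response.mp3")
```

The headphones-icon Voice mode described above layers speech recognition on top of this kind of synthesis to handle the back-and-forth.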
The company further showcased a demo of its voice cloning technology later that month, which can convincingly reproduce a human speaker’s voice on newly typed text using only a 15-second recording of the original speaker. However, like its realistic AI video generator Sora, the OpenAI “Voice Engine” has not been publicly released and for now remains accessible only to selected third-party groups and testers, per OpenAI’s desire to avoid or limit misuse.
Presumably, a new audio conversational assistant from OpenAI would build upon these earlier demos, releases, and technologies, and would perhaps allow a human user to “talk” to ChatGPT in a more naturalistic and humanlike open-ended conversation, without waiting for the assistant to detect a pause or finished query.
Perhaps relatedly, X user “@alwaysaq00” found code on OpenAI’s website indicating the presence of a new GPT-4 Omni model, aka GPT-4o.
I just discovered GPT-4 Omni today. Has it been around for a while? I’m wondering if OpenAI had this under wraps. Also, is ‘Omni’ the codename for GPT-4.5? It seems I’ve stumbled upon something exciting!
Ive known about omni for a while though.
https://t.co/x0gj2hYleU pic.twitter.com/CipsvvgAZ5
— alwaysaq00 (@alwaysaq00) May 13, 2024
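If “gpt-4o” does ship as a standard model identifier, developers would presumably call it the same way they call existing GPT-4-class models through OpenAI’s chat completions API. The sketch below is speculative: it assumes the official openai Python package, an OPENAI_API_KEY environment variable, and that the leaked “gpt-4o” name becomes a live model ID.

```python
from openai import OpenAI  # official openai Python SDK (v1+)

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# "gpt-4o" is the model name spotted in OpenAI's site code; it is
# hypothetical here and may not be available at the time of writing.
completion = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "user", "content": "Summarize OpenAI's Spring Update in one sentence."}
    ],
)
print(completion.choices[0].message.content)
```

Until OpenAI confirms the name, that model string should be treated as a placeholder.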
Whatever OpenAI announces today, it will be sure to turn heads and leave the AI and broader tech industry discussing the ramifications for hours and likely days and weeks to come. Will it live up to the already surging hype — despite not being GPT-5?
Stay tuned and watch the event along with us at 10 am PT/1 pm ET, and we’ll be reporting on it here live.