On Tuesday, researchers from Stanford University and University of California, Berkeley published a research paper that purports to show changes in GPT-4’s outputs over time. The paper fuels a common-but-unproven belief that the AI language model has grown worse at coding and compositional tasks over the past few months. Some experts aren’t convinced by the results, but they say that the lack of certainty points to a larger problem with how OpenAI handles its model releases.
In a study titled “How Is ChatGPT’s Behavior Changing over Time?” published on arXiv, Lingjiao Chen, Matei Zaharia, and James Zou cast doubt on the consistent performance of OpenAI’s large language models (LLMs), specifically GPT-3.5 and GPT-4. Using API access, they tested the March and June 2023 versions of these models on tasks like math problem-solving, answering sensitive questions, code generation, and visual reasoning. Most notably, GPT-4’s ability to identify prime numbers reportedly plunged from an accuracy of 97.6 percent in March to just 2.4 percent in June. Strangely, GPT-3.5 showed improved performance over the same period.
Performance of the March 2023 and June 2023 versions of GPT-4 and GPT-3.5 on four tasks, taken from “How Is ChatGPT’s Behavior Changing over Time?” (Credit: Chen/Zaharia/Zou)
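To make the methodology concrete, here is a minimal sketch of the kind of version comparison the paper describes: sending the same question to the dated March and June GPT-4 API snapshots and grading the answers. It assumes the openai Python package (v1.x) and that the “gpt-4-0314” and “gpt-4-0613” snapshots are still accessible; the prompt wording and grading are illustrative, not the paper’s exact setup.

```python
# A minimal sketch of the kind of version comparison the paper describes,
# assuming the openai Python package (v1.x) and access to the dated
# snapshots "gpt-4-0314" (March) and "gpt-4-0613" (June). Prompt wording
# and grading are illustrative, not the paper's exact setup.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask_is_prime(model: str, n: int) -> str:
    """Ask one model snapshot whether n is prime; return the raw answer."""
    response = client.chat.completions.create(
        model=model,
        temperature=0,  # keep outputs as deterministic as possible
        messages=[{"role": "user",
                   "content": f"Is {n} a prime number? Answer yes or no."}],
    )
    return response.choices[0].message.content

for snapshot in ("gpt-4-0314", "gpt-4-0613"):
    answer = ask_is_prime(snapshot, 17077)  # 17077 is prime
    graded = "correct" if "yes" in answer.lower() else "wrong"
    print(f"{snapshot}: {answer!r} -> {graded}")
```

Pinning the dated snapshots and sampling at temperature 0 is what lets this kind of comparison attribute differences to the model version rather than to sampling noise.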
The study follows frequent complaints that GPT-4 has subjectively declined in performance over the past few months. Popular theories about why include OpenAI “distilling” models to reduce their computational overhead and speed up output while saving GPU resources; fine-tuning (additional training) to reduce harmful outputs, which may have unintended side effects; and a smattering of unsupported conspiracy theories, such as OpenAI deliberately limiting GPT-4’s coding capabilities so more people will pay for GitHub Copilot.
Meanwhile, OpenAI has consistently denied any claims that GPT-4 has decreased in capability. As recently as last Thursday, OpenAI VP of Product Peter Welinder tweeted, “No, we haven’t made GPT-4 dumber. Quite the opposite: we make each new version smarter than the previous one. Current hypothesis: When you use it more heavily, you start noticing issues you didn’t see before.”
While this new study may look like a smoking gun confirming the hunches of GPT-4’s critics, others say not so fast. Princeton computer science professor Arvind Narayanan thinks its findings don’t conclusively prove a decline in GPT-4’s performance and are potentially consistent with fine-tuning adjustments made by OpenAI. For example, on the code generation task, he criticized the study for checking only whether the generated code is immediately executable rather than whether it is correct.
“The change they report is that the newer GPT-4 adds non-code text to its output. They don’t evaluate the correctness of the code (strange),” he tweeted. “They merely check if the code is directly executable. So the newer model’s attempt to be more helpful counted against it.”
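Narayanan’s point can be illustrated with a toy harness: a “directly executable” check fails a response whose code is correct but preceded by explanatory prose, while a correctness check first extracts the code and then tests its behavior. The sample model output, helper names, and test case below are hypothetical, not taken from the paper.

```python
# Toy illustration of the evaluation distinction Narayanan describes.
# The sample model output, helper names, and test case are hypothetical.

# A (hypothetical) model response: correct code preceded by non-code text,
# which a "directly executable" check would count as a failure.
model_output = (
    "Sure! Here's a function that adds two numbers:\n"
    "\n"
    "def add(a, b):\n"
    "    return a + b\n"
)

def directly_executable(text: str) -> bool:
    """Naive check: does the raw response run as-is? Leading prose fails it."""
    try:
        exec(text, {})
        return True
    except Exception:
        return False

def correct(text: str) -> bool:
    """Extract the code portion (crude heuristic), run it, and test behavior."""
    lines = text.splitlines()
    start = next((i for i, ln in enumerate(lines)
                  if ln.startswith(("def ", "class ", "import "))), 0)
    namespace = {}
    try:
        exec("\n".join(lines[start:]), namespace)
        return namespace["add"](2, 3) == 5
    except Exception:
        return False

print(directly_executable(model_output))  # False: the prose line is not Python
print(correct(model_output))              # True: the code itself is correct
```

Under the first check, the newer model’s habit of prefacing code with explanatory text registers as a failure even when the code works, which is exactly the scoring artifact Narayanan flags.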