Large language models (LLMs) can now be put to the test in the retro arcade video game Street Fighter III, and so far it seems some are better than others.
The Street Fighter III-based benchmark, dubbed LLM Colosseum, was created by four AI devs from Phospho and Quivr during the Mistral hackathon in San Francisco last month. The benchmark pits two LLMs against each other in an actual game of Street Fighter III, keeping each model updated on how close it is to victory, where its opponent is, and what move the opponent just made. Each model is then asked what it wants to do next, and that move is played out in the game.
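For a sense of how that loop works in practice, here's a minimal Python sketch, not the project's actual code: the model name, the move list, and the game-state fields are illustrative placeholders.

```python
# Minimal sketch of the LLM-vs-LLM loop described above -- not the actual
# LLM Colosseum code. The model name, move list, and game-state fields are
# placeholders for illustration.
from openai import OpenAI

client = OpenAI()
VALID_MOVES = ["Move Closer", "Move Away", "Low Punch", "High Kick", "Fireball"]

def choose_move(model: str, state: dict) -> str:
    """Describe the fight state to an LLM and ask for its next move."""
    prompt = (
        f"You are playing Street Fighter III. Your health: {state['my_health']}. "
        f"Opponent health: {state['opp_health']}. "
        f"Opponent position: {state['opp_position']}. "
        f"Opponent's last move: {state['opp_last_move']}. "
        f"Reply with exactly one move from: {', '.join(VALID_MOVES)}."
    )
    reply = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        max_tokens=10,
    )
    move = reply.choices[0].message.content.strip()
    # Guard against invalid moves, which the tested models produced now and then
    return move if move in VALID_MOVES else "Move Closer"
```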
According to the official leaderboard for LLM Colosseum, which is based on 342 fights between eight different LLMs, OpenAI's GPT-3.5 Turbo is by far the winner, with an Elo rating of 1,776.11. That's well ahead of several iterations of GPT-4, which landed in the 1,400s to 1,500s.
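Those rankings come from Elo-style ratings updated after each fight. A minimal sketch of a standard Elo update is below; the K-factor and starting rating are generic assumptions, not necessarily the project's actual parameters.

```python
def update_elo(winner: float, loser: float, k: float = 32.0) -> tuple[float, float]:
    """Standard Elo update after a single fight (both fighters might start at 1,500)."""
    expected = 1.0 / (1.0 + 10 ** ((loser - winner) / 400))  # winner's expected score
    winner += k * (1.0 - expected)
    loser -= k * (1.0 - expected)
    return winner, loser
```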
What makes an LLM good at Street Fighter III is a balance between key characteristics, said Nicolas Oulianov, one of the LLM Colosseum developers. "GPT-3.5 Turbo has a good balance between speed and brains. GPT-4 is a larger model, thus way smarter, but much slower."
The disparity between GPT-3.5 and GPT-4 in LLM Colosseum is an indication of what features are being prioritized in the latest LLMs, according to Oulianov. "Existing benchmarks focus too much on performance regardless of speed. If you're an AI developer, you need custom evaluations to see if GPT-4 is the best model for your users," he said. Even fractions of a second count in fighting games, so any extra thinking time can mean a quick loss.
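As a rough illustration of that kind of custom evaluation, the sketch below times how long a model takes to pick a move, reusing the hypothetical choose_move helper from earlier; it isn't how LLM Colosseum itself measures speed.

```python
import time

def timed_move(model: str, state: dict) -> tuple[str, float]:
    """Return a model's chosen move and how many seconds it took to answer."""
    start = time.perf_counter()
    move = choose_move(model, state)  # hypothetical helper from the sketch above
    return move, time.perf_counter() - start
```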
A different experiment with LLM Colosseum was documented by Amazon Web Services developer Banjo Obayomi, who ran the models on Amazon Bedrock. That tournament involved a dozen different models, and Anthropic's Claude family swept the competition, taking first through fourth place, with Claude 3 Haiku on top.
Obayomi also tracked the quirky behavior the tested LLMs exhibited from time to time, including attempts to play invalid moves such as the devastating "hardest hitting combo of all."
There were also instances where LLMs simply refused to play. The companies behind these models tend to instill an anti-violence stance, and the models will often decline to answer any prompt they deem too violent. Claude 2.1 was particularly pacifistic, saying it couldn't tolerate even fictional fighting.
Compared to actual human players, though, these chatbots aren't exactly playing at a pro level. "I fought a few SF3 games against LLMs," says Oulianov. "So far, I think LLMs only stand a chance to win in Street Fighter 3 against a 70-year-old or a five-year-old."
GPT-4 similarly performed pretty poorly in Doom, another old-school game that requires quick thinking and fast movement.
But why test LLMs in a retro fighting game?
The idea of benchmarking LLMs in an old-school video game is funny, and maybe that's all the reason LLM Colosseum needs to exist, but there might be a little more to it. "Unlike other benchmarks you see in press releases, everyone played video games, and can get a feel of why it would be challenging for an LLM," Oulianov said. "Large AI companies are gaming benchmarks to get pretty scores and show off."
But he does note that “the Street Fighter benchmark is kind of the same, but way more entertaining.”
Beyond that, Oulianov said LLM Colosseum showcases how intelligent general-purpose LLMs already are. “What this project shows is the potential for LLMs to become so smart, so fast, and so versatile, that we can use them as ‘turnkey reasoning machines’ basically everywhere. The goal is to create machines able to not only reason with text, but also react to their environment and interact with other thinking machines.”
Oulianov also pointed out that there are already AI models that can play modern games at a professional level. DeepMind's AlphaStar trashed StarCraft II pros back in 2018 and 2019, and OpenAI Five proved capable of beating Dota 2 world champions and cooperating effectively with human teammates.
Today’s chat-oriented LLMs aren’t anywhere near the level of purpose-made models (just try playing a game of chess against ChatGPT), but perhaps it won’t be that way forever. “With projects like this one, we show that this vision is closer to reality than science fiction,” Oulianov said. ®