China’s DeepSeek Coder becomes first open-source coding model to beat GPT-4 Turbo

Chinese AI startup DeepSeek, which previously made headlines with a ChatGPT competitor trained on 2 trillion English and Chinese tokens, has announced the release of DeepSeek Coder V2, an open-source mixture of experts (MoE) code language model.

Built upon DeepSeek-V2, an MoE model that debuted last month, DeepSeek Coder V2 excels at both coding and math tasks. It supports more than 300 programming languages and outperforms state-of-the-art closed-source models, including GPT-4 Turbo, Claude 3 Opus and Gemini 1.5 Pro. The company claims this is the first time an open model has achieved this feat, placing it well ahead of Llama 3-70B and other open models in the category.

The company also notes that DeepSeek Coder V2 maintains comparable performance on general reasoning and language tasks.

What does DeepSeek Coder V2 bring to the table?

Founded last year with a mission to “unravel the mystery of AGI with curiosity,” DeepSeek has been a notable Chinese player in the AI race, joining the likes of Qwen, 01.AI and Baidu. In fact, within a year of its launch, the company has already open-sourced a number of models, including the DeepSeek Coder family.

The original DeepSeek Coder, with up to 33 billion parameters, performed respectably on benchmarks, offering capabilities like project-level code completion and infilling, but it supported only 86 programming languages and a 16K context window. The new V2 offering builds on that work, expanding language support to 338 languages and the context window to 128K tokens, enabling it to handle more complex and extensive coding tasks.

When tested on the MBPP+, HumanEval and Aider benchmarks, which are designed to evaluate the code generation, editing and problem-solving capabilities of LLMs, DeepSeek Coder V2 scored 76.2, 90.2 and 73.7, respectively, placing it ahead of most closed and open-source models, including GPT-4 Turbo, Claude 3 Opus, Gemini 1.5 Pro, Codestral and Llama-3 70B. It showed similarly strong performance on benchmarks designed to assess mathematical capabilities (MATH and GSM8K).
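
For context on what these coding benchmarks measure, here is a minimal, illustrative sketch of how HumanEval-style pass@1 scoring works: each problem pairs a function signature and docstring with unit tests, and a completion counts only if every test passes. The toy problem and helper below are invented for illustration and are not DeepSeek's or any benchmark's official harness; a real harness would sandbox the execution.

```python
# Illustrative sketch of HumanEval-style pass@1 scoring (not an official harness).
problem = {
    "prompt": 'def add(a, b):\n    """Return the sum of a and b."""\n',
    "test": "assert add(2, 3) == 5\nassert add(-1, 1) == 0",
}

def passes(completion: str) -> bool:
    """Run prompt + model completion + unit tests; a pass means no assertion fails."""
    namespace: dict = {}
    try:
        exec(problem["prompt"] + completion + "\n" + problem["test"], namespace)
        return True
    except Exception:
        return False

# pass@1 over a benchmark is the fraction of problems whose first sampled
# completion passes all of its tests.
print(passes("    return a + b"))   # True
print(passes("    return a - b"))   # False
```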

The only model that managed to outperform DeepSeek’s offering across multiple benchmarks was GPT-4o, which obtained marginally higher scores on HumanEval, LiveCodeBench, MATH and GSM8K.

DeepSeek says it achieved these technical and performance advances by using DeepSeek-V2, which is based on its mixture-of-experts framework, as a foundation. Essentially, the company pre-trained the base V2 model on an additional 6 trillion tokens, largely comprising code and math-related data sourced from GitHub and Common Crawl.

The result is a model, available in 16B and 236B parameter sizes, that activates only 2.4B and 21B “expert” parameters, respectively, for any given task, optimizing for diverse computing and application needs.
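
To make the “activated parameters” idea concrete, below is a minimal, generic sketch of top-k expert routing in a mixture-of-experts layer, written in PyTorch. The sizes, routing scheme and expert design are illustrative assumptions rather than DeepSeek's actual architecture; the point is only that each token runs through a small subset of experts, so far fewer parameters are exercised per token than the model contains in total.

```python
import torch
import torch.nn as nn

# Minimal, generic sketch of top-k expert routing in a mixture-of-experts (MoE)
# layer. Sizes and routing details are illustrative assumptions, not DeepSeek's
# actual design; the point is that each token only runs through a few experts.
class TinyMoELayer(nn.Module):
    def __init__(self, d_model: int = 64, n_experts: int = 8, top_k: int = 2):
        super().__init__()
        self.router = nn.Linear(d_model, n_experts)   # scores every expert per token
        self.experts = nn.ModuleList(
            nn.Sequential(
                nn.Linear(d_model, 4 * d_model), nn.GELU(), nn.Linear(4 * d_model, d_model)
            )
            for _ in range(n_experts)
        )
        self.top_k = top_k

    def forward(self, x: torch.Tensor) -> torch.Tensor:   # x: (tokens, d_model)
        weights, idx = self.router(x).softmax(dim=-1).topk(self.top_k, dim=-1)
        out = torch.zeros_like(x)
        for k in range(self.top_k):                        # only the selected experts run
            for e in idx[:, k].unique():
                mask = idx[:, k] == e
                out[mask] += weights[mask, k, None] * self.experts[int(e)](x[mask])
        return out

layer = TinyMoELayer()
print(layer(torch.randn(4, 64)).shape)   # torch.Size([4, 64])
```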

Strong performance in general language, reasoning

In addition to excelling at coding and math-related tasks, DeepSeek Coder V2 also delivers decent performance in general reasoning and language understanding tasks. 

For instance, on the MMLU benchmark, which evaluates language understanding across multiple tasks, it scored 79.2. This is considerably better than other code-specific models and roughly on par with Llama-3 70B. GPT-4o and Claude 3 Opus, for their part, continue to lead the MMLU category with scores of 88.7 and 88.6, respectively, while GPT-4 Turbo follows closely behind.

The development shows open coding-specific models are finally excelling across the spectrum (not just their core use cases) and closing in on state-of-the-art closed-source models.

One of the most impressive teams in generative AI and open source killing it again!

The technical papers are amongst the best out there and performance has been exceptional from the final models with permissive licenses.

Great to see, everyone should try the 16b version https://t.co/lmggkEgj2n

— Emad (@EMostaque) June 17, 2024

As of now, DeepSeek Coder V2 is offered under an MIT license, which allows both research and unrestricted commercial use. Users can download the 16B and 236B sizes in instruct and base variants via Hugging Face. Alternatively, the company provides access to the models via API through its platform on a pay-as-you-go basis.
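
For readers who want to try the downloadable weights, here is a minimal sketch of loading the smaller checkpoint with the Hugging Face transformers library. The repo ID is the name the 16B instruct variant appears to be published under, but treat it as an assumption and confirm it on Hugging Face before running; the 236B variant follows the same pattern but needs far more GPU memory.

```python
# Minimal sketch of loading the smaller DeepSeek Coder V2 checkpoint locally.
# The repo ID below is an assumption -- confirm the exact name on Hugging Face.
# Requires the transformers and accelerate packages plus sufficient GPU memory.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct"  # assumed 16B instruct repo

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,   # halve memory vs. float32
    device_map="auto",            # spread layers across available GPUs
    trust_remote_code=True,       # the MoE architecture may ship custom model code
)

messages = [
    {"role": "user", "content": "Write a Python function that checks whether a string is a palindrome."}
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

For the hosted route, DeepSeek describes its pay-as-you-go API as OpenAI-compatible, so a similar prompt can be sent to the hosted model without any local GPUs.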

For those who want to test the models’ capabilities first, the company offers the option to interact with DeepSeek Coder V2 via a chatbot.

Source: VentureBeat, https://venturebeat.com/ai/chinas-deepseek-coder-becomes-first-open-source-coding-model-to-beat-gpt-4-turbo/
