Mistral AI Launches Codestral: A Generative AI Model for Code Generation
Mistral AI has announced the release of Codestral, a generative AI model designed specifically for code generation tasks. The model aims to simplify and accelerate developers' coding workflows. According to Mistral AI, Codestral is an open-weight model that helps developers write and interact with code through a shared instruction and completion API endpoint.
A Model Fluent in 80+ Programming Languages
Codestral has been trained on a diverse dataset covering more than 80 programming languages, including popular languages such as Python, Java, C, C++, JavaScript, and Bash, as well as more specialized ones such as Swift and Fortran. This broad coverage allows Codestral to assist developers across a wide range of coding environments and projects.
The model reduces the time and effort required for coding by completing functions, writing tests, and filling in partial code. This not only boosts developer productivity but also reduces the risk of errors and bugs, making it a valuable tool for software development.
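To make the partial-code completion concrete, here is a minimal sketch of a fill-in-the-middle request, in which the model is given the code before and after a gap and asked to generate the missing middle. The route, payload fields, and the model name codestral-latest are assumptions based on Mistral's public API conventions; consult the official documentation for the exact schema.

```python
import os
import requests

# Assumed fill-in-the-middle route; verify against Mistral's API reference.
API_URL = "https://api.mistral.ai/v1/fim/completions"
API_KEY = os.environ["MISTRAL_API_KEY"]

# Code surrounding the gap the model should fill.
prefix = 'def fibonacci(n: int) -> int:\n    """Return the n-th Fibonacci number."""\n'
suffix = "\n\nprint(fibonacci(10))\n"

resp = requests.post(
    API_URL,
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "model": "codestral-latest",  # assumed model identifier
        "prompt": prefix,             # code before the gap
        "suffix": suffix,             # code after the gap
        "max_tokens": 128,
        "temperature": 0.0,
    },
    timeout=30,
)
resp.raise_for_status()
# The generated middle section is returned in the response choices;
# print the raw JSON since the exact shape may vary by API version.
print(resp.json())
```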
Setting the Bar for Code Generation Performance
As a 22-billion-parameter model, Codestral sets a new benchmark in the performance/latency space for code generation. It features a larger context window of 32k tokens, compared with competitors that offer 4k, 8k, or 16k. This makes Codestral highly effective at long-range code generation tasks, as demonstrated by its performance in the RepoBench evaluation.
Codestral’s capabilities have been rigorously tested using several benchmarks (a sketch of the pass@k metric behind most of these scores follows the list):
- Python: HumanEval pass@1, MBPP sanitized pass@1, CruxEval, and RepoBench EM.
- SQL: Spider benchmark.
- Multiple Languages: HumanEval pass@1 across C++, Bash, Java, PHP, TypeScript, and C#.
- Fill-in-the-middle (FIM): HumanEval pass@1 in Python, JavaScript, and Java, compared to DeepSeek Coder 33B.
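For reference, the pass@1 scores above come from the pass@k family of metrics used by HumanEval-style benchmarks. The snippet below is the standard unbiased pass@k estimator from the HumanEval paper (Chen et al., 2021), shown here as background rather than as Mistral's own evaluation harness: n samples are generated per problem and c of them pass the unit tests.

```python
import numpy as np

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator (Chen et al., 2021, HumanEval).

    n: total generations sampled for a problem
    c: generations that pass the problem's unit tests
    k: the k in pass@k (k=1 for the pass@1 scores quoted above)
    """
    if n - c < k:
        return 1.0  # every size-k subset contains at least one passing sample
    # 1 - C(n-c, k) / C(n, k), computed as a numerically stable product
    return 1.0 - float(np.prod(1.0 - k / np.arange(n - c + 1, n + 1)))

# Example: 200 samples per problem, 137 pass -> estimated pass@1
print(pass_at_k(n=200, c=137, k=1))  # 0.685
```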
Get Started with Codestral
Codestral is available for research and testing under the new Mistral AI Non-Production License. The model can be downloaded from HuggingFace. Additionally, a dedicated endpoint (codestral.mistral.ai) offers free usage during an 8-week beta period, managed through a waitlist to ensure quality of service.
For broader applications, Codestral is also accessible via the usual API endpoint (api.mistral.ai), where queries are billed per token. Developers can create accounts on La Plateforme to start building applications with Codestral.
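As a minimal sketch of instruction-style usage against the per-token endpoint, the snippet below sends a chat request with an API key created on La Plateforme. The model identifier codestral-latest and the response shape are assumptions based on Mistral's standard chat completions API; verify them against the API reference.

```python
import os
import requests

API_KEY = os.environ["MISTRAL_API_KEY"]  # key created on La Plateforme

resp = requests.post(
    "https://api.mistral.ai/v1/chat/completions",  # per-token billed endpoint
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "model": "codestral-latest",  # assumed identifier for Codestral
        "messages": [
            {"role": "user",
             "content": "Write a Python function that reverses a singly linked list."}
        ],
        "temperature": 0.2,
    },
    timeout=60,
)
resp.raise_for_status()
# Standard chat-completions response: the generated code is in the first choice.
print(resp.json()["choices"][0]["message"]["content"])
```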
The model is integrated with popular tools like Continue.dev and Tabnine for VSCode and JetBrains environments, enhancing developer productivity. More information on these integrations is available in the Mistral AI documentation.
Community and Expert Feedback
Industry experts have praised Codestral’s performance:
- “A public autocomplete model with this combination of speed and quality hadn’t existed before, and it’s going to be a phase shift for developers everywhere.” – Nate Sesti, CTO and co-founder of Continue.dev
- “We are excited about the capabilities that Mistral unveils and delighted to see a strong focus on code and development assistance, an area that JetBrains cares deeply about.” – Vladislav Tankov, Head of JetBrains AI
- “We used Codestral to run a test on our Kotlin-HumanEval benchmark and were impressed with the results. For instance, in the case of the pass rate for T=0.2, Codestral achieved a score of 73.75, surpassing GPT-4-Turbo’s score of 72.05 and GPT-3.5-Turbo’s score of 54.66.” – Mikhail Evtikhiev, Researcher at JetBrains
These endorsements underscore Codestral’s potential to revolutionize code generation, making it a powerful tool for developers worldwide.