
DeepSeek Prover V2, an Open-Source Mathematics-Focused AI Model, Released

DeepSeek-Prover-V2 is an advanced language model specialised in formal theorem proving using the Lean 4 proof assistant.


Photo Credit: Reuters

DeepSeek-Prover-V2 can help mathematicians explore new theorems and verify formal proofs

Highlights
  • It is the successor to the Prover, which was last updated in August 2024
  • Prover-V2 is built on the company’s DeepSeek-V3 AI model
  • The AI model features 671 billion parameters

DeepSeek, the Hangzhou, China-based artificial intelligence (AI) firm, released an updated version of its Prover model on Wednesday. Dubbed DeepSeek-Prover-V2, it is a highly specialised model that focuses on proving formal mathematical theorems. The large language model (LLM) uses the Lean 4 proof assistant to check whether mathematical proofs are logically consistent by verifying each step independently. Similar to the Chinese firm's previous releases, DeepSeek-Prover-V2 is an open-source model and can be downloaded from popular repositories such as GitHub and Hugging Face.
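To illustrate the kind of formal statement Lean 4 checks, here is a minimal example of a theorem and its machine-verifiable proof. This is a generic illustration, not taken from DeepSeek's release:

```lean
-- A simple theorem stated and proved in Lean 4.
-- Lean verifies each step mechanically; if any step
-- is invalid, the whole proof is rejected.
theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b

-- The same idea written in tactic style:
theorem succ_pos_example (n : Nat) : 0 < n + 1 := by
  exact Nat.succ_pos n
```

Because the proof assistant accepts only proofs it can verify end to end, a model like Prover-V2 cannot "hallucinate" a proof past the checker; an invalid proof simply fails to compile.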

DeepSeek's New Mathematics-Focused AI Model Is Here

The AI firm detailed the new model on its GitHub listing page. It is essentially a reasoning-focused model with a visible chain-of-thought (CoT) that operates in the domain of mathematics. It is built on and distilled from the DeepSeek-V3 AI model, which was released in December 2024.

DeepSeek-Prover-V2 can be used in a variety of ways: it can solve high-school to college-level mathematical problems, find and fix errors in mathematical theorem proofs, serve as a teaching aid by generating step-by-step explanations of proofs, and assist mathematicians and researchers in exploring new theorems and proving their validity.

It is available in two sizes: a seven-billion-parameter model and a larger 671-billion-parameter model. The latter is trained on top of DeepSeek-V3-Base, while the former is built upon DeepSeek-Prover-V1.5-Base and supports a context length of up to 32,000 tokens.

Coming to the pre-training process, the researchers implemented a cold-start training pipeline by prompting the base model to decompose complex problems into a series of subgoals. The proofs of resolved subgoals were then added to the chain-of-thought and combined with the base model's reasoning to create initial cold-start data for reinforcement learning.
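The subgoal decomposition described above mirrors how a Lean proof can be broken into intermediate steps with `have` statements. The following is a hypothetical sketch of that style, not an example from DeepSeek's materials:

```lean
-- Hypothetical illustration of decomposing a goal into subgoals,
-- in the spirit of the subgoal-based cold start described above.
theorem example_decomposition (a b : Nat) : a + b + 0 = b + a := by
  -- Subgoal 1: adding zero changes nothing
  have h1 : a + b + 0 = a + b := Nat.add_zero (a + b)
  -- Subgoal 2: addition commutes
  have h2 : a + b = b + a := Nat.add_comm a b
  -- Combine the resolved subgoals into the full proof
  exact h1.trans h2
```

Each `have` line is a smaller lemma that can be proved (or attempted) independently, which is what makes subgoal decomposition a natural fit for generating training data.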

The Prover-V2 model highlights how iterative changes to the training process of AI models can significantly improve their specialised capability. As with DeepSeek's previous open-source releases, details about the core architecture and the larger training dataset have not been disclosed.



Further reading: DeepSeek, AI, Artificial Intelligence
Akash Dutta
Akash Dutta is a Senior Sub Editor at Gadgets 360. He is particularly interested in the social impact of technological developments and loves reading about emerging fields such as AI, metaverse, and fediverse.


© Copyright Red Pixels Ventures Limited 2025. All rights reserved.