We introduce Goedel-Prover, an open-source large language model (LLM) that achieves state-of-the-art (SOTA) performance in automated formal proof generation for mathematical problems. The key challenge in this field is the scarcity of formalized math statements and proofs, which we tackle in the following ways. We train statement formalizers to translate natural language math problems from Numina into formal language (Lean 4), creating a dataset of 1.64 million formal statements. LLMs are used to check that the formal statements accurately preserve the content of the original natural language problems. We then iteratively build a large dataset of formal proofs by training a series of provers: each prover succeeds in proving many statements that the previous ones could not, and these new proofs are added to the training set for the next prover. The final prover outperforms all existing open-source models in whole-proof generation. On the miniF2F benchmark, it achieves a 57.6% success rate (Pass@32), exceeding the previous best open-source model by 7.6%. On PutnamBench, Goedel-Prover solves 7 problems (Pass@512), ranking first on the leaderboard. Furthermore, it generates 29.7K formal proofs for Lean Workbook problems, nearly doubling the 15.7K produced by earlier works.
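To make the LLM-based faithfulness check concrete, here is a minimal sketch of how an LLM judge can filter formalizations. This is our illustration, not the released pipeline: the judge model, prompt, and majority-vote threshold are all assumptions.

```python
# Minimal sketch of an LLM-based faithfulness check (our illustration).
# Assumes the OpenAI Python client as a stand-in judge; the model name,
# prompt wording, and voting threshold are hypothetical.
from openai import OpenAI

client = OpenAI()

JUDGE_PROMPT = """Natural-language problem:
{nl}

Lean 4 statement:
{fl}

Does the Lean statement faithfully capture the problem, with no added,
dropped, or altered hypotheses or goals? Answer YES or NO."""

def is_faithful(nl_problem: str, lean_statement: str, votes: int = 4) -> bool:
    """Keep a formalization only if a majority of judge samples answer YES."""
    yes = 0
    for _ in range(votes):
        reply = client.chat.completions.create(
            model="gpt-4o",  # hypothetical choice of judge model
            messages=[{"role": "user", "content": JUDGE_PROMPT.format(
                nl=nl_problem, fl=lean_statement)}],
            temperature=1.0,
        )
        yes += reply.choices[0].message.content.strip().upper().startswith("YES")
    return yes * 2 > votes
```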
- Improving: +7.6% over the previous SOTA on miniF2F (Pass@32)
- Solving: 1.9× as many total problems as prior works on Lean-workbook
The Pass@N metric indicates that we generate N proofs for a single problem; if any one of these N proofs successfully solves the problem, the problem is considered solved. (Left): Pass@32 performance for whole-proof generation on miniF2F. Due to limited compute, we compare with DeepSeek-Prover-v1.5 on the Pass@32 metric. (Middle): Comparison of Goedel-Prover-SFT and DeepSeek-Prover-v1.5 on miniF2F across inference budgets ranging from Pass@32, 64, 128, ..., up to Pass@4×6400 and Pass@16×6400. (Right): The number of problems solved in Lean-workbook by Goedel-Prover-SFT compared to prior works. InternLM2.5-StepProver and InternLM-Math-Plus collectively solve and open-source 15.7K samples, while we solve and open-source 29.7K samples.
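In code, Pass@N is simply a disjunction over verifier outcomes. The sketch below is our illustration, with `generate` and `verify` assumed to wrap prover sampling and Lean compilation respectively.

```python
from typing import Callable, Iterable

def pass_at_n(problems: Iterable[str],
              generate: Callable[[str, int], list[str]],
              verify: Callable[[str, str], bool],
              n: int = 32) -> float:
    """Fraction of problems for which any of n sampled proofs verifies.

    `generate(stmt, n)` samples n candidate proofs and `verify(stmt, proof)`
    runs the Lean compiler on the assembled file; both are assumed wrappers.
    """
    solved, total = 0, 0
    for stmt in problems:
        total += 1
        proofs = generate(stmt, n)
        if any(verify(stmt, p) for p in proofs):
            solved += 1  # one verified proof suffices under Pass@N
    return solved / total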
Large-scale data synthesis and iterative training
We train two distinct formalizers to enhance statement diversity:
Formalizer A: Qwen2.5-Coder-32B trained on natural-language/formal-language statement pairs from the Lean Workbook dataset.
Formalizer B: Trained on 170K statements formalized by Claude-sonnet-3.5 and syntactically verified by the Lean compiler.
Training completed in under 24 hours on 8 H100 GPUs. Each problem receives 16 formalizations in total (8 from each formalizer); an example of a formalized statement is sketched below.
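For concreteness, here is the kind of input/output pair a statement formalizer produces. This is a hypothetical example of ours, not drawn from the released dataset; the theorem name is invented. Given the natural-language problem "Show that for all real numbers a and b, a² + b² ≥ 2ab", a formalizer might emit:

```lean
import Mathlib

-- Hypothetical formalizer output for: "Show that for all real numbers
-- a and b, a^2 + b^2 ≥ 2ab." Only the statement is produced; the proof
-- body is left as `sorry` for the prover to fill in, and the Lean
-- compiler checks whatever the prover generates (here, for instance,
-- `nlinarith [sq_nonneg (a - b)]` would close the goal).
theorem sq_sum_ge_two_mul (a b : ℝ) : a ^ 2 + b ^ 2 ≥ 2 * a * b := by
  sorry
```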
Following previous work, we use miniF2F as our main evaluation benchmark and additionally track performance on ProofNet, Lean-workbook, and FormalNumina. Lean-workbook contains 140K statements in total. FormalNumina is a private test set created by formalizing a random sample of 250 problems from Numina. Together, these benchmarks cover a diverse range of mathematical problems, from high-school to undergraduate mathematics.
| Model | Pass@N | miniF2F Performance |
|---|---|---|
| TheoremLlama | 128 | 33.6% |
| DeepSeek-Prover-v1 | 32 | 46.1% ± 0.5% |
| DeepSeek-Prover-v1.5-SFT | 32 | 48.2% ± 0.6% |
| DeepSeek-Prover-v1.5-RL | 32 | 50.0% ± 0.5% |
| Goedel-Prover-SFT | 32 | 57.6% ± 0.7% |
| DeepSeek-Prover-v1.5-SFT | 3200 | 53.3% |
| DeepSeek-Prover-v1.5-RL | 3200 | 54.9% |
| Goedel-Prover-SFT | 3200 | 62.7% |
| DeepSeek-Prover-v1.5-SFT | 4×6400 | 55.8% |
| DeepSeek-Prover-v1.5-RL | 4×6400 | 58.5% |
| Goedel-Prover-SFT | 4×6400 | 64.7% |
Our Goedel-Prover-SFT achieves state-of-the-art performance on miniF2F, surpassing previous models by significant margins. The model shows consistent improvement across different computational budgets, achieving 57.6% at Pass@32, 62.7% at Pass@3200, and 64.7% at Pass@4×6400.
| Model | miniF2F | ProofNet | FormalNumina | Lean-workbook |
|---|---|---|---|---|
| DeepSeek-Prover-v1.5-RL | 50.0% | 16.0% | 54.0% | 14.7% |
| Goedel-Prover-SFT | 57.6% (+7.6) | 15.2% (-0.8) | 61.2% (+7.2) | 21.2% (+6.5) |
The model demonstrates strong performance across multiple datasets, with notable improvements on miniF2F, FormalNumina, and Lean-workbook. While performance on ProofNet decreases slightly, the average improvement across the four benchmarks is (7.6 − 0.8 + 7.2 + 6.5) / 4 ≈ 5.1 percentage points.
| Ranking | Model | Type | Num-solved | Compute (Pass) |
|---|---|---|---|---|
| 1 | Goedel-Prover-SFT † | Whole Proof Generation | 7 | 512 |
| 2 | ABEL | Tree Search Method | 7 | 596 |
| 3 | Goedel-Prover-SFT † | Whole Proof Generation | 6 | 32 |
| 3 | InternLM2.5-StepProver † | Tree Search Method | 6 | 2×32×600 |
| 5 | InternLM 7B | Whole Proof Generation | 4 | 4096 |
| 6 | GPT-4o | Whole Proof Generation | 1 | 10 |
| 7 | COPRA (GPT-4o) † | Whole Proof Generation | 1 | 1 |
| 8 | ReProver w/ retrieval † | Tree Search Method | 0 | 1 |
| 9 | ReProver w/o retrieval † | Tree Search Method | 0 | 1 |
On the challenging PutnamBench dataset, Goedel-Prover-SFT achieves new state-of-the-art performance, solving 7 out of 644 problems with Pass@512 and ranking first on the PutnamBench leaderboard. † indicates an open-source model.
| Iteration | Lean-workbook Statements | Formalized Statements | Lean-workbook Solved (training) | Formalized Proofs (training) | Mathlib (training) |
|---|---|---|---|---|---|
| Iter-0 | 140K | 0 | 20.6K | 0 | 0 |
| Iter-1 | 140K | 140K | 20.6K | 72.4K | 0 |
| Iter-2 | 140K | 270K | 23.0K | 128.7K | 0 |
| Iter-3 | 140K | 270K | 24.4K | 161.2K | 0 |
| Iter-4 | 140K | 882K | 25.4K | 425.8K | 0 |
| Iter-5 | 140K | 882K | 27.0K | 436.5K | 0 |
| Iter-6 | 140K | 882K | 27.8K | 443.2K | 104K |
| Iter-7 | 140K | 1.64M | 28.8K | 887.7K | 104K |
| Iter-8 | 140K | 1.64M | 29.7K | 915.7K | 104K |
| Iter-9 | 140K | 1.64M | 30.3K | 928.2K | 104K |
The model's performance on all four evaluation datasets improves consistently over the course of iterative training (Figures 1-4).
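The loop behind the table above can be summarized in a few lines. The sketch below is our paraphrase of the expert-iteration recipe described in the abstract, not the released training code: `train`, `sample`, and `verify` are assumed wrappers around supervised fine-tuning, prover sampling, and the Lean compiler, and the per-statement sample count is illustrative.

```python
from typing import Callable

Prover = object  # opaque handle returned by `train` (assumed)

def expert_iteration(
    statements: list[str],
    initial_prover: Prover,
    train: Callable[[list[tuple[str, str]]], Prover],   # (stmt, proof) pairs -> new prover
    sample: Callable[[Prover, str, int], list[str]],    # prover, stmt, n -> candidate proofs
    verify: Callable[[str, str], bool],                 # stmt, proof -> compiles?
    rounds: int = 9,
    n: int = 16,
) -> tuple[Prover, dict[str, str]]:
    """Expert iteration as in the table above: proofs verified in round t are
    added to the supervised data for round t + 1, so each prover trains on
    strictly more data and can solve statements its predecessors could not."""
    solved: dict[str, str] = {}              # statement -> first verified proof
    prover = initial_prover
    for _ in range(rounds):
        for stmt in statements:
            if stmt in solved:
                continue                     # already solved in an earlier round
            for candidate in sample(prover, stmt, n):
                if verify(stmt, candidate):
                    solved[stmt] = candidate
                    break
        prover = train(list(solved.items()))  # retrain on the grown proof set
    return prover, solved
```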
@misc{lin2025goedelproverfrontiermodelopensource,
title={Goedel-Prover: A Frontier Model for Open-Source Automated Theorem Proving},
author={Yong Lin and Shange Tang and Bohan Lyu and Jiayun Wu and Hongzhou Lin and Kaiyu Yang and Jia Li and Mengzhou Xia and Danqi Chen and Sanjeev Arora and Chi Jin},
year={2025},
eprint={2502.07640},
archivePrefix={arXiv},
primaryClass={cs.LG},
url={https://arxiv.org/abs/2502.07640},
}