[Overview figure: two statement formalizers (Formalizer A, trained on Lean Workbook pairs; Formalizer B, trained on 170K Claude-formalized statements); each problem receives multiple formalizations.]
Large language models (LLMs) have shown impressive reasoning capabilities, especially in tackling mathematical problems. There are two main approaches: informal reasoning, which employs natural language but lacks verifiability, and formal reasoning, which utilizes proof assistants like Lean to produce machine-verifiable proofs. While state-of-the-art LLMs, such as OpenAI's o1 and DeepSeek's R1, excel in informal problem-solving, they face challenges in formal theorem proving.
A significant challenge in training LLMs for formal reasoning is the scarcity of data. To overcome this, we synthesize a large and diverse dataset by auto-formalizing a substantial corpus of informal mathematical problems. Our approach transforms natural language statements into various formal styles in Lean 4, resulting in 1.78 million syntactically correct and content-accurate statements. We then iteratively train a prover, alternating between generating verified proofs and training the model using these proofs. Our model, Goedel-Prover, achieves state-of-the-art performance across multiple benchmarks for whole-proof generation, which generates the entire proof without interacting with Lean. On the miniF2F benchmark (Pass@32), it attains a 57.6% success rate, surpassing the previous best open-source model by 7.6%. On PutnamBench, Goedel-Prover successfully solves 7 problems (Pass@512), securing the top position on the leaderboard. Furthermore, it generates 29.7K formal proofs for Lean-workbook problems, nearly doubling the 15.7K produced by earlier works.
- Improving: +7.6% over previous SOTA on miniF2F Pass@32
- Solving: 1.9× the total problems solved on Lean-workbook compared to prior works
The Pass@N metric means that we generate N proofs for each problem; a problem is considered solved if any one of these N proofs is verified. (Left): Pass@32 performance for whole-proof generation on miniF2F; due to limited compute, we compare with DeepSeek-Prover-v1.5 at the Pass@32 budget. (Middle): Comparison of Goedel-Prover-SFT and DeepSeek-Prover-v1.5 on miniF2F across inference budgets ranging from Pass@32, 64, 128, ..., 4×6400, up to 16×6400. (Right): The number of Lean-workbook problems solved by Goedel-Prover-SFT compared to prior works. InternLM2.5-Step-Prover and InternLM-Math-Plus collectively solve and open-source 15.7K samples, while we solve and open-source 29.7K samples.
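To make the Pass@N metric concrete, here is a minimal sketch of how it can be computed from per-proof verification results. The `generate_proofs` and `lean_verifies` helpers are hypothetical placeholders for the prover's sampling step and the Lean 4 compiler check, not the released code.

```python
# Minimal sketch of Pass@N: a problem counts as solved if any one of its
# N generated proofs is accepted by the Lean verifier.
from typing import Callable, List


def pass_at_n(
    statements: List[str],
    generate_proofs: Callable[[str, int], List[str]],  # hypothetical: sample N whole proofs
    lean_verifies: Callable[[str, str], bool],          # hypothetical: Lean 4 compilation check
    n: int = 32,
) -> float:
    """Return the fraction of statements solved within an N-proof budget."""
    solved = 0
    for stmt in statements:
        candidates = generate_proofs(stmt, n)
        if any(lean_verifies(stmt, proof) for proof in candidates):
            solved += 1  # at least one of the N proofs verifies
    return solved / len(statements)
```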
Large-scale data synthesis and iterative training
We train two distinct formalizers to enhance statement diversity:
Formalizer A: Trained using Qwen2.5-Coder-32B on natural language and formal language pairs from Lean workbook.
Formalizer B: Trained on statements formalized by Claude-sonnet-3.5; of the 230K statements formalized, the 170K that passed compilation were used for training.
Formalizer training completed in under 24 hours on 8 H100 GPUs. During data synthesis, each problem receives 16 formalizations in total (8 from each formalizer).
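For illustration, a formalized statement can be checked by compiling an ordinary Lean 4 theorem whose proof is left as `sorry`, a common convention that lets the compiler validate the statement alone. The example below is hypothetical and not drawn from the released dataset.

```lean
-- Hypothetical illustration of an auto-formalized statement (not from the dataset).
-- Informal problem: "Show that n^2 + n is even for every natural number n."
import Mathlib

theorem n_sq_add_n_even (n : ℕ) : Even (n ^ 2 + n) := by
  sorry
```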
Following previous work, we use miniF2F as our main evaluation benchmark. Additionally, we track performance on ProofNet, Lean-workbook, and FormalNumina. Lean-workbook contains 140K statements in total, and FormalNumina is a private test set created by formalizing 250 problems randomly sampled from Numina. Together, these benchmarks cover a diverse range of mathematical problems, from high-school level to undergraduate mathematics.
Model | Pass@N | Performance |
---|---|---|
TheoremLlama | 128 | 33.6% |
Deepseek-Prover-v1 | 32 | 46.1% ± 0.5% |
Deepseek-Prover-v1.5-SFT | 32 | 48.2% ± 0.6% |
Deepseek-Prover-v1.5-RL | 32 | 50.0% ± 0.5% |
Goedel-Prover-SFT | 32 | 57.6% ± 0.7% |
Deepseek-Prover-v1.5-SFT | 3200 | 53.3% |
Deepseek-Prover-v1.5-RL | 3200 | 54.9% |
Goedel-Prover-SFT | 3200 | 62.7% |
Deepseek-Prover-v1.5-SFT | 4×6400 | 55.8% |
Deepseek-Prover-v1.5-RL | 4×6400 | 58.5% |
Goedel-Prover-SFT | 4×6400 | 64.7% |
Our Goedel-Prover-SFT achieves state-of-the-art performance on miniF2F, surpassing previous models by significant margins. The model shows consistent improvement across different computational budgets, achieving 57.6% at Pass@32, 62.7% at Pass@3200, and 64.7% at Pass@4×6400.
Model | miniF2F | ProofNet | FormalNumina | Lean-workbook |
---|---|---|---|---|
Deepseek-Prover-v1.5-RL | 50.0% | 16.0% | 54.0% | 14.7% |
Goedel-Prover-SFT | 57.6% (+7.6) | 15.2% (-0.8) | 61.2% (+7.2) | 21.2% (+6.5) |
The model demonstrates strong performance across multiple datasets, with notable improvements on miniF2F, FormalNumina, and Lean-workbook. While performance on ProofNet decreases slightly, the average over the four benchmarks improves by roughly 5 percentage points.
Ranking | Model | Type | Num-solved | Compute (Pass) |
---|---|---|---|---|
1 | Goedel-Prover-SFT * | Whole Proof Generation | 7 | 512
2 | ABEL | Tree Search Method | 7 | 596
3 | Goedel-Prover-SFT * | Whole Proof Generation | 6 | 32
3 | InternLM2.5-StepProver * | Tree Search Method | 6 | 2×32×600
5 | InternLM 7B | Whole Proof Generation | 4 | 4096
6 | GPT-4o | Whole Proof Generation | 1 | 10
7 | COPRA (GPT-4o) * | Whole Proof Generation | 1 | 1
8 | ReProver w/ retrieval * | Tree Search Method | 0 | 1
9 | ReProver w/o retrieval * | Tree Search Method | 0 | 1
On the challenging PutnamBench dataset, Goedel-Prover-SFT achieves new state-of-the-art performance, solving 7 of 644 problems at Pass@512 and ranking first on the PutnamBench leaderboard. * indicates an open-source model.
Iteration | Statements: Lean-workbook | Statements: Formalized | Training data: Lean-workbook solved | Training data: Formalized proofs | Training data: Mathlib |
---|---|---|---|---|---|
Iter-0 | 140K | 0 | 20.6K | 0 | 0 |
Iter-1 | 140K | 140K | 20.6K | 72.4K | 0 |
Iter-2 | 140K | 270K | 23.0K | 128.7K | 0 |
Iter-3 | 140K | 270K | 24.4K | 161.2K | 0 |
Iter-4 | 140K | 882K | 25.4K | 425.8K | 0 |
Iter-5 | 140K | 882K | 27.0K | 436.5K | 0 |
Iter-6 | 140K | 882K | 27.8K | 443.2K | 104K |
Iter-7 | 140K | 1.64M | 28.8K | 887.7K | 104K |
Iter-8 | 140K | 1.64M | 29.7K | 915.7K | 104K |
Iter-9 | 140K | 1.64M | 30.3K | 928.2K | 104K |
The model's performance on the four evaluation datasets during iterative training is shown in Figures 1-4; the trends demonstrate consistent improvement across datasets as the iterations proceed.
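The iterative procedure summarized in the table above can be sketched as a simple expert-iteration loop: sample candidate proofs with the current prover, keep only those that Lean verifies, and fine-tune on the accumulated verified proofs. The code below is a schematic under these assumptions; `sample_proofs`, `lean_verifies`, and `finetune` are hypothetical interfaces (with illustrative defaults), not the actual training code.

```python
# Schematic expert-iteration loop: alternate between collecting Lean-verified
# proofs with the current prover and retraining the prover on them.
from typing import Callable, Dict, List


def iterative_training(
    prover,
    statements: List[str],
    sample_proofs: Callable[..., List[str]],    # hypothetical: sample candidate proofs
    lean_verifies: Callable[[str, str], bool],  # hypothetical: Lean 4 compilation check
    finetune: Callable[..., object],            # hypothetical: supervised fine-tuning step
    num_iters: int = 9,                         # illustrative default
    samples_per_stmt: int = 32,                 # illustrative default
):
    verified: Dict[str, str] = {}  # statement -> one verified proof
    for it in range(num_iters):
        for stmt in statements:
            if stmt in verified:   # skip statements solved in earlier rounds
                continue
            for proof in sample_proofs(prover, stmt, samples_per_stmt):
                if lean_verifies(stmt, proof):
                    verified[stmt] = proof
                    break
        # retrain on all verified (statement, proof) pairs accumulated so far
        prover = finetune(prover, list(verified.items()))
        print(f"iteration {it}: {len(verified)} statements solved")
    return prover, verified
```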
@article{lin2024Goedelprover,
title={Goedel-Prover: A New Frontier in Open-source Automated Theorem Proving},
author={Yong Lin and Shange Tang and Bohan Lyu and Jiayun Wu and Hongzhou Lin and Kaiyu Yang and Jia Li and Mengzhou Xia and Danqi Chen and Sanjeev Arora and Chi Jin},
}