Code generation & program synthesis — Benchmark Sources & Consensus
Generating or reconstructing working programs from specs, binaries, or natural-language descriptions. Frontier-difficulty benchmarks where state-of-the-art is still well under 10%.
Platforms tracked: Claude Cowork · Kilocode · Openclaw · ChatGPT
Consensus (1 source)
ProgramBench (May 2026) shows program reconstruction from binaries remains a frontier capability — 0% of evaluated models fully solve any task, with Claude Opus 4.7 reaching only 3% "almost resolved." Useful as a difficulty ceiling for agentic coding tools; not yet a discriminator between competing platforms.
All Sources
We aggregate published benchmarks; we never run our own tests and never pick winners. Each row links back to the original publication.
| Source | Date | Finding | Methodology | Quality |
|---|---|---|---|---|
| ProgramBench | 2026-05-09 | Claude Opus 4.7 leads at 3% "almost resolved" on 200+ program-reconstruction tasks from compiled binaries; 0% fully solved by any model. Frontier-difficulty. | Reconstruct working source from compiled binary; automated pass/fail on hidden test suite. 200+ tasks across 12 languages. | High · winner: cowork |
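The methodology column describes a strict automated pass/fail check against a hidden test suite. A minimal sketch of such a harness is below; it is an assumption-laden illustration (ProgramBench's actual harness is not public), and for simplicity it runs a Python candidate directly rather than compiling one of the 12 supported languages:

```python
import subprocess
import sys

def run_hidden_tests(candidate_path, test_cases, timeout=10):
    """Run a reconstructed program against hidden (stdin, expected stdout)
    pairs. Scoring is strict pass/fail: any crash, non-zero exit, or
    output mismatch fails the entire task."""
    for stdin_data, expected_stdout in test_cases:
        result = subprocess.run(
            [sys.executable, candidate_path],
            input=stdin_data,
            capture_output=True,
            text=True,
            timeout=timeout,
        )
        if result.returncode != 0 or result.stdout != expected_stdout:
            return False  # one mismatch means the task is not solved
    return True
```

Under this all-or-nothing scoring, a reconstruction that passes most but not all hidden cases still counts as unsolved, which is consistent with the 0% "fully solved" figure alongside a nonzero "almost resolved" rate.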
How we work
OpenClawDatabase aggregates and links to published benchmarks; we don't run our own tests, and we don't pick winners. Our weekly benchmark-aggregator routine scans 7+ live leaderboards (OpenRouter, Aider, SWE-bench, GAIA, LMSYS, BigCodeBench, MMLU-Pro) plus relevant Reddit and Hacker News threads, then writes structured entries into /assets/benchmarks.json. Every row here links back to the original publication.
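A structured entry of the kind the routine writes into /assets/benchmarks.json might look like the sketch below. The field names are assumptions for illustration; the actual schema used by the aggregator is not documented here:

```python
import json

# Hypothetical entry shape for /assets/benchmarks.json -- field names
# are assumed, not the aggregator's real schema.
entry = {
    "source": "ProgramBench",
    "date": "2026-05-09",
    "finding": (
        "Claude Opus 4.7 leads at 3% 'almost resolved'; "
        "0% of tasks fully solved by any model."
    ),
    "methodology": (
        "Reconstruct working source from compiled binary; "
        "automated pass/fail on hidden test suite."
    ),
    "tasks": "200+",
    "quality": "high",
}

# Serialize the entry the way a weekly routine might append it.
serialized = json.dumps(entry, indent=2)
```

Keeping every row as a small self-describing object like this is what lets each table entry link back to its original publication.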