Value ranking
Best value on Terminal-Bench 2
Terminal-Bench 2 consists of long-horizon shell-and-filesystem tasks executed in a sandboxed terminal, scored on whether the agent's final state matches a target state. It tests practical tool use for everyday DevOps and data-wrangling work, and is one of the hardest agentic benchmarks available today.
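To illustrate what final-state grading means, here is a minimal sketch in Python. It is not the actual Terminal-Bench harness (which may also check command output, permissions, and other criteria); the `snapshot` and `grade` helpers are hypothetical names, assuming a grader that simply diffs the sandbox filesystem against a target snapshot:

```python
import hashlib
from pathlib import Path


def snapshot(root: Path) -> dict[str, str]:
    """Map each file's path (relative to root) to a SHA-256 content hash."""
    return {
        str(p.relative_to(root)): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in sorted(root.rglob("*"))
        if p.is_file()
    }


def grade(sandbox: Path, target: Path) -> bool:
    """Pass/fail: the agent's final filesystem must exactly match the target."""
    return snapshot(sandbox) == snapshot(target)
```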
“Value” is the normalized benchmark score (0–100 within this leaderboard cohort) divided by the input price per million tokens. A higher value means more capability per dollar on this axis only; always sanity-check latency, context length, and performance on your real workload before choosing a model.
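A minimal sketch of the value computation, checked against the first row of the table below (score and price are taken from that row; the small gap versus the listed 249.63 presumably comes from the leaderboard dividing an unrounded score):

```python
def value(score: float, input_price_per_m: float) -> float:
    """Benchmark score (0-100) per dollar of input price per million tokens."""
    return score / input_price_per_m


# Gemini 3 Flash row: 74.9 / 0.30 ~= 249.67 (leaderboard shows 249.63,
# likely computed from the unrounded underlying score).
print(round(value(74.9, 0.30), 2))
```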
| Rank | Model | Vendor | Value (score ÷ input $/M) | Score (0–100) | Input price |
|-----:|-------|--------|--------------------------:|--------------:|------------:|
| 1 | Gemini 3 Flash | Google | 249.63 | 74.9 | $0.30/M |
| 2 | DeepSeek V3 | DeepSeek | 147.63 | 39.9 | $0.27/M |
| 3 | GPT-5 mini | OpenAI | 132.20 | 33.0 | $0.25/M |
| 4 | Gemini 3 Pro | Google | 77.96 | 97.5 | $1.25/M |
| 5 | Kimi K2 | Moonshot (Kimi) | 74.93 | 45.0 | $0.60/M |
| 6 | GPT-5.5 | OpenAI | 66.67 | 100.0 | $1.50/M |
| 7 | GPT-5.4 | OpenAI | 66.48 | 99.7 | $1.50/M |
| 8 | GLM-4.7 | Zhipu AI (GLM) | 62.12 | 31.1 | $0.50/M |
| 9 | GPT-5.2 | OpenAI | 60.59 | 75.7 | $1.25/M |
| 10 | GPT-5 | OpenAI | 43.23 | 54.0 | $1.25/M |
| 11 | GPT-5.1 | OpenAI | 40.97 | 51.2 | $1.25/M |
| 12 | GLM-4.6 | Zhipu AI (GLM) | 36.88 | 18.4 | $0.50/M |
| 13 | Claude Haiku 4.5 | Anthropic | 34.04 | 34.0 | $1.00/M |
| 14 | Gemini 2.5 Flash | Google | 26.47 | 7.9 | $0.30/M |
| 15 | Gemini 2.5 Pro | Google | 23.94 | 29.9 | $1.25/M |
| 16 | Claude Sonnet 4.6 | Anthropic | 16.55 | 49.6 | $3.00/M |
| 17 | Claude Sonnet 4.5 | Anthropic | 14.80 | 44.4 | $3.00/M |
| 18 | Claude Opus 4.6 | Anthropic | 6.46 | 96.9 | $15.00/M |
| 19 | Claude Opus 4.5 | Anthropic | 4.88 | 73.2 | $15.00/M |
| 20 | Grok 4 | xAI | 4.45 | 22.3 | $5.00/M |
| 21 | Claude Opus 4 | Anthropic | 2.51 | 37.6 | $15.00/M |
| 22 | GPT-5 nano | OpenAI | 0.00 | 0.0 | $0.05/M |
AI Model Analyzer does not recommend specific vendors; rankings are derived from public data only.