Model Comparison
OpenAI's GPT-4.1 mini beats Anthropic's Claude 3.5 Sonnet on both price and benchmarks — here's the full breakdown.
Data last updated March 4, 2026
GPT-4.1 mini is the clear winner: cheaper and higher-scoring than Claude 3.5 Sonnet. Claude 3.5 Sonnet costs $0.030 per request vs $0.0036 for GPT-4.1 mini (at 5K input / 1K output tokens). GPT-4.1 mini scores proportionally higher on mathematical reasoning (AIME: 0.43), while Claude 3.5 Sonnet's scores skew toward general knowledge (MMLU-Pro: 0.77). Claude 3.5 Sonnet's only remaining edge is vendor-specific features or its API ecosystem.
| Metric | Claude 3.5 Sonnet | GPT-4.1 mini |
|---|---|---|
| Intelligence Index (composite of MMLU-Pro, GPQA, and AIME; higher is better) | 15.9 | 22.9 |
| MMLU-Pro (general knowledge and reasoning; higher is better) | 0.77 | 0.8 |
| GPQA (graduate-level science questions; higher is better) | 0.6 | 0.66 |
| AIME (mathematical problem solving; higher is better) | 0.16 | 0.43 |
| Context window (max tokens per request; larger handles more text) | 200,000 | 1,047,576 |
List prices as published by each provider. Not adjusted for token efficiency.
| Metric | Claude 3.5 Sonnet | GPT-4.1 mini |
|---|---|---|
| Input price / 1M tokens | $3.00 | $0.40 |
| Output price / 1M tokens | $15.00 | $1.60 |
| Cache hit price / 1M tokens | $0.30 | $0.10 |
Cost per IQ point based on a typical request of 5,000 input and 1,000 output tokens.
| Category | Winner |
|---|---|
| Cheaper (list price) | GPT-4.1 mini |
| Higher benchmarks | GPT-4.1 mini |
| Better value ($/IQ point) | GPT-4.1 mini ($0.0002/IQ point vs $0.0019 for Claude 3.5 Sonnet) |
GPT-4.1 mini is dramatically cheaper: about 8x less per request than Claude 3.5 Sonnet. GPT-4.1 mini is cheaper on both input ($0.40/M vs $3.00/M) and output ($1.60/M vs $15.00/M). At roughly an eighth of the cost per request, the savings compound quickly in production workloads. This comparison assumes a typical request of 5,000 input and 1,000 output tokens (5:1 ratio). Actual ratios vary by workload: chat and completion tasks typically run 2:1, code review around 3:1, document analysis and summarization 10:1 to 50:1, and embedding workloads are pure input with no output tokens.
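The per-request figures above follow directly from the list prices. A minimal sketch of the arithmetic, using the article's assumed request shape (the helper name is ours, not a vendor API):

```python
def request_cost(input_price_per_m, output_price_per_m,
                 input_tokens=5_000, output_tokens=1_000):
    """Dollar cost of one request at the given per-million-token list prices."""
    return (input_tokens * input_price_per_m
            + output_tokens * output_price_per_m) / 1_000_000

claude = request_cost(3.00, 15.00)   # Claude 3.5 Sonnet list prices
gpt = request_cost(0.40, 1.60)       # GPT-4.1 mini list prices

print(f"Claude 3.5 Sonnet: ${claude:.4f}")  # $0.0300
print(f"GPT-4.1 mini:      ${gpt:.4f}")     # $0.0036
print(f"Ratio: {claude / gpt:.1f}x")        # 8.3x
```

Changing the token ratio changes the winner's margin, not the winner: GPT-4.1 mini is cheaper on both input and output, so it wins at any mix.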
GPT-4.1 mini scores higher overall (22.9 vs 15.9). GPT-4.1 mini leads on GPQA (0.66 vs 0.6) and AIME (0.43 vs 0.16), with the two models within 5% of each other on MMLU-Pro. GPT-4.1 mini scores proportionally higher on AIME (mathematical reasoning) relative to its MMLU-Pro score, while Claude 3.5 Sonnet's scores are more weighted toward general knowledge. If mathematical reasoning matters, GPT-4.1 mini's AIME score of 0.43 gives it an edge.
GPT-4.1 mini has a much larger context window — 1,047,576 tokens vs Claude 3.5 Sonnet at 200,000 tokens. That's roughly 1,396 vs 266 pages of text. GPT-4.1 mini's window can handle entire codebases or book-length documents; Claude 3.5 Sonnet works better for shorter inputs.
GPT-4.1 mini offers dramatically better value — $0.0002 per intelligence point vs Claude 3.5 Sonnet at $0.0019. GPT-4.1 mini is both cheaper and higher-scoring, making it the clear value pick. You don't sacrifice quality to save money with GPT-4.1 mini.
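The value metric is just the blended per-request cost divided by the Intelligence Index. A sketch of that calculation (function name is ours):

```python
def cost_per_point(cost_per_request, intelligence_index):
    """Dollars per Intelligence Index point at the assumed request shape."""
    return cost_per_request / intelligence_index

claude = cost_per_point(0.0300, 15.9)  # Claude 3.5 Sonnet
gpt = cost_per_point(0.0036, 22.9)     # GPT-4.1 mini

print(f"Claude 3.5 Sonnet: ${claude:.4f}/point")  # ~$0.0019
print(f"GPT-4.1 mini:      ${gpt:.4f}/point")     # ~$0.0002
```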
With prompt caching, GPT-4.1 mini is dramatically cheaper — 8x less per request than Claude 3.5 Sonnet. Caching saves 45% on Claude 3.5 Sonnet and 42% on GPT-4.1 mini compared to standard input prices. Both models benefit from caching at similar rates, so the uncached price comparison holds.
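The caching percentages assume the best case, where every input token is a cache hit; real hit rates vary by workload. A sketch of the arithmetic:

```python
def cached_savings(input_p, cache_p, output_p, in_tok=5_000, out_tok=1_000):
    """Per-request cost with a 100% cache hit rate, and the fractional saving
    vs paying full input price. Prices are dollars per million tokens."""
    uncached = (in_tok * input_p + out_tok * output_p) / 1_000_000
    cached = (in_tok * cache_p + out_tok * output_p) / 1_000_000
    return cached, 1 - cached / uncached

claude_cost, claude_save = cached_savings(3.00, 0.30, 15.00)  # Claude 3.5 Sonnet
gpt_cost, gpt_save = cached_savings(0.40, 0.10, 1.60)         # GPT-4.1 mini

print(f"Claude 3.5 Sonnet: ${claude_cost:.4f}/request, {claude_save:.0%} saved")  # 45%
print(f"GPT-4.1 mini:      ${gpt_cost:.4f}/request, {gpt_save:.0%} saved")        # 42%
```

Because output tokens are never cached, workloads with heavier output mixes see smaller percentage savings than shown here.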
Pricing verified against official vendor documentation. Updated daily. See our methodology.