Model Comparison

Claude 4 Sonnet (Non-reasoning) vs GPT-4.1 mini

Anthropic vs OpenAI

OpenAI's GPT-4.1 mini costs less per intelligence point, even though Anthropic's Claude 4 Sonnet (Non-reasoning) scores higher.

Data last updated March 4, 2026

GPT-4.1 mini delivers more intelligence per dollar, while Claude 4 Sonnet (Non-reasoning) leads on raw benchmark scores. Claude 4 Sonnet (Non-reasoning) costs $0.03 per request vs $0.0036 for GPT-4.1 mini (at 5K input / 1K output tokens). GPT-4.1 mini scores proportionally higher on mathematical reasoning (AIME: 0.43), while Claude 4 Sonnet (Non-reasoning)'s scores skew toward general knowledge (MMLU-Pro: 0.84). The question is whether Claude 4 Sonnet (Non-reasoning)'s higher scores justify the 8x price premium.

Benchmarks & Performance

Metric                    Claude 4 Sonnet (Non-reasoning)    GPT-4.1 mini
Intelligence Index        33.0                               22.9
MMLU-Pro                  0.84                               0.78
GPQA                      0.7                                0.7
AIME                      0.4                                0.43
Output tokens/sec         48.5                               70.6
Time to first token       1.03 s                             0.48 s
Context window (tokens)   200,000                            1,047,576

Pricing per 1M Tokens

List prices as published by the provider. Not adjusted for token efficiency.

Metric                        Claude 4 Sonnet (Non-reasoning)    GPT-4.1 mini
Input price / 1M tokens       $3.00                              $0.40
Output price / 1M tokens      $15.00                             $1.60
Cache hit price / 1M tokens   $0.30                              $0.10
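The per-request figures quoted throughout this page follow directly from these list prices. A minimal sketch of the arithmetic, using a hypothetical `request_cost` helper (the prices and the 5K input / 1K output split come from this page, not from any vendor SDK):

```python
# Hypothetical helper: estimate the cost of one request from list prices
# quoted in USD per 1M tokens.
def request_cost(input_price, output_price, input_tokens=5_000, output_tokens=1_000):
    return (input_tokens * input_price + output_tokens * output_price) / 1_000_000

claude = request_cost(3.00, 15.00)  # Claude 4 Sonnet (Non-reasoning)
mini = request_cost(0.40, 1.60)     # GPT-4.1 mini
print(f"${claude:.4f} vs ${mini:.4f}")  # $0.0300 vs $0.0036
```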

Intelligence vs Price

[Scatter chart: Intelligence Index vs typical request cost (5K input + 1K output), price axis from $0.002 to $0.05. Highlights Claude 4 Sonnet (Non-reasoning) and GPT-4.1 mini against other models: Gemini 2.5 Pro, DeepSeek R1 0528, GPT-4.1, Claude 4.5 Sonn…, Gemini 2.5 Flas…, Grok 3 mini Rea…]

Value Analysis

Cost per IQ point based on a typical request of 5,000 input and 1,000 output tokens.

Cheaper (list price): GPT-4.1 mini
Higher benchmarks: Claude 4 Sonnet (Non-reasoning)
Better value ($/IQ point): GPT-4.1 mini

Claude 4 Sonnet (Non-reasoning): $0.0009 / IQ point
GPT-4.1 mini: $0.0002 / IQ point
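These value figures are simply the typical request cost divided by the Intelligence Index. A quick sketch of that division, with a hypothetical `cost_per_iq` helper (the request costs and index scores come from the tables on this page):

```python
# Hypothetical helper: dollars of request cost per Intelligence Index point.
def cost_per_iq(request_cost_usd, intelligence_index):
    return request_cost_usd / intelligence_index

print(round(cost_per_iq(0.0300, 33.0), 4))  # Claude 4 Sonnet (Non-reasoning): 0.0009
print(round(cost_per_iq(0.0036, 22.9), 4))  # GPT-4.1 mini: 0.0002
```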

Frequently Asked Questions

How much cheaper is GPT-4.1 mini than Claude 4 Sonnet (Non-reasoning)?

GPT-4.1 mini is dramatically cheaper, at roughly one eighth the per-request cost of Claude 4 Sonnet (Non-reasoning). It is cheaper on both input ($0.40/M vs $3.00/M) and output ($1.60/M vs $15.00/M), which adds up to significant savings in production workloads. This comparison assumes a typical request of 5,000 input and 1,000 output tokens (5:1 ratio). Actual ratios vary by workload: chat and completion tasks typically run 2:1, code review around 3:1, document analysis and summarization 10:1 to 50:1, and embedding workloads are pure input with no output tokens.
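Because GPT-4.1 mini is cheaper on both input and output, the workload ratio shifts the absolute costs but not the overall conclusion. A sketch with illustrative token counts for a few of the ratios above (the `cost` helper and the workload labels are our assumptions; the prices are from the pricing table):

```python
# Prices in USD per 1M tokens, (input, output), from the pricing table above.
PRICES = {
    "Claude 4 Sonnet (Non-reasoning)": (3.00, 15.00),
    "GPT-4.1 mini": (0.40, 1.60),
}

def cost(model, input_tokens, output_tokens):
    inp, out = PRICES[model]
    return (input_tokens * inp + output_tokens * out) / 1_000_000

# Illustrative token counts for the workload ratios mentioned above.
workloads = {
    "chat (2:1)": (2_000, 1_000),
    "code review (3:1)": (3_000, 1_000),
    "summarization (10:1)": (10_000, 1_000),
}
for name, (i, o) in workloads.items():
    ratio = cost("Claude 4 Sonnet (Non-reasoning)", i, o) / cost("GPT-4.1 mini", i, o)
    print(f"{name}: Claude costs {ratio:.1f}x more per request")
```

Across these ratios the premium stays in the 8-9x range, since the input price gap (7.5x) and the output price gap (9.4x) bracket it.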

How much does Claude 4 Sonnet (Non-reasoning) outperform GPT-4.1 mini on benchmarks?

Claude 4 Sonnet (Non-reasoning) scores higher overall (33.0 vs 22.9). Claude 4 Sonnet (Non-reasoning) leads on MMLU-Pro (0.84 vs 0.78), with both within 5% on GPQA and AIME. GPT-4.1 mini scores proportionally higher on AIME (mathematical reasoning) relative to its MMLU-Pro, while Claude 4 Sonnet (Non-reasoning)'s scores are more weighted toward general knowledge. Claude 4 Sonnet (Non-reasoning)'s higher MMLU-Pro score suggests better performance on general-purpose tasks.

Which generates output faster, Claude 4 Sonnet (Non-reasoning) or GPT-4.1 mini?

GPT-4.1 mini is 46% faster at 70.6 tokens per second compared to Claude 4 Sonnet (Non-reasoning) at 48.5 tokens per second. GPT-4.1 mini also starts generating sooner at 0.48s vs 1.03s time to first token. The speed difference matters for chatbots but is less relevant in batch processing.
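For an interactive workload, end-to-end latency is roughly time to first token plus output length divided by throughput. A back-of-envelope sketch for a 1,000-token response (the `generation_time` helper is hypothetical; the speed figures are from the benchmarks table):

```python
# Hypothetical wall-clock estimate: TTFT plus streaming time for the output.
def generation_time(ttft_s, tokens_per_sec, output_tokens=1_000):
    return ttft_s + output_tokens / tokens_per_sec

claude = generation_time(1.03, 48.5)  # ~21.6 s
mini = generation_time(0.48, 70.6)    # ~14.6 s
```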

How much more context can GPT-4.1 mini handle than Claude 4 Sonnet (Non-reasoning)?

GPT-4.1 mini has a much larger context window — 1,047,576 tokens vs Claude 4 Sonnet (Non-reasoning) at 200,000 tokens. That's roughly 1,396 vs 266 pages of text. GPT-4.1 mini's window can handle entire codebases or book-length documents; Claude 4 Sonnet (Non-reasoning) works better for shorter inputs.
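The page estimates above correspond to roughly 750 tokens per page of English text, an assumption of ours that reproduces the quoted figures:

```python
TOKENS_PER_PAGE = 750  # rough assumption: ~750 tokens per page of English text

def pages(context_tokens, tokens_per_page=TOKENS_PER_PAGE):
    return context_tokens // tokens_per_page

print(pages(1_047_576))  # 1396 (GPT-4.1 mini)
print(pages(200_000))    # 266 (Claude 4 Sonnet (Non-reasoning))
```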

Is GPT-4.1 mini worth choosing over Claude 4 Sonnet (Non-reasoning) on value alone?

GPT-4.1 mini offers dramatically better value — $0.0002 per intelligence point vs Claude 4 Sonnet (Non-reasoning) at $0.0009. GPT-4.1 mini is cheaper, which offsets Claude 4 Sonnet (Non-reasoning)'s higher benchmark scores to deliver more value per dollar. If raw benchmark scores matter less than cost for your use case, GPT-4.1 mini is the efficient choice.

How does prompt caching affect Claude 4 Sonnet (Non-reasoning) and GPT-4.1 mini pricing?

With prompt caching in place, GPT-4.1 mini remains roughly 8x cheaper per request than Claude 4 Sonnet (Non-reasoning). On the typical 5K input / 1K output request with a fully cached prompt, caching cuts the total request cost by about 45% for Claude 4 Sonnet (Non-reasoning) and about 42% for GPT-4.1 mini. Both models benefit from caching at similar rates, so the uncached price comparison holds.
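The caching savings can be reproduced from the cache-hit prices in the pricing table. A sketch assuming the full 5K-token prompt is served from cache (the `cached_request_cost` helper is hypothetical):

```python
# Hypothetical helper: request cost when a fraction of the input prompt
# is billed at the cache-hit price instead of the standard input price.
def cached_request_cost(input_price, cache_price, output_price,
                        input_tokens=5_000, output_tokens=1_000, hit_rate=1.0):
    cached = input_tokens * hit_rate
    fresh = input_tokens - cached
    return (cached * cache_price + fresh * input_price
            + output_tokens * output_price) / 1_000_000

claude = cached_request_cost(3.00, 0.30, 15.00)  # 0.0165, ~45% below $0.0300
mini = cached_request_cost(0.40, 0.10, 1.60)     # 0.0021, ~42% below $0.0036
```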

Pricing verified against official vendor documentation. Updated daily. See our methodology.
