
Claude 3.5 Sonnet vs Gemini 2.0 Flash

Anthropic vs Google

Google's Gemini 2.0 Flash beats Anthropic's Claude 3.5 Sonnet on both price and benchmarks — here's the full breakdown.

Data last updated March 4, 2026

Gemini 2.0 Flash is the clear winner — cheaper and higher-scoring than Claude 3.5 Sonnet. Claude 3.5 Sonnet costs $0.03 per request vs $0.0009 for Gemini 2.0 Flash (at 5K input / 1K output tokens). Claude 3.5 Sonnet's only edge might be vendor-specific features or API ecosystem.

Benchmarks & Performance

Metric               Claude 3.5 Sonnet   Gemini 2.0 Flash
Intelligence Index   15.9                18.5
MMLU-Pro             0.8                 0.8
GPQA                 0.6                 0.6
AIME                 0.2                 0.3
Context window       200,000             1,000,000

Pricing per 1M Tokens

List prices as published by the provider. Not adjusted for token efficiency.

Metric                        Claude 3.5 Sonnet   Gemini 2.0 Flash
Input price / 1M tokens       $3.00               $0.10
Output price / 1M tokens      $15.00              $0.40
Cache hit price / 1M tokens   $0.30               $0.02
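
The per-request figures quoted in this article follow directly from these list prices. A minimal sketch, assuming the article's typical request of 5,000 input and 1,000 output tokens (function name is ours, not an API):

```python
# Per-request cost from per-1M-token list prices, at the article's
# assumed "typical request" of 5,000 input + 1,000 output tokens.

def request_cost(input_price, output_price,
                 input_tokens=5_000, output_tokens=1_000):
    """Return USD cost of one request at the given per-1M-token prices."""
    return (input_tokens * input_price + output_tokens * output_price) / 1e6

claude = request_cost(3.00, 15.00)   # $0.03 per request
gemini = request_cost(0.10, 0.40)    # $0.0009 per request
print(f"Claude 3.5 Sonnet: ${claude:.4f}")
print(f"Gemini 2.0 Flash:  ${gemini:.4f}")
print(f"Ratio: {claude / gemini:.0f}x")  # ~33x
```

Swapping in your own token counts reproduces the cost gap for any workload shape.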

Intelligence vs Price

[Scatter chart: Intelligence Index vs. typical request cost (5K input + 1K output), log-scale price axis from $0.001 to $0.05. Claude 3.5 Sonnet and Gemini 2.0 Flash are highlighted among other models including Gemini 2.5 Pro, DeepSeek R1 0528, GPT-4.1, GPT-4.1 mini, Claude 4 Sonnet, Claude 4.5 Sonnet, Gemini 2.5 Flash, and Grok 3 mini Reasoning.]

Value Analysis

Cost per IQ point based on a typical request of 5,000 input and 1,000 output tokens.

Cheaper (list price):        Gemini 2.0 Flash
Higher benchmarks:           Gemini 2.0 Flash
Better value ($/IQ point):   Gemini 2.0 Flash

Claude 3.5 Sonnet:  $0.0019 / IQ point
Gemini 2.0 Flash:   $0.000049 / IQ point
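
The $/IQ-point figures above are simply typical request cost divided by Intelligence Index. A sketch using the prices and index values from this article (the function name is ours):

```python
# "Cost per IQ point": USD for a typical request (5K input + 1K output
# tokens) divided by the model's Intelligence Index.

def cost_per_point(input_price, output_price, index,
                   input_tokens=5_000, output_tokens=1_000):
    """USD per Intelligence Index point for one typical request."""
    cost = (input_tokens * input_price + output_tokens * output_price) / 1e6
    return cost / index

claude = cost_per_point(3.00, 15.00, 15.9)   # ~$0.0019 / point
gemini = cost_per_point(0.10, 0.40, 18.5)    # ~$0.000049 / point
print(f"Claude 3.5 Sonnet: ${claude:.6f} / point")
print(f"Gemini 2.0 Flash:  ${gemini:.6f} / point")
```

Note this metric inherits the 5:1 token-ratio assumption; a workload with a different shape will shift both values but not, here, the ranking.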

Frequently Asked Questions

How much cheaper is Gemini 2.0 Flash than Claude 3.5 Sonnet?

Gemini 2.0 Flash is dramatically cheaper: about 33x less per request than Claude 3.5 Sonnet. It is cheaper on both input ($0.10/M vs $3.00/M) and output ($0.40/M vs $15.00/M), which compounds into significant savings in production workloads. This comparison assumes a typical request of 5,000 input and 1,000 output tokens (5:1 ratio). Actual ratios vary by workload: chat and completion tasks typically run 2:1, code review around 3:1, document analysis and summarization 10:1 to 50:1, and embedding workloads are pure input with no output tokens.
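
To see how the input:output ratio moves the gap, here is a sketch over the workload shapes named above. The specific token counts are illustrative assumptions, not measured figures:

```python
# How the per-request price gap shifts with the input:output token ratio.
# Ratios follow the article's examples; absolute counts are assumptions.

def cost(input_tokens, output_tokens, input_price, output_price):
    """USD per request at the given per-1M-token prices."""
    return (input_tokens * input_price + output_tokens * output_price) / 1e6

workloads = {
    "chat (2:1)":           (2_000,  1_000),
    "code review (3:1)":    (3_000,  1_000),
    "summarization (10:1)": (10_000, 1_000),
}

for name, (inp, out) in workloads.items():
    claude = cost(inp, out, 3.00, 15.00)
    gemini = cost(inp, out, 0.10, 0.40)
    print(f"{name}: Claude ${claude:.4f} vs Gemini ${gemini:.4f} "
          f"({claude / gemini:.0f}x)")
```

Across these shapes the multiple stays in the low-30x range, since both models price output at roughly 4-5x their input rate.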

How much does Gemini 2.0 Flash outperform Claude 3.5 Sonnet on benchmarks?

Gemini 2.0 Flash scores higher overall (Intelligence Index 18.5 vs 15.9). It leads on AIME (0.33 vs 0.16), while the two models are within 5% of each other on MMLU-Pro and GPQA. If mathematical reasoning matters, Gemini 2.0 Flash's AIME score gives it the edge.

How much more context can Gemini 2.0 Flash handle than Claude 3.5 Sonnet?

Gemini 2.0 Flash has a much larger context window: 1,000,000 tokens vs 200,000 for Claude 3.5 Sonnet, roughly 1,333 vs 266 pages of text. Gemini 2.0 Flash's window can hold entire codebases or book-length documents; Claude 3.5 Sonnet is better suited to shorter inputs.
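
The page counts above can be reproduced with a rough tokens-per-page constant. The 750 figure here is an assumption (actual density varies with formatting and language):

```python
# Rough "pages of text" conversion for context-window sizes.
# 750 tokens/page is an approximation, not a vendor figure.

TOKENS_PER_PAGE = 750

for model, window in [("Claude 3.5 Sonnet", 200_000),
                      ("Gemini 2.0 Flash", 1_000_000)]:
    print(f"{model}: {window:,} tokens ≈ {window // TOKENS_PER_PAGE:,} pages")
```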

Is Gemini 2.0 Flash worth choosing over Claude 3.5 Sonnet on value alone?

Gemini 2.0 Flash offers dramatically better value — $0.000049 per intelligence point vs Claude 3.5 Sonnet at $0.0019. Gemini 2.0 Flash is both cheaper and higher-scoring, making it the clear value pick. You don't sacrifice quality to save money with Gemini 2.0 Flash.

How does prompt caching affect Claude 3.5 Sonnet and Gemini 2.0 Flash pricing?

With prompt caching, Gemini 2.0 Flash is dramatically cheaper — 31x less per request than Claude 3.5 Sonnet. Caching saves 45% on Claude 3.5 Sonnet and 42% on Gemini 2.0 Flash compared to standard input prices. Both models benefit from caching at similar rates, so the uncached price comparison holds.
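
A sketch of the cached-request math, using the cache-hit prices from the pricing table. It assumes the full 5K-token input is a cache hit (real prompts cache only a shared prefix), and small differences from the article's percentages come from rounding in the listed prices:

```python
# Per-request cost when every input token is billed at the cache-hit rate,
# for the article's typical request of 5K input + 1K output tokens.

def cached_cost(cache_price, output_price,
                input_tokens=5_000, output_tokens=1_000):
    """USD per request with a fully cached prompt."""
    return (input_tokens * cache_price + output_tokens * output_price) / 1e6

def savings(full, cached):
    """Fraction saved vs the uncached per-request cost."""
    return 1 - cached / full

claude_hit = cached_cost(0.30, 15.00)   # vs $0.03 uncached
gemini_hit = cached_cost(0.02, 0.40)    # vs $0.0009 uncached
print(f"Claude: ${claude_hit:.4f}/req, {savings(0.03, claude_hit):.0%} saved")
print(f"Gemini: ${gemini_hit:.4f}/req, {savings(0.0009, gemini_hit):.0%} saved")
```

Because output tokens are never cached, the savings ceiling is set by how input-heavy the workload is; a 50:1 summarization job benefits far more than 2:1 chat.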

Pricing verified against official vendor documentation. Updated daily. See our methodology.
