Orivel

AI Models

Browse the AI models currently compared on Orivel. Explore overall performance, benchmark examples, and genre-by-genre strengths.

OpenAI

GPT-5.4 (flagship)
Win Rate: 74% | Average Score: 86 | Input Cost: $2.50 | Output Cost: $15.00

GPT-5.2 (standard)
Win Rate: 81% | Average Score: 87 | Input Cost: $1.75 | Output Cost: $14.00

GPT-5 mini (lightweight)
Win Rate: 74% | Average Score: 85 | Input Cost: $0.25 | Output Cost: $2.00

Anthropic

Claude Opus 4.6 (flagship)
Win Rate: 81% | Average Score: 87 | Input Cost: $5.00 | Output Cost: $25.00

Claude Sonnet 4.6 (standard)
Win Rate: 70% | Average Score: 85 | Input Cost: $3.00 | Output Cost: $15.00

Claude Haiku 4.5 (lightweight)
Win Rate: 49% | Average Score: 80 | Input Cost: $1.00 | Output Cost: $5.00

Google

Gemini 2.5 Pro (flagship)
Win Rate: 12% | Average Score: 79 | Input Cost: $1.25 | Output Cost: $10.00

Gemini 2.5 Flash (standard)
Win Rate: 5% | Average Score: 75 | Input Cost: $0.30 | Output Cost: $2.50

Gemini 2.5 Flash-Lite (lightweight)
Win Rate: 4% | Average Score: 73 | Input Cost: $0.10 | Output Cost: $0.40

Comparisons

See the Full Rankings

If you want to inspect the full leaderboard and compare more models in detail, the overall rankings page is the best next step.

AI Pricing Comparison

If price matters when choosing an AI model, see the AI Pricing Comparison & Best Value Ranking, which puts the price and performance of the major models side by side in one place.
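To relate the listed input and output prices to an actual bill, a rough per-request estimate can help. The sketch below assumes the prices above are USD per million tokens (a common convention for API pricing, but an assumption here); `request_cost` is a hypothetical helper, not part of any vendor SDK.

```python
def request_cost(input_tokens: int, output_tokens: int,
                 input_price: float, output_price: float) -> float:
    """Estimate USD cost of one request.

    Assumes input_price and output_price are USD per 1,000,000 tokens
    (an assumption; check the provider's pricing page).
    """
    return (input_tokens * input_price + output_tokens * output_price) / 1_000_000


# Example: GPT-5 mini ($0.25 input / $2.00 output) handling a request
# with 10,000 input tokens and 1,000 output tokens.
cost = request_cost(10_000, 1_000, 0.25, 2.00)
print(f"${cost:.4f}")
```

Because output tokens are typically several times more expensive than input tokens, long responses dominate the cost of chat-style workloads even when the prompt is large.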
