Model Statistics

[Chart: Weekly Token Usage by Model]

Below is a list of the AI models available on CodingFleet, along with statistics and benchmarks for the selected time period. The statistics are based on actual usage by CodingFleet users.

Legacy models are shown in the list but are grayed out and disabled; these models are no longer available.

The cost of a model is the number of credits required to use it in a single request. For Unlimited and Elite users, usage of 1-cost models is unlimited, while usage of other models is limited as shown on the pricing page.
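To make the credit arithmetic concrete, here is a minimal sketch of the rules just described. This is hypothetical illustration code, not CodingFleet's billing implementation; the function names and plan handling are assumptions, and the example costs are taken from the table below.

```python
# Minimal sketch of the credit rules described above (hypothetical code,
# not CodingFleet's billing implementation).

MODEL_COSTS = {
    "GPT-5 Mini": 1,                  # 1-cost model
    "Claude Sonnet 4.5 Thinking": 7,  # premium (cost >= 2)
    "GPT-5 Pro": 70,                  # premium (cost >= 2)
}

def is_premium(model: str) -> bool:
    # Models with a cost of 2 or higher are considered premium.
    return MODEL_COSTS[model] >= 2

def credits_for_request(model: str, plan: str) -> int:
    # A request consumes the model's full credit cost, except that
    # Unlimited and Elite users get unlimited usage of 1-cost models.
    cost = MODEL_COSTS[model]
    if plan in {"Unlimited", "Elite"} and cost == 1:
        return 0
    return cost

assert credits_for_request("GPT-5 Mini", "Unlimited") == 0
assert credits_for_request("GPT-5 Pro", "Unlimited") == 70
```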

Column definitions:

- Avg Speed (chars/s): the average speed of the model in characters per second, across all users utilizing the model.
- N° Tokens: the total number of tokens in the prompt and the completion combined, across all users utilizing the model.
- Cost: the credit cost of utilizing the model in a request. Models with a cost of 2 or higher are considered premium models.
- Vote Score: the average vote score given to the model by CodingFleet users, where -1 is the lowest score and 1 is the highest. Displayed only if there are 10 or more votes for the model.
- LiveBench Coding: the model's LiveBench coding score (livebench.ai).
- LiveBench Avg: the model's LiveBench average score (livebench.ai).
- WebDev Arena: the model's score on WebDev Arena, an open-source benchmark evaluating AI capabilities in web development (https://web.lmarena.ai/leaderboard).
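As a concrete reading of these definitions, the sketch below shows one way the per-model aggregates could be computed from raw request logs. The `Request` record and its field names are hypothetical assumptions; only the formulas and the 10-vote display threshold come from the definitions above.

```python
from dataclasses import dataclass
from statistics import mean
from typing import Optional

@dataclass
class Request:              # hypothetical log record, not CodingFleet's schema
    model: str
    chars: int              # characters generated in the completion
    seconds: float          # time taken to generate the completion
    tokens: int             # prompt + completion tokens
    vote: Optional[int]     # -1 or 1 if the user voted, else None

def model_stats(log: list[Request], model: str) -> dict:
    rows = [r for r in log if r.model == model]
    votes = [r.vote for r in rows if r.vote is not None]
    return {
        # Avg Speed: characters per second, averaged across requests.
        "avg_speed": mean(r.chars / r.seconds for r in rows),
        # N° Tokens: prompt + completion tokens, summed across requests.
        "n_tokens": sum(r.tokens for r in rows),
        # Vote Score: mean vote in [-1, 1], shown only with 10+ votes.
        "vote_score": mean(votes) if len(votes) >= 10 else None,
    }
```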
| Model | Avg Speed (chars/s) | N° Tokens |  | Cost | Vote Score | LiveBench Coding | LiveBench Avg | WebDev Arena |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Claude Sonnet 4.5 Thinking | 159.6 | 1.5B | 0 | 7 | 0.4 | 80.4 | 78.3 | 1397.0 |
| GPT-5 Mini | 146.4 | 1.2B | 0 | 1 | 0.6 | - | - | - |
| Claude 4 Sonnet Thinking | 157.8 | 1.2B | 0 | 6 | 0.6 | 73.6 | 72.1 | 1381.8 |
| GPT-5 Thinking High | 71.1 | 871.7M | 0 | 6 | 1.0 | 75.3 | 78.6 | 1480.5 |
| Claude Sonnet 4.5 | 189.2 | 658.9M | 1 | 6 | 0.3 | - | - | - |
| Claude 3.5 Haiku | 205.3 | 604.5M | 1 | 1 | 0.1 | 53.2 | 45.0 | 1133.8 |
| Claude 4 Sonnet | 193.3 | 575.6M | 1 | 6 | 0.8 | 77.5 | 69.7 | 1381.8 |
| DeepSeek V3.2 | 81.2 | 570.7M | 1 | 1 | 0.3 | 68.5 | 62.6 | - |
| GPT-5.2 Thinking High | 121.2 | 541.3M | 1 | 8 | - | 76.1 | 73.6 | 1486.0 |
| GPT-5 Thinking | 91.6 | 517.5M | 1 | 5 | - | 73.3 | 76.5 | - |
| GPT-4.1 | 236.0 | 499.5M | 2 | 4 | 0.8 | 73.2 | 63.0 | 1256.5 |
| Claude Haiku 4.5 | 392.6 | 407.1M | 3 | 2 | - | - | - | - |
| Claude Opus 4.5 Thinking | 189.6 | 402.0M | 3 | 15 | - | 79.7 | 79.8 | 1493.0 |
| GPT-4.1 mini | 216.0 | 380.9M | 3 | 1 | 0.1 | 72.1 | 59.1 | 1194.4 |
| Gemini 2.5 Pro | 181.0 | 344.1M | 3 | 4 | 0.7 | 72.9 | 79.0 | 1409.1 |
| Gemini 2.5 Flash | 357.1 | 336.2M | 3 | 1 | 0.3 | 60.3 | 69.9 | 1304.9 |
| Claude 3.7 Sonnet | 221.8 | 331.5M | 3 | 6 | 0.5 | 74.3 | 58.5 | 1356.7 |
| GPT-5.1 Thinking High | 138.0 | 329.2M | 3 | 7 | - | 72.5 | 72.5 | 1395.0 |
| Gemini 2.0 Flash | 511.4 | 325.3M | 3 | 1 | 0.2 | - | - | - |
| GPT-5 Mini High | 78.5 | 276.1M | 3 | 2 | 1.0 | 66.4 | 72.2 | - |
| GPT-4o | 256.7 | 249.3M | 7 | 5 | -0.2 | 69.3 | 54.0 | 964.0 |
| Claude 3.7 Sonnet Thinking | 161.6 | 229.4M | 3 | 6 | - | 73.2 | 67.4 | 1356.7 |
| GPT-4o Mini | 245.1 | 195.4M | 11 | 1 | -0.1 | 43.2 | 41.3 | - |
| GPT-5 | 201.6 | 186.5M | 3 | 4 | 1.0 | 72.5 | 75.3 | - |
| Claude 3.5 Sonnet | 226.2 | 176.9M | 18 | 6 | 0.0 | 32.3 | 50.8 | 1239.3 |
| GPT-5.1 Thinking | 162.5 | 175.1M | 2 | 5 | 1.0 | - | - | 1395.0 |
| OpenAI o4-mini | 185.7 | 137.8M | 3 | 2 | -0.1 | 74.2 | 66.9 | 1095.1 |
| Claude 4.1 Opus Thinking | 118.5 | 122.4M | 3 | 60 | - | 74.0 | 73.5 | 1476.5 |
| Gemini 3 Pro | 173.3 | 113.9M | 4 | 5 | - | 74.6 | 79.7 | 1473.0 |
| OpenAI o3-high | 103.4 | 106.6M | 5 | 5 | 1.0 | 76.7 | 74.6 | 1188.3 |
| OpenAI o3 | 145.9 | 101.4M | 5 | 4 | 1.0 | 77.9 | 72.0 | 1188.1 |
| OpenAI o3-mini-high | 194.0 | 83.4M | 5 | 2 | - | 65.5 | 71.4 | 1136.2 |
| GPT-5.1 Codex Max High | 181.9 | 71.3M | 6 | 7 | - | 81.4 | 75.2 | - |
| Claude Haiku 4.5 Thinking | 276.0 | 70.0M | 6 | 2 | - | 72.8 | 71.4 | - |
| GPT-5.2 Thinking xHigh | 83.5 | 67.4M | 6 | 10 | - | - | - | - |
| OpenAI o4-mini-high | 123.0 | 67.0M | 6 | 2 | 0.5 | 80.0 | 71.5 | - |
| Grok-4 Fast | 375.4 | 61.1M | 6 | 1 | - | - | - | - |
| DeepSeek V3.2 Thinking | 41.6 | 60.1M | 6 | 1 | 0.0 | 64.6 | 66.6 | - |
| OpenAI o3-mini | 269.2 | 59.2M | 6 | 2 | - | 58.4 | 67.2 | 1091.7 |
| OpenAI o1 Mini | 375.7 | 48.4M | 2 | 2 | 0.5 | 48.1 | 57.8 | 1053.7 |
| Gemini 2.0 Pro (Exp) | 343.4 | 42.3M | 6 | 2 | - | 35.3 | 61.6 | 1088.6 |
| Llama 3.3 70B | 566.3 | 41.1M | 8 | 1 | 0.6 | 24.1 | 45.7 | - |
| Claude 4 Opus Thinking | 125.9 | 35.2M | 8 | 50 | - | 73.3 | 72.9 | 1405.5 |
| Llama 4 Maverick | 315.5 | 29.6M | 8 | 1 | - | 54.2 | 55.2 | 998.5 |
| Llama 3.1 405B | 97.6 | 18.9M | 3 | 2 | - | 42.7 | 52.4 | 813.7 |
| Claude 4.1 Opus | 130.9 | 15.9M | 11 | 50 | - | - | - | - |
| GPT-5.1 | 254.3 | 14.7M | 12 | 4 | - | - | - | - |
| Mistral Large 3 | 149.7 | 14.5M | 10 | 1 | - | 62.9 | 50.3 | - |
| Claude Opus 4.5 | 192.7 | 14.5M | 11 | 13 | - | 77.5 | 76.0 | 1479.0 |
| OpenAI o1 | 264.7 | 14.3M | 11 | 18 | - | - | - | 1045.2 |
| GPT-5.1 Codex Max | 239.4 | 14.3M | 11 | 5 | - | 81.4 | 75.2 | - |
| Grok-4 | 135.4 | 14.1M | 11 | 5 | - | 71.3 | 72.1 | - |
| Claude 4 Opus | 140.3 | 13.6M | 11 | 45 | - | 72.9 | 71.5 | 1405.5 |
| GPT-5.2 Thinking | 164.8 | 13.5M | 11 | 7 | - | - | - | - |
| GLM 4.6 | 157.6 | 12.9M | 11 | 1 | - | - | - | - |
| Grok-4.1 Fast | 129.4 | 12.9M | 11 | 1 | - | - | - | - |
| GPT-5 Pro | 35.0 | 12.9M | 11 | 70 | - | 72.1 | 78.7 | - |
| Codestral (2508) | 719.5 | 12.8M | 3 | 1 | - | - | - | - |
| GPT-5.2 | 236.3 | 11.0M | 10 | 5 | - | - | - | - |
| Gemini 1.5 Pro | 148.8 | 10.5M | 26 | 2 | - | - | - | - |
| Grok-3 Mini (Beta) | 276.5 | 8.0M | 9 | 1 | - | 54.5 | 70.3 | - |
| DeepSeek R1 | 30.5 | 6.6M | 10 | 2 | - | - | - | - |
| Qwen3 235B A22B | 170.5 | 6.2M | 10 | 1 | - | 66.4 | 64.9 | - |
| Grok-3 (Beta) | 256.6 | 4.4M | 11 | 3 | - | 73.6 | 63.2 | - |
| Gemini 2.0 Flash Thinking | 416.5 | 3.2M | 11 | 1 | - | 35.7 | 62.1 | 1030.1 |
| DeepSeek V3 | 79.8 | 3.1M | 11 | 1 | - | - | - | - |
| Qwen2.5-Coder 32B | 307.1 | 2.9M | 7 | 1 | - | 56.9 | 46.2 | 904.1 |
| Qwen3 Coder | 273.5 | 2.9M | 10 | 2 | - | 73.2 | 60.5 | - |
| Grok-2 | 231.5 | 2.8M | 10 | 2 | - | 26.1 | 48.1 | - |
| Mistral Medium 3 | 252.6 | 2.4M | 10 | 1 | - | 61.5 | 56.6 | 1160.1 |
| Llama 3.2 90B | 195.5 | 2.1M | 17 | 1 | - | - | - | - |
| Kimi K2 (0905) | 202.0 | 1.5M | 9 | 1 | - | 71.8 | 62.7 | - |
| Gemini 1.5 Flash | 431.7 | 893.3K | 44 | 1 | - | - | - | - |
| Claude 3 Haiku | 440.0 | 876.1K | 50 | 1 | - | - | - | - |
| Kimi K2 Thinking | 136.4 | 737.5K | 7 | 1 | - | - | - | - |
| OpenAI o1 Preview | 274.5 | 425.1K | 20 | 3 | - | - | - | - |
| Llama 3.1 70B | 561.2 | 392.4K | 24 | 1 | - | 33.5 | 44.9 | - |
| GPT-4 Turbo | 119.3 | 233.0K | 32 | 10 | - | - | - | - |
| Gemini 3 Flash | 501.6 | 5.1K | 4 | 1 | - | - | - | - |
| GPT-3.5 Turbo | 85.0 | 58 | 54 | 1 | - | - | - | - |
| Claude 3 Opus | 0.0 | 0 | 12 | 3 | - | - | - | - |
| Claude 3 Sonnet | 0.0 | 0 | 32 | 2 | - | - | - | - |
| GPT-4 | 0.0 | 0 | 50 | 3 | - | - | - | - |