
Ollama Benchmark - Compare LLMs Locally

by muskologlu

v1.0 · Updated Apr 10, 2025 · 94.47 KiB · Chrome Web Store
13 users · ★ 0.00 (0 reviews)
Extension rank: #113408 of 207.5K · Developer rank: #9709 of 18.1K

Description

πŸ” Benchmark and compare performance across LLMs (Large Language Models) like Mistral, LLaMA, Qwen, and others – powered by Ollama. This Chrome Extension allows you to test multiple models simultaneously and export detailed performance results. 🧠 Features: - πŸ“Œ Select one or multiple models to compare - πŸ§ͺ Run prompt-based benchmark tests - πŸ“Š Analyze token count, response time, and speed (Token/s) - πŸ’Ύ Export results in `.txt`, `.csv`, or `.json` format - πŸ—‚ Local storage of settings and results - 🌐 Works with both local and remote Ollama APIs - 🌍 Multilingual interface (English & Turkish) β˜• Like the tool? Support the developer: https://www.buymeacoffee.com/elroy πŸ” No data is collected. All processing happens in your browser. πŸ“¦ 100% free to use.


Permissions (1)

storageβ„Ή Can store data locally in your browser

Details

Version 1.0
Updated Apr 10, 2025
Size 94.47 KiB
First Seen Mar 30, 2026