
Model Selection

Choose the right AI model for your needs

PaperPilot supports multiple AI model providers, giving you flexibility to choose the best model for your task.

Available Models

Google Gemini

| Model | Best For | Speed |
| --- | --- | --- |
| Gemini 2.0 Flash | Fast responses, general tasks | ⚡⚡⚡ |
| Gemini 1.5 Pro | Complex reasoning, long documents | ⚡⚡ |

Gemini 2.0 Flash is the default and works well for most tasks. Switch to 1.5 Pro for very long documents or complex analysis.

OpenAI

| Model | Best For | Speed |
| --- | --- | --- |
| GPT-4o | High-quality writing, nuanced edits | ⚡⚡ |
| GPT-4o Mini | Quick tasks, cost-effective | ⚡⚡⚡ |

Anthropic Claude

| Model | Best For | Speed |
| --- | --- | --- |
| Claude 3.5 Sonnet | Academic writing, careful analysis | ⚡⚡ |
| Claude 3 Haiku | Fast responses, simple tasks | ⚡⚡⚡ |

Claude excels at following nuanced instructions and producing well-structured academic prose.

High-Speed Options

| Model | Provider | Best For |
| --- | --- | --- |
| Llama 3.3 70B | Groq | Ultra-fast responses |
| Llama 3.1 8B | Cerebras | Near-instant responses |

These models run on specialized inference hardware with extremely low latency, making them ideal for quick iterations.

How to Change Models

  1. Open the model selector: click the model name in the chat input area (it shows the current model).
  2. Browse the available models: a dialog opens listing all available models, organized by provider.
  3. Select your model: click any model to switch to it immediately.

Your model choice persists across sessions. Each conversation remembers which model you were using.
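
Conceptually, this means the app keeps a small record of the last model used for each conversation. The sketch below is a hypothetical illustration of that idea using browser localStorage; the key names, provider list, and types are assumptions for illustration, not PaperPilot's actual implementation.

```typescript
// Hypothetical sketch of per-conversation model persistence.
// Key names and shapes are illustrative assumptions, not PaperPilot internals.
interface ModelChoice {
  provider: "google" | "openai" | "anthropic" | "groq" | "cerebras";
  model: string; // e.g. "gemini-2.0-flash"
}

function saveModelChoice(conversationId: string, choice: ModelChoice): void {
  // Writing to localStorage keeps the choice across page reloads and sessions.
  localStorage.setItem(`model-choice:${conversationId}`, JSON.stringify(choice));
}

function loadModelChoice(conversationId: string): ModelChoice | null {
  const raw = localStorage.getItem(`model-choice:${conversationId}`);
  return raw ? (JSON.parse(raw) as ModelChoice) : null;
}
```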

Choosing the Right Model

For Research Tasks

| Task | Recommended Model |
| --- | --- |
| Quick paper search | Gemini 2.0 Flash |
| Detailed paper analysis | Claude 3.5 Sonnet |
| Summarizing many papers | GPT-4o |

For Writing Tasks

| Task | Recommended Model |
| --- | --- |
| Drafting content | GPT-4o or Claude 3.5 Sonnet |
| Quick edits | Gemini 2.0 Flash |
| LaTeX fixes | Any fast model |
| Final polish | Claude 3.5 Sonnet |

For Speed-Critical Tasks

When you need instant responses:

  1. Groq Llama — Best balance of speed and quality
  2. Cerebras Llama — Absolute fastest
  3. Gemini Flash — Good speed with Google's quality

Model Comparison

| Model | Quality | Speed | Context Window |
| --- | --- | --- | --- |
| GPT-4o | ⭐⭐⭐⭐⭐ | ⭐⭐⭐ | 128K tokens |
| Claude 3.5 Sonnet | ⭐⭐⭐⭐⭐ | ⭐⭐⭐ | 200K tokens |
| Gemini 2.0 Flash | ⭐⭐⭐⭐ | ⭐⭐⭐⭐ | 1M tokens |
| Gemini 1.5 Pro | ⭐⭐⭐⭐⭐ | ⭐⭐ | 2M tokens |
| Groq Llama 3.3 | ⭐⭐⭐⭐ | ⭐⭐⭐⭐⭐ | 128K tokens |

Large Documents

For very large documents or projects, use the Gemini models, which have the largest context windows (up to 2M tokens).
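
Context windows are measured in tokens rather than characters. As a rough heuristic, one token is about four characters of English text, so you can estimate whether a document will fit before picking a model. The sketch below uses that heuristic; the character-to-token ratio, model identifiers, and headroom factor are illustrative assumptions, not exact values.

```typescript
// Rough context-window check using the ~4 characters per token heuristic.
// Window sizes mirror the comparison table above; model ids are illustrative.
const CONTEXT_WINDOWS: Record<string, number> = {
  "gpt-4o": 128_000,
  "claude-3.5-sonnet": 200_000,
  "gemini-2.0-flash": 1_000_000,
  "gemini-1.5-pro": 2_000_000,
};

function estimateTokens(text: string): number {
  return Math.ceil(text.length / 4);
}

function fitsInContext(text: string, model: string): boolean {
  const window = CONTEXT_WINDOWS[model];
  if (window === undefined) throw new Error(`Unknown model: ${model}`);
  // Leave ~20% headroom for the prompt, chat history, and the model's reply.
  return estimateTokens(text) <= window * 0.8;
}

// Example: a long thesis of ~900,000 characters (~225,000 tokens) overflows
// GPT-4o's 128K window but fits comfortably in Gemini 1.5 Pro's 2M window.
const thesis = "x".repeat(900_000);
console.log(fitsInContext(thesis, "gpt-4o"));         // false
console.log(fitsInContext(thesis, "gemini-1.5-pro")); // true
```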

Bring Your Own API Key

Want to use your own API keys for higher rate limits or billing control?

  1. Go to Settings → API Keys
  2. Enter your API key for the desired provider
  3. Your key is stored securely and used for your requests

Keep your API keys secure. Never share them or commit them to version control.
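
If you also use a provider key in your own scripts or tooling outside the PaperPilot Settings page, a common pattern is to read it from an environment variable and keep the file that defines it out of version control. The sketch below illustrates that pattern; the variable name OPENAI_API_KEY and the .env/.gitignore setup are assumptions for illustration, not something PaperPilot requires.

```typescript
// Illustrative only: read a provider key from the environment instead of
// hard-coding it in source. OPENAI_API_KEY is an assumed variable name.
const apiKey = process.env.OPENAI_API_KEY;

if (!apiKey) {
  throw new Error(
    "OPENAI_API_KEY is not set. Define it in your shell or in a local .env file " +
      "that is listed in .gitignore, and never commit the key itself."
  );
}

// Pass `apiKey` to your provider client from here; the key never appears in source.
```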
