Core Customization Options
Models
Choose and configure models for different tasks like chat, autocomplete, and editing
• Select from popular providers like OpenAI, Anthropic, Gemini
• Use different models for specific roles
• Run local models with Ollama
Model Providers
Connect Continue to your favorite AI providers
• Support for 40+ providers
• Self-host your own models
• Switch between providers easily
Rules
Define guidelines that shape AI behavior and ensure consistency
• Enforce coding standards
• Implement quality checks
• Create project-specific best practices
Prompts
Create specialized instructions for specific tasks
• Define interaction patterns
• Encode domain expertise
• Share and reuse across teams
MCP Tools
Extend your agents with external tools and functions
• Connect to external APIs
• Add custom capabilities
• Use Model Context Protocol servers
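To make this concrete, a minimal sketch of an MCP server entry in `config.yaml` might look like the following; the server package and path are placeholders, and field names should be checked against the YAML configuration guide referenced below.

```yaml
# Illustrative only: the filesystem server and path are placeholders.
mcpServers:
  - name: Filesystem
    command: npx
    args:
      - "-y"
      - "@modelcontextprotocol/server-filesystem"
      - "/path/to/your/project"
```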
Model Roles
Assign different models to specific tasks
• Chat, Edit, Apply, Autocomplete
• Embeddings and Reranking
• Optimize for performance and cost
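As a rough sketch of splitting roles across models (model IDs and secret names here are placeholders; see the YAML configuration guide under Reference for the authoritative schema):

```yaml
# Illustrative sketch: one model handles chat/edit/apply, another handles
# autocomplete, and a third provides embeddings for retrieval.
models:
  - name: Claude Sonnet
    provider: anthropic
    model: claude-sonnet-4-0        # placeholder model ID
    apiKey: ${{ secrets.ANTHROPIC_API_KEY }}
    roles: [chat, edit, apply]
  - name: Local Autocomplete
    provider: ollama
    model: qwen3-coder
    roles: [autocomplete]
  - name: Embedder
    provider: openai
    model: text-embedding-3-small
    apiKey: ${{ secrets.OPENAI_API_KEY }}
    roles: [embed]
```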
Advanced Configuration
Deep Dives
Detailed technical explanations of Continue’s internals
• Configuration system
• Autocomplete mechanics
• Custom context providers
Reference
Complete configuration reference and API documentation
• YAML configuration guide
• JSON reference
• Migration guides
Telemetry
Understand what data Continue collects
• Anonymous usage statistics
• Opt-out instructions
• Privacy-first approach
Getting Started with Customization
Edit Your Configuration
Access your configuration directly from the Continue sidebar:
- Open the sidebar with `cmd/ctrl + L` (VS Code) or `cmd/ctrl + J` (JetBrains)
- Click the Agent selector above the main chat input
- Hover over an agent and click the gear icon (local agents) or new window icon (hub agents)
Configuration Management
- Hub Configurations: See Editing Hub Configurations for managing cloud-based configs
- Local Configurations: See the Config Deep Dive for detailed local configuration options
Popular Customization Paths
Using a Different Model Provider
Continue supports many model providers beyond the default. To switch:
- Choose your model provider
- Get an API key from the provider
- Add the model to your configuration
- Configure the API key in your secrets
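For example, adding an OpenAI-hosted chat model might look roughly like this in `config.yaml`; the secrets syntax and field names should be verified against the YAML configuration guide:

```yaml
# Sketch of a provider-hosted model; the API key is read from secrets
# rather than hard-coded in the config.
models:
  - name: GPT-4o
    provider: openai
    model: gpt-4o
    apiKey: ${{ secrets.OPENAI_API_KEY }}
    roles:
      - chat
```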
Running Models Locally
Run AI models on your own hardware:
- Install Ollama
- Pull a model (e.g., `ollama pull qwen3-coder`)
- Configure the Ollama provider
- See our Ollama guide for details
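Assuming Ollama is running locally on its default port, a minimal `config.yaml` entry might look like this (no API key is needed for a local server; check the YAML reference for exact field names):

```yaml
# Sketch of a local Ollama model used for both chat and autocomplete.
models:
  - name: Qwen3 Coder (local)
    provider: ollama
    model: qwen3-coder
    roles:
      - chat
      - autocomplete
```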
Setting Up Project-Specific Rules
Create rules that guide AI behavior for your project:
- Create a `.continue/rules` folder in your project
- Add markdown files with your guidelines
- Rules automatically apply with Hub configs
- Learn more in the rules documentation
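For instance, a rules file such as `.continue/rules/python.md` (the filename and contents below are just an illustration) can be a short plain-markdown list of guidelines:

```md
# Python guidelines

- Use type hints on all public functions.
- Prefer pytest for new tests; keep tests alongside the module they cover.
- Never log secrets or API keys.
```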
Creating Custom Slash Commands
Build reusable prompts as slash commands:
- Define prompts in your configuration
- Use them with `/` in the chat
- Share across your team via Hub
- See the prompts guide for examples
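As a sketch, a prompt defined in `config.yaml` like the one below would typically be invoked as `/write-tests` in chat; the field names follow the YAML reference and should be checked there:

```yaml
# Illustrative slash-command prompt; the name becomes the command.
prompts:
  - name: write-tests
    description: Generate unit tests for the selected code
    prompt: |
      Write thorough unit tests for the highlighted code.
      Cover edge cases and use the project's existing test framework.
```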