Continue plugin for JetBrains IDEs and VS Code
IntelliJ IDEA offers AI support for programming through its built-in “AI Assistant”. However, it has two significant drawbacks:
- It runs in the cloud, meaning your code leaves your sphere of control and could potentially be used for further model training.
- It requires a paid subscription.
This has so far deterred many developers from using it for work.
Now, however, the plugin “Continue” offers a very simple and convenient way to use an arbitrary (possibly coding-specialized) LLM for assistance. This can be a local LLM, one on the LAN, or one in the cloud (or even several at once).
You can install it using the standard plugin mechanism of IntelliJ IDEA. When you restart the
IDE, the directory ~/.continue is created. With the following script, you can create
the configuration file for Continue:
bash -c "cat > ~/.continue/config.yaml" <<'EOF'
name: Local Assistant
version: 1.0.0
schema: v1
models:
  - name: 'nomic-embed-text:latest'
    provider: ollama
    apiBase: 'http://localhost:11434'
    model: 'nomic-embed-text:latest'
    roles:
      - embed
  - name: 'dengcao/Qwen3-Reranker-4B:Q8_0'
    provider: ollama
    apiBase: 'http://localhost:11434'
    model: 'dengcao/Qwen3-Reranker-4B:Q8_0'
    roles:
      - rerank
  - name: LM-Studio text/models/openai_gpt-oss-120b-mxfp4.gguf
    provider: openai
    apiBase: 'https://textapi.jlrbg.de/v1'
    model: text/models/openai_gpt-oss-120b-mxfp4.gguf
    roles:
      - chat
      - edit
      - apply
    capabilities:
      - tool_use
    chatOptions:
      baseSystemMessage: 'Reasoning: medium'
      baseAgentSystemMessage: 'Reasoning: high'
      basePlanSystemMessage: 'Reasoning: high'
context:
  - provider: code
  - provider: docs
  - provider: diff
  - provider: file
  - provider: currentFile
  - provider: codebase
    params:
      nFinal: 5
      useReranking: true
      nRetrieve: 25
rules:
  - |-
    <role>
    You are a code generation assistant that helps users with software engineering tasks. These include fixing bugs, adding new functionality, refactoring code, explaining code, and more. Adhere to the instructions below when assisting the user.
    </role>
    <instructions>
    - Be concise, direct, and to the point. When you generate non-trivial code, explain what the code does and why you propose it. Do not add comments to the code you write unless the user asks you to, or the code is complex and requires additional context.
    - Do NOT answer with unnecessary preamble or postamble (such as repeating the user request or summarizing your answer), unless the user asks you to.
    - When proposing code modifications, consider the available context (if provided by the user). Mimic the existing code style, use existing libraries and utilities, and follow existing patterns. Propose changes in the most idiomatic way. Emphasize only the necessary changes and use placeholders or "lazy" comments for unmodified sections.
    - Your responses can use GitHub-flavored markdown for formatting.
    - Always follow security best practices. Never introduce code that exposes or logs secrets and keys.
    - Class, method, and variable names should be in German.
    </instructions>
prompts:
  - name: test
    description: Write unit tests for the selected code
    prompt: |-
      Write a comprehensive set of unit tests for the selected code:
      - It should include setup, correctness checks including important edge cases, and teardown.
      - Ensure that the tests are thorough and rigorous.
      - Output only the tests as chat output; do not edit any file.
  - name: comment
    description: Write comments for the selected code
    prompt: |-
      Write comments for the selected code:
      - Do not modify the code itself.
      - Use the conventional commenting style appropriate to the language (e.g., Javadoc for Java).
  - name: explain
    description: Describe the selected code in detail with respect to all methods and properties.
    prompt: |-
      Explain the marked part of the code in more detail.
      Specifically:
      - What is the purpose of this section?
      - How does it work step by step?
      - Are there any potential problems or limitations with this approach?
  - name: translate
    description: Translate the selected text to English
    prompt: |-
      Translate the selected text to English.
  - name: review
    description: Review code
    prompt: |-
      You are an expert code reviewer. Please conduct a thorough review of this branch/diff focusing on the following areas:
      ## Performance Analysis
      - Identify potential performance bottlenecks or inefficiencies
      - Look for unnecessary loops, redundant operations, or expensive function calls
      - Check for proper use of data structures and algorithms
      - Analyze memory usage patterns and potential leaks
      - Review database queries for optimization opportunities
      ## Design Patterns & Architecture
      - Check for proper separation of concerns and modularity
      - Review naming conventions and code readability
      - Identify opportunities for refactoring or pattern improvements
      ## Error Handling & Edge Cases
      - Verify comprehensive error handling and graceful failure modes
      - Check for proper input validation and sanitization
      - Look for unhandled exceptions or error conditions
      - Assess logging and debugging capabilities
      - Review boundary conditions and edge case handling
      ## Bug Detection
      - Identify potential runtime errors, null pointer exceptions, or type mismatches
      - Look for race conditions, deadlocks, or concurrency issues
      - Check for off-by-one errors, infinite loops, or logic flaws
      - Verify proper resource management (file handles, connections, etc.)
      - Review state management and data consistency
      ## UI/UX & Accessibility (if applicable)
      - Verify semantic HTML structure and proper use of ARIA attributes
      - Ensure keyboard navigation works properly (tab order, focus indicators)
      - Validate screen reader compatibility and alt text for images
      - Review responsive design and mobile accessibility
      - Check for proper form labels and error messaging
      - Assess loading states, animations, and motion sensitivity considerations
      - Verify text scaling works up to 200% without loss of functionality
      - Review heading hierarchy and document structure
      ## Security & Best Practices
      - Check for security vulnerabilities (injection attacks, XSS, etc.)
      - Verify proper authentication and authorization
      - Review sensitive data handling and encryption
      - Assess compliance with coding standards and best practices
      ## Copywriting
      - If the change involves user-facing text, the text should follow the copywriting guidelines in `.context/copywriting.md`
      ## Questions & Clarifications
      When you encounter changes that are unclear or potentially problematic:
      - Ask specific questions about the intent behind the change
      - Request clarification on business logic or requirements
      - Suggest alternative approaches when appropriate
      - Ask about testing strategies for complex changes
      ## Review Format
      For each issue found, please provide:
      1. **Location**: File name and line numbers
      2. **Severity**: Critical/High/Medium/Low
      3. **Category**: Performance/Design/Bug/Security/Style
      4. **Description**: Clear explanation of the issue
      5. **Recommendation**: Specific suggestions for improvement
      6. **Questions**: Any clarifying questions about the change
      Please be thorough but constructive in your feedback, focusing on actionable improvements that enhance code quality, maintainability, and performance.
EOF
As you can see, I use a model served over the OpenAI-compatible API for chat, edit, and apply
(hosted through LM Studio), while embedding and reranking are handled by small local models via
Ollama. These could, of course, also run on another server if the local machine is
too weak for them.
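The codebase context provider configured above first retrieves nRetrieve (25) code chunks by embedding similarity, re-scores them with the reranker, and passes only the nFinal (5) best into the prompt. The retrieve-then-rerank step can be sketched as follows; the scores and file names are invented for illustration, and a top-2 cutoff stands in for nFinal:

```shell
# Pretend these are "rerank-score file" pairs for retrieved chunks;
# sort by score (descending) and keep only the best two for the prompt.
printf '%s\n' "0.91 auth.kt" "0.42 util.kt" "0.77 db.kt" "0.65 http.kt" \
  | sort -rn | head -n 2
```

The useReranking flag controls whether that second scoring pass happens at all; without it, the raw embedding-similarity order decides.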
The Continue plugin can also provide its own line completion via “autocomplete” models. However, this is significantly inferior to the line completion built into JetBrains IDEs, which used to be deactivated whenever the Continue plugin was active. Recently, it has finally become possible to use the JetBrains line completion even while Continue is active.
I have therefore removed the configuration for the Continue line completion from the file.
Remember to disable the Continue line completion under Settings -> Tools ->
Continue -> Enable Tab Autocomplete!
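If you nevertheless want to try Continue's own line completion, a model is given the autocomplete role in config.yaml. A minimal sketch; the model choice here is just one plausible candidate from the pull list below, not a tuned recommendation:

```yaml
models:
  - name: 'qwen3-coder:30b-a3b-q8_0'
    provider: ollama
    apiBase: 'http://localhost:11434'
    model: 'qwen3-coder:30b-a3b-q8_0'
    roles:
      - autocomplete
```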
With this script, you can download the corresponding Ollama models:
ollama pull nomic-embed-text:latest
ollama pull dengcao/Qwen3-Reranker-4B:Q8_0
ollama pull qwen3-coder:30b-a3b-q8_0
ollama pull gemma3:27b-it-q8_0
ollama pull gpt-oss:20b
ollama pull mistral-small3.2:24b-instruct-2506-q8_0
ollama pull qwen3:30b-a3b-instruct-2507-q8_0
ollama pull qwen3:30b-a3b-thinking-2507-q8_0
For smaller GPUs, pull the more strongly quantized variants instead:
ollama pull nomic-embed-text:latest
ollama pull qwen3-embedding:8b-q4_K_M
ollama pull dengcao/Qwen3-Reranker-4B:Q4_K_M
ollama pull qwen3-coder:30b-a3b-q4_K_M
ollama pull embeddinggemma:latest
ollama pull gemma3:27b-it-q4_K_M
ollama pull gpt-oss:20b
ollama pull mistral-small3.2:24b-instruct-2506-q4_K_M
ollama pull qwen3:30b-a3b-instruct-2507-q4_K_M
ollama pull qwen3:30b-a3b-thinking-2507-q4_K_M