Continue Plugin for JetBrains IDEs and VS Code


IntelliJ IDEA offers AI assistance for programming through its own "AI Assistant". However, it has two serious drawbacks:

  1. It runs in the cloud, i.e. your code leaves your sphere of control and might even be used for further training of models.
  2. It is a paid service.

So far this has kept many developers from using it for their work.

With the "Continue" plugin, however, there is now a very simple and convenient way to use any LLM (possibly one specialized in coding) for assistance. That can be a local LLM, one on your LAN, or one in the cloud (or several at once).

You can install it via IntelliJ IDEA's standard plugin mechanism. When you restart your IDE, a ~/.continue directory is created. With the following script you can create the configuration file for Continue:

bash -c "cat > ~/.continue/config.yaml" <<EOF
name: Local Assistant
version: 1.0.0
schema: v1
models: 
  - name: 'nomic-embed-text:latest'
    provider: ollama
    apiBase: 'http://localhost:11434'
    model: 'nomic-embed-text:latest'
    roles:
      - embed
  - name: 'dengcao/Qwen3-Reranker-4B:Q8_0'
    provider: ollama
    apiBase: 'http://localhost:11434'
    model: 'dengcao/Qwen3-Reranker-4B:Q8_0'
    roles:
      - rerank
  - name: LM-Studio text/models/openai_gpt-oss-120b-mxfp4.gguf
    provider: openai
    apiBase: 'https://textapi.jlrbg.de/v1'
    model: text/models/openai_gpt-oss-120b-mxfp4.gguf
    roles:
      - chat
      - edit
      - apply
    capabilities:
      - tool_use
    chatOptions:
      baseSystemMessage: 'Reasoning: medium'
      baseAgentSystemMessage: 'Reasoning: high'
      basePlanSystemMessage: 'Reasoning: high'
context:
  - provider: code
  - provider: docs
  - provider: diff
  - provider: file
  - provider: currentFile
  - provider: codebase
    params:
      nFinal: 5
      useReranking: true
      nRetrieve: 25
rules:
- |-
  <role>
      You are a code generation assistant that helps users with software engineering tasks. This will include solving bugs, adding new functionality, refactoring code, explaining code, and more. Adhere to the instructions below to assist the user.
  </role>
  <instructions>
      - You should be concise, direct, and to the point. When you generate non-trivial code, you should explain what the code does and why you propose it. Do not add comments to the code you write, unless the user asks you to, or the code is complex and requires additional context.
      - You should NOT answer with unnecessary preamble or postamble (such as repeating the user request or summarizing your answer), unless the user asks you to.
      - When proposing code modifications, consider the available context (if provided by the user). Mimic the existing code style, use existing libraries and utilities, and follow existing patterns. Propose changes in the most idiomatic way. Emphasize only the necessary changes and use placeholders or "lazy" comments for unmodified sections.
      - Your responses can use GitHub-flavored markdown for formatting.
      - Always follow security best practices. Never introduce code that exposes or logs secrets and keys.
      - Class, method and variable names should be in German
  </instructions>
prompts:
- name: test
  description: Write unit tests for the selected code
  prompt: |-
    Write a comprehensive set of unit tests for the selected code:
    - It should include setup, correctness checks including important edge cases, and teardown.
    - Ensure that the tests are thorough and rigorous.
    - Output only the tests as chat output; do not edit any file.
- name: comment
  description: Write comments for the selected code
  prompt: |-
    Write comments for the selected code:
    - Do not modify the code itself.
    - Use the conventional commenting style appropriate to the language (e.g., Javadoc for Java).
- name: explain
  description: Describe the following code in detail with respect to all methods and properties.
  prompt: |-
    Explain the marked part of the code in more detail.
    Specifically:
    - What is the purpose of this section?
    - How does it work step by step?
    - Are there any potential problems or limitations with this approach?
- name: translate
  description: Translate the selected text to English
  prompt: |-
    Translate the selected text to English
- name: review
  description: Review code
  prompt: |-
    You are an expert code reviewer. Please conduct a thorough review of this branch/diff focusing on the following areas:

    ## Performance Analysis

    - Identify potential performance bottlenecks or inefficiencies
    - Look for unnecessary loops, redundant operations, or expensive function calls
    - Check for proper use of data structures and algorithms
    - Analyze memory usage patterns and potential leaks
    - Review database queries for optimization opportunities

    ## Design Patterns & Architecture

    - Check for proper separation of concerns and modularity
    - Review naming conventions and code readability
    - Identify opportunities for refactoring or pattern improvements

    ## Error Handling & Edge Cases

    - Verify comprehensive error handling and graceful failure modes
    - Check for proper input validation and sanitization
    - Look for unhandled exceptions or error conditions
    - Assess logging and debugging capabilities
    - Review boundary conditions and edge case handling

    ## Bug Detection

    - Identify potential runtime errors, null pointer exceptions, or type mismatches
    - Look for race conditions, deadlocks, or concurrency issues
    - Check for off-by-one errors, infinite loops, or logic flaws
    - Verify proper resource management (file handles, connections, etc.)
    - Review state management and data consistency

    ## UI/UX & Accessibility (if applicable)

    - Verify semantic HTML structure and proper use of ARIA attributes
    - Ensure keyboard navigation works properly (tab order, focus indicators)
    - Validate screen reader compatibility and alt text for images
    - Review responsive design and mobile accessibility
    - Check for proper form labels and error messaging
    - Assess loading states, animations, and motion sensitivity considerations
    - Verify text scaling works up to 200% without loss of functionality
    - Review heading hierarchy and document structure

    ## Security & Best Practices

    - Check for security vulnerabilities (injection attacks, XSS, etc.)
    - Verify proper authentication and authorization
    - Review sensitive data handling and encryption
    - Assess compliance with coding standards and best practices

    ## Copywriting

    - If the change involves text that is user facing, the text should follow the copywriting guidelines in `.context/copywriting.md`

    ## Questions & Clarifications

    When you encounter changes that are unclear or potentially problematic:

    - Ask specific questions about the intent behind the change
    - Request clarification on business logic or requirements
    - Suggest alternative approaches when appropriate
    - Ask about testing strategies for complex changes

    ## Review Format

    For each issue found, please provide:

    1. **Location**: File name and line numbers
    2. **Severity**: Critical/High/Medium/Low
    3. **Category**: Performance/Design/Bug/Security/Style
    4. **Description**: Clear explanation of the issue
    5. **Recommendation**: Specific suggestions for improvement
    6. **Questions**: Any clarifying questions about the change

    Please be thorough but constructive in your feedback, focusing on actionable improvements that enhance code quality, maintainability, and performance.
EOF

As you can see, I use a model via the OpenAI API (hosted through OTGWUI) for chat, edit, and apply here. Alternatively, you can also use Ollama or LM Studio for that! For rerank and embed I use the small models locally via Ollama. Those could of course also run on a different server if your local machine does not have enough resources for everything.
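
If you would rather keep chat, edit, and apply on Ollama as well, a corresponding model entry could look roughly like the sketch below. This is only an illustration under my assumptions: it reuses the qwen3-coder model from the pull list further down and otherwise mirrors the entries above.

  # Sketch: chat/edit/apply via a local Ollama model instead of the OpenAI-compatible endpoint
  - name: 'qwen3-coder:30b-a3b-q8_0'
    provider: ollama
    apiBase: 'http://localhost:11434'   # local Ollama server
    model: 'qwen3-coder:30b-a3b-q8_0'   # taken from the pull list below
    roles:
      - chat
      - edit
      - apply
    capabilities:
      - tool_use   # only declare this if the model you pick actually supports tool calling

The role list is what tells Continue to use this entry for chat, edit, and apply; the model name is just an example and can be swapped for any model your server provides.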

The Continue plugin can provide its own line completion via "autocomplete" models. In my experience, however, it is far inferior to the completion that ships with JetBrains, which until recently was disabled whenever the Continue plugin was active. Recently it has finally become possible to use the JetBrains line completion even with the Continue plugin enabled.

I have therefore removed the configuration for Continue's line completion from the file.
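
For reference, if you do want to experiment with Continue's own completion, a model entry with the autocomplete role would look roughly like this. It is only a sketch under my assumptions (the model is again taken from the pull list below) and not part of my setup:

  # Sketch only: Continue line completion via Ollama (not used in my configuration)
  - name: 'qwen3-coder:30b-a3b-q8_0'
    provider: ollama
    apiBase: 'http://localhost:11434'
    model: 'qwen3-coder:30b-a3b-q8_0'
    roles:
      - autocomplete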

If you stick with the JetBrains completion, remember to disable Continue's line completion under Settings -> Tools -> Continue -> Enable Tab Autocomplete!

With this script you can download the corresponding Ollama models (the top two are the embedding and reranker models):

ollama pull qwen3-embedding:8b-q8_0
ollama pull dengcao/Qwen3-Reranker-4B:Q8_0
ollama pull qwen3-coder:30b-a3b-q8_0
ollama pull embeddinggemma:latest
ollama pull gemma3:27b-it-q8_0
ollama pull gpt-oss:20b
ollama pull mistral-small3.2:24b-instruct-2506-q8_0
ollama pull qwen3:30b-a3b-instruct-2507-q8_0
ollama pull qwen3:30b-a3b-thinking-2507-q8_0

For smaller graphics cards:

ollama pull qwen3-embedding:8b-q4_K_M
ollama pull dengcao/Qwen3-Reranker-4B:Q4_K_M
ollama pull qwen3-coder:30b-a3b-q4_K_M
ollama pull embeddinggemma:latest
ollama pull gemma3:27b-it-q4_K_M
ollama pull gpt-oss:20b
ollama pull mistral-small3.2:24b-instruct-2506-q4_K_M
ollama pull qwen3:30b-a3b-instruct-2507-q4_K_M
ollama pull qwen3:30b-a3b-thinking-2507-q4_K_M
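
After pulling, a quick sanity check with curl shows whether Ollama is reachable and can serve embeddings. The model name in the second call is the one from my config.yaml, so adjust it to whichever embedding model you actually pulled:

# List the models Ollama currently has installed
curl -s http://localhost:11434/api/tags

# Request a test embedding (use the model referenced in your config.yaml)
curl -s http://localhost:11434/api/embed \
  -d '{"model": "nomic-embed-text:latest", "input": "hello world"}'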