# Settings Reference

CodeBuddy exposes 99 settings across the editor's Settings UI and `settings.json`. Settings use several prefixes: provider settings use short names (e.g., `anthropic.apiKey`), while feature settings use the `codebuddy.*` prefix.

Open settings: `Cmd + ,` → search for `codebuddy` or the provider name.


## Provider Selection

| Setting | Type | Default | Description |
|---|---|---|---|
| `generativeAi.option` | enum | `"Groq"` | Active AI provider: Gemini, Groq, Anthropic, XGrok, Deepseek, OpenAI, Qwen, GLM, Local |

## Gemini

| Setting | Type | Default | Description |
|---|---|---|---|
| `google.gemini.apiKeys` | string | — | Gemini API key |
| `google.gemini.model` | string | `"gemini-2.5-pro"` | Gemini model name |

## Groq

| Setting | Type | Default | Description |
|---|---|---|---|
| `groq.llama3.apiKey` | string | — | Groq API key |
| `groq.llama3.model` | string | `"llama-3.1-70b-versatile"` | Groq model name |

## Anthropic

| Setting | Type | Default | Description |
|---|---|---|---|
| `anthropic.apiKey` | string | — | Anthropic API key |
| `anthropic.model` | string | `"claude-sonnet-4-5"` | Anthropic model name |

## OpenAI

| Setting | Type | Default | Description |
|---|---|---|---|
| `openai.apiKey` | string | — | OpenAI API key |
| `openai.model` | string | `"gpt-4o"` | OpenAI model name |

## Deepseek

| Setting | Type | Default | Description |
|---|---|---|---|
| `deepseek.apiKey` | string | — | Deepseek API key |
| `deepseek.model` | string | `"deepseek-chat"` | Deepseek model (e.g., `deepseek-chat` for V3, `deepseek-reasoner` for R1) |

## Qwen

| Setting | Type | Default | Description |
|---|---|---|---|
| `qwen.apiKey` | string | — | Qwen (DashScope) API key |
| `qwen.model` | string | `"qwen-max"` | Qwen model (e.g., `qwen-max`, `qwen-plus`, `qwen3-coder-plus`) |

## GLM

| Setting | Type | Default | Description |
|---|---|---|---|
| `glm.apiKey` | string | — | GLM (Zhipu AI) API key |
| `glm.model` | string | `"glm-4"` | GLM model (e.g., `glm-4`, `glm-4-plus`, `glm-4v`) |

## Local LLM

| Setting | Type | Default | Description |
|---|---|---|---|
| `local.model` | string | `"qwen2.5-coder"` | Local model name (e.g., `qwen2.5-coder`, `llama3.2`, `gemma3`) |
| `local.baseUrl` | string | `"http://localhost:11434/v1"` | Base URL. Ollama: `http://localhost:11434/v1`; Docker Model Runner: `http://localhost:12434/engines/llama.cpp/v1` |
| `local.apiKey` | string | `"not-needed"` | API key for the local LLM (if required) |

## Web Search

| Setting | Type | Default | Description |
|---|---|---|---|
| `tavily.apiKey` | string | — | Tavily API key for the web search tool |
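
As a worked example, a minimal `settings.json` that switches the active provider to Anthropic could look like the following (the key value is a placeholder, not a real credential):

```json
{
  "generativeAi.option": "Anthropic",
  "anthropic.apiKey": "<your-anthropic-api-key>",
  "anthropic.model": "claude-sonnet-4-5"
}
```

Settings for providers you are not using can stay unset until you switch `generativeAi.option` to them.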

## General

| Setting | Type | Default | Description |
|---|---|---|---|
| `codebuddy.enableStreaming` | boolean | `true` | Enable streaming responses from AI models |
| `codebuddy.autoApprove` | boolean | `false` | Automatically approve agent actions without confirmation |
| `codebuddy.allowFileEdits` | boolean | `true` | Allow the agent to create, modify, and delete files |
| `codebuddy.allowTerminal` | boolean | `true` | Allow the agent to execute terminal commands |
| `codebuddy.verboseLogging` | boolean | `false` | Show detailed agent activity logs for debugging |
| `codebuddy.indexCodebase` | boolean | `false` | Enable vector database indexing for semantic code search |
| `codebuddy.contextWindow` | enum | `"16k"` | Max context window size: 4k, 8k, 16k, 32k, 128k |
| `codebuddy.includeHidden` | boolean | `false` | Include hidden (dot-prefixed) files in context gathering |
| `codebuddy.maxFileSize` | string | `"1"` | Max file size in MB for context gathering |
| `codebuddy.requireDiffApproval` | boolean | `false` | Require manual approval before file changes are applied |
| `codebuddy.language` | enum | `"en"` | UI language: en, es, fr, de, zh-cn, ja, yo |
| `codebuddy.browserType` | enum | `"system"` | How to open URLs: reader, simple, system |
| `codebuddy.nickname` | string | `""` | Your display name in chat conversations |
| `codebuddy.compactMode` | boolean | `false` | Reduce spacing between messages for a denser view |

## Appearance

| Setting | Type | Default | Description |
|---|---|---|---|
| `font.family` | enum | `"JetBrains Mono"` | Chat font family: Montserrat, SF Mono, Space Mono, Fira Code, Source Code Pro, JetBrains Mono, Roboto Mono, Ubuntu Mono, IBM Plex Mono, Inconsolata |
| `chatview.theme` | enum | `"Atom One Dark"` | Syntax highlight theme: Atom One Dark, Code Pen, github dark, night owl, tokyo night, and others |
| `chatview.font.size` | number | `16` | Chat font size in pixels |

## Agent Limits

| Setting | Type | Default | Range | Description |
|---|---|---|---|---|
| `codebuddy.agent.maxEventCount` | number | `2000` | 500–10000 | Max stream events before the agent pauses |
| `codebuddy.agent.maxToolInvocations` | number | `400` | 50–2000 | Max tool calls before the agent pauses |
| `codebuddy.agent.maxDurationMinutes` | number | `10` | 1–60 | Max wall-clock minutes before the agent pauses |
| `codebuddy.agent.maxConcurrentStreams` | number | `3` | 1–10 | Max concurrent agent streams; additional requests are queued |

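
For cautious or sandboxed runs, the limits can be tightened well below their defaults. The values below are illustrative picks within the documented ranges, not recommendations:

```json
{
  "codebuddy.agent.maxEventCount": 500,
  "codebuddy.agent.maxToolInvocations": 100,
  "codebuddy.agent.maxDurationMinutes": 5,
  "codebuddy.agent.maxConcurrentStreams": 1
}
```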
## Inline Completion

| Setting | Type | Default | Description |
|---|---|---|---|
| `codebuddy.completion.enabled` | boolean | `true` | Enable inline code completions (ghost text) |
| `codebuddy.completion.provider` | enum | `"Local"` | Completion provider: Gemini, Groq, Anthropic, Deepseek, OpenAI, Qwen, GLM, Local |
| `codebuddy.completion.model` | string | `"qwen2.5-coder"` | Model to use for completions |
| `codebuddy.completion.apiKey` | string | `""` | API key for the completion provider (falls back to the main key if empty) |
| `codebuddy.completion.debounceMs` | number | `300` | Delay in ms before triggering a completion (min: 50) |
| `codebuddy.completion.maxTokens` | number | `128` | Maximum tokens to generate per completion |
| `codebuddy.completion.triggerMode` | enum | `"automatic"` | Trigger mode: automatic, manual |
| `codebuddy.completion.multiLine` | boolean | `true` | Allow multi-line completions |

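
A sketch of a fully local completion setup, using the default local model; the lowered `debounceMs` is an illustrative choice that stays above the documented 50 ms minimum:

```json
{
  "codebuddy.completion.enabled": true,
  "codebuddy.completion.provider": "Local",
  "codebuddy.completion.model": "qwen2.5-coder",
  "codebuddy.completion.debounceMs": 150,
  "codebuddy.completion.multiLine": true
}
```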
## Rules

| Setting | Type | Default | Description |
|---|---|---|---|
| `codebuddy.rules.enabled` | boolean | `true` | Enable project rules from `.codebuddy/rules.md` |
| `codebuddy.rules.maxTokens` | number | `2000` | Max tokens for project rules (truncates if exceeded) |
| `codebuddy.rules.showIndicator` | boolean | `true` | Show an indicator when project rules are active |
| `rules.customRules` | array | `[]` | Custom rules for code generation (objects with `id`, `name`, `description`, `content`, `enabled`) |
| `rules.customSystemPrompt` | string | `""` | Additional instructions appended to the base system prompt |
| `rules.subagents` | object | `{}` | Per-subagent toggle configuration. Keys: subagent ID; values: `{ enabled: boolean }` |

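
The object shape for `rules.customRules` (`id`, `name`, `description`, `content`, `enabled`) comes from the table above; the rule itself is a made-up example:

```json
{
  "rules.customRules": [
    {
      "id": "prefer-explicit-types",
      "name": "Prefer explicit types",
      "description": "Discourage `any` in generated TypeScript",
      "content": "When generating TypeScript, prefer explicit types over `any`.",
      "enabled": true
    }
  ],
  "rules.customSystemPrompt": "Keep answers concise."
}
```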
## Code Review

| Setting | Type | Default | Description |
|---|---|---|---|
| `codebuddy.review.inlineComments` | boolean | `true` | Show code review comments inline in workspace files |

## Testing

| Setting | Type | Default | Description |
|---|---|---|---|
| `codebuddy.testCommand` | string | `""` | Custom test command. If empty, auto-detects (Jest, Vitest, Mocha, Pytest, Go, Cargo) |
| `codebuddy.testTimeout` | number | `120000` | Test execution timeout in ms (5000–600000) |

## Vector Database

| Setting | Type | Default | Description |
|---|---|---|---|
| `codebuddy.vectorDb.enabled` | boolean | `true` | Enable vector database for semantic code search |
| `codebuddy.vectorDb.embeddingModel` | enum | `"gemini"` | Embedding provider: gemini, openai, local |
| `codebuddy.vectorDb.maxTokens` | number | `6000` | Max tokens per chunk |
| `codebuddy.vectorDb.batchSize` | number | `10` | Files per embedding batch (1–50) |
| `codebuddy.vectorDb.searchResultLimit` | number | `8` | Max search results (1–20) |
| `codebuddy.vectorDb.enableBackgroundProcessing` | boolean | `true` | Index changes in the background |
| `codebuddy.vectorDb.enableProgressNotifications` | boolean | `true` | Show indexing progress |
| `codebuddy.vectorDb.progressLocation` | enum | `"notification"` | Progress location: notification, statusBar |
| `codebuddy.vectorDb.debounceDelay` | number | `1000` | Re-index debounce delay in ms |
| `codebuddy.vectorDb.performanceMode` | enum | `"balanced"` | Performance mode: balanced, performance, memory |
| `codebuddy.vectorDb.fallbackToKeywordSearch` | boolean | `true` | Fall back to keyword search when vector search fails |
| `codebuddy.vectorDb.cacheEnabled` | boolean | `true` | Cache search results |
| `codebuddy.vectorDb.logLevel` | enum | `"info"` | Log level: debug, info, warn, error |

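
On large repositories the indexer can be made less intrusive by batching fewer files, debouncing longer, and moving progress to the status bar. These values are illustrative, not recommended defaults:

```json
{
  "codebuddy.vectorDb.performanceMode": "memory",
  "codebuddy.vectorDb.batchSize": 5,
  "codebuddy.vectorDb.debounceDelay": 5000,
  "codebuddy.vectorDb.progressLocation": "statusBar"
}
```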
## Hybrid Search

| Setting | Type | Default | Range | Description |
|---|---|---|---|---|
| `codebuddy.hybridSearch.vectorWeight` | number | `0.7` | 0–1 | Weight for semantic similarity scores |
| `codebuddy.hybridSearch.textWeight` | number | `0.3` | 0–1 | Weight for BM25 keyword scores |
| `codebuddy.hybridSearch.topK` | integer | `10` | 1–50 | Max results returned |
| `codebuddy.hybridSearch.mmr.enabled` | boolean | `false` | — | Enable MMR diversity re-ranking |
| `codebuddy.hybridSearch.mmr.lambda` | number | `0.7` | 0–1 | MMR trade-off: 0 = max diversity, 1 = max relevance |
| `codebuddy.hybridSearch.temporalDecay.enabled` | boolean | `false` | — | Boost recently indexed content |
| `codebuddy.hybridSearch.temporalDecay.halfLifeDays` | integer | `30` | 1–365 | Days until a result's score is halved |

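
To bias retrieval toward exact keyword matches, shift weight from the vector score to BM25. Nothing here states that the two weights must sum to 1; keeping them complementary, as the defaults (0.7/0.3) do, is an assumption in this sketch:

```json
{
  "codebuddy.hybridSearch.vectorWeight": 0.4,
  "codebuddy.hybridSearch.textWeight": 0.6,
  "codebuddy.hybridSearch.topK": 20,
  "codebuddy.hybridSearch.mmr.enabled": true,
  "codebuddy.hybridSearch.mmr.lambda": 0.5
}
```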
## Security & Access Control

| Setting | Type | Default | Description |
|---|---|---|---|
| `codebuddy.permissionScope.defaultProfile` | enum | `"standard"` | Default permission profile: restricted (read-only, no terminal), standard (read/write, safe terminal), trusted (full access, auto-approve) |
| `codebuddy.accessControl.defaultMode` | enum | `"open"` | Access control mode: open (no restrictions), allow (only listed users), deny (block listed users) |
| `codebuddy.credentialProxy.enabled` | boolean | `false` | Route API calls through a local proxy that injects credentials from the OS keychain |
| `codebuddy.credentialProxy.rateLimits` | object | `{"anthropic":60,...}` | Per-provider rate limits (requests/min). Keys: provider names; values: 1–3600 |

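
A locked-down sketch combining the security settings above; the rate-limit keys are provider names and the values are requests per minute (1–3600), chosen here purely for illustration:

```json
{
  "codebuddy.permissionScope.defaultProfile": "restricted",
  "codebuddy.credentialProxy.enabled": true,
  "codebuddy.credentialProxy.rateLimits": {
    "anthropic": 60,
    "openai": 120
  }
}
```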
## Connectors & MCP

| Setting | Type | Default | Description |
|---|---|---|---|
| `codebuddy.connectors.states` | object | `{}` | Connection status of external connectors |
| `codebuddy.mcp.servers` | object | `{}` | MCP server configurations |
| `codebuddy.mcp.disabledTools` | object | `{}` | Per-server lists of disabled MCP tool names |

## Automations

| Setting | Type | Default | Description |
|---|---|---|---|
| `codebuddy.automations.dailyStandup.enabled` | boolean | `true` | Daily Standup automation (8:00 AM) |
| `codebuddy.automations.codeHealth.enabled` | boolean | `true` | Daily Code Health Check (9:00 AM) |
| `codebuddy.automations.codeHealth.hotspotMinChanges` | number | `3` | Min changes in 30 days to flag a file as a hotspot |
| `codebuddy.automations.codeHealth.largeFileThreshold` | number | `300` | Min lines for a "large file" warning (min: 50) |
| `codebuddy.automations.codeHealth.maxTodoItems` | number | `50` | Max TODO/FIXME/HACK items to report (min: 10) |
| `codebuddy.automations.dependencyCheck.enabled` | boolean | `true` | Daily Dependency Check (11:00 AM) |
| `codebuddy.automations.gitWatchdog.enabled` | boolean | `true` | Git Watchdog (every 2 hours) |
| `codebuddy.automations.gitWatchdog.protectedBranches` | string[] | `["main","master","develop",...]` | Branches to protect from stale cleanup. Exact names or `prefix/*` patterns |
| `codebuddy.automations.endOfDaySummary.enabled` | boolean | `true` | End-of-Day Summary (5:30 PM) |

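
The `protectedBranches` list accepts exact branch names and `prefix/*` patterns, so a typical Git Watchdog configuration might look like this (branch names are examples):

```json
{
  "codebuddy.automations.gitWatchdog.enabled": true,
  "codebuddy.automations.gitWatchdog.protectedBranches": ["main", "develop", "release/*"],
  "codebuddy.automations.codeHealth.largeFileThreshold": 500
}
```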
## Failover

| Setting | Type | Default | Description |
|---|---|---|---|
| `codebuddy.failover.enabled` | boolean | `true` | Auto-switch to a fallback LLM on failure (rate limit, timeout, auth error) |
| `codebuddy.failover.providers` | string[] | `[]` | Ordered fallback providers: Anthropic, OpenAI, Gemini, Groq, Deepseek, Qwen, GLM, Local. Empty = auto-detect |

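
An explicit fallback chain; the provider order here is one possible preference, and leaving the array empty keeps auto-detection:

```json
{
  "codebuddy.failover.enabled": true,
  "codebuddy.failover.providers": ["Anthropic", "OpenAI", "Local"]
}
```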
## Telemetry

| Setting | Type | Default | Description |
|---|---|---|---|
| `codebuddy.telemetry.persistTraces` | boolean | `true` | Persist agent telemetry traces to disk |
| `codebuddy.telemetry.retentionDays` | number | `7` | Days to retain traces before pruning (1–90) |
| `codebuddy.telemetry.otlpEndpoint` | string | `""` | OTLP HTTP endpoint for trace export (e.g., LangFuse, LangSmith, Jaeger). Empty = disabled |

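
To export traces, point `otlpEndpoint` at an OTLP/HTTP collector. The URL below is the conventional local OTLP/HTTP traces endpoint (e.g., a local Jaeger or OpenTelemetry Collector listening on port 4318), shown as an example rather than a CodeBuddy default:

```json
{
  "codebuddy.telemetry.persistTraces": true,
  "codebuddy.telemetry.retentionDays": 30,
  "codebuddy.telemetry.otlpEndpoint": "http://localhost:4318/v1/traces"
}
```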
## Standup

| Setting | Type | Default | Description |
|---|---|---|---|
| `codebuddy.standup.myName` | string | `""` | Your name in standup/meeting notes (filters your action items). Falls back to `git config user.name` |