# ============================================
# DeepTutor Environment Template
# ============================================
# Copy this file (`.env.example`) to `.env` and fill in your values.
# Settings are grouped as:
# ports / llm / embedding / search / docker-cloud / security / chat-attachments
# --------------------------------------------
# Ports
# --------------------------------------------
BACKEND_PORT=8001
FRONTEND_PORT=3782
# --------------------------------------------
# LLM (Required)
# --------------------------------------------
# Supported bindings: openai, lm_studio, ollama, azure_openai, deepseek, ...
# For a full list run: deeptutor provider list
LLM_BINDING=openai
LLM_MODEL=gpt-4o-mini
LLM_API_KEY=sk-xxx
LLM_HOST=https://api.openai.com/v1
LLM_API_VERSION=
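#
# Example: Ollama serving its OpenAI-compatible API locally (values are
# illustrative; adjust the model name and port to your setup):
# LLM_BINDING=ollama
# LLM_MODEL=llama3.1:8b
# LLM_API_KEY=ollama
# LLM_HOST=http://localhost:11434/v1
#
# Example: Azure OpenAI (LLM_MODEL is your deployment name; resource name,
# deployment, and api-version below are illustrative):
# LLM_BINDING=azure_openai
# LLM_MODEL=my-gpt4o-deployment
# LLM_API_KEY=<azure-key>
# LLM_HOST=https://<resource>.openai.azure.com
# LLM_API_VERSION=2024-02-01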
# --------------------------------------------
# Embedding (Required for knowledge base features)
# --------------------------------------------
# Supported bindings: openai, azure_openai, cohere, jina, ollama, vllm,
# siliconflow, aliyun, custom.
#
# IMPORTANT (v1.3.0+): EMBEDDING_HOST is the FULL endpoint URL, not a base.
# LLM_HOST above remains a chat-completions base URL; this embedding value is
# exactly what gets called — no path appending. Examples:
# OpenAI: https://api.openai.com/v1/embeddings
# Gemini: https://generativelanguage.googleapis.com/v1beta/openai/embeddings
# Cohere v4: https://api.cohere.com/v2/embed
# Jina: https://api.jina.ai/v1/embeddings
# Ollama: http://localhost:11434/api/embed
# SiliconFlow: https://api.siliconflow.cn/v1/embeddings
# Aliyun: https://dashscope.aliyuncs.com/api/v1/services/embeddings/multimodal-embedding/multimodal-embedding
EMBEDDING_BINDING=openai
EMBEDDING_MODEL=text-embedding-3-large
EMBEDDING_API_KEY=sk-xxx
EMBEDDING_HOST=https://api.openai.com/v1/embeddings
# Leave EMBEDDING_DIMENSION empty to let DeepTutor auto-fill it from the
# provider's response on the first successful "Test connection". Override
# only if you want to force a specific Matryoshka dim (e.g. Qwen3-Embedding
# variants 1024/1536/2048/2560/4096, OpenAI text-embedding-3-large 3072).
EMBEDDING_DIMENSION=
EMBEDDING_API_VERSION=
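#
# Example: forcing a 1024-dim Matryoshka variant (illustrative; assumes a
# deployment of a Qwen3-Embedding model that supports truncated dimensions):
# EMBEDDING_MODEL=Qwen/Qwen3-Embedding-0.6B
# EMBEDDING_DIMENSION=1024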
# Whether to send the `dimensions` request parameter to the embedding endpoint.
# true -> always send (OpenAI text-embedding-3*, Qwen3-Embedding, Qwen3-VL-Embedding)
# false -> never send (set this if your provider returns HTTP 400 when
# `dimensions` is present)
# <empty> or unset -> auto-detect from model family.
EMBEDDING_SEND_DIMENSIONS=
# Per-provider API key fallbacks (used when the active embedding profile has
# no api_key set in the catalog). The catalog UI also reads these on first
# launch so you don't have to retype them.
SILICONFLOW_API_KEY=
DASHSCOPE_API_KEY=
COHERE_API_KEY=
JINA_API_KEY=
GEMINI_API_KEY=
# ⚠️ Docker + local LLM (LM Studio / Ollama / vLLM)
# ─────────────────────────────────────────────────────
# When running DeepTutor in Docker and your LLM runs on the HOST machine:
# - Do NOT use "localhost" or "127.0.0.1" — inside the container these
# refer to the container itself, not the host.
# - macOS / Windows Docker Desktop: use http://host.docker.internal:<port>/v1
# - Linux: use the host's LAN IP, e.g. http://192.168.1.100:<port>/v1
# (or run Docker with --network=host)
#
# Example (LM Studio on port 1234):
# LLM_BINDING=lm_studio
# LLM_HOST=http://host.docker.internal:1234/v1
# EMBEDDING_BINDING=lm_studio
# EMBEDDING_HOST=http://host.docker.internal:1234/v1/embeddings
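#
# To sanity-check the URL from inside the container before starting, you can
# curl the models endpoint (container name is illustrative):
#   docker exec -it <backend-container> curl -s http://host.docker.internal:1234/v1/models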
# --------------------------------------------
# Web Search (Optional)
# --------------------------------------------
SEARCH_PROVIDER=
SEARCH_API_KEY=
SEARCH_BASE_URL=
# --------------------------------------------
# Docker / Cloud deployment (Optional)
# --------------------------------------------
# Public backend URL used by the frontend when deployed remotely.
NEXT_PUBLIC_API_BASE_EXTERNAL=
# Alternative direct API base URL.
NEXT_PUBLIC_API_BASE=
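#
# Example: frontend served from one domain, backend reachable on another
# (URL is illustrative):
# NEXT_PUBLIC_API_BASE_EXTERNAL=https://deeptutor-api.example.com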
# --------------------------------------------
# Security / Networking (Optional)
# --------------------------------------------
# Keep this false in production.
DISABLE_SSL_VERIFY=false
# --------------------------------------------
# Chat Attachments (Optional)
# --------------------------------------------
# Where the chat turn runtime persists user-uploaded files (PDF, DOCX,
# images, …) so the preview drawer can re-fetch the originals after the
# in-memory base64 is dropped.
#
# Defaults to <project>/data/user/workspace/chat/attachments, which is
# already mounted as a volume in docker-compose.yml — leave unset for the
# default Docker / Linux deployment. Override only when running multiple
# backend instances against shared storage (e.g. NFS).
CHAT_ATTACHMENT_DIR=
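#
# Example: several backend instances sharing one NFS mount (path is
# illustrative):
# CHAT_ATTACHMENT_DIR=/mnt/nfs/deeptutor/chat/attachments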