Commit 4489f74

feat(model): Support model icon
1 parent 88bbd69 commit 4489f74

474 files changed (+15537, -7834 lines)


Diff for: README.ja.md (+11 lines)

@@ -154,6 +154,17 @@ The architecture of DB-GPT is shown in the figure below:
 We support a wide range of models, including dozens of large language models (LLMs) from both open-source and API agents, such as LLaMA/LLaMA2, Baichuan, ChatGLM, Wenxin, Tongyi, and Zhipu.
 
 - News
+- 🔥🔥🔥 [QwQ-32B](https://huggingface.co/Qwen/QwQ-32B)
+- 🔥🔥🔥 [DeepSeek-R1](https://huggingface.co/deepseek-ai/DeepSeek-R1)
+- 🔥🔥🔥 [DeepSeek-V3](https://huggingface.co/deepseek-ai/DeepSeek-V3)
+- 🔥🔥🔥 [DeepSeek-R1-Distill-Llama-70B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Llama-70B)
+- 🔥🔥🔥 [DeepSeek-R1-Distill-Qwen-32B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-32B)
+- 🔥🔥🔥 [DeepSeek-R1-Distill-Qwen-14B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-14B)
+- 🔥🔥🔥 [DeepSeek-R1-Distill-Llama-8B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Llama-8B)
+- 🔥🔥🔥 [DeepSeek-R1-Distill-Qwen-7B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-7B)
+- 🔥🔥🔥 [DeepSeek-R1-Distill-Qwen-1.5B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B)
+- 🔥🔥🔥 [Qwen2.5-Coder-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-32B-Instruct)
+- 🔥🔥🔥 [Qwen2.5-Coder-14B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-14B-Instruct)
 - 🔥🔥🔥 [Qwen2.5-72B-Instruct](https://huggingface.co/Qwen/Qwen2.5-72B-Instruct)
 - 🔥🔥🔥 [Qwen2.5-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-32B-Instruct)
 - 🔥🔥🔥 [Qwen2.5-14B-Instruct](https://huggingface.co/Qwen/Qwen2.5-14B-Instruct)

Diff for: README.md (+11 lines)

@@ -169,6 +169,17 @@ At present, we have introduced several key features to showcase our current capabilities
 We offer extensive model support, including dozens of large language models (LLMs) from both open-source and API agents, such as LLaMA/LLaMA2, Baichuan, ChatGLM, Wenxin, Tongyi, Zhipu, and many more.
 
 - News
+- 🔥🔥🔥 [QwQ-32B](https://huggingface.co/Qwen/QwQ-32B)
+- 🔥🔥🔥 [DeepSeek-R1](https://huggingface.co/deepseek-ai/DeepSeek-R1)
+- 🔥🔥🔥 [DeepSeek-V3](https://huggingface.co/deepseek-ai/DeepSeek-V3)
+- 🔥🔥🔥 [DeepSeek-R1-Distill-Llama-70B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Llama-70B)
+- 🔥🔥🔥 [DeepSeek-R1-Distill-Qwen-32B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-32B)
+- 🔥🔥🔥 [DeepSeek-R1-Distill-Qwen-14B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-14B)
+- 🔥🔥🔥 [DeepSeek-R1-Distill-Llama-8B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Llama-8B)
+- 🔥🔥🔥 [DeepSeek-R1-Distill-Qwen-7B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-7B)
+- 🔥🔥🔥 [DeepSeek-R1-Distill-Qwen-1.5B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B)
+- 🔥🔥🔥 [Qwen2.5-Coder-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-32B-Instruct)
+- 🔥🔥🔥 [Qwen2.5-Coder-14B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-14B-Instruct)
 - 🔥🔥🔥 [Qwen2.5-72B-Instruct](https://huggingface.co/Qwen/Qwen2.5-72B-Instruct)
 - 🔥🔥🔥 [Qwen2.5-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-32B-Instruct)
 - 🔥🔥🔥 [Qwen2.5-14B-Instruct](https://huggingface.co/Qwen/Qwen2.5-14B-Instruct)

Diff for: README.zh.md (+11 lines)

@@ -162,6 +162,17 @@
 Extensive model support, covering dozens of large language models from open-source and API providers, such as LLaMA/LLaMA2, Baichuan, ChatGLM, Wenxin, Tongyi, and Zhipu. The following models are currently supported:
 
 - Newly supported models
+- 🔥🔥🔥 [QwQ-32B](https://huggingface.co/Qwen/QwQ-32B)
+- 🔥🔥🔥 [DeepSeek-R1](https://huggingface.co/deepseek-ai/DeepSeek-R1)
+- 🔥🔥🔥 [DeepSeek-V3](https://huggingface.co/deepseek-ai/DeepSeek-V3)
+- 🔥🔥🔥 [DeepSeek-R1-Distill-Llama-70B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Llama-70B)
+- 🔥🔥🔥 [DeepSeek-R1-Distill-Qwen-32B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-32B)
+- 🔥🔥🔥 [DeepSeek-R1-Distill-Qwen-14B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-14B)
+- 🔥🔥🔥 [DeepSeek-R1-Distill-Llama-8B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Llama-8B)
+- 🔥🔥🔥 [DeepSeek-R1-Distill-Qwen-7B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-7B)
+- 🔥🔥🔥 [DeepSeek-R1-Distill-Qwen-1.5B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B)
+- 🔥🔥🔥 [Qwen2.5-Coder-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-32B-Instruct)
+- 🔥🔥🔥 [Qwen2.5-Coder-14B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-14B-Instruct)
 - 🔥🔥🔥 [Qwen2.5-72B-Instruct](https://huggingface.co/Qwen/Qwen2.5-72B-Instruct)
 - 🔥🔥🔥 [Qwen2.5-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-32B-Instruct)
 - 🔥🔥🔥 [Qwen2.5-14B-Instruct](https://huggingface.co/Qwen/Qwen2.5-14B-Instruct)
Diff for: new file (ChatDashboardConfig Configuration, +77 lines)

@@ -0,0 +1,77 @@
---
title: "ChatDashboardConfig Configuration"
description: "Chat Dashboard Configuration"
---

import { ConfigDetail } from "@site/src/components/mdx/ConfigDetail";

<ConfigDetail config={{
  "name": "ChatDashboardConfig",
  "description": "Chat Dashboard Configuration",
  "documentationUrl": "",
  "parameters": [
    {
      "name": "top_k",
      "type": "integer",
      "required": false,
      "description": "The top k for LLM generation"
    },
    {
      "name": "top_p",
      "type": "number",
      "required": false,
      "description": "The top p for LLM generation"
    },
    {
      "name": "temperature",
      "type": "number",
      "required": false,
      "description": "The temperature for LLM generation"
    },
    {
      "name": "max_new_tokens",
      "type": "integer",
      "required": false,
      "description": "The max new tokens for LLM generation"
    },
    {
      "name": "name",
      "type": "string",
      "required": false,
      "description": "The name of your app"
    },
    {
      "name": "memory",
      "type": "BaseGPTsAppMemoryConfig",
      "required": false,
      "description": "The memory configuration",
      "nestedTypes": [
        {
          "type": "link",
          "text": "window configuration",
          "url": "/docs/config-reference/memory/config_bufferwindowgptsappmemoryconfig_c31071"
        },
        {
          "type": "link",
          "text": "token configuration",
          "url": "/docs/config-reference/memory/config_tokenbuffergptsappmemoryconfig_6a2000"
        }
      ]
    },
    {
      "name": "schema_retrieve_top_k",
      "type": "integer",
      "required": false,
      "description": "The number of tables to retrieve from the database.",
      "defaultValue": "10"
    },
    {
      "name": "schema_max_tokens",
      "type": "integer",
      "required": false,
      "description": "The maximum number of tokens to pass to the model (default 100 * 1024). Only used when schema retrieval fails and all table schemas are loaded.",
      "defaultValue": "102400"
    }
  ]
}} />
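For readers who want these knobs in one place, here is a minimal sketch, assuming a plain Python dataclass that simply mirrors the documented fields and defaults. It is hypothetical, not DB-GPT's actual ChatDashboardConfig class, and the memory field is omitted for brevity.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ChatDashboardConfigSketch:
    """Hypothetical mirror of the documented ChatDashboardConfig fields."""
    top_k: Optional[int] = None           # top k for LLM generation
    top_p: Optional[float] = None         # top p for LLM generation
    temperature: Optional[float] = None   # sampling temperature
    max_new_tokens: Optional[int] = None  # max new tokens for LLM generation
    name: Optional[str] = None            # the name of your app
    schema_retrieve_top_k: int = 10       # tables to retrieve from the database
    schema_max_tokens: int = 100 * 1024   # token cap when all table schemas are loaded

# Example: tighten table retrieval while keeping the other defaults.
config = ChatDashboardConfigSketch(temperature=0.5, schema_retrieve_top_k=5)
print(config.schema_max_tokens)  # 102400
```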
Diff for: new file (ChatExcelConfig Configuration, +78 lines)

@@ -0,0 +1,78 @@
---
title: "ChatExcelConfig Configuration"
description: "Chat Excel Configuration"
---

import { ConfigDetail } from "@site/src/components/mdx/ConfigDetail";

<ConfigDetail config={{
  "name": "ChatExcelConfig",
  "description": "Chat Excel Configuration",
  "documentationUrl": "",
  "parameters": [
    {
      "name": "top_k",
      "type": "integer",
      "required": false,
      "description": "The top k for LLM generation"
    },
    {
      "name": "top_p",
      "type": "number",
      "required": false,
      "description": "The top p for LLM generation"
    },
    {
      "name": "temperature",
      "type": "number",
      "required": false,
      "description": "The temperature for LLM generation"
    },
    {
      "name": "max_new_tokens",
      "type": "integer",
      "required": false,
      "description": "The max new tokens for LLM generation"
    },
    {
      "name": "name",
      "type": "string",
      "required": false,
      "description": "The name of your app"
    },
    {
      "name": "memory",
      "type": "BaseGPTsAppMemoryConfig",
      "required": false,
      "description": "Memory configuration",
      "nestedTypes": [
        {
          "type": "link",
          "text": "window configuration",
          "url": "/docs/config-reference/memory/config_bufferwindowgptsappmemoryconfig_c31071"
        },
        {
          "type": "link",
          "text": "token configuration",
          "url": "/docs/config-reference/memory/config_tokenbuffergptsappmemoryconfig_6a2000"
        }
      ],
      "defaultValue": "BufferWindowGPTsAppMemoryConfig"
    },
    {
      "name": "duckdb_extensions_dir",
      "type": "string",
      "required": false,
      "description": "The directory of the DuckDB extensions. DuckDB will download the extensions from the internet if this is not provided. This setting tells DuckDB where to find the extensions and avoids the download. Note that the extensions are platform-specific and version-specific.",
      "defaultValue": "[]"
    },
    {
      "name": "force_install",
      "type": "boolean",
      "required": false,
      "description": "Whether to force-install the DuckDB extensions. If True, the extensions will be installed even if they are already installed.",
      "defaultValue": "False"
    }
  ]
}} />
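The two DuckDB options correspond to standard DuckDB mechanics. Below is a rough sketch, assuming the duckdb Python package's `extension_directory` setting and the `force_install` argument of `install_extension` (both present in recent DuckDB releases). The `connect_with_extensions` helper and the choice of the core `excel` extension are illustrative assumptions, not DB-GPT's actual loading code.

```python
from typing import Optional

import duckdb

def connect_with_extensions(
    extensions_dir: Optional[str], force_install: bool
) -> duckdb.DuckDBPyConnection:
    """Open a DuckDB connection, optionally using a local extension directory."""
    con = duckdb.connect()
    if extensions_dir:
        # Point DuckDB at a pre-populated directory so no download is needed.
        con.execute(f"SET extension_directory = '{extensions_dir}'")
    # force_install=True reinstalls the extension even if it is already present.
    con.install_extension("excel", force_install=force_install)
    con.load_extension("excel")
    return con

con = connect_with_extensions(None, force_install=False)
```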
Diff for: new file (ChatKnowledgeConfig Configuration, +85 lines)

@@ -0,0 +1,85 @@
---
title: "ChatKnowledgeConfig Configuration"
description: "Chat Knowledge Configuration"
---

import { ConfigDetail } from "@site/src/components/mdx/ConfigDetail";

<ConfigDetail config={{
  "name": "ChatKnowledgeConfig",
  "description": "Chat Knowledge Configuration",
  "documentationUrl": "",
  "parameters": [
    {
      "name": "top_k",
      "type": "integer",
      "required": false,
      "description": "The top k for LLM generation"
    },
    {
      "name": "top_p",
      "type": "number",
      "required": false,
      "description": "The top p for LLM generation"
    },
    {
      "name": "temperature",
      "type": "number",
      "required": false,
      "description": "The temperature for LLM generation"
    },
    {
      "name": "max_new_tokens",
      "type": "integer",
      "required": false,
      "description": "The max new tokens for LLM generation"
    },
    {
      "name": "name",
      "type": "string",
      "required": false,
      "description": "The name of your app"
    },
    {
      "name": "memory",
      "type": "BaseGPTsAppMemoryConfig",
      "required": false,
      "description": "Memory configuration",
      "nestedTypes": [
        {
          "type": "link",
          "text": "window configuration",
          "url": "/docs/config-reference/memory/config_bufferwindowgptsappmemoryconfig_c31071"
        },
        {
          "type": "link",
          "text": "token configuration",
          "url": "/docs/config-reference/memory/config_tokenbuffergptsappmemoryconfig_6a2000"
        }
      ],
      "defaultValue": "BufferWindowGPTsAppMemoryConfig"
    },
    {
      "name": "knowledge_retrieve_top_k",
      "type": "integer",
      "required": false,
      "description": "The number of chunks to retrieve from the knowledge space.",
      "defaultValue": "10"
    },
    {
      "name": "knowledge_retrieve_rerank_top_k",
      "type": "integer",
      "required": false,
      "description": "The number of chunks after reranking.",
      "defaultValue": "10"
    },
    {
      "name": "similarity_score_threshold",
      "type": "number",
      "required": false,
      "description": "The minimum similarity score to return from the query.",
      "defaultValue": "0.0"
    }
  ]
}} />
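To make the interplay of the three retrieval knobs concrete, here is a schematic sketch, not DB-GPT's actual retrieval pipeline: fetch the top-k candidates, drop hits below the similarity threshold, then keep the best chunks after reranking. The `search` and `rerank` callables are hypothetical stand-ins.

```python
from typing import Callable, List, Tuple

Chunk = Tuple[str, float]  # (text, similarity score)

def retrieve_chunks(
    search: Callable[[str, int], List[Chunk]],
    rerank: Callable[[str, List[Chunk]], List[Chunk]],
    query: str,
    retrieve_top_k: int = 10,      # knowledge_retrieve_top_k
    rerank_top_k: int = 10,        # knowledge_retrieve_rerank_top_k
    score_threshold: float = 0.0,  # similarity_score_threshold
) -> List[Chunk]:
    # 1. Pull the top-k candidates from the knowledge space.
    candidates = search(query, retrieve_top_k)
    # 2. Discard anything below the minimum similarity score.
    candidates = [c for c in candidates if c[1] >= score_threshold]
    # 3. Rerank the survivors and keep the best rerank_top_k.
    return rerank(query, candidates)[:rerank_top_k]

# Toy usage with stand-in search/rerank functions:
docs = [("chunk a", 0.9), ("chunk b", 0.4), ("chunk c", 0.7)]
result = retrieve_chunks(
    search=lambda q, k: sorted(docs, key=lambda c: -c[1])[:k],
    rerank=lambda q, cs: cs,  # identity rerank for the demo
    query="example",
    score_threshold=0.5,
)
print(result)  # [('chunk a', 0.9), ('chunk c', 0.7)]
```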
Diff for: new file (ChatNormalConfig Configuration, +64 lines)

@@ -0,0 +1,64 @@
---
title: "ChatNormalConfig Configuration"
description: "Chat Normal Configuration"
---

import { ConfigDetail } from "@site/src/components/mdx/ConfigDetail";

<ConfigDetail config={{
  "name": "ChatNormalConfig",
  "description": "Chat Normal Configuration",
  "documentationUrl": "",
  "parameters": [
    {
      "name": "top_k",
      "type": "integer",
      "required": false,
      "description": "The top k for LLM generation"
    },
    {
      "name": "top_p",
      "type": "number",
      "required": false,
      "description": "The top p for LLM generation"
    },
    {
      "name": "temperature",
      "type": "number",
      "required": false,
      "description": "The temperature for LLM generation"
    },
    {
      "name": "max_new_tokens",
      "type": "integer",
      "required": false,
      "description": "The max new tokens for LLM generation"
    },
    {
      "name": "name",
      "type": "string",
      "required": false,
      "description": "The name of your app"
    },
    {
      "name": "memory",
      "type": "BaseGPTsAppMemoryConfig",
      "required": false,
      "description": "Memory configuration",
      "nestedTypes": [
        {
          "type": "link",
          "text": "window configuration",
          "url": "/docs/config-reference/memory/config_bufferwindowgptsappmemoryconfig_c31071"
        },
        {
          "type": "link",
          "text": "token configuration",
          "url": "/docs/config-reference/memory/config_tokenbuffergptsappmemoryconfig_6a2000"
        }
      ],
      "defaultValue": "TokenBufferGPTsAppMemoryConfig"
    }
  ]
}} />
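The one default that differs here is the memory strategy: unlike ChatExcelConfig and ChatKnowledgeConfig, which default to BufferWindowGPTsAppMemoryConfig, ChatNormalConfig falls back to a token-buffer memory. A toy comparison of the two trimming strategies follows, assuming a whitespace word count as a crude token stand-in; these are illustrative functions, not DB-GPT's memory classes.

```python
from typing import List

def window_buffer(messages: List[str], keep_last: int = 5) -> List[str]:
    """Window strategy: keep only the most recent N messages."""
    return messages[-keep_last:]

def token_buffer(messages: List[str], max_tokens: int = 100) -> List[str]:
    """Token strategy: keep as many recent messages as fit a token budget."""
    kept: List[str] = []
    budget = max_tokens
    for msg in reversed(messages):
        cost = len(msg.split())  # crude token count stand-in
        if cost > budget:
            break
        kept.insert(0, msg)
        budget -= cost
    return kept

history = ["hi", "how are you", "tell me about DB-GPT", "sure, here is an overview"]
print(window_buffer(history, keep_last=2))
print(token_buffer(history, max_tokens=8))
```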
