hzg0601 | 6f93a27e53 | Update llm_api_sh.py usage instructions | 2023-08-03 14:39:00 +08:00
hzg0601 | d0fd6253a3 | update llm_api_sh | 2023-08-02 09:19:58 +08:00
hzg0601 | 1d1a8e9339 | Merge branch 'dev_fastchat' of github.com:chatchat-space/langchain-ChatGLM into dev_fastchat | 2023-08-01 22:07:12 +08:00
hzg0601 | ab7a76f380 | update llm_api_sh.py and model_config.example | 2023-08-01 22:07:05 +08:00
imClumsyPanda | e4fed93989 | update format in api.py | 2023-08-01 21:53:19 +08:00
hzg0601 | 18a94fcf45 | Merge branch 'dev_fastchat' of github.com:chatchat-space/langchain-ChatGLM into dev_fastchat | 2023-08-01 18:02:52 +08:00
hzg0601 | 15e67a4d3e | 1. Add all fastchat command-line arguments to config; 2. Add shell scripts to start and stop fastchat; 3. Add llm_api_sh.py, a Python script that starts all fastchat services from the command line; 4. Change the default config log format | 2023-08-01 17:59:20 +08:00
imClumsyPanda | 7c01a2a253 | add bing_search_chat.py and duckduckgo_search_chat.py | 2023-08-01 16:39:17 +08:00
imClumsyPanda | 5ce2484af0 | update webui.py | 2023-08-01 15:08:19 +08:00
imClumsyPanda | 7d79b676d5 | add model_config.py.example instead of model_config.py | 2023-08-01 14:55:00 +08:00
imClumsyPanda | 8261deb99d | fix model_config path | 2023-08-01 14:49:49 +08:00
imClumsyPanda | bcfd3f5af5 | add webui_pages | 2023-08-01 14:47:38 +08:00
imClumsyPanda | c8a75ab11f | update llm_api.py and webui.py | 2023-08-01 14:33:18 +08:00
liunux4odoo | 2c5b6bb0ad | Implement streaming LLM chat in the Streamlit UI | 2023-08-01 14:18:30 +08:00
liunux4odoo | a1a7484ef4 | Add webui_utils.py with common webui utilities, to make developing other webuis easier later | 2023-08-01 14:15:42 +08:00
imClumsyPanda | 9f4567865c | add chatglm2-6b-32k and make m3e default embedding model | 2023-08-01 14:12:28 +08:00
liunux4odoo | 9e2b411b01 | fix CUDA error with multiprocessing; run model_worker in the main process | 2023-07-31 11:18:57 +08:00
hzg0601 | 47dfb6cd8b | update llm_api | 2023-07-31 11:00:33 +08:00
liunux4odoo | 946f10e1f2 | split api_start into create_app & run_api; users can run the API with uvicorn in the console: uvicorn server.api:app --port 7861 | 2023-07-31 10:05:19 +08:00
liunux4odoo | 463659f0ba | Upgrade llm_api to fschat==0.2.20, adding support for Baichuan models | 2023-07-30 23:16:47 +08:00
liunux4odoo | 51ea717606 | Merge branch 'dev_fastchat_me' into dev_fastchat | 2023-07-30 09:01:31 +08:00
liunux4odoo | 179c2a9a92 | Align the parameter definitions in server.chat.openai_chat with the /v1/chat/completions endpoint of the OpenAI API, using defaults from model_config. The openai_chat endpoint still needs further changes: OpenAI returns different results depending on the stream parameter, and this endpoint should match that behavior. | 2023-07-30 08:56:49 +08:00
imClumsyPanda | 05ccc0346e | update test code | 2023-07-30 00:48:07 +08:00
imClumsyPanda | 41444fd4b5 | update requirements.txt and llm_api.py | 2023-07-30 00:24:34 +08:00
imClumsyPanda | d4ffc70d96 | update requirements.txt | 2023-07-29 23:46:02 +08:00
liunux4odoo | 1a7271e966 | fix: model_worker needs global variable args | 2023-07-29 23:22:25 +08:00
liunux4odoo | c880412300 | use multiprocessing to run fastchat server | 2023-07-29 23:01:24 +08:00
liunux4odoo | 829ced398b | make api.py import model_config correctly | 2023-07-29 23:00:50 +08:00
liunux4odoo | 8b1cf7effd | fix: os.environ['OPENAI_API_KEY'] raises an exception when the environment variable is not set | 2023-07-28 16:51:58 +08:00
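The os.environ fix in commit 8b1cf7effd can be sketched as follows. This is a minimal illustration, not the repository's actual code; the helper name get_openai_api_key and the "EMPTY" fallback are assumptions for the example.

```python
import os

# Hypothetical helper illustrating the fix: indexing os.environ with []
# raises KeyError when the variable is unset, while .get() returns a
# fallback value instead of raising.
def get_openai_api_key(default: str = "EMPTY") -> str:
    return os.environ.get("OPENAI_API_KEY", default)
```

Using `.get()` with a default keeps startup from crashing on machines where the variable was never exported, which is the failure mode the commit describes.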
liunux4odoo | 70c6870776 | Add api_one.py, which combines the three fastchat servers into a single process. api.py may be merged in as well later. Collapsing three processes into one may cost a little performance, but it makes deployment easier for individual users. | 2023-07-28 16:41:45 +08:00
hzg0601 | 97ee4686a1 | Update meeting notes | 2023-07-28 16:16:59 +08:00
hzg0601 | 154cad1b45 | Meeting notes | 2023-07-28 16:12:57 +08:00
imClumsyPanda | e95996a9b9 | update requirements.txt | 2023-07-28 06:59:16 +08:00
imClumsyPanda | 59442dcd4a | update webui.py | 2023-07-28 06:58:34 +08:00
imClumsyPanda | dd0f90b4a4 | re-add .github | 2023-07-28 06:54:54 +08:00
imClumsyPanda | 620ccb3bdc | update model_config.py | 2023-07-27 23:28:33 +08:00
2023-07-27 23:28:33 +08:00 |
imClumsyPanda
|
dcf49a59ef
|
v0.2.0 first commit
|
2023-07-27 23:22:07 +08:00 |
imClumsyPanda
|
f7a32f9248
|
Update README.md
|
2023-07-27 09:23:31 +08:00 |
imClumsyPanda
|
36a1c72573
|
Add files via upload
|
2023-07-27 09:23:07 +08:00 |
imClumsyPanda
|
01a54d1042
|
add chatglm-fitness-RLHF made by @BoFan-tunning to llm_model_dict in model_config.py
|
2023-07-26 21:41:27 +08:00 |
imClumsyPanda
|
0a062ba07b
|
update img
|
2023-07-26 21:37:22 +08:00 |
imClumsyPanda
|
db80358df8
|
Update model_config.py
fix openai api url
|
2023-07-25 11:16:02 +08:00 |
imClumsyPanda
|
3f627af745
|
Update README.md
|
2023-07-24 08:32:21 +08:00 |
imClumsyPanda
|
b586ee6e1f
|
Add files via upload
|
2023-07-24 08:31:54 +08:00 |
imClumsyPanda
|
fdd353e48a
|
Merge branch 'master' into dev
|
2023-07-23 18:40:04 +08:00 |
imClumsyPanda
|
ff5d2ecc1e
|
update img
|
2023-07-23 18:39:49 +08:00 |
imClumsyPanda
|
0f43845a98
|
merge master
|
2023-07-23 18:38:51 +08:00 |
hzg0601
|
1e1075b98a
|
Merge branch 'dev' of github.com:imClumsyPanda/langchain-ChatGLM into dev
|
2023-07-21 23:13:11 +08:00 |
hzg0601 | b2278f2f25 | Change the type of the history parameter of the chat function in api.py to Optional, fixing the chat endpoint being uncallable | 2023-07-21 23:12:28 +08:00
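The history-type fix in commit b2278f2f25 can be sketched as below. The chat signature here is an illustrative assumption, not the repository's actual definition: without a default, a framework like FastAPI treats the parameter as required and rejects requests that omit it, whereas an Optional parameter with a None default lets callers leave it out.

```python
from typing import List, Optional

# Hypothetical signature illustrating the fix: Optional[...] = None makes
# the history argument omittable; a missing history is treated as empty.
def chat(query: str, history: Optional[List[List[str]]] = None) -> int:
    history = history or []
    # Return the number of prior turns, just to make the sketch observable.
    return len(history)
```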
liunux4odoo | a6f42e7c8e | fix iss#889: when init_model on startup, set llm model's history_len to LLM_HISTORY_LEN from model_config | 2023-07-21 23:07:50 +08:00