Add support for the Kunlun Wanwei Tiangong large model (#2166)

---------

Co-authored-by: Eden <chuangqi.huang@ubtrobot.com>
Co-authored-by: liunux4odoo <liunux@qq.com>
Eden 2023-11-24 22:25:35 +08:00 committed by GitHub
parent 824c29a6d2
commit dfcebf7bc3
10 changed files with 119 additions and 16 deletions

README.md

@@ -5,7 +5,7 @@
📃 **LangChain-Chatchat** (formerly Langchain-ChatGLM)
-A local knowledge base Q&A application built on large language models such as ChatGLM and the Langchain framework
+An open-source, offline-deployable retrieval-augmented generation (RAG) knowledge base project built on large language models such as ChatGLM and application frameworks such as Langchain
---
@@ -42,21 +42,21 @@
🚩 This project does not involve fine-tuning or training, but fine-tuning or training can be used to optimize its results.
-🌐 The `v10` image on [AutoDL](https://www.codewithgpu.com/i/chatchat-space/Langchain-Chatchat/Langchain-Chatchat) uses code from `v0.2.6` of this project.
+🌐 The `v11` image on [AutoDL](https://www.codewithgpu.com/i/chatchat-space/Langchain-Chatchat/Langchain-Chatchat) uses code from `v0.2.7` of this project.
-🐳 The [Docker image](registry.cn-beijing.aliyuncs.com/chatchat/chatchat:0.2.6) has been updated to version ```0.2.6```.
+🐳 The [Docker image](registry.cn-beijing.aliyuncs.com/chatchat/chatchat:0.2.7) has been updated to version ```0.2.7```.
🌲 Run Docker with a single command
```shell
-docker run -d --gpus all -p 80:8501 registry.cn-beijing.aliyuncs.com/chatchat/chatchat:0.2.6
+docker run -d --gpus all -p 80:8501 registry.cn-beijing.aliyuncs.com/chatchat/chatchat:0.2.7
```
-🧩 This project has a very complete [Wiki](https://github.com/chatchat-space/Langchain-Chatchat/wiki/). The README is only a brief introduction, __just a getting-started tutorial to get things running at a basic level__. If you want a deeper understanding of the project, or to make sharing toward this project, please visit the [Wiki](https://github.com/chatchat-space/Langchain-Chatchat/wiki/) page.
+🧩 This project has a very complete [Wiki](https://github.com/chatchat-space/Langchain-Chatchat/wiki/). The README is only a brief introduction, __just a getting-started tutorial to get things running at a basic level__. If you want a deeper understanding of the project, or want to contribute to it, please visit the [Wiki](https://github.com/chatchat-space/Langchain-Chatchat/wiki/) page.
## Pain Points Addressed
This project is a knowledge-base enhancement solution that supports __fully local__ inference, focusing on the enterprise pain points of data security and private deployment.
This open-source solution uses the ```Apache License``` and is free for commercial use, with no fees required.
We support mainstream local large language models and Embedding models available on the market, as well as open-source local vector databases.
@@ -67,7 +67,7 @@ docker run -d --gpus all -p 80:8501 registry.cn-beijing.aliyuncs.com/chatchat/ch
### 1. Environment Setup
-+ First, make sure your machine has Python 3.10 installed
++ First, make sure your machine has Python 3.8 - 3.10 installed
```
$ python --version
Python 3.10.12
@@ -148,11 +148,12 @@ $ python startup.py -a
[![Telegram](https://img.shields.io/badge/Telegram-2CA5E0?style=for-the-badge&logo=telegram&logoColor=white "langchain-chatglm")](https://t.me/+RjliQ3jnJ1YyN2E9)
### Project Chat Group
<img src="img/qr_code_71.jpg" alt="二维码" width="300" />
<img src="img/qr_code_74.jpg" alt="二维码" width="300" />
🎉 The Langchain-Chatchat project WeChat group. If you are also interested in this project, you are welcome to join the group chat and take part in the discussion.
-### Official Account
-![](img/official_wechat_mp_account.png)
-🎉 The official WeChat public account of the Langchain-Chatchat project. Welcome to scan the QR code and follow.
+<u>[Langchain-Chatchat](https://github.com/chatchat-space/Langchain-Chatchat)</u>: an open-source, offline-deployable retrieval-augmented generation (RAG) knowledge base project built on large language models such as ChatGLM and application frameworks such as Langchain
+### Official Account
+<img src="img/official_wechat_mp_account.png" alt="QR code" width="300" />
+🎉 The official WeChat public account of the Langchain-Chatchat project. Welcome to scan the QR code and follow.

configs/model_config.py.example

@@ -40,7 +40,7 @@ ONLINE_LLM_MODEL = {
    # Online models. Set a different port for each online API in server_config.
    "openai-api": {
-        "model_name": "gpt-35-turbo",
+        "model_name": "gpt-3.5-turbo",
        "api_base_url": "https://api.openai.com/v1",
        "api_key": "",
        "openai_proxy": "",
@@ -113,6 +113,14 @@ ONLINE_LLM_MODEL = {
        "api_key": "",
        "provider": "AzureWorker",
    },
+    # Kunlun Wanwei Tiangong API: https://model-platform.tiangong.cn/
+    "tiangong-api": {
+        "version": "SkyChat-MegaVerse",
+        "api_key": "",
+        "secret_key": "",
+        "provider": "TianGongWorker",
+    },
}

configs/server_config.py.example

@@ -120,6 +120,9 @@ FSCHAT_MODEL_WORKERS = {
    "azure-api": {
        "port": 21008,
    },
+    "tiangong-api": {
+        "port": 21009,
+    },
}
# fastchat multi model worker server

BIN img/qr_code_72.jpg (new file, 198 KiB)

BIN img/qr_code_73.jpg (new file, 270 KiB)

BIN img/qr_code_74.jpg (new file, 174 KiB)

requirements.txt

@@ -7,7 +7,7 @@ xformers>=0.0.22.post4
openai>=0.28.1
sentence_transformers
transformers>=4.34
-torch>=2.0.1 # 推荐2.1
+torch>=2.0.1 # suggest version 2.1
torchvision
torchaudio
fastapi>=0.104
@@ -58,5 +58,5 @@ streamlit-option-menu>=0.3.6
streamlit-antd-components>=0.1.11
streamlit-chatbox>=1.1.11
streamlit-aggrid>=0.3.4.post3
-httpx[brotli,http2,socks]>=0.25.0
+httpx~=0.24.0
watchdog
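The new pin uses the compatible-release operator: `httpx~=0.24.0` is equivalent to `>=0.24.0,<0.25.0`, so patch releases are allowed but the 0.25 line (and the extras from the old pin) are dropped. A quick way to check what a PEP 440 specifier admits, as a small sketch using the third-party `packaging` library (the version strings are just examples):

```python
# Check which versions a PEP 440 specifier admits.
from packaging.specifiers import SpecifierSet

spec = SpecifierSet("~=0.24.0")   # equivalent to >=0.24.0,<0.25.0
print("0.24.1" in spec)  # True  - patch releases stay in range
print("0.25.0" in spec)  # False - the next minor release is excluded
```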

requirements_api.txt

@@ -7,7 +7,7 @@ xformers>=0.0.22.post4
openai>=0.28.1
sentence_transformers
transformers>=4.34
-torch>=2.0.1 # 推荐2.1
+torch>=2.0.1 # suggest version 2.1
torchvision
torchaudio
fastapi>=0.104

server/model_workers/__init__.py

@@ -7,3 +7,4 @@ from .fangzhou import FangZhouWorker
from .qwen import QwenWorker
from .baichuan import BaiChuanWorker
from .azure import AzureWorker
+from .tiangong import TianGongWorker
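Exporting the class here matters because the `"provider": "TianGongWorker"` string in the config above is resolved by name against this package. The exact dispatch lives in the project's startup code, which this diff does not show; below is a minimal sketch of such a name-based lookup, under that assumption:

```python
# Hypothetical sketch: resolve a "provider" string from ONLINE_LLM_MODEL to a
# worker class exported by server.model_workers. The real dispatch in
# startup.py may differ; this only illustrates why the new import is needed.
from server import model_workers

def resolve_worker_class(provider: str):
    cls = getattr(model_workers, provider, None)
    if cls is None:
        raise ValueError(f"Unknown provider: {provider}")
    return cls

worker_cls = resolve_worker_class("TianGongWorker")  # -> TianGongWorker
```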

server/model_workers/tiangong.py (new file)

@@ -0,0 +1,90 @@
import json
import time
import hashlib
from typing import List, Literal, Dict

import requests
from fastchat import conversation as conv
from fastchat.conversation import Conversation

from server.model_workers.base import *


class TianGongWorker(ApiModelWorker):
    def __init__(
        self,
        *,
        controller_addr: str = None,
        worker_addr: str = None,
        model_names: List[str] = ["tiangong-api"],
        version: Literal["SkyChat-MegaVerse"] = "SkyChat-MegaVerse",
        **kwargs,
    ):
        kwargs.update(model_names=model_names, controller_addr=controller_addr, worker_addr=worker_addr)
        kwargs.setdefault("context_len", 32768)
        super().__init__(**kwargs)
        self.version = version

    def do_chat(self, params: ApiChatParams) -> Dict:
        params.load_config(self.model_names[0])

        url = 'https://sky-api.singularity-ai.com/saas/api/v4/generate'
        data = {
            "messages": params.messages,
            "model": "SkyChat-MegaVerse"
        }
        # Sign the request: MD5 digest of api_key + secret_key + unix timestamp.
        timestamp = str(int(time.time()))
        sign_content = params.api_key + params.secret_key + timestamp
        sign_result = hashlib.md5(sign_content.encode('utf-8')).hexdigest()
        headers = {
            "app_key": params.api_key,
            "timestamp": timestamp,
            "sign": sign_result,
            "Content-Type": "application/json",
            "stream": "true"  # or change to "false" to disable streaming
        }

        # Send the request and consume the streamed response line by line.
        response = requests.post(url, headers=headers, json=data, stream=True)

        text = ""
        for line in response.iter_lines(chunk_size=None, decode_unicode=True):
            if line:
                resp = json.loads(line)
                if resp["code"] == 200:
                    # Accumulate the reply delta and yield the full text so far.
                    text += resp['resp_data']['reply']
                    yield {
                        "error_code": 0,
                        "text": text
                    }
                else:
                    yield {
                        "error_code": resp["code"],
                        "text": resp["code_msg"]
                    }

    def get_embeddings(self, params):
        # TODO: support embeddings
        print("embedding")
        print(params)

    def make_conv_template(self, conv_template: str = None, model_path: str = None) -> Conversation:
        # TODO: confirm whether this template needs adjusting
        return conv.Conversation(
            name=self.model_names[0],
            system_message="",
            messages=[],
            roles=["user", "system"],
            sep="\n### ",
            stop_str="###",
        )
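For quick manual testing, the signing scheme in `do_chat` can be reproduced outside the worker. A minimal sketch with placeholder credentials and the non-streaming mode; only the URL, header names, request body shape, and MD5 signature shown in the code above are relied on, and the shape of the non-streaming response is an assumption:

```python
import hashlib
import time

import requests

APP_KEY = "your-app-key"        # placeholder
SECRET_KEY = "your-secret-key"  # placeholder

# Sign: MD5 over app_key + secret_key + current unix timestamp, as in do_chat.
timestamp = str(int(time.time()))
sign = hashlib.md5((APP_KEY + SECRET_KEY + timestamp).encode("utf-8")).hexdigest()

resp = requests.post(
    "https://sky-api.singularity-ai.com/saas/api/v4/generate",
    headers={
        "app_key": APP_KEY,
        "timestamp": timestamp,
        "sign": sign,
        "Content-Type": "application/json",
        "stream": "false",  # single JSON response instead of a stream
    },
    json={
        "model": "SkyChat-MegaVerse",
        "messages": [{"role": "user", "content": "Hello"}],
    },
)
print(resp.json())
```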