0.2.x stable dependency update (#2627)

0.2.x will not support langchain 0.1.x or later
zR 2024-01-11 19:58:25 +08:00 committed by GitHub
parent 3da68b5ce3
commit 4f07384c66
9 changed files with 93 additions and 110 deletions

View File

@ -42,7 +42,7 @@
🚩 This project does not involve fine-tuning or training, but fine-tuning or training can be used to optimize its performance.
🌐 The code used by the `v11` release of the [AutoDL image](https://www.codewithgpu.com/i/chatchat-space/Langchain-Chatchat/Langchain-Chatchat) has been updated to `v0.2.7` of this project.
🌐 The code used by the `v13` release of the [AutoDL image](https://www.codewithgpu.com/i/chatchat-space/Langchain-Chatchat/Langchain-Chatchat) has been updated to `v0.2.9` of this project.
🐳 The [Docker image](registry.cn-beijing.aliyuncs.com/chatchat/chatchat:0.2.6) has been updated to version ```0.2.7```.
@ -67,10 +67,10 @@ docker run -d --gpus all -p 80:8501 registry.cn-beijing.aliyuncs.com/chatchat/ch
### 1. Environment Setup
+ First, make sure your machine has Python 3.8 - 3.10 installed
+ First, make sure your machine has Python 3.8 - 3.11 installed
```
$ python --version
Python 3.10.12
Python 3.11.7
```
Next, create a virtual environment and install the project's dependencies inside it
```shell
@ -88,6 +88,7 @@ $ pip install -r requirements_webui.txt
# The default dependencies include the basic runtime environment (with FAISS as the vector store). To use other vector stores such as milvus/pg_vector, uncomment the corresponding dependencies in requirements.txt before installing.
```
Please note that the LangChain-Chatchat `0.2.x` series is built against the Langchain `0.0.x` series; if you are using a Langchain `0.1.x` release, you need to downgrade.
### 2. Model Download
To run this project locally or in an offline environment, you first need to download the required models to your machine. Open-source LLM and Embedding models can usually be downloaded from [HuggingFace](https://huggingface.co/models).
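For offline setups, a minimal download sketch using `huggingface_hub` (an assumption; the project also supports other download methods, and the repo ids and target directory below are only illustrative, not project defaults):
```python
# Sketch: fetch an LLM and an Embedding model for offline use with
# huggingface_hub (pip install huggingface_hub). The repo ids and the
# ./models directory are illustrative placeholders.
from huggingface_hub import snapshot_download

for repo_id in ("THUDM/chatglm3-6b", "BAAI/bge-large-zh-v1.5"):
    local_dir = f"./models/{repo_id.split('/')[-1]}"
    snapshot_download(repo_id=repo_id, local_dir=local_dir)
    print(f"downloaded {repo_id} -> {local_dir}")
```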

View File

@ -55,10 +55,10 @@ The main process analysis from the aspect of document process:
🚩 Training and fine-tuning are not part of this project, but performance can still be improved by applying them.
🌐 The [AutoDL image](registry.cn-beijing.aliyuncs.com/chatchat/chatchat:0.2.5) is supported; in `v9` the code has been updated to `v0.2.5`.
🌐 The [AutoDL image](registry.cn-beijing.aliyuncs.com/chatchat/chatchat:0.2.5) is supported; in `v13` the code has been updated to `v0.2.9`.
🐳 [Docker image](registry.cn-beijing.aliyuncs.com/chatchat/chatchat:0.2.5)
🐳 [Docker image](registry.cn-beijing.aliyuncs.com/chatchat/chatchat:0.2.7)
## Pain Points Addressed
@ -98,6 +98,7 @@ $ pip install -r requirements_webui.txt
# The default dependencies include the basic runtime environment (with FAISS as the vector store). To use other vector stores such as milvus/pg_vector, uncomment the corresponding dependencies in requirements.txt before installing.
```
Please note that the LangChain-Chatchat `0.2.x` series is built against the Langchain `0.0.x` series; if you are using a Langchain `0.1.x` release, you need to downgrade.
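Before starting the services, it can help to confirm that the installed LangChain release really is in the `0.0.x` series; a minimal check using only the standard library (the version test mirrors the note above; the messages are illustrative):
```python
# Sketch: verify the installed langchain belongs to the 0.0.x series
# required by Langchain-Chatchat 0.2.x (stdlib only, Python 3.8+).
from importlib.metadata import PackageNotFoundError, version

try:
    lc = version("langchain")
except PackageNotFoundError:
    raise SystemExit("langchain is not installed; run: pip install -r requirements.txt")

if not lc.startswith("0.0."):
    # 0.1.x and later are not supported by the 0.2.x branch; downgrade, e.g. to 0.0.354
    raise SystemExit(f"langchain {lc} detected; please downgrade to a 0.0.x release")
print(f"langchain {lc} is in the supported 0.0.x series")
```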
### Model Download

Three binary image files deleted (not shown); previous sizes: 266 KiB, 218 KiB, 208 KiB.

View File

@ -1,6 +1,5 @@
# API requirements
# On Windows systems, install the CUDA version manually from https://pytorch.org/
torch~=2.1.2
torchvision~=0.16.2
torchaudio~=2.1.2
@ -8,30 +7,30 @@ xformers==0.0.23.post1
transformers==4.36.2
sentence_transformers==2.2.2
langchain==0.0.352
langchain==0.0.354
langchain-experimental==0.0.47
pydantic==1.10.13
fschat==0.2.34
openai~=1.6.0
fastapi>=0.105
sse_starlette
openai~=1.7.1
fastapi~=0.108.0
sse_starlette==1.8.2
nltk>=3.8.1
uvicorn>=0.24.0.post1
starlette~=0.27.0
starlette~=0.32.0
unstructured[all-docs]==0.11.0
python-magic-bin; sys_platform == 'win32'
SQLAlchemy==2.0.19
faiss-cpu~=1.7.4 # `conda install faiss-gpu -c conda-forge` if you want to accelerate with gpus
accelerate==0.24.1
faiss-cpu~=1.7.4 # `conda install faiss-gpu -c conda-forge` if you want to accelerate with gpus
accelerate~=0.24.1
spacy~=3.7.2
PyMuPDF~=1.23.8
rapidocr_onnxruntime==1.3.8
requests>=2.31.0
pathlib>=1.0.1
pytest>=7.4.3
numexpr>=2.8.6 # max version for py38
strsimpy>=0.2.1
markdownify>=0.11.6
requests~=2.31.0
pathlib~=1.0.1
pytest~=7.4.3
numexpr~=2.8.6 # max version for py38
strsimpy~=0.2.1
markdownify~=0.11.6
tiktoken~=0.5.2
tqdm>=4.66.1
websockets>=12.0
@ -46,15 +45,14 @@ llama-index
# optional document loaders
# rapidocr_paddle[gpu]>=1.3.0.post5 # gpu acceleration for ocr of pdf and image files
jq>=1.6.0 # for .json and .jsonl files. suggest `conda install jq` on windows
# html2text # for .enex files
jq==1.6.0 # for .json and .jsonl files. suggest `conda install jq` on windows
beautifulsoup4~=4.12.2 # for .mhtml files
pysrt~=1.1.2
# Online api libs dependencies
zhipuai>=1.0.7, <=2.0.0 # zhipu
dashscope>=1.13.6 # qwen
zhipuai==1.0.7 # zhipu
dashscope==1.13.6 # qwen
# volcengine>=1.0.119 # fangzhou
# uncomment libs if you want to use corresponding vector store
@ -64,14 +62,14 @@ dashscope>=1.13.6 # qwen
# Agent and Search Tools
arxiv>=2.0.0
youtube-search>=2.1.2
duckduckgo-search>=3.9.9
metaphor-python>=0.1.23
arxiv~=2.1.0
youtube-search~=2.1.2
duckduckgo-search~=3.9.9
metaphor-python~=0.1.23
# WebUI requirements
streamlit~=1.29.0 # do remember to add streamlit to environment variables if you use windows
streamlit~=1.29.0
streamlit-option-menu>=0.3.6
streamlit-chatbox==1.1.11
streamlit-modal>=0.1.0
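A note on the pattern above (and in the requirements files that follow): open-ended `>=` pins are being replaced with PEP 440 compatible-release (`~=`) or exact (`==`) pins. A small sketch of what `~=` accepts, using the `packaging` library (assumed to be installed in the environment; if not, `pip install packaging`):
```python
# Sketch: PEP 440 compatible-release semantics behind the new "~=" pins.
# "openai~=1.7.1" is equivalent to "openai>=1.7.1, ==1.7.*".
from packaging.specifiers import SpecifierSet

spec = SpecifierSet("~=1.7.1")
print("1.7.2" in spec)  # True  -- patch releases remain acceptable
print("1.8.0" in spec)  # False -- minor bumps are excluded
print("1.7.0" in spec)  # False -- below the stated floor
```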

View File

@ -1,6 +1,3 @@
# API requirements
# On Windows systems, install the CUDA version manually from https://pytorch.org/
torch~=2.1.2
torchvision~=0.16.2
torchaudio~=2.1.2
@ -8,52 +5,52 @@ xformers==0.0.23.post1
transformers==4.36.2
sentence_transformers==2.2.2
langchain==0.0.352
langchain==0.0.354
langchain-experimental==0.0.47
pydantic==1.10.13
fschat==0.2.34
openai~=1.6.0
fastapi>=0.105
sse_starlette
openai~=1.7.1
fastapi~=0.108.0
sse_starlette==1.8.2
nltk>=3.8.1
uvicorn>=0.24.0.post1
starlette~=0.27.0
starlette~=0.32.0
unstructured[all-docs]==0.11.0
python-magic-bin; sys_platform == 'win32'
SQLAlchemy==2.0.19
faiss-cpu~=1.7.4 # `conda install faiss-gpu -c conda-forge` if you want to accelerate with gpus
accelerate==0.24.1
faiss-cpu~=1.7.4 # `conda install faiss-gpu -c conda-forge` if you want to accelerate with gpus
accelerate~=0.24.1
spacy~=3.7.2
PyMuPDF~=1.23.8
rapidocr_onnxruntime~=1.3.8
requests>=2.31.0
pathlib>=1.0.1
pytest>=7.4.3
numexpr>=2.8.6 # max version for py38
strsimpy>=0.2.1
markdownify>=0.11.6
rapidocr_onnxruntime==1.3.8
requests~=2.31.0
pathlib~=1.0.1
pytest~=7.4.3
numexpr~=2.8.6 # max version for py38
strsimpy~=0.2.1
markdownify~=0.11.6
tiktoken~=0.5.2
tqdm>=4.66.1
websockets>=12.0
numpy~=1.26.2
pandas~=2.1.4
numpy~=1.24.4
pandas~=2.0.3
einops>=0.7.0
transformers_stream_generator==0.0.4
vllm==0.2.6; sys_platform == "linux"
httpx[brotli,http2,socks]~=0.25.2
httpx[brotli,http2,socks]==0.25.2
llama-index
# optional document loaders
rapidocr_paddle[gpu]>=1.3.0.post5 # gpu acceleration for ocr of pdf and image files
jq>=1.6.0 # for .json and .jsonl files. suggest `conda install jq` on windows
# html2text # for .enex files
# rapidocr_paddle[gpu]>=1.3.0.post5 # gpu acceleration for ocr of pdf and image files
jq==1.6.0 # for .json and .jsonl files. suggest `conda install jq` on windows
beautifulsoup4~=4.12.2 # for .mhtml files
pysrt~=1.1.2
# Online api libs dependencies
zhipuai>=1.0.7, <=2.0.0 # zhipu
dashscope>=1.13.6 # qwen
zhipuai==1.0.7 # zhipu
dashscope==1.13.6 # qwen
# volcengine>=1.0.119 # fangzhou
# uncomment libs if you want to use corresponding vector store
@ -63,7 +60,7 @@ dashscope>=1.13.6 # qwen
# Agent and Search Tools
arxiv>=2.0.0
youtube-search>=2.1.2
duckduckgo-search>=3.9.9
metaphor-python>=0.1.23
arxiv~=2.1.0
youtube-search~=2.1.2
duckduckgo-search~=3.9.9
metaphor-python~=0.1.23

View File

@ -1,60 +1,44 @@
# API requirements
# On Windows systems, install the CUDA version manually from https://pytorch.org/
# torch~=2.1.2
# torchvision~=0.16.2
# torchaudio~=2.1.2
# xformers==0.0.23.post1
# transformers==4.36.2
# sentence_transformers==2.2.2
langchain==0.0.352
langchain==0.0.354
langchain-experimental==0.0.47
pydantic==1.10.13
fschat==0.2.34
openai~=1.6.0
fastapi>=0.105
sse_starlette
openai~=1.7.1
fastapi~=0.108.0
sse_starlette==1.8.2
nltk>=3.8.1
uvicorn>=0.24.0.post1
starlette~=0.27.0
unstructured[docx,csv]==0.11.0 # add pdf if need
starlette~=0.32.0
unstructured[all-docs]==0.11.0
python-magic-bin; sys_platform == 'win32'
SQLAlchemy==2.0.19
faiss-cpu~=1.7.4 # `conda install faiss-gpu -c conda-forge` if you want to accelerate with gpus
# accelerate==0.24.1
# spacy~=3.7.2
# PyMuPDF~=1.23.8
# rapidocr_onnxruntime~=1.3.8
requests>=2.31.0
pathlib>=1.0.1
pytest>=7.4.3
numexpr>=2.8.6 # max version for py38
strsimpy>=0.2.1
markdownify>=0.11.6
# tiktoken~=0.5.2
faiss-cpu~=1.7.4
requests~=2.31.0
pathlib~=1.0.1
pytest~=7.4.3
numexpr~=2.8.6 # max version for py38
strsimpy~=0.2.1
markdownify~=0.11.6
tiktoken~=0.5.2
tqdm>=4.66.1
websockets>=12.0
numpy~=1.26.2
pandas~=2.1.4
# einops>=0.7.0
# transformers_stream_generator==0.0.4
# vllm==0.2.6; sys_platform == "linux"
httpx[brotli,http2,socks]~=0.25.2
numpy~=1.24.4
pandas~=2.0.3
einops>=0.7.0
transformers_stream_generator==0.0.4
vllm==0.2.6; sys_platform == "linux"
httpx[brotli,http2,socks]==0.25.2
requests
pathlib
pytest
# optional document loaders
rapidocr_paddle[gpu]>=1.3.0.post5 # gpu acceleration for ocr of pdf and image files
jq>=1.6.0 # for .json and .jsonl files. suggest `conda install jq` on windows
# html2text # for .enex files
beautifulsoup4~=4.12.2 # for .mhtml files
pysrt~=1.1.2
# Online api libs dependencies
zhipuai>=1.0.7, <=2.0.0 # zhipu
dashscope>=1.13.6 # qwen
# volcengine>=1.0.119 # fangzhou
zhipuai==1.0.7
dashscope==1.13.6
# volcengine>=1.0.119
# uncomment libs if you want to use corresponding vector store
# pymilvus>=2.3.4
@ -63,17 +47,18 @@ dashscope>=1.13.6 # qwen
# Agent and Search Tools
arxiv>=2.0.0
youtube-search>=2.1.2
duckduckgo-search>=3.9.9
metaphor-python>=0.1.23
arxiv~=2.1.0
youtube-search~=2.1.2
duckduckgo-search~=3.9.9
metaphor-python~=0.1.23
# WebUI requirements
streamlit~=1.29.0 # do remember to add streamlit to environment variables if you use windows
streamlit>=1.29.0
streamlit-option-menu>=0.3.6
streamlit-chatbox==1.1.11
streamlit-antd-components>=0.3.0
streamlit-chatbox>=1.1.11
streamlit-modal>=0.1.0
streamlit-aggrid>=0.3.4.post3
httpx[brotli,http2,socks]>=0.25.2
watchdog>=3.0.0
watchdog>=3.0.0

View File

@ -1,9 +1,10 @@
# WebUI requirements
streamlit~=1.29.0 # do remember to add streamlit to environment variables if you use windows
streamlit>=1.29.0
streamlit-option-menu>=0.3.6
streamlit-chatbox==1.1.11
streamlit-antd-components>=0.3.0
streamlit-chatbox>=1.1.11
streamlit-modal>=0.1.0
streamlit-aggrid>=0.3.4.post3
httpx[brotli,http2,socks]>=0.25.2
watchdog>=3.0.0
watchdog>=3.0.0