pymupdf
paddlepaddle==2.4.2
paddleocr~=2.6.1.3
langchain==0.0.174
transformers==4.29.1
unstructured[local-inference]
layoutparser[layoutmodels,tesseract]
nltk~=3.8.1
sentence-transformers
beautifulsoup4
icetk
cpm_kernels
faiss-cpu
gradio==3.28.3
fastapi~=0.95.0
uvicorn~=0.21.1
pypinyin~=0.48.0
click~=8.1.3
tabulate
feedparser
azure-core
openai
#accelerate~=0.18.0
#peft~=0.3.0
#bitsandbytes; platform_system != "Windows"

# To use llama-cpp models, e.g. the quantized vicuna-13b model, the llama-cpp-python package must be installed
# but!!! in practice `pip install` does not work well; download it manually from https://github.com/abetlen/llama-cpp-python/releases/
# Also note that ggml formats from different periods are NOT compatible with each other, so the required llama-cpp-python version varies and has to be determined by manual testing
# In practice, ggml-vicuna-13b-1.1 works correctly with llama-cpp-python 0.1.63
# However!!! this project controls model loading fairly strictly, so compatibility with llama-cpp-python is limited and many parameter settings cannot be used;
# unless necessary, using llama-cpp is not recommended
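# A rough sketch of the manual install flow described above; the asset filenames below are
# hypothetical placeholders, pick the actual release asset matching your Python version and
# platform from the releases page:
#   pip install ./llama_cpp_python-0.1.63-<python-tag>-<platform>.whl
# or, if only a source archive is available:
#   pip install ./llama-cpp-python-0.1.63.tar.gz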
torch~=2.0.0
pydantic~=1.10.7
starlette~=0.26.1
numpy~=1.23.5
tqdm~=4.65.0
requests~=2.28.2
tenacity~=8.2.2
charset_normalizer==2.1.0