
Telegram

🌍 中文文档

📃 LangChain-Chatchat (formerly Langchain-ChatGLM): an LLM application that implements knowledge-base and search-engine based Q&A, built on Langchain and open-source or remote LLM APIs.

Introduction

🤖 A Q&A application based on a local knowledge base, implemented with the ideas of langchain. The goal is to build a KBQA (Knowledge-Based Q&A) solution that is friendly to Chinese scenarios and open-source models and can run both offline and online.

💡 Inspired by document.ai and a ChatGLM-6B Pull Request, we built a local knowledge base question-answering application whose full pipeline can run on an open-source model or a remote LLM API. In the latest version of this project, FastChat is used to access Vicuna, Alpaca, LLaMA, Koala, RWKV and many other models. Relying on [langchain](https://github.com/langchain-ai/langchain), this project supports calling services through an API built on FastAPI, or using the WebUI built on [Streamlit](https://github.com/streamlit/streamlit).

Relying on open-source LLM and Embedding models, this project supports full-process offline private deployment. It also supports calling the OpenAI GPT API and Zhipu API, and access to additional models and remote APIs will continue to be expanded in the future.

⛓️ The implementation principle of this project is shown in the graph below. The main process includes: loading files -> reading text -> splitting text -> vectorizing text chunks -> vectorizing the question -> matching the top-k text chunks most similar to the question vector -> adding the matched text to the prompt as context together with the question -> submitting the prompt to the LLM to generate an answer.
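
Conceptually, this pipeline can be sketched with a few langchain primitives. The snippet below is a minimal illustration only, not the project's actual implementation; the file name, chunk sizes, and query are placeholders, and the final LLM call is left abstract:

from langchain.document_loaders import TextLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.embeddings import HuggingFaceEmbeddings
from langchain.vectorstores import FAISS

# load files -> read text -> split text into chunks
docs = TextLoader("sample.txt", encoding="utf-8").load()
chunks = RecursiveCharacterTextSplitter(chunk_size=250, chunk_overlap=50).split_documents(docs)

# vectorize the chunks and build the vector store
embeddings = HuggingFaceEmbeddings(model_name="moka-ai/m3e-base")
vector_store = FAISS.from_documents(chunks, embeddings)

# vectorize the question and match the top-k most similar chunks
query = "What is knowledge-based Q&A?"
context = "\n".join(doc.page_content for doc in vector_store.similarity_search(query, k=3))

# add the matched text to the prompt as context and submit it to the LLM
prompt = f"Known information:\n{context}\n\nAnswer the question based on the known information: {query}"
# answer = llm(prompt)  # any LLM served via FastChat or a remote API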

📺 Video introduction

Implementation principle diagram

From the perspective of document processing, the main workflow is as follows:

Implementation principle diagram 2

🚩 Training and fine-tuning are not part of this project, but they can still be used to improve performance.

🌐 An AutoDL image is available; in v7 the code has been updated to v0.2.3.

🐳 Docker image

💻 Run Docker with one command:

docker run -d --gpus all -p 80:8501 registry.cn-beijing.aliyuncs.com/chatchat/chatchat:0.2.0

Change Log

Please refer to the version change log.

Current Features

  • Consistent LLM service based on FastChat. The project uses FastChat to serve open-source LLMs and exposes them through an OpenAI-compatible API interface, improving the model loading and calling experience (see the sketch after this list);
  • Chains and Agents based on Langchain. The project reuses the existing Chain implementations in langchain to ease adding different types of Chains later, and Agent access will be tested;
  • Full-featured API service based on FastAPI. All endpoints can be tested in the docs automatically generated by FastAPI, and all dialogue endpoints support streaming or non-streaming output via a parameter;
  • WebUI service based on Streamlit. With Streamlit, you can choose whether to start the WebUI on top of the API service; it adds session management and customizable, switchable session themes, and will support different display forms of output content in the future;
  • Abundant open-source LLM and Embedding models. The default LLM model has been changed to THUDM/chatglm2-6b and the default Embedding model to [moka-ai/m3e-base](https://huggingface.co/moka-ai/m3e-base); the file loading and paragraph splitting methods have also been adjusted. Context expansion will be re-implemented with optional settings in the future;
  • Multiple vector stores. The project has expanded support for different types of vector stores, including FAISS, [Milvus](https://github.com/milvus-io/milvus), and PGVector;
  • Varied search engines. Two search engines are currently provided: Bing and DuckDuckGo. DuckDuckGo does not require an API Key and can be used directly in environments with access to foreign services.
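
Because the models are exposed through an OpenAI-compatible interface, they can be called with the openai Python package (v0.x interface). This is a minimal sketch; the address and model name are illustrative and must match your own configuration:

import openai

# point the client at the local OpenAI-compatible endpoint exposed by FastChat
openai.api_base = "http://127.0.0.1:8888/v1"  # adjust to your api_base_url
openai.api_key = "EMPTY"                      # the local service does not check the key

resp = openai.ChatCompletion.create(
    model="chatglm2-6b",  # the model name loaded by your model worker
    messages=[{"role": "user", "content": "Hello"}],
)
print(resp.choices[0].message.content)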

Supported Models

The default LLM model in the project has been changed to THUDM/chatglm2-6b, and the default Embedding model to [moka-ai/m3e-base](https://huggingface.co/moka-ai/m3e-base).

Supported LLM models

The project uses FastChat to provide API access to open-source LLMs; supported models include:

  • Any EleutherAI pythia model such as pythia-6.9b
  • Any PEFT adapter trained on top of a model above. To activate, the word peft must appear in the model path. Note: If loading multiple PEFT models, you can have them share the base model's weights by setting the environment variable PEFT_SHARE_BASE_WEIGHTS=true in any model worker.

Please refer to llm_model_dict in configs/model_config.py.example to invoke the OpenAI API.
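
As a rough illustration, an OpenAI entry follows the same structure as the local-model entries shown later in this document. The entry below is hypothetical; please take the exact key names from the template file itself:

llm_model_dict = {
    "gpt-3.5-turbo": {
        "api_base_url": "https://api.openai.com/v1",
        "api_key": "your OPENAI_API_KEY",
    },
}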

Supported Embedding models

The following models have been tested by the developers with the HuggingFace Embedding class:


Docker Deployment

🐳 Docker image path: registry.cn-beijing.aliyuncs.com/chatchat/chatchat:0.2.0

docker run -d --gpus all -p 80:8501 registry.cn-beijing.aliyuncs.com/chatchat/chatchat:0.2.0
  • The image size of this version is 33.9GB, using v0.2.0, with nvidia/cuda:12.1.1-cudnn8-devel-ubuntu22.04 as the base image
  • This version has a built-in Embedding model (m3e-large) and a built-in LLM (chatglm2-6b-32k)
  • This version is designed to facilitate one-click deployment. Please make sure you have installed the NVIDIA driver on your Linux distribution
  • Please note that you do not need to install the CUDA toolkit on the host system, but you do need to install the NVIDIA Driver and the NVIDIA Container Toolkit; please refer to the [Installation Guide](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html)
  • Pulling and starting the image for the first time takes some time. On first startup, use docker logs -f <container id> to view the logs
  • If the startup process gets stuck at the Waiting.. step, it is recommended to use docker exec -it <container id> bash to enter the container and check the logs of the corresponding stage in the /logs/ directory

Development

Environment Prerequisite

The project has been tested with Python 3.8 - 3.10 and CUDA 11.0 - 11.7, on Windows, ARM-based macOS, and Linux.

1. Preparing Deployment Environment

Please refer to install.md

2. Downloading model to local disk

For offline deployment only!

If you want to run this project in a local or offline environment, you need to first download the models required for the project to your local computer. Usually the open source LLM and Embedding models can be downloaded from HuggingFace.

Take the LLM model THUDM/chatglm2-6b and the Embedding model [moka-ai/m3e-base](https://huggingface.co/moka-ai/m3e-base) as an example:

To download the model, you need to install Git LFS, and then run:

$ git clone https://huggingface.co/THUDM/chatglm2-6b

$ git clone https://huggingface.co/moka-ai/m3e-base
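
Alternatively, the same snapshots can be fetched with the huggingface_hub package. This is an optional approach, not required by the project; the local_dir paths are placeholders:

from huggingface_hub import snapshot_download

# download the full model snapshots into local directories
snapshot_download(repo_id="THUDM/chatglm2-6b", local_dir="./chatglm2-6b")
snapshot_download(repo_id="moka-ai/m3e-base", local_dir="./m3e-base")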

3. Setting Configuration

Copy the model configuration template file configs/model_config.py.example into the ./configs directory under the project path and rename it to model_config.py.

Copy the service configuration template file configs/server_config.py.example into the ./configs directory under the project path and rename it to server_config.py.

Before starting the Web UI or command-line interaction, please check that each model parameter in configs/model_config.py and configs/server_config.py meets your requirements.

  • Please confirm that the paths of the local LLM model and Embedding model have been written into llm_model_dict and embedding_model_dict of configs/model_config.py; here is an example:
  • If you choose to use OpenAI's Embedding model, please write the model's key into embedding_model_dict (see the hypothetical entry after the example below). To use this model, you need to be able to access the OpenAI official API or set up a proxy.
llm_model_dict={
                "chatglm2-6b": {
                        "local_model_path": "/Users/xxx/Downloads/chatglm2-6b",
                        "api_base_url": "http://localhost:8888/v1",  # "name"修改为 FastChat 服务中的"api_base_url"
                        "api_key": "EMPTY"
                    },
                }
embedding_model_dict = {
                        "m3e-base": "/Users/xxx/Downloads/m3e-base",
                       }
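
If an online Embedding model such as OpenAI's is used instead, its key goes into embedding_model_dict. The entry below is hypothetical; please check configs/model_config.py.example for the exact key name expected by the project:

embedding_model_dict = {
                        "text-embedding-ada-002": "your OPENAI_API_KEY",
                       }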

4. Knowledge Base Migration

The knowledge base information is stored in a database; please initialize the database before running the project (we strongly recommend backing up your knowledge files before performing any operations).

  • If you are migrating from 0.1.x with an existing knowledge base, please confirm that its vector store type and Embedding model are consistent with the default settings in configs/model_config.py. If nothing has changed, simply add the existing knowledge base information to the database with the following command:

    $ python init_database.py
    
  • If you are starting fresh and have not yet created a knowledge base, or the knowledge base type or Embedding model in the configuration file has changed, or the previous vector store did not enable normalize_L2, run the following command to initialize or rebuild the knowledge base:

    $ python init_database.py --recreate-vs
    

5. Launching API Service or WebUI with One Command

5.1 Command

The script is startup.py; you can launch all FastChat-related services, the API service, and the WebUI service with it. Here is an example:

$ python startup.py -a

Optional args include -a (or --all-webui), --all-api, --llm-api, -c (or --controller), --openai-api, -m (or --model-worker), --api, and --webui, where:

  • --all-webui means to launch all related services of the WebUI
  • --all-api means to launch all related services of the API
  • --llm-api means to launch all related services of FastChat
  • --openai-api means to launch the controller and openai-api-server of FastChat only
  • --model-worker means to launch the model worker of FastChat only
  • any other optional arg launches only that particular service

5.2 Launching a Non-default Model

If you want to specify a non-default model, use the --model-name arg; here is an example:

$ python startup.py --all-webui --model-name Qwen-7B-Chat

5.3 Loading a Model across Multiple GPUs

If you want to load a model across multiple GPUs, change the following three parameters in startup.create_model_worker_app (see the example after the list below):

gpus=None, 
num_gpus=1, 
max_gpu_memory="20GiB"

where:

  • gpus specifies the IDs of the GPUs to use, such as '0,1';
  • num_gpus specifies the number of GPUs to be used among gpus;
  • max_gpu_memory specifies the GPU memory to use on each GPU.
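
For example, to split a model across the first two GPUs, the parameters might be changed as follows (illustrative values only):

gpus="0,1", 
num_gpus=2, 
max_gpu_memory="20GiB"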

note:

  • These parameters can now be specified via server_config.FSCHAT_MODEL_WORKERS.
  • In some extreme cases the gpus parameter does not take effect; in that case, specify the GPUs to use with the environment variable CUDA_VISIBLE_DEVICES, for example:
CUDA_VISIBLE_DEVICES=0,1 python startup.py -a

5.4 Load PEFT

Including LoRA, P-Tuning, Prefix Tuning, Prompt Tuning, and IA3.

This project serves LLMs via FastChat, so PEFT weights must be loaded in the FastChat way: the word peft must appear in the path name, the configuration file must be named adapter_config.json, and the path must contain the PEFT weights in .bin format. The PEFT path is passed via args.model_names of the create_model_worker_app function in startup.py, and the environment variable PEFT_SHARE_BASE_WEIGHTS=true must be set.

If the above method fails, you need to start the standard FastChat service step by step; the step-by-step procedure can be found in Section 6. For further steps, please refer to [Model invalid after loading lora fine-tuning](https://github.com/chatchat-space/Langchain-Chatchat/issues/1130#issuecomment-1685291822).

5.5 Some Notes

  1. startup.py starts the services of each module in multi-process mode, which may cause printing order problems. Please wait for all services to start before calling them, and call each service via its default or specified port (default LLM API service port: 127.0.0.1:8888, default API service port: 127.0.0.1:7861, default WebUI service port: 127.0.0.1:8501)
  2. The startup time of the services differs across devices; it usually takes 3-10 minutes. If a service has not started after a long time, please check the logs in the ./logs directory to locate the problem.
  3. Using Ctrl+C to exit on Linux may leave orphan processes due to Linux's multi-process mechanism; you can exit via shutdown_all.sh instead.

5.6 Interface Examples

The API docs, the WebUI dialogue interface, and the WebUI knowledge management interface are shown below, respectively.

  1. FastAPI docs

  2. Chat Interface of WebUI
  • Dialogue interface of WebUI


  • Knowledge management interface of WebUI


FAQ

Please refer to FAQ


Roadmap

  • Langchain applications

    • Load local documents
      • Unstructured documents
        • .md
        • .txt
        • .docx
      • Structured documents
        • .csv
        • .xlsx
      • TextSplitter and Retriever
        • multiple TextSplitter
        • ChineseTextSplitter
        • Reconstructed Context Retriever
      • Webpage
      • SQL
      • Knowledge Database
    • Search Engines
      • Bing
      • DuckDuckGo
    • Agent
  • LLM Models

    • FastChat-based LLM Models
    • Multiple Remote LLM APIs
  • Embedding Models

    • HuggingFace-based Embedding Models
    • Multiple Remote Embedding APIs
  • FastAPI-based API

  • Web UI

    • Streamlit -based Web UI