# ChatGLM Application Based on Local Knowledge

## Introduction

🌍 Chinese documentation

🤖️ A local-knowledge-based LLM application built with ChatGLM-6B and langchain.
💡 Inspired by document.ai by GanymedeNil and ChatGLM-6B Pull Request by AlexZhangji.
✅ In this project, GanymedeNil/text2vec-large-chinese is used as the embedding model and ChatGLM-6B as the LLM. Built entirely on these open-source models, the project can be deployed offline.
## Update

[2023/04/07]

- Fixed a bug that caused GPU memory usage to double (thanks to @suc16 and @myml).
- Added a GPU memory clearing step after each call to ChatGLM.
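As a rough illustration of that clearing step (a minimal sketch; the helper name `torch_gc` is an assumption, not necessarily what this repo uses):

```python
import torch

def torch_gc():
    """Hypothetical helper: release cached GPU memory after a ChatGLM call."""
    if torch.cuda.is_available():
        torch.cuda.empty_cache()  # return cached allocator blocks to the GPU
        torch.cuda.ipc_collect()  # clean up CUDA IPC memory handles
```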
## Usage

### Hardware Requirements
#### ChatGLM-6B Hardware Requirements

| Quantization Level      | GPU Memory |
| ----------------------- | ---------- |
| FP16 (no quantization)  | 13 GB      |
| INT8                    | 10 GB      |
| INT4                    | 6 GB       |
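For illustration, ChatGLM-6B is typically loaded through transformers' `AutoModel`, and its `quantize()` method selects the INT8/INT4 levels in the table above. Treat the calls below as a sketch based on the ChatGLM-6B usage docs, not this repo's exact code:

```python
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True)

# FP16 (no quantization), ~13 GB GPU memory:
model = AutoModel.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True).half().cuda()

# INT8 (~10 GB) or INT4 (~6 GB): quantize before moving to the GPU, e.g.
# model = AutoModel.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True).half().quantize(8).cuda()
# model = AutoModel.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True).half().quantize(4).cuda()
model = model.eval()
```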
#### Embedding Model Hardware Requirements

The default embedding model in this repo is GanymedeNil/text2vec-large-chinese, which requires about 3 GB of GPU memory when running on GPU.
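A minimal sketch of loading this embedding model, assuming langchain's `HuggingFaceEmbeddings` wrapper (the class this repo actually uses may differ):

```python
from langchain.embeddings import HuggingFaceEmbeddings

# Download and wrap text2vec-large-chinese as the embedding model.
embeddings = HuggingFaceEmbeddings(model_name="GanymedeNil/text2vec-large-chinese")
vector = embeddings.embed_query("知识库测试句子")  # returns a list of floats
```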
### 1. Install Python packages

```shell
pip install -r requirements.txt
```
Attention: since langchain.document_loaders.UnstructuredFileLoader is used to load local knowledge files, you may need additional dependencies, as described in the langchain documentation.
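For example, loading a single local file could look like this (the file path is hypothetical):

```python
from langchain.document_loaders import UnstructuredFileLoader

# unstructured picks a parser based on the file type (txt, docx, md, ...).
loader = UnstructuredFileLoader("knowledge/sample.md")  # hypothetical path
docs = loader.load()  # a list of langchain Document objects
```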
### 2. Run the knowledge_based_chatglm.py script

```shell
python knowledge_based_chatglm.py
```
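For orientation, the script's flow is roughly: load a local knowledge file, split it into chunks, embed the chunks into a vector store, retrieve the most relevant chunks for a question, and let ChatGLM answer with that context. The sketch below uses standard langchain pieces (`CharacterTextSplitter`, `FAISS`) as an assumed approximation of what knowledge_based_chatglm.py wires together:

```python
from langchain.document_loaders import UnstructuredFileLoader
from langchain.embeddings import HuggingFaceEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import FAISS

# 1. Load the local knowledge file and split it into chunks.
docs = UnstructuredFileLoader("knowledge/sample.md").load()  # hypothetical path
chunks = CharacterTextSplitter(chunk_size=500, chunk_overlap=50).split_documents(docs)

# 2. Embed the chunks and build a searchable vector index.
embeddings = HuggingFaceEmbeddings(model_name="GanymedeNil/text2vec-large-chinese")
index = FAISS.from_documents(chunks, embeddings)

# 3. Retrieve context for a question and hand it to ChatGLM.
question = "什么是本地知识库?"
context = "\n".join(d.page_content for d in index.similarity_search(question, k=3))
prompt = f"已知信息:\n{context}\n\n根据已知信息回答问题: {question}"
# response, history = model.chat(tokenizer, prompt, history=[])  # model/tokenizer as loaded above
```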
## Known issues
- Currently, txt, docx, and md files have been tested and are supported; for more file formats, refer to the langchain documentation. If a document contains special characters, it may not load correctly.
- On macOS, this project may not run properly due to a pytorch incompatibility affecting macOS 13.3 and above.
## Roadmap
- Local-knowledge-based application with langchain + ChatGLM-6B
- Unstructured file loading with langchain
- Loading of more file formats with langchain
- Web UI demo implemented with gradio/streamlit
- API implemented with fastapi, and a web UI demo built on the API