lurenlym | a04244cf6a | incorrect arg order and support zero his | 2023-04-13 14:13:58 +08:00
imClumsyPanda | 4ec339777c | Add support for folder path as input | 2023-04-11 21:55:42 +08:00
imClumsyPanda | 12ee17f3b3 | use RetrievalQA instead of ChatVectorDBChain | 2023-04-10 22:55:22 +08:00
imClumsyPanda | 2240ed1ec2 | update requirements.txt | 2023-04-09 23:31:26 +08:00
imClumsyPanda | 3dc5860cfe | Add llm_model_dict to choose llm and add chatglm-6b-int4 as an option | 2023-04-09 23:23:11 +08:00
imClumsyPanda | 6d5b143811 | Add llm_model_dict to choose llm and add chatglm-6b-int4 as an option | 2023-04-09 23:20:05 +08:00
imClumsyPanda | 7720bb58e0 | Add llm_model_dict to choose llm and add chatglm-6b-int4 as an option | 2023-04-09 23:10:44 +08:00
littlepanda0716 | c4b52dda72 | add torch_gc to clear gpu cache in knowledge_based_chatglm.py | 2023-04-07 10:46:02 +08:00
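The torch_gc helper named in commit c4b52dda72 can be sketched as below. This is a guess at the shape of such a helper, not the repository's actual code: `torch.cuda.empty_cache()` and `torch.cuda.ipc_collect()` are the standard PyTorch calls for releasing cached GPU memory, and the import is guarded so the sketch degrades to a no-op on machines without torch or CUDA.

```python
# Hypothetical sketch of a torch_gc-style helper (commit c4b52dda72 only names
# the function; this body is an assumption based on common practice).
try:
    import torch
    HAS_TORCH = True
except ImportError:  # allow the sketch to run without PyTorch installed
    HAS_TORCH = False

def torch_gc() -> None:
    """Release cached GPU memory between generations, if CUDA is in use."""
    if HAS_TORCH and torch.cuda.is_available():
        torch.cuda.empty_cache()   # free cached blocks back to the driver
        torch.cuda.ipc_collect()   # reclaim memory from CUDA IPC handles

torch_gc()  # safe no-op on CPU-only machines
```

Calling it after each model response keeps the allocator's cache from pinning memory that the next request could use.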
littlepanda0716 | 5664d1ff62 | add torch_gc to clear gpu cache in knowledge_based_chatglm.py | 2023-04-07 10:45:44 +08:00
littlepanda0716 | dfe966ed41 | update chatglm_llm.py | 2023-04-07 09:58:44 +08:00
imClumsyPanda | 63d900607f | Merge pull request #17 from myml/memory (fix: chatglm model being duplicated, causing excessive GPU memory usage) | 2023-04-07 09:43:49 +08:00
littlepanda0716 | 51c44e3e0a | update chatglm_llm.py | 2023-04-07 09:28:45 +08:00
myml | bed03a6ff1 | fix: chatglm model being duplicated, causing excessive GPU memory usage (as a class member, model is copied once when the class is instantiated, so GPU memory usage doubles with every question asked; fixed by changing model to a global variable) | 2023-04-05 01:17:26 +08:00
littlepanda0716 | f17d26addf | first commit | 2023-03-31 20:09:40 +08:00
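Commit bed03a6ff1 explains its fix: holding the loaded model as a class member caused it to be copied per instantiation, doubling GPU memory use, so the model was moved to a global variable. A minimal sketch of that pattern follows; the class and function names here are hypothetical stand-ins, not the repository's actual identifiers, and FakeModel stands in for the real ChatGLM weights.

```python
# Sketch of the shared-global-model pattern from commit bed03a6ff1
# (hypothetical names; FakeModel stands in for loaded ChatGLM weights).

class FakeModel:
    """Stand-in for a large loaded model object."""
    def generate(self, prompt: str) -> str:
        return f"echo: {prompt}"

# One model for the whole process, loaded lazily on first use.
_GLOBAL_MODEL = None

def get_model() -> FakeModel:
    global _GLOBAL_MODEL
    if _GLOBAL_MODEL is None:
        _GLOBAL_MODEL = FakeModel()  # expensive load happens exactly once
    return _GLOBAL_MODEL

class ChatGLMWrapper:
    """Looks the model up globally instead of storing it as an instance field,
    so creating more wrappers does not duplicate the weights."""
    def ask(self, prompt: str) -> str:
        return get_model().generate(prompt)

a, b = ChatGLMWrapper(), ChatGLMWrapper()
assert get_model() is get_model()  # only one model object ever exists
```

The key property is that instantiating additional wrappers (or re-validating them in a framework that copies declared fields) can never multiply the weights, because no instance owns a reference of its own.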