littlepanda0716 | c4b52dda72 | add torch_gc to clear gpu cache in knowledge_based_chatglm.py | 2023-04-07 10:46:02 +08:00
littlepanda0716 | 5664d1ff62 | add torch_gc to clear gpu cache in knowledge_based_chatglm.py | 2023-04-07 10:45:44 +08:00
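The torch_gc helper named in the two commits above is, in ChatGLM-style serving scripts, usually a small utility that releases cached CUDA memory between requests. A minimal sketch of what such a helper commonly looks like; the actual implementation in knowledge_based_chatglm.py may differ:

```python
import torch

def torch_gc():
    """Free cached GPU memory between requests (sketch matching the commit message)."""
    if torch.cuda.is_available():
        # Hand cached, unused blocks in PyTorch's caching allocator back to the driver.
        torch.cuda.empty_cache()
        # Collect CUDA IPC memory that can linger after tensors are shared across processes.
        torch.cuda.ipc_collect()
```

Calling such a helper after each answer keeps reported GPU memory from growing across questions, though it does not reduce the memory held by the loaded model itself.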
littlepanda0716 | dfe966ed41 | update chatglm_llm.py | 2023-04-07 09:58:44 +08:00
imClumsyPanda | 63d900607f | Merge pull request #17 from myml/memory (fix: the chatglm model was being copied, causing excessive GPU memory usage) | 2023-04-07 09:43:49 +08:00
littlepanda0716 | 51c44e3e0a | update chatglm_llm.py | 2023-04-07 09:28:45 +08:00
myml | bed03a6ff1 | fix: the chatglm model was being copied, causing excessive GPU memory usage | 2023-04-05 01:17:26 +08:00
    Holding model as a class member caused it to be copied when the class was instantiated,
    so GPU memory usage doubled with every question asked. Fixed by changing model to a global variable.
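A minimal sketch of the pattern this commit describes: moving the model from a class member to a module-level global so all instances share one copy on the GPU. The module name, class name, method name, and model id below are assumptions for illustration, not taken from the repository's chatglm_llm.py:

```python
# chatglm_llm_sketch.py -- hypothetical module, illustrative only
from transformers import AutoModel, AutoTokenizer

# Load the tokenizer and model once at module scope so every caller shares the
# same weights, instead of binding the model to a class where instantiation
# can end up duplicating it and doubling GPU memory per question.
tokenizer = AutoTokenizer.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True)
model = AutoModel.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True).half().cuda()
model = model.eval()

class ChatGLM:
    """Thin wrapper that references the module-level model rather than storing its own copy."""

    def answer(self, prompt: str, history=None):
        # chatglm-6b exposes a chat() helper when loaded with trust_remote_code=True.
        response, history = model.chat(tokenizer, prompt, history=history or [])
        return response, history
```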
littlepanda0716 | f17d26addf | first commit | 2023-03-31 20:09:40 +08:00