commit aaaf4d7b0e
Author: djsaber
Date:   2023-12-29 13:03:44 +08:00

    fix(chat): fix stream_chat in modeling_internlm(hf) to avoid decode error (#560)

    * Fixed the issue where the HF model spontaneously conducted multiple rounds
      of Q&A and the stream_chat method generated garbled characters.
    * Update modeling_internlm.py: correct spelling mistake: chche -> cache

    Signed-off-by: daijun1 <daijun1@eccom.com.cn>
    Co-authored-by: daijun1 <daijun1@eccom.com.cn>

commit 68d6abc64a
Author: Yining Li
Date:   2023-12-14 17:46:03 +08:00

    doc(readme): update 7b/20b chat model information (#537)

    * update chat model information in README
    * modifications by pre-commit hook
    * update 7b evaluation results
    * fix readme

commit 8420115b5e
Author: zhjunqin
Date:   2023-09-10 23:46:45 +08:00

    fix(chat): fix stream_chat to return generator (#123)

commit 0c1060435d
Author: x54-729
Date:   2023-07-17 21:08:10 +08:00

    Use tempfile for convert2hf.py (#23)

    Fix https://github.com/InternLM/InternLM/issues/50

commit fa7337b37b
Author: Sun Peng
Date:   2023-07-06 12:55:23 +08:00

    initial commit