| Author | Commit | Message | Date |
| --- | --- | --- | --- |
| github-actions[bot] | a41cf88e9b | [format] applied code formatting on changed files in pull request 4908 (#4918). Co-authored-by: github-actions <github-actions@github.com> | 2023-10-17 10:48:24 +08:00 |
| Zian(Andy) Zheng | 7768afbad0 | Update flash_attention_patch.py to be compatible with a new change in the Transformers library, where a 'padding_mask' argument was added to the forward function of the attention layer (https://github.com/huggingface/transformers/pull/25598) | 2023-10-16 14:00:45 +08:00 |
| Camille Zhong | 652adc2215 | Update README.md | 2023-10-10 23:19:34 +08:00 |
| Camille Zhong | afe10a85fd | Update README.md | 2023-10-10 23:19:34 +08:00 |
| Camille Zhong | 3043d5d676 | Update modelscope link in README.md (add modelscope link) | 2023-10-10 23:19:34 +08:00 |
| Yuanchen | 1fa8c5e09f | Update Qwen-7B results (#4821). Co-authored-by: Xu Yuanchen <yuanchen.xu00@gmail.com> | 2023-09-27 17:33:54 +08:00 |
| Chandler-Bing | b6cf0aca55 | [hotfix] change llama2 Colossal-LLaMA-2 script filename (#4800): rename pretraining.py -> training.py, since there is no file named pretraining.py (the name was wrong) | 2023-09-26 11:44:27 +08:00 |
| Tong Li | 8cbce6184d | update | 2023-09-26 11:36:53 +08:00 |
| Tong Li | bd014673b0 | update readme | 2023-09-26 10:58:05 +08:00 |
| binmakeswell | d512a4d38d | [doc] add llama2 domain-specific solution news (#4789) | 2023-09-25 10:44:15 +08:00 |
| Tong Li | 74aa7d964a | initial commit: add colossal llama 2 (#4784) | 2023-09-24 23:12:26 +08:00 |
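The flash_attention_patch.py entry (7768afbad0) is about a signature change: after https://github.com/huggingface/transformers/pull/25598, newer Transformers versions pass an extra `padding_mask` keyword into the attention layer's forward, which breaks monkey patches whose replacement forward does not accept it. Below is a minimal sketch of one generic way such a patch can tolerate newly added keywords; it is not the actual ColossalAI code, and `tolerate_extra_kwargs` and `ToyAttention` are hypothetical names:

```python
import inspect

import torch
from torch import nn


def tolerate_extra_kwargs(forward_fn):
    """Return a forward that drops keyword arguments forward_fn does not accept."""
    accepted = set(inspect.signature(forward_fn).parameters)

    def wrapper(*args, **kwargs):
        # Keep only the keywords the wrapped forward actually declares.
        kwargs = {k: v for k, v in kwargs.items() if k in accepted}
        return forward_fn(*args, **kwargs)

    return wrapper


class ToyAttention(nn.Module):
    """Stand-in for a patched attention layer; not the real implementation."""

    def forward(self, hidden_states, attention_mask=None):
        return hidden_states  # the real patch would run flash attention here


attn = ToyAttention()
attn.forward = tolerate_extra_kwargs(attn.forward)

x = torch.randn(2, 4, 8)
# Newer Transformers versions pass padding_mask; the wrapper drops it
# instead of raising TypeError.
out = attn(x, attention_mask=None, padding_mask=None)
assert out.shape == (2, 4, 8)
```

The alternative, which the actual patch in the linked PR discussion follows more closely, is simply to add `padding_mask=None` to the replacement forward's signature; the wrapper approach above trades explicitness for resilience against further signature changes.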