Hongxin Liu | 6c0fa7b9a8 | [llama] fix dataloader for hybrid parallel (#5358) | 2024-02-05 15:14:56 +08:00
* [plugin] refactor prepare dataloader
* [plugin] update train script
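For context on what this dataloader refactor touches: under hybrid parallelism the dataloader must shard batches across data-parallel ranks only, so the train script obtains it from the plugin rather than building a plain PyTorch DataLoader. The sketch below is a rough illustration of that usage, not the commit's code; the parallel sizes, dataset, and launch arguments are assumptions and may differ by ColossalAI version.

```python
import torch
from torch.utils.data import TensorDataset

import colossalai
from colossalai.booster import Booster
from colossalai.booster.plugin import HybridParallelPlugin

# Run under torchrun; the empty config dict may not be required in newer releases.
colossalai.launch_from_torch(config={})

# Example parallel sizes, chosen for illustration only.
plugin = HybridParallelPlugin(tp_size=2, pp_size=1)
booster = Booster(plugin=plugin)

# Placeholder dataset standing in for the tokenized training data.
dataset = TensorDataset(torch.randn(1024, 16))

# The plugin builds the dataloader so that batches are sharded across
# data-parallel ranks, not tensor- or pipeline-parallel ranks.
dataloader = plugin.prepare_dataloader(
    dataset,
    batch_size=8,
    shuffle=True,
    drop_last=True,
    seed=42,
)
```

Launched with `torchrun`, each data-parallel group then sees a disjoint slice of the dataset while tensor- and pipeline-parallel ranks within a group consume the same batches.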
李文军 | ec912b1ba9 | [NFC] polish applications/Colossal-LLaMA-2/colossal_llama2/tokenizer/init_tokenizer.py code style (#5228) | 2024-01-25 13:14:48 +08:00
Desperado-Jia | ddf879e2db | fix bug for mixture (#5299) | 2024-01-22 22:17:54 +08:00
Yuanchen | b397104438 | [Colossal-Llama-2] Add finetuning Colossal-Llama-2 example (#4878) | 2023-12-07 14:02:03 +08:00
* Add finetuning Colossal-Llama-2 example
* Add finetuning Colossal-Llama-2 example 2
* Add finetuning Colossal-Llama-2 example and support NEFTuning
* Add inference example and refine neftune
* Modify readme file
* update the imports
---------
Co-authored-by: Xu Yuanchen <yuanchen.xu00@gmail.com>
Co-authored-by: Camille Zhong <44392324+Camille7777@users.noreply.github.com>
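NEFTune, mentioned in the commit above, fine-tunes with uniform noise injected into the token embeddings, scaled by alpha / sqrt(seq_len * hidden_dim). Below is a minimal sketch of that idea using a forward hook on the embedding layer, not the example's actual implementation; `neftune_alpha` and the hook wiring are illustrative.

```python
import torch


def neftune_forward_hook(module, inputs, output, neftune_alpha: float = 5.0):
    """Add uniform noise to embedding outputs during training (NEFTune-style).

    Noise magnitude follows the NEFTune paper: alpha / sqrt(seq_len * hidden_dim).
    `neftune_alpha` is a tunable hyperparameter, not a value taken from this repo.
    """
    if module.training:
        dims = output.size(1) * output.size(2)  # seq_len * hidden_dim
        magnitude = neftune_alpha / dims ** 0.5
        output = output + torch.zeros_like(output).uniform_(-magnitude, magnitude)
    return output


# Usage sketch: attach the hook to the model's input embedding layer, e.g.
#   embedding = model.get_input_embeddings()   # a transformers causal LM
#   handle = embedding.register_forward_hook(neftune_forward_hook)
# and remove it with handle.remove() before inference.
```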
github-actions[bot] | a41cf88e9b | [format] applied code formatting on changed files in pull request 4908 (#4918) | 2023-10-17 10:48:24 +08:00
Co-authored-by: github-actions <github-actions@github.com>
Zian(Andy) Zheng | 7768afbad0 | Update flash_attention_patch.py | 2023-10-16 14:00:45 +08:00
To stay compatible with a recent change in the Transformers library, where a new argument 'padding_mask' was added to the forward function of the attention layer.
https://github.com/huggingface/transformers/pull/25598
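The compatibility fix is essentially a signature change: the monkey-patched attention forward has to accept the extra `padding_mask` keyword that newer Transformers versions now pass to every attention layer. The sketch below is schematic; the argument list roughly mirrors `LlamaAttention.forward` of that Transformers era, and the body is a placeholder rather than the repository's flash-attention code.

```python
from typing import Optional, Tuple

import torch
from transformers.models.llama.modeling_llama import LlamaAttention


def attention_forward(
    self,
    hidden_states: torch.Tensor,
    attention_mask: Optional[torch.Tensor] = None,
    position_ids: Optional[torch.LongTensor] = None,
    past_key_value: Optional[Tuple[torch.Tensor]] = None,
    output_attentions: bool = False,
    use_cache: bool = False,
    padding_mask: Optional[torch.Tensor] = None,  # new kwarg from transformers PR 25598
    **kwargs,
) -> Tuple[torch.Tensor, Optional[torch.Tensor], Optional[Tuple[torch.Tensor]]]:
    # Placeholder body: the real patch runs flash attention here. The point of the
    # compatibility change is simply that `padding_mask` is accepted (and may be
    # ignored by the flash-attention path) instead of raising a TypeError.
    raise NotImplementedError("flash-attention computation goes here")


# Illustrative of how such a monkey patch is typically applied, not the file's exact code:
LlamaAttention.forward = attention_forward
```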
Tong Li | 74aa7d964a | initial commit: add colossal llama 2 (#4784) | 2023-09-24 23:12:26 +08:00