Yuanchen | b397104438 | 2023-12-07 14:02:03 +08:00
[Colossal-Llama-2] Add finetuning Colossal-Llama-2 example (#4878)
* Add finetuning Colossal-Llama-2 example
* Add finetuning Colossal-Llama-2 example 2
* Add finetuning Colossal-Llama-2 example and support NEFTuning (see the sketch after this entry)
* Add inference example and refine NEFTune
* Modify README file
* Update the imports
---------
Co-authored-by: Xu Yuanchen <yuanchen.xu00@gmail.com>
Co-authored-by: Camille Zhong <44392324+Camille7777@users.noreply.github.com>
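NEFTune adds uniform noise to the embedding outputs during finetuning, with magnitude alpha / sqrt(seq_len * hidden_dim). A minimal sketch of that idea, assuming a PyTorch forward hook on the input embedding layer; the hook-based wiring and names are illustrative, not necessarily how this example implements it:

```python
import math

import torch


def neftune_forward_hook(module, inputs, output, noise_alpha=5.0):
    """Add NEFTune-style uniform noise to embedding outputs while training.

    The noise magnitude is noise_alpha / sqrt(seq_len * hidden_dim), following
    the NEFTune paper. Names and the hook-based wiring are assumptions made
    for illustration only.
    """
    if module.training:
        seq_len, hidden_dim = output.size(1), output.size(2)
        magnitude = noise_alpha / math.sqrt(seq_len * hidden_dim)
        output = output + torch.empty_like(output).uniform_(-magnitude, magnitude)
    return output


# Hypothetical usage: attach the hook to the model's input embedding layer.
# embeddings = model.get_input_embeddings()
# embeddings.register_forward_hook(neftune_forward_hook)
```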
github-actions[bot] | a41cf88e9b | 2023-10-17 10:48:24 +08:00
[format] applied code formatting on changed files in pull request 4908 (#4918)
Co-authored-by: github-actions <github-actions@github.com>
Zian(Andy) Zheng | 7768afbad0 | 2023-10-16 14:00:45 +08:00
Update flash_attention_patch.py
Keeps the patch compatible with a new change in the Transformers library, where a new argument 'padding_mask' was added to the forward function of the attention layer (see the sketch after this entry).
https://github.com/huggingface/transformers/pull/25598
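Because newer Transformers versions pass `padding_mask` into the attention layer's forward call, a monkey-patched flash-attention forward has to accept that argument or the call fails with an unexpected-keyword error. A minimal sketch of a compatible signature, assuming a LlamaAttention-style patch; the attention computation itself is elided, only the signature handling is the point:

```python
from typing import Optional, Tuple

import torch


def flash_attention_forward(
    self,
    hidden_states: torch.Tensor,
    attention_mask: Optional[torch.Tensor] = None,
    position_ids: Optional[torch.LongTensor] = None,
    past_key_value: Optional[Tuple[torch.Tensor]] = None,
    output_attentions: bool = False,
    use_cache: bool = False,
    padding_mask: Optional[torch.Tensor] = None,  # added upstream in transformers PR #25598
    **kwargs,  # tolerate further upstream signature changes
):
    """Replacement forward for a patched attention module (sketch only).

    Accepting `padding_mask` keeps the patch compatible with newer Transformers
    releases; the flash-attention computation is omitted here.
    """
    raise NotImplementedError("sketch only: flash-attention computation goes here")


# Hypothetical application of the patch:
# from transformers.models.llama.modeling_llama import LlamaAttention
# LlamaAttention.forward = flash_attention_forward
```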
Tong Li | 74aa7d964a | 2023-09-24 23:12:26 +08:00
initial commit: add colossal llama 2 (#4784)