Camille Zhong | da885ed540 | fix tensor data update for gemini loss calculation (#5442) | 2024-03-11 13:49:58 +08:00
Camille Zhong | 743e7fad2f | [colossal-llama2] add stream chat example for chat version model (#5428) | 2024-03-07 14:58:56 +08:00
  * add stream chat for chat version
  * remove os.system clear
  * modify function name
Camille Zhong | 4b8312c08e | fix sft single turn inference example (#5416) | 2024-03-01 17:27:50 +08:00
Tong Li | a28c971516 | update requirements (#5407) | 2024-02-28 17:46:27 +08:00
CZYCW | b833153fd5 | [hotfix] fix variable type for top_p (#5313) | 2024-02-19 18:25:44 +08:00
  Co-authored-by: binmakeswell <binmakeswell@gmail.com>
Hongxin Liu | 7303801854 | [llama] fix training and inference scripts (#5384) | 2024-02-19 16:41:04 +08:00
  * [llama] refactor inference example to fit sft
  * [llama] fix training script to fit gemini
  * [llama] fix inference script
Hongxin Liu | 084c91246c | [llama] fix memory issue (#5371) | 2024-02-06 19:02:37 +08:00
  * [llama] fix memory issue
  * [llama] add comment
Hongxin Liu | eb4f2d90f9 | [llama] polish training script and fix optim ckpt (#5368) | 2024-02-06 11:52:17 +08:00
Camille Zhong | 44ca61a22b | [llama] fix neftune & pbar with start_step (#5364) | 2024-02-05 18:04:23 +08:00
Hongxin Liu | a4cec1715b | [llama] add flash attn patch for npu (#5362) | 2024-02-05 16:48:34 +08:00
Hongxin Liu | 73f9f23fc6 | [llama] update training script (#5360) | 2024-02-05 16:33:18 +08:00
  * [llama] update training script
  * [doc] polish docstr
Hongxin Liu | 6c0fa7b9a8 | [llama] fix dataloader for hybrid parallel (#5358) | 2024-02-05 15:14:56 +08:00
  * [plugin] refactor prepare dataloader
  * [plugin] update train script
Frank Lee | 8823cc4831 | Merge pull request #5310 from hpcaitech/feature/npu | 2024-01-29 13:49:39 +08:00
  Feature/npu
李文军 | ec912b1ba9 | [NFC] polish applications/Colossal-LLaMA-2/colossal_llama2/tokenizer/init_tokenizer.py code style (#5228) | 2024-01-25 13:14:48 +08:00
Desperado-Jia | ddf879e2db | fix bug for mefture (#5299) | 2024-01-22 22:17:54 +08:00
ver217 | 148469348a | Merge branch 'main' into sync/npu | 2024-01-18 12:05:21 +08:00
digger yu | 41e52c1c6e | [doc] fix typo in Colossal-LLaMA-2/README.md (#5247) | 2024-01-10 19:24:56 +08:00
Hongxin Liu | d202cc28c0 | [npu] change device to accelerator api (#5239) | 2024-01-09 10:20:05 +08:00
  * update accelerator
  * fix timer
  * fix amp
  * update
  * fix
  * update bug
  * add error raise
  * fix autocast
  * fix set device
  * remove doc accelerator
  * update doc
  * update doc
  * update doc
  * use nullcontext
  * update cpu
  * update null context
  * change time limit for example
  * update
  * update
  * update
  * update
  * [npu] polish accelerator code
  ---------
  Co-authored-by: Xuanlei Zhao <xuanlei.zhao@gmail.com>
  Co-authored-by: zxl <43881818+oahzxl@users.noreply.github.com>
github-actions[bot] | 4fb4a22a72 | [format] applied code formatting on changed files in pull request 5234 (#5235) | 2024-01-07 20:55:34 +08:00
  Co-authored-by: github-actions <github-actions@github.com>
binmakeswell | b9b32b15e6 | [doc] add Colossal-LLaMA-2-13B (#5234) | 2024-01-07 20:53:12 +08:00
  * [doc] add Colossal-LLaMA-2-13B
  * [doc] add Colossal-LLaMA-2-13B
  * [doc] add Colossal-LLaMA-2-13B
Camille Zhong | 915b4652f3 | [doc] Update README.md of Colossal-LLAMA2 (#5233) | 2024-01-06 17:06:41 +08:00
  * Update README.md
  * Update README.md
Tong Li | d992b55968 | [Colossal-LLaMA-2] Release Colossal-LLaMA-2-13b-base model (#5224) | 2024-01-05 17:24:26 +08:00
  * update readme
  * update readme
  * update link
  * update
  * update readme
  * update
  * update
  * update
  * update title
  * update example
  * update example
  * fix content
  * add conclusion
  * add license
  * update
  * update
  * update version
  * fix minor
Yuanchen | b397104438 | [Colossal-Llama-2] Add finetuning Colossal-Llama-2 example (#4878) | 2023-12-07 14:02:03 +08:00
  * Add finetuning Colossal-Llama-2 example
  * Add finetuning Colossal-Llama-2 example 2
  * Add finetuning Colossal-Llama-2 example and support NEFTuning
  * Add inference example and refine neftune
  * Modify readme file
  * update the imports
  ---------
  Co-authored-by: Xu Yuanchen <yuanchen.xu00@gmail.com>
  Co-authored-by: Camille Zhong <44392324+Camille7777@users.noreply.github.com>
digger yu | 9110406a47 | fix typo change JOSNL TO JSONL etc. (#5116) | 2023-11-29 11:08:32 +08:00
digger yu | d5661f0f25 | [nfc] fix typo change directoty to directory (#5111) | 2023-11-27 18:25:53 +08:00
github-actions[bot] | a41cf88e9b | [format] applied code formatting on changed files in pull request 4908 (#4918) | 2023-10-17 10:48:24 +08:00
  Co-authored-by: github-actions <github-actions@github.com>
Zian(Andy) Zheng | 7768afbad0 | Update flash_attention_patch.py | 2023-10-16 14:00:45 +08:00
  To stay compatible with a recent change in the Transformers library, which added a new 'padding_mask' argument to the forward function of the attention layer.
  https://github.com/huggingface/transformers/pull/25598
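The compatibility issue behind this patch is that once the upstream library starts passing an extra keyword argument, a monkey-patched forward must at least accept it, or newer Transformers versions fail with an unexpected-keyword TypeError. Below is a minimal sketch of such a signature; the function name, the parameters other than `padding_mask`, and the placeholder body are illustrative assumptions, not the actual contents of flash_attention_patch.py:

```python
from typing import Optional, Tuple

import torch


# Hypothetical patched forward: only the presence of `padding_mask` in the
# signature matters for compatibility with newer Transformers releases.
def patched_attention_forward(
    self,
    hidden_states: torch.Tensor,
    attention_mask: Optional[torch.Tensor] = None,
    position_ids: Optional[torch.LongTensor] = None,
    past_key_value: Optional[Tuple[torch.Tensor, torch.Tensor]] = None,
    output_attentions: bool = False,
    use_cache: bool = False,
    padding_mask: Optional[torch.Tensor] = None,  # accepted so newer callers don't raise a TypeError
):
    # The real patch would run its flash-attention computation here.
    raise NotImplementedError("illustrative signature only")
```

Accepting a catch-all `**kwargs` is another common way to keep such patches resilient to future signature changes, at the cost of silently swallowing arguments the patch does not handle.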
Camille Zhong | 652adc2215 | Update README.md | 2023-10-10 23:19:34 +08:00
Camille Zhong | afe10a85fd | Update README.md | 2023-10-10 23:19:34 +08:00
Camille Zhong | 3043d5d676 | Update modelscope link in README.md | 2023-10-10 23:19:34 +08:00
  add modelscope link
Yuanchen | 1fa8c5e09f | Update Qwen-7B results (#4821) | 2023-09-27 17:33:54 +08:00
  Co-authored-by: Xu Yuanchen <yuanchen.xu00@gmail.com>
Chandler-Bing | b6cf0aca55 | [hotfix] change llama2 Colossal-LLaMA-2 script filename (#4800) | 2023-09-26 11:44:27 +08:00
  change filename: pretraining.py -> trainin.py (there is no file named pretraining.py; the original reference was incorrect)
Tong Li | 8cbce6184d | update | 2023-09-26 11:36:53 +08:00
Tong Li | bd014673b0 | update readme | 2023-09-26 10:58:05 +08:00
binmakeswell | d512a4d38d | [doc] add llama2 domain-specific solution news (#4789) | 2023-09-25 10:44:15 +08:00
  * [doc] add llama2 domain-specific solution news
Tong Li | 74aa7d964a | initial commit: add colossal llama 2 (#4784) | 2023-09-24 23:12:26 +08:00