Wenhao Chen
bb0a668fee
[hotfix] set return_outputs=False in examples and polish code (#5404)
* fix: simplify merge_batch
* fix: use return_outputs=False to eliminate extra memory consumption
* feat: add return_outputs warning
* style: remove `return_outputs=False` as it is the default value
2024-03-25 12:31:09 +08:00
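For context, the memory issue this hotfix targets: ColossalAI's pipeline executor can collect every micro-batch's outputs, which is wasted memory when only the loss is needed. A minimal sketch of the pattern, assuming a model and optimizer already boosted under a HybridParallelPlugin booster (variable names are illustrative, not the examples' exact code):

```python
# Sketch: run one pipeline step without collecting per-microbatch outputs.
# Leaving return_outputs at its default (False) frees the executor from
# holding every micro-batch's logits; only the aggregated loss is kept.
def criterion(outputs, inputs):
    return outputs.loss

result = booster.execute_pipeline(
    data_iter,          # iterator over the training dataloader
    model,
    criterion,
    optimizer,
    return_loss=True,   # return_outputs defaults to False
)
loss = result["loss"]
```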
Hongxin Liu
d882d18c65
[example] reuse flash attn patch (#5400)
2024-02-27 11:22:07 +08:00
Frank Lee
8823cc4831
Merge pull request #5310 from hpcaitech/feature/npu
Feature/npu
2024-01-29 13:49:39 +08:00
flybird11111
f7e3f82a7e
fix llama pretrain (#5287)
2024-01-19 17:49:02 +08:00
Hongxin Liu
d202cc28c0
[npu] change device to accelerator api (#5239)
* update accelerator
* fix timer
* fix amp
* update
* fix
* fix bug
* add error raise
* fix autocast
* fix set device
* remove doc accelerator
* update doc
* use nullcontext
* update cpu
* update null context
* change time limit for example
* [npu] polish accelerator code
---------
Co-authored-by: Xuanlei Zhao <xuanlei.zhao@gmail.com>
Co-authored-by: zxl <43881818+oahzxl@users.noreply.github.com>
2024-01-09 10:20:05 +08:00
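The accelerator API this commit migrates the examples to routes device-specific calls through a single entry point, so one script can target CUDA and NPU alike. A minimal sketch of the pattern, based on ColossalAI's colossalai.accelerator module (treat exact method names as assumptions):

```python
from colossalai.accelerator import get_accelerator

accelerator = get_accelerator()            # picks the CUDA, NPU, or CPU backend
device = accelerator.get_current_device()  # replaces torch.cuda.current_device()
accelerator.empty_cache()                  # replaces torch.cuda.empty_cache()
accelerator.synchronize()                  # replaces torch.cuda.synchronize()
```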
flybird11111
aae496631c
[shardformer] fix flash attention: when the mask is causal, don't unpad it (#5084)
* fix flash attn
* fix
2023-11-22 16:00:07 +08:00
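The idea behind this fix: a purely causal mask carries no padding information, so the unpad/pad round-trip is unnecessary and flash attention can be called densely with causal=True. A hedged sketch using the flash-attn package (not the shardformer's exact code):

```python
from flash_attn import flash_attn_func, flash_attn_varlen_func

def attention(q, k, v, is_causal, cu_seqlens=None, max_seqlen=None):
    # dense path: q, k, v are (batch, seqlen, num_heads, head_dim)
    if is_causal:
        # a causal mask encodes no padding, so skip unpadding entirely
        return flash_attn_func(q, k, v, causal=True)
    # padded path: caller passes already-unpadded (total_tokens, heads, dim)
    # tensors plus cumulative sequence lengths for the varlen kernel
    return flash_attn_varlen_func(
        q, k, v, cu_seqlens, cu_seqlens, max_seqlen, max_seqlen
    )
```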
Hongxin Liu
079bf3cb26
[misc] update pre-commit and run all files (#4752)
* [misc] update pre-commit
* [misc] run pre-commit
* [misc] remove useless configuration files
* [misc] ignore cuda for clang-format
2023-09-19 14:20:26 +08:00
flybird11111
4c4482f3ad
[example] llama2 add fine-tune example (#4673)
* [shardformer] update shardformer readme
* [shardformer] update llama2/opt finetune example and update shardformer to llama2
* [shardformer] change dataset
* [shardformer] fix CI
* [shardformer] fix
* [example] update opt example
* [example] resolve comments
* [example] llama2 add finetune example
* fix
* update llama2 example
* Update requirements.txt
2023-09-15 18:45:44 +08:00
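The fine-tuning script added here follows the standard booster training flow. A minimal sketch, assuming model, optimizer, and dataloader have already been wrapped by booster.boost() (names are illustrative, not the script's exact code):

```python
model.train()
for batch in dataloader:
    batch = {k: v.to(device) for k, v in batch.items()}
    outputs = model(**batch)                   # LlamaForCausalLM returns .loss
    booster.backward(outputs.loss, optimizer)  # booster-aware backward pass
    optimizer.step()
    optimizer.zero_grad()
```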
Hongxin Liu
0b00def881
[example] add llama2 example (#4527)
* [example] transfer llama-1 example
* [example] fit llama-2
* [example] refactor scripts folder
* [example] fit new gemini plugin
* [cli] fix multinode runner
* [example] fit gemini optim checkpoint
* [example] refactor scripts
* [example] update requirements
* [example] rename llama to llama2
* [example] update readme and pretrain script
2023-08-28 17:59:11 +08:00
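For reference, the "new gemini plugin" these scripts were fitted to is wired up roughly as below. A self-contained sketch with a tiny stand-in model; the plugin arguments and the config dict reflect the API of that era, so treat the details as assumptions rather than the example's exact code:

```python
import colossalai
import torch
from colossalai.booster import Booster
from colossalai.booster.plugin import GeminiPlugin
from torch.optim import AdamW                      # the real example uses HybridAdam
from torch.utils.data import DataLoader, TensorDataset
from transformers import LlamaConfig, LlamaForCausalLM

colossalai.launch_from_torch(config={})            # older launch API took a config dict
model = LlamaForCausalLM(LlamaConfig(num_hidden_layers=2))  # tiny stand-in model
optimizer = AdamW(model.parameters(), lr=3e-4)
data = TensorDataset(torch.randint(0, 1000, (8, 16)))
dataloader = DataLoader(data, batch_size=2)

plugin = GeminiPlugin(precision="bf16", initial_scale=2**16)
booster = Booster(plugin=plugin)
# boost() returns (model, optimizer, criterion, dataloader, lr_scheduler)
model, optimizer, _, dataloader, _ = booster.boost(
    model, optimizer, dataloader=dataloader
)
```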