ColossalAI/colossalai

Latest commit 6c0fa7b9a8 by Hongxin Liu, 2024-02-05 15:14:56 +08:00:
[llama] fix dataloader for hybrid parallel (#5358)
* [plugin] refactor prepare dataloader
* [plugin] update train script
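The featured commit reworks how Booster plugins build dataloaders for hybrid parallelism. As a rough orientation, a minimal sketch of the plugin-driven dataloader flow is shown below; the parallel sizes and the dummy dataset are illustrative assumptions, not taken from this page or from the PR itself:

```python
# Minimal sketch of the Booster-plugin dataloader flow touched by #5358.
# The parallel sizes and the dummy dataset are illustrative assumptions.
import colossalai
import torch
from colossalai.booster import Booster
from colossalai.booster.plugin import HybridParallelPlugin

colossalai.launch_from_torch(config={})  # assumes the script was started with torchrun

plugin = HybridParallelPlugin(tp_size=1, pp_size=1)  # illustrative configuration
booster = Booster(plugin=plugin)

dataset = torch.utils.data.TensorDataset(torch.randn(1024, 16))  # dummy dataset
# The plugin builds a dataloader sharded across data-parallel ranks,
# so the training script does not assemble its own DistributedSampler.
dataloader = plugin.prepare_dataloader(dataset, batch_size=8, shuffle=True, drop_last=True)
```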
| Directory / file | Last commit | Date |
| --- | --- | --- |
| `_C` | [setup] support pre-build and jit-build of cuda kernels (#2374) | 2023-01-06 20:50:26 +08:00 |
| `_analyzer` | [misc] update pre-commit and run all files (#4752) | 2023-09-19 14:20:26 +08:00 |
| `accelerator` | [accelerator] fixed npu api | 2024-01-29 14:27:52 +08:00 |
| `amp` | [npu] change device to accelerator api (#5239) | 2024-01-09 10:20:05 +08:00 |
| `auto_parallel` | [npu] change device to accelerator api (#5239) | 2024-01-09 10:20:05 +08:00 |
| `autochunk` | [misc] update pre-commit and run all files (#4752) | 2023-09-19 14:20:26 +08:00 |
| `booster` | [llama] fix dataloader for hybrid parallel (#5358) | 2024-02-05 15:14:56 +08:00 |
| `checkpoint_io` | [checkpointio] fix gemini and hybrid parallel optim checkpoint (#5347) | 2024-02-01 16:13:06 +08:00 |
| `cli` | [bug] Fix the version check bug in colossalai run when generating the cmd. (#4713) | 2023-09-22 10:50:47 +08:00 |
| `cluster` | fix-test (#5210) | 2024-01-03 14:26:13 +08:00 |
| `context` | [moe] merge moe into main (#4978) | 2023-11-02 02:21:24 +00:00 |
| `device` | [npu] add npu support for hybrid plugin and llama (#5090) | 2023-11-22 19:23:21 +08:00 |
| `fx` | [misc] update pre-commit and run all files (#4752) | 2023-09-19 14:20:26 +08:00 |
| `inference` | [Hotfix] Fix model policy matching strategy in ShardFormer (#5064) | 2023-11-22 11:19:39 +08:00 |
| `interface` | [lazy] support from_pretrained (#4801) | 2023-09-26 11:04:11 +08:00 |
| `kernel` | [feat] refactored extension module (#5298) | 2024-01-25 17:01:48 +08:00 |
| `lazy` | [doc] add lazy init docs (#4808) | 2023-09-27 10:24:04 +08:00 |
| `legacy` | [feat] refactored extension module (#5298) | 2024-01-25 17:01:48 +08:00 |
| `logging` | [misc] update pre-commit and run all files (#4752) | 2023-09-19 14:20:26 +08:00 |
| `moe` | Merge pull request #5310 from hpcaitech/feature/npu | 2024-01-29 13:49:39 +08:00 |
| `nn` | [feat] refactored extension module (#5298) | 2024-01-25 17:01:48 +08:00 |
| `pipeline` | [feat] refactored extension module (#5298) | 2024-01-25 17:01:48 +08:00 |
| `shardformer` | fix typo change dosen't to doesn't (#5308) | 2024-01-30 09:57:38 +08:00 |
| `tensor` | [gemini] fix param op hook when output is tuple (#5355) | 2024-02-04 11:58:26 +08:00 |
| `testing` | [npu] change device to accelerator api (#5239) | 2024-01-09 10:20:05 +08:00 |
| `utils` | Merge pull request #5310 from hpcaitech/feature/npu | 2024-01-29 13:49:39 +08:00 |
| `zero` | [checkpointio] fix gemini and hybrid parallel optim checkpoint (#5347) | 2024-02-01 16:13:06 +08:00 |
| `__init__.py` | [accelerator] init the accelerator module (#5129) | 2023-11-30 13:25:17 +08:00 |
| `initialize.py` | [npu] change device to accelerator api (#5239) | 2024-01-09 10:20:05 +08:00 |
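Several recent commits in this listing ([accelerator] #5129, [npu] #5239) replace hard-coded `torch.cuda` calls with the accelerator abstraction so the same code can run on CUDA or NPU devices. A minimal device-agnostic sketch of that pattern follows; the exact method usage is an assumption inferred from the commit titles above, not verified against this revision:

```python
# Hypothetical sketch of device-agnostic code via colossalai.accelerator.
# Usage is an assumption based on the accelerator-module commits above.
import torch
from colossalai.accelerator import get_accelerator

accelerator = get_accelerator()            # picks the CUDA or NPU backend at runtime
device = accelerator.get_current_device()  # e.g. torch.device("cuda:0")

x = torch.randn(4, 4, device=device)
accelerator.synchronize()                  # replaces a hard-coded torch.cuda.synchronize()
print(x.sum().item())
```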