ColossalAI/colossalai/booster
Zhongkai Zhao a0684e7bd6
[feature] support no master weights option for low level zero plugin (#4816)
* [feature] support no master weights for low level zero plugin

* [feature] support no master weights for low level zero plugin, remove data copy when no master weights

* remove data copy and typecasting when no master weights

* do not load weights to CPU when using no master weights

* fix grad: use fp16 grad when no master weights

* skip updating the working param only when no master weights

* fix: skip updating the working param only when no master weights

* fix: passing params in dict format in hybrid plugin

* fix: remove extra params (tp_process_group) in hybrid_parallel_plugin
2023-10-13 07:57:45 +00:00
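
A minimal sketch of how the no-master-weights option described above might be used. The keyword name `master_weights` and its behavior are assumptions inferred from the commit messages, not taken from this listing; the surrounding calls follow the standard `colossalai.booster` API and assume the script is launched with torchrun.

```python
import torch
import torch.nn as nn

import colossalai
from colossalai.booster import Booster
from colossalai.booster.plugin import LowLevelZeroPlugin

# Assumes a torchrun launch so distributed initialization succeeds.
colossalai.launch_from_torch(config={})

# Assumed flag: master_weights=False lets the ZeRO optimizer update the working
# (fp16/bf16) params directly instead of keeping a separate fp32 master copy,
# removing the extra data copy and dtype cast mentioned in the commits above.
plugin = LowLevelZeroPlugin(stage=1, precision="fp16", master_weights=False)
booster = Booster(plugin=plugin)

model = nn.Linear(16, 16)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
criterion = nn.MSELoss()

# boost() wraps the model and optimizer with the plugin's ZeRO machinery.
model, optimizer, criterion, _, _ = booster.boost(model, optimizer, criterion)

device = torch.cuda.current_device()
x = torch.randn(4, 16, device=device)
loss = criterion(model(x), torch.randn(4, 16, device=device))
booster.backward(loss, optimizer)  # backward goes through the booster, not loss.backward()
optimizer.step()
optimizer.zero_grad()
```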
mixed_precision [misc] update pre-commit and run all files (#4752) 2023-09-19 14:20:26 +08:00
plugin [feature] support no master weights option for low level zero plugin (#4816) 2023-10-13 07:57:45 +00:00
__init__.py [booster] implemented the torch ddp + resnet example (#3232) 2023-03-27 10:24:14 +08:00
accelerator.py [misc] update pre-commit and run all files (#4752) 2023-09-19 14:20:26 +08:00
booster.py [lazy] support from_pretrained (#4801) 2023-09-26 11:04:11 +08:00