ColossalAI/colossalai/engine/gradient_handler
Jiarui Fang 5a560a060a Feature/zero (#279)
* add zero1 (#209)

* add zero1

* add test zero1

* update zero stage 1 develop (#212)

* Implement naive zero3 (#240)

* naive zero3 works well

* add zero3 param manager

* add TODOs in comments

* add gather full param ctx

* fix sub module streams

* add offload

* fix bugs of hook and add unit tests

* fix bugs of hook and add unit tests (#252)

* add gather full param ctx

* fix sub module streams

* add offload

* fix bugs of hook and add unit tests

* polish code and add state dict hook

* fix bug

* update unit test

* refactor reconstructed zero code

* clip_grad support zero3 and add unit test

* add unit test for Zero3ParameterManager

* [WIP] initialize the shard param class

* [WIP] Yet another sharded model implementation (#274)

* [WIP] initialize the shard param class

* [WIP] Yet another implementation of shardModel, using a better hook method.

* torch.concat -> torch.cat

* fix test_zero_level_1.py::test_zero_level_1 unit test

* remove deepspeed implementation and refactor for the reconstructed zero module

* polish zero DP unit tests

Co-authored-by: ver217 <lhx0217@gmail.com>
Co-authored-by: Frank Lee <somerlee.9@gmail.com>
2022-03-11 15:50:28 +08:00
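The commit above adds ZeRO-style training, in which optimizer state and gradients are partitioned across data-parallel ranks instead of being fully replicated. As a minimal, hedged sketch of that idea (not ColossalAI's actual API; the function name and the flat-gradient layout are assumptions for illustration), gradient synchronization becomes a reduce-scatter so each rank keeps only the gradient shard matching its optimizer-state shard:

```python
# Minimal sketch of ZeRO-style gradient sharding, assuming torch.distributed
# is already initialized and the flat gradient buffer divides evenly across
# ranks. Names are illustrative, not ColossalAI's actual API.
import torch
import torch.distributed as dist


def reduce_scatter_gradients(flat_grads: torch.Tensor) -> torch.Tensor:
    """Average gradients across ranks, keeping only this rank's shard.

    Plain data parallelism all-reduces so every rank holds a full gradient
    copy; ZeRO instead reduce-scatters so each rank stores only the shard
    it needs for its slice of the optimizer state.
    """
    world_size = dist.get_world_size()
    shard_size = flat_grads.numel() // world_size

    # One input chunk per rank; the output buffer receives this rank's shard.
    input_chunks = list(flat_grads.chunk(world_size))
    local_shard = torch.empty(shard_size, dtype=flat_grads.dtype,
                              device=flat_grads.device)
    dist.reduce_scatter(local_shard, input_chunks, op=dist.ReduceOp.SUM)
    local_shard.div_(world_size)  # convert the sum into a mean
    return local_shard
```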
__init__.py moved env variables to global variables; (#215) 2022-02-15 11:31:13 +08:00
_base_gradient_handler.py
_data_parallel_gradient_handler.py add interleaved pipeline, fix naive amp and update pipeline model initializer (#80) 2021-12-20 23:26:19 +08:00
_moe_gradient_handler.py Fixed docstring in colossalai (#171) 2022-01-21 10:44:30 +08:00
_pipeline_parallel_gradient_handler.py Optimize pipeline schedule (#94) 2021-12-30 15:56:46 +08:00
_sequence_parallel_gradient_handler.py adapted for sequence parallel (#163) 2022-01-20 13:44:51 +08:00
_zero_gradient_handler.py Feature/zero (#279) 2022-03-11 15:50:28 +08:00
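The modules listed above share one pattern: a base gradient handler that stores the model and optimizer and exposes a single hook, plus variants that choose which process group to synchronize over (data parallel, MoE, pipeline, sequence parallel, ZeRO). The sketch below illustrates that pattern under stated assumptions; the class names are illustrative and do not claim to match the library's real classes or signatures.

```python
# Sketch of the gradient-handler pattern: a base class with one hook, and a
# data-parallel variant that all-reduces gradients after the backward pass.
# Assumes torch.distributed is already initialized; names are illustrative.
from abc import ABC, abstractmethod

import torch
import torch.distributed as dist


class BaseGradientHandlerSketch(ABC):
    """Holds the model and optimizer and defines one post-backward hook."""

    def __init__(self, model: torch.nn.Module, optimizer: torch.optim.Optimizer):
        self._model = model
        self._optimizer = optimizer

    @abstractmethod
    def handle_gradient(self) -> None:
        """Synchronize gradients before the optimizer step."""


class DataParallelGradientHandlerSketch(BaseGradientHandlerSketch):
    """All-reduces every parameter gradient across the data-parallel group."""

    def handle_gradient(self) -> None:
        world_size = dist.get_world_size()
        for param in self._model.parameters():
            if param.grad is not None:
                dist.all_reduce(param.grad, op=dist.ReduceOp.SUM)
                param.grad.div_(world_size)  # average across ranks
```

In a training loop, such a handler would be invoked between `loss.backward()` and `optimizer.step()`; the other handlers in this directory differ mainly in which process group they reduce over, while the ZeRO handler defers synchronization to the sharded optimizer introduced by the commit above.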