ColossalAI/colossalai/zero/gemini
Hanks b480eec738
[Feature]: support FP8 communication in DDP, FSDP, Gemini (#5928)
* support fp8_communication in Torch DDP gradient communication, FSDP gradient communication, and FSDP parameter communication
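
As a rough illustration of the DDP side, a gradient communication hook can quantize each bucket to FP8 before it goes on the wire. The sketch below is a minimal stand-in, not this PR's implementation: since NCCL cannot reduce FP8 dtypes, it all-gathers the quantized bytes and reduces locally, whereas the real hook may use a more communication-efficient scheme.

```python
import torch
import torch.distributed as dist

# Minimal sketch of an FP8-compressing DDP gradient hook, modeled on
# torch's fp16_compress_hook. The hook name and the all-gather-based
# reduction are assumptions, not the PR's exact implementation.
def fp8_compress_hook(process_group, bucket: dist.GradBucket) -> torch.futures.Future:
    group = process_group if process_group is not None else dist.group.WORLD
    world_size = dist.get_world_size(group)

    grad = bucket.buffer()
    fp8_max = torch.finfo(torch.float8_e4m3fn).max  # 448 for e4m3fn
    # One scale per bucket: map the largest magnitude onto the FP8 range.
    scale = (grad.abs().max().clamp(min=1e-12) / fp8_max).reshape(1)
    fp8_grad = (grad / scale).to(torch.float8_e4m3fn)

    # NCCL has no FP8 reduction, so ship raw bytes and reduce after decoding.
    byte_view = fp8_grad.view(torch.uint8)
    gathered = [torch.empty_like(byte_view) for _ in range(world_size)]
    scales = [torch.empty_like(scale) for _ in range(world_size)]
    dist.all_gather(gathered, byte_view, group=group)
    dist.all_gather(scales, scale, group=group)

    # Decompress each rank's contribution and average.
    out = torch.zeros_like(grad)
    for chunk, s in zip(gathered, scales):
        out += chunk.view(torch.float8_e4m3fn).to(grad.dtype) * s
    out /= world_size

    fut = torch.futures.Future()
    fut.set_result(out)
    return fut

# Registration uses the stock DDP comm-hook API:
#   ddp_model.register_comm_hook(state=None, hook=fp8_compress_hook)
```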

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* implement communication hook for FSDP params all-gather

* add unit tests for the fp8 operators
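
A quantize/dequantize roundtrip is the natural shape for such a test. The sketch below is an assumed example (the test name and tolerances are not from the PR), gated on torch >= 2.2.0 to match the skip added later in this series.

```python
import pytest
import torch
from packaging import version

# Sketch of an fp8 roundtrip check; name and tolerances are assumptions.
@pytest.mark.skipif(
    version.parse(torch.__version__) < version.parse("2.2.0"),
    reason="fp8 communication paths require torch >= 2.2.0",
)
def test_fp8_cast_roundtrip():
    x = torch.randn(1024)
    fp8_max = torch.finfo(torch.float8_e4m3fn).max
    scale = x.abs().max() / fp8_max
    y = (x / scale).to(torch.float8_e4m3fn).to(torch.float32) * scale
    # e4m3 keeps a 3-bit mantissa, so allow ~10% relative error.
    assert torch.allclose(x, y, rtol=0.1, atol=1e-3)
```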

* support fp8 communication in GeminiPlugin
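
Plugin-side, enabling FP8 communication is a one-flag change. A usage sketch, assuming the switch is spelled `fp8_communication` (verify against the GeminiPlugin signature in your release):

```python
import torch
import colossalai
from colossalai.booster import Booster
from colossalai.booster.plugin import GeminiPlugin
from colossalai.nn.optimizer import HybridAdam

colossalai.launch_from_torch()  # expects torchrun-style env vars

model = torch.nn.Linear(1024, 1024).cuda()
optimizer = HybridAdam(model.parameters(), lr=1e-3)

# `fp8_communication=True` is assumed to be the switch this PR introduces;
# check the keyword against the GeminiPlugin constructor in your version.
plugin = GeminiPlugin(fp8_communication=True)
booster = Booster(plugin=plugin)
model, optimizer, *_ = booster.boost(model=model, optimizer=optimizer)
```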

* update training scripts to support FSDP and fp8 communication

* fix minor bugs observed in the unit tests

* add all_gather_into_tensor_flat_fp8
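
Only the helper's name comes from this commit; below is a sketch of what a flat FP8 all-gather with per-rank scales might look like (the signature and the scaling scheme are assumptions).

```python
import torch
import torch.distributed as dist

# Sketch of a flat FP8 all-gather. Only the function name is from the
# commit; the signature and per-rank scaling here are assumptions.
def all_gather_into_tensor_flat_fp8(
    output: torch.Tensor, shard: torch.Tensor, group=None
) -> None:
    """All-gather `shard` from every rank into flat `output`, FP8 on the wire."""
    world_size = dist.get_world_size(group)
    fp8_max = torch.finfo(torch.float8_e4m3fn).max

    # One scale per rank keeps dequantization exact up to FP8 rounding.
    scale = (shard.abs().max().clamp(min=1e-12) / fp8_max).reshape(1)
    fp8_shard = (shard / scale).to(torch.float8_e4m3fn)

    # Gather the quantized bytes and the per-rank scales.
    fp8_out = torch.empty(
        world_size * shard.numel(), dtype=torch.uint8, device=shard.device
    )
    scales = torch.empty(world_size, dtype=scale.dtype, device=scale.device)
    dist.all_gather_into_tensor(fp8_out, fp8_shard.view(torch.uint8), group=group)
    dist.all_gather_into_tensor(scales, scale, group=group)

    # Dequantize each rank's slice with its own scale.
    n = shard.numel()
    chunks = fp8_out.view(torch.float8_e4m3fn).to(output.dtype).chunk(world_size)
    for i, chunk in enumerate(chunks):
        output[i * n : (i + 1) * n].copy_(chunk * scales[i])
```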

* skip the tests if torch < 2.2.0

* add fp8_comm flag
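
A usage sketch for the Torch DDP and FSDP plugins, assuming the flag the commit calls "fp8_comm" is exposed on the constructors as `fp8_communication`:

```python
from colossalai.booster.plugin import TorchDDPPlugin, TorchFSDPPlugin

# Assumed keyword spelling: the commit message says "fp8_comm flag", while
# later plugin constructors appear to expose it as `fp8_communication`.
ddp_plugin = TorchDDPPlugin(fp8_communication=True)
fsdp_plugin = TorchFSDPPlugin(fp8_communication=True)
```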

* rebase onto the latest fp8 operators

---------

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
2024-08-08 15:55:01 +08:00
chunk/                [Feature]: support FP8 communication in DDP, FSDP, Gemini (#5928)   2024-08-08 15:55:01 +08:00
memory_tracer/        [npu] change device to accelerator api (#5239)   2024-01-09 10:20:05 +08:00
__init__.py           [shardformer]: support gpt-j, falcon, Mistral and add interleaved pipeline for bert (#5088)   2023-11-28 16:54:42 +08:00
gemini_ddp.py         [Feature]: support FP8 communication in DDP, FSDP, Gemini (#5928)   2024-08-08 15:55:01 +08:00
gemini_hook.py        [gemini] quick fix on possible async operation (#5803)   2024-06-13 10:35:17 +08:00
gemini_mgr.py         [chore] remove unnecessary assert since compute list might not be recorded   2024-05-28 05:16:02 +00:00
gemini_optimizer.py   [gemini] async grad chunk reduce (all-reduce&reduce-scatter) (#5713)   2024-05-24 10:31:16 +08:00
placement_policy.py   [bug] continue fix   2024-05-28 02:41:23 +00:00
utils.py              [npu] change device to accelerator api (#5239)   2024-01-09 10:20:05 +08:00