ColossalAI/colossalai/booster
Latest commit 1810b9100f by Wenhao Chen: [pipeline]: fix p2p comm, add metadata cache and support llama interleaved pp (#5134)
* test: add more p2p tests

* fix: remove send_forward_recv_forward, as the p2p op list needs to use the same group

* fix: make send and receive atomic

* feat: update P2PComm fn

* feat: add metadata cache in 1f1b

* feat: add metadata cache in interleaved pp

* feat: modify is_xx_stage fn

* revert: add _broadcast_object_list

* feat: add interleaved pp in llama policy

* feat: set NCCL_BUFFSIZE in HybridParallelPlugin
Committed 2024-01-05 13:58:53 +08:00
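
The bullets above describe plumbing inside the pipeline schedules and the plugin rather than a user-facing API, but the end result is that a LLaMA model can run with the interleaved pipeline schedule through HybridParallelPlugin. Below is a minimal sketch of how that would look from user code, assuming the plugin options pp_style and num_model_chunks and a torchrun-style launch; the sizes, the small LlamaConfig, and the NCCL_BUFFSIZE value are placeholders, and nothing here is copied from the commit itself.

```python
# Hedged sketch: enabling interleaved pipeline parallelism for a LLaMA model
# via HybridParallelPlugin. pp_style / num_model_chunks are assumed option
# names around this commit; verify against your installed version.
import os

import torch
from transformers import LlamaConfig, LlamaForCausalLM

import colossalai
from colossalai.booster import Booster
from colossalai.booster.plugin import HybridParallelPlugin

# The last bullet notes the plugin now sets NCCL_BUFFSIZE itself; exporting it
# up front is shown only to make the knob visible (the value is an assumption).
os.environ.setdefault("NCCL_BUFFSIZE", str(128 * 1024 * 1024))

colossalai.launch_from_torch(config={})  # newer releases drop the config argument

plugin = HybridParallelPlugin(
    tp_size=1,
    pp_size=2,                # pipeline-parallel degree
    pp_style="interleaved",   # interleaved (virtual-stage) schedule instead of plain 1F1B
    num_model_chunks=2,       # model chunks held per pipeline rank
    num_microbatches=8,
    precision="bf16",
)
booster = Booster(plugin=plugin)

model = LlamaForCausalLM(LlamaConfig(num_hidden_layers=8, hidden_size=512,
                                     intermediate_size=1024, num_attention_heads=8))
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
criterion = lambda outputs, inputs: outputs.loss  # pipeline criterion: loss taken from model outputs

model, optimizer, criterion, *_ = booster.boost(model, optimizer, criterion)
# Training steps would then go through booster.execute_pipeline(...) so the
# interleaved schedule, with its metadata cache, drives the p2p traffic.
```

The metadata cache and the atomic send/receive changes listed above sit below this API surface, so no extra flags should be needed to pick them up.
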
Name            | Last commit                                                                            | Last updated
mixed_precision | [npu] add npu support for hybrid plugin and llama (#5090)                             | 2023-11-22 19:23:21 +08:00
plugin          | [pipeline]: fix p2p comm, add metadata cache and support llama interleaved pp (#5134) | 2024-01-05 13:58:53 +08:00
__init__.py     | [booster] implemented the torch ddp + resnet example (#3232)                          | 2023-03-27 10:24:14 +08:00
accelerator.py  | [misc] update pre-commit and run all files (#4752)                                    | 2023-09-19 14:20:26 +08:00
booster.py      | [lazy] support from_pretrained (#4801)                                                | 2023-09-26 11:04:11 +08:00
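
booster.py holds the Booster wrapper that ties the plugin, mixed_precision, and accelerator pieces listed above together (the entry also notes lazy from_pretrained support). For orientation, here is a minimal sketch of the general Booster flow with the simple TorchDDPPlugin, assuming a torchrun launch; the toy model, data, and checkpoint path are placeholders.

```python
# Minimal sketch of the Booster workflow with TorchDDPPlugin; the toy model,
# data, and checkpoint path are placeholders, not taken from the listing.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

import colossalai
from colossalai.booster import Booster
from colossalai.booster.plugin import TorchDDPPlugin

colossalai.launch_from_torch(config={})  # newer releases drop the config argument

model = nn.Linear(32, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
criterion = nn.CrossEntropyLoss()
dataset = TensorDataset(torch.randn(64, 32), torch.randint(0, 2, (64,)))
dataloader = DataLoader(dataset, batch_size=8)

# boost() wraps model/optimizer/dataloader according to the chosen plugin.
booster = Booster(plugin=TorchDDPPlugin())
model, optimizer, criterion, dataloader, _ = booster.boost(model, optimizer, criterion, dataloader)

for inputs, labels in dataloader:
    loss = criterion(model(inputs), labels)
    booster.backward(loss, optimizer)  # backward is routed through the booster
    optimizer.step()
    optimizer.zero_grad()

booster.save_model(model, "model.pt")  # checkpointing also goes through the booster
```

The plugin argument is the main thing that changes when moving from this DDP setup to the hybrid-parallel configuration sketched earlier.
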