ColossalAI/examples/language
Latest commit b2ad0d9e8f by Elsa Granger
[pipeline,shardformer] Fix p2p efficiency in pipeline, allow skipping loading weight not in weight_map when `strict=False`, fix llama flash attention forward, add flop estimation by megatron in llama benchmark (#5017)
* Use p2p

* Cannot do bidirectional p2p send

* Refactor tensor creation and serialization in P2P communication

* Fix llama forward args in flash attention

* Add flop estimate from Megatron (formula sketch below)

* Allow skipping weights not in weight_map when strict=False in hybrid_parallel (sketch below)

* Use send_forward_recv_backward, etc. in the 1F1B schedule (sketch below)

* Use dataclass for metadata (sketch below)
Remove torch.cuda.synchronize() as suggested

* Add comment about torch.cuda.synchronize() and the potential error

* Typo

* Update hybrid_parallel_checkpoint_io.py

* Update p2p.py

* Update one_f_one_b.py

* Update p2p.py

---------

Co-authored-by: flybird11111 <1829166702@qq.com>
2023-11-16 20:15:59 +08:00
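The flop estimate added to the llama benchmark follows Megatron-LM's analytical formula rather than runtime profiling. A minimal sketch of that formula is below, assuming a plain GPT-style transformer; the function name is hypothetical and LLaMA-specific details (SwiGLU width, grouped-query attention) are deliberately ignored, so this illustrates the approach rather than the benchmark's exact code.

```python
def megatron_model_flops(batch_size, seq_len, num_layers, hidden_size,
                         vocab_size, with_recompute=False):
    """FLOPs for one fwd+bwd iteration of a GPT-style transformer, after
    Megatron-LM's formula: 72*B*s*l*h^2 * (1 + s/(6h) + V/(12*l*h));
    with full activation recomputation the factors become 96 and 16."""
    coeff, vocab_div = (96, 16) if with_recompute else (72, 12)
    return (coeff * batch_size * seq_len * num_layers * hidden_size ** 2
            * (1 + seq_len / (6 * hidden_size)
               + vocab_size / (vocab_div * num_layers * hidden_size)))

# Example: achieved TFLOPS per GPU from a measured step time (numbers are made up).
step_time_s, num_gpus = 1.8, 8
flops = megatron_model_flops(batch_size=8, seq_len=4096, num_layers=32,
                             hidden_size=4096, vocab_size=32000)
print(flops / (step_time_s * num_gpus * 1e12), "TFLOPS per GPU")
```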
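For the checkpoint change: a sharded checkpoint's index file maps each parameter name to the shard that stores it ("weight_map"), and with `strict=False` parameters the model owns but the map lacks are now skipped instead of aborting the load. The sketch below only illustrates that behaviour with generic PyTorch; the helper name and file layout are assumptions, not the code in hybrid_parallel_checkpoint_io.py.

```python
import json
import torch

def load_from_weight_map(model, index_json_path, strict=False):
    """Load a sharded checkpoint via its index's "weight_map"
    (parameter name -> shard file). Parameters missing from the map raise
    only when strict=True; otherwise they keep their current values."""
    with open(index_json_path) as f:
        weight_map = json.load(f)["weight_map"]

    shard_cache, state_dict, skipped = {}, {}, []
    for name, _ in model.named_parameters():
        shard_file = weight_map.get(name)
        if shard_file is None:
            if strict:
                raise KeyError(f"'{name}' is missing from weight_map")
            skipped.append(name)  # silently skip when strict=False
            continue
        if shard_file not in shard_cache:
            shard_cache[shard_file] = torch.load(shard_file, map_location="cpu")
        state_dict[name] = shard_cache[shard_file][name]

    model.load_state_dict(state_dict, strict=False)
    return skipped
```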
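The 1F1B change swaps separate send and recv calls for fused ones such as send_forward_recv_backward, so a pipeline stage posts its forward-activation send and its backward-gradient receive together instead of paying two blocking handshakes. A rough sketch of the pattern using torch.distributed's batched p2p API, assuming buffer shape and dtype are already known from the metadata exchange:

```python
import torch
import torch.distributed as dist

def send_forward_recv_backward(output_tensor, next_rank):
    """Send activations to the next stage and receive their gradient in a
    single batched p2p call (sketch of the fused pattern, not one_f_one_b.py)."""
    grad_buffer = torch.empty_like(output_tensor)
    ops = [
        dist.P2POp(dist.isend, output_tensor, next_rank),
        dist.P2POp(dist.irecv, grad_buffer, next_rank),
    ]
    for req in dist.batch_isend_irecv(ops):
        req.wait()
    return grad_buffer
```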
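"Use dataclass for metadata" means the tensors exchanged over p2p are described by a typed dataclass (shape, dtype, grad flag) instead of an ad-hoc dict, so the description serializes cleanly and the receiver can pre-allocate a matching buffer before the payload arrives. A minimal sketch with assumed field names:

```python
from dataclasses import dataclass
from typing import Tuple
import torch

@dataclass
class TensorMetadata:
    """Describes one tensor sent over p2p (field names are assumptions)."""
    shape: Tuple[int, ...]
    dtype: torch.dtype
    requires_grad: bool = False

def allocate_recv_buffer(meta, device="cuda"):
    # The receiving stage builds an empty buffer matching the metadata.
    return torch.empty(meta.shape, dtype=meta.dtype, device=device,
                       requires_grad=meta.requires_grad)
```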
Directory | Last commit | Last updated
bert | [misc] update pre-commit and run all files (#4752) | 2023-09-19 14:20:26 +08:00
commons | [example] make gpt example directory more clear (#2353) | 2023-01-06 11:11:26 +08:00
gpt | [bug] fix get_default_parser in examples (#4764) | 2023-09-21 10:42:25 +08:00
llama2 | [pipeline,shardformer] Fix p2p efficiency in pipeline, allow skipping loading weight not in weight_map when `strict=False`, fix llama flash attention forward, add flop estimation by megatron in llama benchmark (#5017) | 2023-11-16 20:15:59 +08:00
openmoe | [moe]: fix ep/tp tests, add hierarchical all2all (#4982) | 2023-11-09 06:31:00 +00:00
opt | [bug] fix get_default_parser in examples (#4764) | 2023-09-21 10:42:25 +08:00
palm | [nfc] fix minor typo in README (#4846) | 2023-10-07 17:51:11 +08:00