ColossalAI/colossalai/gemini
Latest commit 1712da2800 by Shawn-Kong: [NFC] polish colossalai/gemini/gemini_context.py code style (#2690), 2023-02-14 11:55:23 +08:00
Name                         Last commit message                                                                    Last commit date
chunk/                       [gemini] add fake_release_chunk for keep-gathered chunk in the inference mode (#2671)  2023-02-13 14:35:32 +08:00
memory_tracer/               [example] update gpt example for larger model scale (#2211)                            2022-12-28 13:54:08 +08:00
ophooks/                     [Gemini] update the non model data record method in runtime memory tracer (#2128)      2022-12-13 17:11:31 +08:00
paramhooks/
__init__.py
gemini_context.py            [NFC] polish colossalai/gemini/gemini_context.py code style (#2690)                    2023-02-14 11:55:23 +08:00
gemini_mgr.py                [hotfix] fix zero ddp warmup check (#2545)                                             2023-02-02 16:42:38 +08:00
placement_policy.py          [Gemini] fix the convert_to_torch_module bug (#2269)                                   2023-01-03 15:55:35 +08:00
stateful_tensor.py
stateful_tensor_mgr.py
tensor_placement_policy.py
tensor_utils.py