ColossalAI/colossalai/gemini

Latest commit: 1712da2800 "[NFC] polish colossalai/gemini/gemini_context.py code style (#2690)" by Shawn-Kong, 2023-02-14 11:55:23 +08:00
Name                          Last commit date              Last commit message
chunk/                        2023-02-13 14:35:32 +08:00    [gemini] add fake_release_chunk for keep-gathered chunk in the inference mode (#2671)
memory_tracer/                2022-12-28 13:54:08 +08:00    [example] update gpt example for larger model scale (#2211)
ophooks/                      2022-12-13 17:11:31 +08:00    [Gemini] update the non model data record method in runtime memory tracer (#2128)
paramhooks/                   2022-07-14 13:44:26 +08:00    [hotfix] remove potiential circle import (#1307)
__init__.py                   2022-11-02 12:10:52 +08:00    [hotfix] polish chunk import (#1787)
gemini_context.py             2023-02-14 11:55:23 +08:00    [NFC] polish colossalai/gemini/gemini_context.py code style (#2690)
gemini_mgr.py                 2023-02-02 16:42:38 +08:00    [hotfix] fix zero ddp warmup check (#2545)
placement_policy.py           2023-01-03 15:55:35 +08:00    [Gemini] fix the convert_to_torch_module bug (#2269)
stateful_tensor.py
stateful_tensor_mgr.py        2022-04-26 18:08:31 +08:00    [gemini] accelerate adjust_layout() (#878)
tensor_placement_policy.py    2022-04-26 18:08:31 +08:00    [gemini] accelerate adjust_layout() (#878)
tensor_utils.py               2022-11-30 15:57:45 +08:00    [Gemini] free and allocate cuda memory by tensor.storage, add grad hook (#2040)