ColossalAI/colossalai
Nikita Shulga 01066152f1
Don't use `torch._six` (#2775)
* Don't use `torch._six`

`torch._six` is a private API that was removed in https://github.com/pytorch/pytorch/pull/94709

* Update common.py
2023-02-17 09:22:45 +08:00
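Because `torch._six` was only a Python 2/3 compatibility shim, each symbol it exported has a public stand-in, so the fix is a mechanical import swap. Below is a minimal sketch of that migration, assuming the commonly used members (`inf`, `string_classes`, `container_abcs`); the helper function is hypothetical and is not the actual diff of #2775.

```python
# Minimal migration sketch, assuming typical torch._six usages;
# the exact edits in #2775 may differ.
import math
import collections.abc

# Before (breaks once torch._six is gone):
#   from torch._six import inf, string_classes, container_abcs

# After: standard-library equivalents.
inf = math.inf                    # replaces torch._six.inf
container_abcs = collections.abc  # replaces torch._six.container_abcs
string_classes = (str, bytes)     # replaces torch._six.string_classes

def clip_coefficient(total_norm: float, max_norm: float) -> float:
    """Hypothetical helper: gradient-clipping code is a common
    consumer of ``inf``, e.g. when a norm overflows."""
    if total_norm == inf:
        return 0.0
    return min(1.0, max_norm / (total_norm + 1e-6))
```

Newer PyTorch releases also expose `torch.inf`, but `math.inf` avoids any dependence on the torch version.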
| Name | Last commit | Last updated |
| --- | --- | --- |
| _C | | |
| amp | [test] fixed the triton version for testing (#2608) | 2023-02-07 13:49:38 +08:00 |
| auto_parallel | [autoparallel] distinguish different parallel strategies (#2699) | 2023-02-15 22:28:28 +08:00 |
| autochunk | [autochunk] support diffusion for autochunk (#2621) | 2023-02-07 16:32:45 +08:00 |
| builder | | |
| cli | [NFC] polish colossalai/cli/cli.py code style (#2734) | 2023-02-15 22:25:28 +08:00 |
| communication | | |
| context | [NFC] polish colossalai/context/process_group_initializer/initializer_2d.py code style (#2726) | 2023-02-15 22:27:13 +08:00 |
| device | [autoparallel] accelerate gpt2 training (#2495) | 2023-01-29 11:13:15 +08:00 |
| engine | [NFC] polish colossalai/engine/gradient_handler/utils.py code style (#2708) | 2023-02-15 09:40:08 +08:00 |
| fx | [autoparallel] Patch meta information of `torch.matmul` (#2584) | 2023-02-08 11:05:31 +08:00 |
| gemini | [NFC] polish colossalai/gemini/gemini_context.py code style (#2690) | 2023-02-14 11:55:23 +08:00 |
| kernel | [kernel] fixed repeated loading of kernels (#2549) | 2023-02-03 09:47:13 +08:00 |
| logging | | |
| nn | [gemini] add fake_release_chunk for keep-gathered chunk in the inference mode (#2671) | 2023-02-13 14:35:32 +08:00 |
| pipeline | polish pp middleware (#2476) | 2023-01-13 16:56:01 +08:00 |
| registry | | |
| tensor | [polish] polish ColoTensor and its submodules (#2537) | 2023-02-03 11:44:10 +08:00 |
| testing | | |
| trainer | | |
| utils | Don't use `torch._six` (#2775) | 2023-02-17 09:22:45 +08:00 |
| zero | Don't use `torch._six` (#2775) | 2023-02-17 09:22:45 +08:00 |
| __init__.py | | |
| constants.py | | |
| core.py | | |
| global_variables.py | | |
| initialize.py | Fix False warning in initialize.py (#2456) | 2023-01-12 13:49:01 +08:00 |