ColossalAI/tests

Latest commit: 1a49a5ea00 by LuGY | [zero] support shard optimizer state dict of zero (#4194) | 2023-07-31 22:13:29 +08:00
  * support shard optimizer of zero
  * polish code
  * support sync grad manually
components_to_test
kit | [shardformer] added embedding gradient check (#4124) | 2023-07-04 16:05:01 +08:00
test_amp
test_analyzer
test_auto_parallel | [gemini] fix argument naming during chunk configuration searching | 2023-06-25 13:34:15 +08:00
test_autochunk | [test] fixed tests failed due to dtensor change (#4082) | 2023-07-04 16:05:01 +08:00
test_booster | [zero] refactor low level zero for shard evenly (#4030) | 2023-07-31 22:13:29 +08:00
test_checkpoint_io | [zero] support shard optimizer state dict of zero (#4194) | 2023-07-31 22:13:29 +08:00
test_cluster
test_comm
test_config
test_context
test_data
test_data_pipeline_tensor_parallel
test_ddp
test_device | [format] applied code formatting on changed files in pull request 4152 (#4157) | 2023-07-04 16:07:47 +08:00
test_engine
test_fx | [shardformer] shardformer support opt models (#4091) | 2023-07-04 16:05:01 +08:00
test_kernels | [Kernels] added triton-implemented of self attention for colossal-ai (#4241) | 2023-07-18 23:53:38 +08:00
test_layers
test_lazy | [lazy] support init on cuda (#4269) | 2023-07-19 16:43:01 +08:00
test_moe
test_ops
test_optimizer
test_pipeline
test_shardformer | [format] applied code formatting on changed files in pull request 4152 (#4157) | 2023-07-04 16:07:47 +08:00
test_tensor | [test] fixed tests failed due to dtensor change (#4082) | 2023-07-04 16:05:01 +08:00
test_trainer
test_utils
test_zero | [zero] add state dict for low level zero (#4179) | 2023-07-31 22:13:29 +08:00
__init__.py