Mirror of https://github.com/hpcaitech/ColossalAI
* Add gradient accumulation, fix lr scheduler
* Fix FP16 optimizer and adapt torch amp to work with tensor parallelism (#18)
* Fix compatibility bugs between torch amp and tensor parallelism, plus some minor fixes
* Fix trainer
* Revert "fixed trainer" (reverts the preceding "fixed trainer" commit)
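The entries above mention gradient accumulation and torch amp. Below is a minimal sketch of how the two techniques combine in a plain PyTorch training loop; it is an illustration only, and names such as `accum_steps` and the dummy `nn.Linear` model are assumptions, not ColossalAI's actual API.

```python
# Sketch: gradient accumulation with torch.cuda.amp in vanilla PyTorch.
# Hypothetical example; `accum_steps` and the model are illustrative.
import torch
import torch.nn as nn

device = "cuda" if torch.cuda.is_available() else "cpu"
model = nn.Linear(16, 4).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
scaler = torch.cuda.amp.GradScaler(enabled=(device == "cuda"))
accum_steps = 4  # micro-batches folded into one optimizer step

optimizer.zero_grad()
for step in range(16):
    x = torch.randn(8, 16, device=device)
    y = torch.randn(8, 4, device=device)
    with torch.cuda.amp.autocast(enabled=(device == "cuda")):
        # Scale the loss down so accumulated gradients average
        # over the micro-batches instead of summing.
        loss = nn.functional.mse_loss(model(x), y) / accum_steps
    scaler.scale(loss).backward()  # gradients accumulate across calls
    if (step + 1) % accum_steps == 0:
        scaler.step(optimizer)  # unscales grads; skips step on inf/nan
        scaler.update()
        optimizer.zero_grad()
```

Deferring `optimizer.step()` this way emulates a batch size `accum_steps` times larger without the extra memory, which is why gradient accumulation is commonly paired with FP16/amp training.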
Test directories:

* test_config
* test_context
* test_data
* test_data_pipeline_tensor_parallel
* test_engine
* test_fp16_optimizer
* test_layers
* test_lr_scheduler
* test_models
* test_trainer
* test_utils
* test_zero_data_parallel
* test_zero_tensor_parallel