Mirror of https://github.com/hpcaitech/ColossalAI
* Add gradient accumulation, fix lr scheduler (gradient accumulation is sketched after this log)
* Fix FP16 optimizer and adapt torch amp to tensor parallelism (#18) (see the amp sketch after this log)
* Fixed compatibility bugs between torch amp and tensor parallelism, along with some minor fixes
* fixed trainer
* Revert "fixed trainer"
This reverts commit
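
The gradient accumulation and torch amp changes in the log above refer to standard PyTorch techniques. As a minimal sketch of gradient accumulation under generic assumptions (plain PyTorch, not this repo's implementation; `model`, `loader`, `optimizer`, and `accum_steps` are illustrative names):

```python
import torch

def train_with_accumulation(model, loader, optimizer, accum_steps=4):
    """Accumulate gradients over accum_steps micro-batches before stepping."""
    model.train()
    optimizer.zero_grad()
    for i, (x, y) in enumerate(loader):
        loss = torch.nn.functional.cross_entropy(model(x), y)
        # Scale the loss so the summed gradients match one large-batch step.
        (loss / accum_steps).backward()
        if (i + 1) % accum_steps == 0:
            optimizer.step()
            optimizer.zero_grad()
```

Similarly, a sketch of how torch amp is typically driven with the upstream `torch.cuda.amp` API; the tensor-parallel integration the commits describe is repo-specific and not shown here:

```python
import torch

def amp_step(model, x, y, optimizer, scaler):
    """One mixed-precision training step using torch.cuda.amp."""
    optimizer.zero_grad()
    with torch.cuda.amp.autocast():  # run eligible ops in reduced precision
        loss = torch.nn.functional.cross_entropy(model(x), y)
    scaler.scale(loss).backward()    # scale the loss to avoid FP16 gradient underflow
    scaler.step(optimizer)           # unscales gradients; skips the step on inf/nan
    scaler.update()                  # adapt the loss scale for the next step
    return loss.item()

# Usage: create one GradScaler per training run and reuse it across steps.
# scaler = torch.cuda.amp.GradScaler()
```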
Files:

* __init__.py
* activation_checkpoint.py (see the sketch after this list)
* checkpointing.py
* common.py
* cuda.py
* memory.py
* timer.py
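
Judging by its name, `activation_checkpoint.py` likely wraps activation checkpointing. A generic sketch using the upstream `torch.utils.checkpoint` API (an assumption about the file's purpose, not this repo's actual interface; `CheckpointedBlock` is a hypothetical wrapper):

```python
import torch
from torch.utils.checkpoint import checkpoint

class CheckpointedBlock(torch.nn.Module):
    """Drop this block's activations in forward and recompute them in backward."""
    def __init__(self, block: torch.nn.Module):
        super().__init__()
        self.block = block

    def forward(self, x):
        # checkpoint() does not store intermediate activations; it reruns
        # self.block's forward during backward, trading compute for memory.
        return checkpoint(self.block, x)
```

This is the usual trade-off behind such utilities: activation memory shrinks to the checkpoint boundaries at the cost of one extra forward pass per wrapped block.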