mirror of https://github.com/hpcaitech/ColossalAI
Commit `da01c234e1`:

* Add gradient accumulation, fix lr scheduler
* Fix FP16 optimizer and adapt torch amp with tensor parallel (#18)
  * Fixed bugs in compatibility between torch amp and tensor parallel and performed some minor fixes
  * Fixed trainer
  * Revert "fixed trainer": this reverts commit …
Directory contents:

* `layer/`
* `loss/`
* `lr_scheduler/`
* `model/`
* `optimizer/`
* `__init__.py`
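The commit above headlines gradient accumulation. As a point of reference only, here is a minimal sketch of that general technique in plain PyTorch; the toy model, data, and `accum_steps` value are hypothetical, and this is not ColossalAI's implementation.

```python
# Minimal sketch of gradient accumulation in plain PyTorch.
# Illustrates the general technique only; not ColossalAI's engine code.
# The toy model, random data, and accum_steps value are hypothetical.
import torch
from torch import nn

model = nn.Linear(16, 4)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
accum_steps = 4  # number of micro-batches folded into one optimizer step

optimizer.zero_grad()
for step in range(8):
    x = torch.randn(8, 16)                    # hypothetical micro-batch
    y = torch.randint(0, 4, (8,))
    loss = nn.functional.cross_entropy(model(x), y)
    (loss / accum_steps).backward()           # scale so gradients average over the window
    if (step + 1) % accum_steps == 0:
        optimizer.step()                      # apply the accumulated gradient
        optimizer.zero_grad()
```

Dividing each micro-batch loss by `accum_steps` makes the accumulated gradient an average over the window, so the update approximates a single step on one larger batch.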