mirror of https://github.com/hpcaitech/ColossalAI
Latest commit: da01c234e1

* Add gradient accumulation; fix lr scheduler
* Fix FP16 optimizer and adapt torch amp to tensor parallelism (#18)
* Fixed bugs in compatibility between torch amp and tensor parallelism and performed some minor fixes
* Fixed trainer
* Revert "fixed trainer": this reverts commit
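The commit message above pairs gradient accumulation with torch amp. As a rough, hypothetical sketch of how those two techniques are typically combined in plain PyTorch (this is not the code the commit actually adds; `ACCUMULATION_STEPS` and the `train_epoch` signature are assumptions for illustration):

```python
from torch.cuda.amp import GradScaler, autocast

ACCUMULATION_STEPS = 4  # assumed value, for illustration only

def train_epoch(model, optimizer, loader, criterion):
    """Hypothetical loop: gradient accumulation under torch.cuda.amp."""
    scaler = GradScaler()
    optimizer.zero_grad()
    for step, (inputs, targets) in enumerate(loader):
        with autocast():  # run the forward pass in mixed precision
            loss = criterion(model(inputs), targets)
        # Divide the loss so the accumulated gradient is an average,
        # then scale it to avoid FP16 underflow during backward.
        scaler.scale(loss / ACCUMULATION_STEPS).backward()
        if (step + 1) % ACCUMULATION_STEPS == 0:
            scaler.step(optimizer)  # unscale gradients, then step
            scaler.update()         # adjust the loss scale for later steps
            optimizer.zero_grad()
```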
Directory contents:

* process_group_initializer
* random
* __init__.py
* config.py
* parallel_context.py
* parallel_mode.py