github/ColossalAI (mirror of https://github.com/hpcaitech/ColossalAI)
requirements/requirements.txt at b0f708dfc1
10 lines · 84 B · Plaintext

# Migrated project (2021-10-28)
torch>=1.8
torchvision>=0.9
numpy
tqdm
psutil
# Support TP-compatible Torch AMP and Update trainer API (#27) (2021-11-18):
#   adds gradient accumulation and fixes the LR scheduler; fixes the FP16
#   optimizer and adapts torch AMP to tensor parallelism (#18); improves
#   consistency between the trainer, engine, and schedule (#23).
tensorboard
# Update GitHub action and pre-commit settings (#196, #198) (2022-01-28)
packaging
pre-commit
# [log] better logging display with rich (#426); removes deepspeed from the zero requirements (2022-03-16)
rich
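
A quick way to sanity-check an installed environment against these pins, as a minimal sketch rather than anything shipped with the repository: it assumes Python 3.8+ (for importlib.metadata) and uses packaging, which this file itself lists, for the version comparison.

# check_requirements.py (hypothetical helper, not part of ColossalAI)
from importlib.metadata import PackageNotFoundError, version

from packaging.version import Version

# (distribution name, minimum version or None for "any version")
PINS = [
    ("torch", "1.8"),
    ("torchvision", "0.9"),
    ("numpy", None),
    ("tqdm", None),
    ("psutil", None),
    ("tensorboard", None),
    ("packaging", None),
    ("pre-commit", None),
    ("rich", None),
]

for name, minimum in PINS:
    try:
        installed = version(name)
    except PackageNotFoundError:
        print(f"MISSING  {name}")
        continue
    if minimum is not None and Version(installed) < Version(minimum):
        print(f"TOO OLD  {name} {installed} (need >={minimum})")
    else:
        print(f"OK       {name} {installed}")

Installing the set itself is the usual pip install -r requirements/requirements.txt from the repository root.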