ColossalAI/colossalai

Latest commit 7531c6271f by Frank Lee (2022-07-12 15:01:58 +08:00):
[fx] refactored the file structure of patched function and module (#1238)
| Name | Last commit | Date |
| --- | --- | --- |
| amp | [hotfix]different overflow status lead to communication stuck. (#1175) | 2022-06-27 09:53:57 +08:00 |
| builder | [pipeline] refactor the pipeline module (#1087) | 2022-06-10 11:27:38 +08:00 |
| cli | [hotfix] fix some bugs caused by size mismatch. (#1011) | 2022-05-23 14:02:28 +08:00 |
| communication | [hotfix]fixed p2p process send stuck (#1181) | 2022-06-28 14:41:11 +08:00 |
| context | [usability] improved error messages in the context module (#856) | 2022-04-25 13:42:31 +08:00 |
| engine | [hotfix] fix an assertion bug in base schedule. (#1250) | 2022-07-12 14:20:02 +08:00 |
| fx | [fx] refactored the file structure of patched function and module (#1238) | 2022-07-12 15:01:58 +08:00 |
| gemini | make AutoPlacementPolicy configurable (#1191) | 2022-06-30 15:18:30 +08:00 |
| kernel | [optim] refactor fused sgd (#1134) | 2022-06-20 11:19:38 +08:00 |
| logging | [doc] improved docstring in the logging module (#861) | 2022-04-25 13:42:00 +08:00 |
| nn | [tensor] redistribute among different process groups (#1247) | 2022-07-12 10:24:05 +08:00 |
| pipeline | [pipeline]add customized policy (#1139) | 2022-06-21 15:23:41 +08:00 |
| registry | Remove duplication registry (#1078) | 2022-06-08 07:47:24 +08:00 |
| tensor | [tensor] redistribute among different process groups (#1247) | 2022-07-12 10:24:05 +08:00 |
| testing | [test] skip tests when not enough GPUs are detected (#1090) | 2022-06-09 17:19:13 +08:00 |
| trainer | fix issue #1080 (#1071) | 2022-06-07 17:21:11 +08:00 |
| utils | [tensor] a shorter shard and replicate spec (#1245) | 2022-07-11 15:51:48 +08:00 |
| zero | [hotfix] fix sharded optim step and clip_grad_norm (#1226) | 2022-07-08 13:34:48 +08:00 |
| __init__.py | [NFC] polish __init__.py code style (#965) | 2022-05-17 10:25:06 +08:00 |
| constants.py | fix typo in constants (#1027) | 2022-05-26 08:45:08 +08:00 |
| core.py | [Tensor] distributed view supports inter-process hybrid parallel (#1169) | 2022-06-27 09:45:26 +08:00 |
| global_variables.py | [MOE] add unitest for MOE experts layout, gradient handler and kernel (#469) | 2022-03-21 13:35:04 +08:00 |
| initialize.py | [ddp] supported customized torch ddp configuration (#1123) | 2022-06-15 18:11:53 +08:00 |
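As a rough orientation to how the entry points in this directory fit together, below is a minimal sketch of the launch-and-initialize flow using the 2022-era public API from `initialize.py` and `logging`. It is an illustrative sketch, not the project's canonical example: the config file path is a hypothetical placeholder, exact signatures may differ between versions, and running it requires a distributed launcher such as `torchrun` plus a valid config file.

```python
# Minimal sketch of the colossalai entry points defined in initialize.py,
# based on the 2022-era public API; exact signatures may vary by version.
import torch
import torch.nn as nn

import colossalai
from colossalai.logging import get_dist_logger

# Hypothetical config file path. With torchrun, launch_from_torch reads rank,
# world size, host, and port from the environment variables that
# torch.distributed sets up.
colossalai.launch_from_torch(config='./config.py')

logger = get_dist_logger()

# Toy model, optimizer, and loss just to exercise the API.
model = nn.Linear(16, 4)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)
criterion = nn.CrossEntropyLoss()

# initialize() wraps the model, optimizer, and criterion into an Engine that
# applies the features requested in the config (e.g. AMP, ZeRO, pipeline).
engine, *_ = colossalai.initialize(model=model,
                                   optimizer=optimizer,
                                   criterion=criterion)

logger.info('colossalai engine is ready', ranks=[0])
```

Per the commit on `initialize.py` above (#1123), the config file could also carry a customized torch DDP configuration, so DDP keyword arguments did not have to be hard-coded in training scripts.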