ColossalAI/colossalai
Latest commit: fix colo parameter torch function (#1117) by ver217 (f99f56dff4) on 2022-06-15 14:23:27 +08:00
| Name | Last commit message | Last commit date |
| --- | --- | --- |
| amp | [amp] included dict for type casting of model output (#1102) | 2022-06-13 14:18:04 +08:00 |
| builder | [pipeline] refactor the pipeline module (#1087) | 2022-06-10 11:27:38 +08:00 |
| cli | [hotfix] fix some bugs caused by size mismatch. (#1011) | 2022-05-23 14:02:28 +08:00 |
| communication | [pipeline]refactor ppschedule to support tensor list (#1050) | 2022-06-02 13:48:59 +08:00 |
| context | | |
| engine | [pipeline] supported more flexible dataflow control for pipeline parallel training (#1108) | 2022-06-15 10:41:28 +08:00 |
| fx | [fx] added coloproxy (#1115) | 2022-06-15 10:47:57 +08:00 |
| gemini | [zero] fixed api consistency (#1098) | 2022-06-10 16:59:59 +08:00 |
| kernel | [NFC] polish colossalai/kernel/cuda_native/csrc/colossal_C_frontend.cpp code style | 2022-05-20 23:57:38 +08:00 |
| logging | | |
| nn | [tensor] refactor param op hook (#1097) | 2022-06-13 16:11:53 +08:00 |
| pipeline | [pipeline] refactor the pipeline module (#1087) | 2022-06-10 11:27:38 +08:00 |
| registry | Remove duplication registry (#1078) | 2022-06-08 07:47:24 +08:00 |
| tensor | fix colo parameter torch function (#1117) | 2022-06-15 14:23:27 +08:00 |
| testing | [test] skip tests when not enough GPUs are detected (#1090) | 2022-06-09 17:19:13 +08:00 |
| trainer | fix issue #1080 (#1071) | 2022-06-07 17:21:11 +08:00 |
| utils | [pipeline] refactor the pipeline module (#1087) | 2022-06-10 11:27:38 +08:00 |
| zero | [zero] fixed api consistency (#1098) | 2022-06-10 16:59:59 +08:00 |
| __init__.py | [NFC] polish __init__.py code style (#965) | 2022-05-17 10:25:06 +08:00 |
| constants.py | fix typo in constants (#1027) | 2022-05-26 08:45:08 +08:00 |
| core.py | | |
| global_variables.py | | |
| initialize.py | [cudnn] set False to cudnn benchmark by default (#1063) | 2022-06-03 17:58:06 +08:00 |
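The top-level `initialize.py` listed above provides the package's main entry points, which tie together the `amp`, `zero`, `engine`, and `pipeline` subpackages. Below is a minimal sketch of how the launch/initialize flow is typically wired up with the 0.1.x-era API; the empty config, the toy model, and the training data are illustrative assumptions, not taken from this listing.

```python
# Minimal sketch (assumed 0.1.x-era ColossalAI API): launch the distributed
# context, then wrap model/optimizer/criterion into an Engine.
import torch
import torch.nn as nn

import colossalai
from colossalai.logging import get_dist_logger

# Hypothetical config: a real config would set fp16 / zero / pipeline options.
CONFIG = dict()


def main():
    # Reads rank / world size from the torch.distributed launcher environment.
    colossalai.launch_from_torch(config=CONFIG)
    logger = get_dist_logger()

    model = nn.Linear(32, 2)  # toy model for illustration
    optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
    criterion = nn.CrossEntropyLoss()

    # initialize() returns an Engine that standardizes forward/backward/step
    # across the features provided by the subpackages listed above.
    engine, *_ = colossalai.initialize(model=model,
                                       optimizer=optimizer,
                                       criterion=criterion)

    engine.train()
    data = torch.randn(8, 32)
    label = torch.randint(0, 2, (8,))
    output = engine(data)                    # forward through the Engine
    loss = engine.criterion(output, label)
    engine.zero_grad()
    engine.backward(loss)
    engine.step()
    logger.info(f"loss: {loss.item():.4f}", ranks=[0])


if __name__ == "__main__":
    main()
```

Run under a distributed launcher (for example `torchrun --nproc_per_node=N train.py`) so that `launch_from_torch` can pick up the rank and world-size environment variables.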