ColossalAI/colossalai
Latest commit 57929a6210 by Ziyue Jiang: fix type of num_worker_threads (#2237)
Co-authored-by: Ziyue Jiang <ziyue.jiang@gmail.com>
2022-12-30 11:04:01 +08:00
_C [optimizer] add div_scale for optimizers (#2117) 2022-12-12 17:58:57 +08:00
amp [builder] unified cpu_optim fused_optim interface (#2190) 2022-12-23 20:57:41 +08:00
auto_parallel [autoparallel] record parameter attribute in colotracer (#2217) 2022-12-28 19:29:08 +08:00
builder
cli [cli] updated installation check with more information (#2050) 2022-11-30 17:53:55 +08:00
communication
context [hotfix] Fixing the bug related to ipv6 support 2022-12-27 12:42:46 +08:00
device [device] update flatten device mesh usage (#2079) 2022-12-05 16:16:07 +08:00
engine
fx [autoparallel] record parameter attribute in colotracer (#2217) 2022-12-28 19:29:08 +08:00
gemini [example] update gpt example for larger model scale (#2211) 2022-12-28 13:54:08 +08:00
kernel [builder] builder for scaled_upper_triang_masked_softmax (#2234) 2022-12-30 09:58:00 +08:00
logging [logger] hotfix, missing _FORMAT (#2231) 2022-12-29 22:59:39 +08:00
nn [zero] fix error for BEiT models (#2169) 2022-12-26 15:03:54 +08:00
pipeline fix type of num_worker_threads (#2237) 2022-12-30 11:04:01 +08:00
registry
tensor [autoparallel] Attach input, buffer and output tensor to MetaInfo class (#2162) 2022-12-28 13:37:40 +08:00
testing [zero] test gradient accumulation (#1964) 2022-11-29 13:00:30 +08:00
trainer [polish] remove useless file _mem_tracer_hook.py (#1963) 2022-11-16 15:55:10 +08:00
utils [builder] unified cpu_optim fused_optim interface (#2190) 2022-12-23 20:57:41 +08:00
zero [example] add zero1, zero2 example in GPT examples (#2146) 2022-12-20 14:30:27 +08:00
__init__.py [setup] supported conda-installed torch (#2048) 2022-11-30 16:45:15 +08:00
constants.py updated tp layers 2022-11-02 12:19:38 +08:00
core.py
global_variables.py updated tp layers 2022-11-02 12:19:38 +08:00
initialize.py