Commit Graph

932 Commits (7486ed7d3a21ad35c4f465583426b25af6b33c04)

Author SHA1 Message Date
FoolPlayer f1cb5ac6bf [shardformer] Align bert value (#3907) (1 year ago)
Baizhou Zhang 0bb0b481b4 [gemini] fix argument naming during chunk configuration searching (1 year ago)
github-actions[bot] a52f62082d [format] applied code formatting on changed files in pull request 4021 (#4022) (1 year ago)
Frank Lee a5883aa790 [test] fixed codefactor format report (#4026) (1 year ago)
Baizhou Zhang 822c3d4d66 [checkpointio] sharded optimizer checkpoint for DDP plugin (#4002) (1 year ago)
Wenhao Chen 725af3eeeb [booster] make optimizer argument optional for boost (#3993) (1 year ago)
Baizhou Zhang c9cff7e7fa [checkpointio] General Checkpointing of Sharded Optimizers (#3984) (1 year ago)
digger yu e61ffc77c6 fix typo tests/ (#3936) (1 year ago)
Frank Lee ddcf58cacf Revert "[sync] sync feature/shardformer with develop" (1 year ago)
Frank Lee eb39154d40 [dtensor] updated api and doc (#3845) (1 year ago)
Hongxin Liu ae02d4e4f7 [bf16] add bf16 support (#3882) (1 year ago)
Hongxin Liu dbb32692d2 [lazy] refactor lazy init (#3891) (1 year ago)
wukong1992 6b305a99d6 [booster] torch fsdp fix ckpt (#3788) (2 years ago)
Frank Lee 615e2e5fc1 [test] fixed lazy init test import error (#3799) (2 years ago)
Hongxin Liu 3c07a2846e [plugin] a workaround for zero plugins' optimizer checkpoint (#3780) (2 years ago)
Hongxin Liu 5452df63c5 [plugin] torch ddp plugin supports sharded model checkpoint (#3775) (2 years ago)
wukong1992 6050f37776 [booster] removed models that don't support fsdp (#3744) (2 years ago)
Hongxin Liu afb239bbf8 [devops] update torch version of CI (#3725) (2 years ago)
wukong1992 b37797ed3d [booster] support torch fsdp plugin in booster (#3697) (2 years ago)
digger-yu 1f73609adb [CI] fix typo with tests/ etc. (#3727) (2 years ago)
digger-yu b7141c36dd [CI] fix some spelling errors (#3707) (2 years ago)
jiangmingyan 20068ba188 [booster] add tests for ddp and low level zero's checkpointio (#3715) (2 years ago)
Hongxin Liu 6552cbf8e1 [booster] fix no_sync method (#3709) (2 years ago)
Hongxin Liu 3bf09efe74 [booster] update prepare dataloader method for plugin (#3706) (2 years ago)
Hongxin Liu d0915f54f4 [booster] refactor all dp fashion plugins (#3684) (2 years ago)
digger-yu b49020c1b1 [CI] Update test_sharded_optim_with_sync_bn.py (#3688) (2 years ago)
jiangmingyan 307894f74d [booster] gemini plugin support shard checkpoint (#3610) (2 years ago)
Hongxin Liu 50793b35f4 [gemini] accelerate inference (#3641) (2 years ago)
Hongxin Liu 4b3240cb59 [booster] add low level zero plugin (#3594) (2 years ago)
Hongxin Liu f313babd11 [gemini] support save state dict in shards (#3581) (2 years ago)
Hongxin Liu 152239bbfa [gemini] gemini supports lazy init (#3379) (2 years ago)
jiangmingyan 52a933e175 [checkpoint] support huggingface style sharded checkpoint (#3461) (2 years ago)
Frank Lee 80eba05b0a [test] refactor tests with spawn (#3452) (2 years ago)
ver217 933048ad3e [test] reorganize zero/gemini tests (#3445) (2 years ago)
YuliangLiu0306 ffcdbf0f65 [autoparallel]integrate auto parallel feature with new tracer (#3408) (2 years ago)
Frank Lee 1beb85cc25 [checkpoint] refactored the API and added safetensors support (#3427) (2 years ago)
ver217 26b7aac0be [zero] reorganize zero/gemini folder structure (#3424) (2 years ago)
Frank Lee 638a07a7f9 [test] fixed gemini plugin test (#3411) (2 years ago)
ver217 5f2e34e6c9 [booster] implement Gemini plugin (#3352) (2 years ago)
HELSON 1a1d68b053 [moe] add checkpoint for moe models (#3354) (2 years ago)
YuliangLiu0306 fee2af8610 [autoparallel] adapt autoparallel with new analyzer (#3261) (2 years ago)
Frank Lee 73d3e4d309 [booster] implemented the torch ddd + resnet example (#3232) (2 years ago)
YuliangLiu0306 4d5d8f98a4 [API] implement device mesh manager (#3221) (2 years ago)
YuliangLiu0306 045afa3ea2 [hotfix] skip torchaudio tracing test (#3211) (2 years ago)
Frank Lee cd142fbefa [api] implemented the checkpoint io module (#3205) (2 years ago)
ver217 f8289d4221 [lazyinit] combine lazy tensor with dtensor (#3204) (2 years ago)
YuliangLiu0306 019a847432 [Analyzer] fix analyzer tests (#3197) (2 years ago)
YuliangLiu0306 f57d34958b [FX] refactor experimental tracer and adapt it with hf models (#3157) (2 years ago)
Frank Lee e7f3bed2d3 [booster] added the plugin base and torch ddp plugin (#3180) (2 years ago)
Zihao 18dbe76cae [auto-parallel] add auto-offload feature (#3154) (2 years ago)
zbian 7bc0afc901 updated flash attention usage (2 years ago)
Frank Lee 085e7f4eff [test] fixed torchrec registration in model zoo (#3177) (2 years ago)
Frank Lee a9b8402d93 [booster] added the accelerator implementation (#3159) (2 years ago)
Frank Lee 1ad3a636b1 [test] fixed torchrec model test (#3167) (2 years ago)
ver217 6ae8ed0407 [lazyinit] add correctness verification (#3147) (2 years ago)
Frank Lee ed19290560 [booster] implemented mixed precision class (#3151) (2 years ago)
YuliangLiu0306 ecd643f1e4 [test] add torchrec models to test model zoo (#3139) (2 years ago)
ver217 14a115000b [tests] model zoo add torchaudio models (#3138) (2 years ago)
Frank Lee 6d48eb0560 [test] added transformers models to test model zoo (#3135) (2 years ago)
Frank Lee a674c63348 [test] added torchvision models to test model zoo (#3132) (2 years ago)
HELSON 1216d1e7bd [tests] diffuser models in model zoo (#3136) (2 years ago)
YuliangLiu0306 2eca4cd376 [DTensor] refactor dtensor with new components (#3089) (2 years ago)
Frank Lee 86ac782d7c [test] added timm models to test model zoo (#3129) (2 years ago)
Xuanlei Zhao 30dd13c450 [autochunk] support complete benchmark (#3121) (2 years ago)
Super Daniel fff98f06ed [analyzer] a minimal implementation of static graph analyzer (#2852) (2 years ago)
Xuanlei Zhao 10c61de2f7 [autochunk] support vit (#3084) (2 years ago)
YuliangLiu0306 8e4e8601b7 [DTensor] implement layout converter (#3055) (2 years ago)
Xuanlei Zhao 2ca9728cbb [autochunk] refactor chunk memory estimation (#2762) (2 years ago)
YuliangLiu0306 29386a54e6 [DTensor] refactor CommSpec (#3034) (2 years ago)
YuliangLiu0306 4269196c79 [hotfix] skip auto checkpointing tests (#3029) (2 years ago)
YuliangLiu0306 cd2b0eaa8d [DTensor] refactor sharding spec (#2987) (2 years ago)
YuliangLiu0306 e414e4092b [DTensor] implementation of dtensor (#2946) (2 years ago)
YuliangLiu0306 197d0bf4ed [autoparallel] apply repeat block to reduce solving time (#2912) (2 years ago)
YuliangLiu0306 819e25d8b1 [hotfix] fix autoparallel compatibility test issues (#2754) (2 years ago)
YuliangLiu0306 0f392d7403 [autoparallel] find repeat blocks (#2854) (2 years ago)
Boyuan Yao c7764d3f22 [autoparallel] Patch meta information of `torch.where` (#2822) (2 years ago)
Boyuan Yao fcc4097efa [autoparallel] Patch meta information of `torch.tanh()` and `torch.nn.Dropout` (#2773) (2 years ago)
Boyuan Yao 7ea6bc7f69 [autoparallel] Patch tensor related operations meta information (#2789) (2 years ago)
HELSON 56ddc9ca7a [hotfix] add correct device for fake_param (#2796) (2 years ago)
Boyuan Yao a2b43e393d [autoparallel] Patch meta information of `torch.nn.Embedding` (#2760) (2 years ago)
YuliangLiu0306 1dc003c169 [autoparallel] distinguish different parallel strategies (#2699) (2 years ago)
YuliangLiu0306 21d6a48f4d [autoparallel] add shard option (#2696) (2 years ago)
YuliangLiu0306 cb2c6a2415 [autoparallel] refactor runtime pass (#2644) (2 years ago)
YuliangLiu0306 0b2a738393 [autoparallel] remove deprecated codes (#2664) (2 years ago)
YuliangLiu0306 7fa6be49d2 [autoparallel] test compatibility for gemini and auto parallel (#2700) (2 years ago)
Boyuan Yao 40c916b192 [autoparallel] Patch meta information of `torch.nn.functional.softmax` and `torch.nn.Softmax` (#2674) (2 years ago)
HELSON 8213f89fd2 [gemini] add fake_release_chunk for keep-gathered chunk in the inference mode (#2671) (2 years ago)
Boyuan Yao 0385b26ebf [autoparallel] Patch meta information of `torch.nn.LayerNorm` (#2647) (2 years ago)
YuliangLiu0306 37df666f38 [autoparallel] refactor handlers which reshape input tensors (#2615) (2 years ago)
YuliangLiu0306 cb3d1bef62 [autoparallel] adapt autoparallel tests with latest api (#2626) (2 years ago)
Boyuan Yao 90a9fdd91d [autoparallel] Patch meta information of `torch.matmul` (#2584) (2 years ago)
oahzxl 6ba8364881 [autochunk] support diffusion for autochunk (#2621) (2 years ago)
oahzxl c4b15661d7 [autochunk] add benchmark for transformer and alphafold (#2543) (2 years ago)
oahzxl 05671fcb42 [autochunk] support multi outputs chunk search (#2538) (2 years ago)
oahzxl 63199c6687 [autochunk] support transformer (#2526) (2 years ago)
Frank Lee b55deb0662 [workflow] only report coverage for changed files (#2524) (2 years ago)
HELSON b528eea0f0 [zero] add zero wrappers (#2523) (2 years ago)
HELSON 077a5cdde4 [zero] fix gradient clipping in hybrid parallelism (#2521) (2 years ago)
HELSON 707b11d4a0 [gemini] update ddp strict mode (#2518) (2 years ago)
HELSON 2d1a7dfe5f [zero] add strict ddp mode (#2508) (2 years ago)
oahzxl c04f183237 [autochunk] support parsing blocks (#2506) (2 years ago)
oahzxl 72341e65f4 [auto-chunk] support extramsa (#3) (#2504) (2 years ago)
oahzxl ecccc91f21 [autochunk] support autochunk on evoformer (#2497) (2 years ago)
HELSON d565a24849 [zero] add unit testings for hybrid parallelism (#2486) (2 years ago)
oahzxl 4953b4ace1 [autochunk] support evoformer tracer (#2485) (2 years ago)
YuliangLiu0306 67e1912b59 [autoparallel] support origin activation ckpt on autoprallel system (#2468) (2 years ago)
HELSON 21c88220ce [zero] add unit test for low-level zero init (#2474) (2 years ago)
HELSON a5dc4253c6 [zero] polish low level optimizer (#2473) (2 years ago)
Jiarui Fang 867c8c2d3a [zero] low level optim supports ProcessGroup (#2464) (2 years ago)
YuliangLiu0306 8221fd7485 [autoparallel] update binary elementwise handler (#2451) (2 years ago)
HELSON 5521af7877 [zero] fix state_dict and load_state_dict for ddp ignored parameters (#2443) (2 years ago)
YuliangLiu0306 41429b9b28 [autoparallel] add shard option (#2423) (2 years ago)
HELSON bb4e9a311a [zero] add inference mode and its unit test (#2418) (2 years ago)
oahzxl 61fdd3464a update doc (2 years ago)
oahzxl 36ab2cb783 change import (2 years ago)
oahzxl 7ab2db206f adapt new fx (2 years ago)
oahzxl e532679c95 Merge branch 'main' of https://github.com/oahzxl/ColossalAI into chunk (2 years ago)
oahzxl c1492e5013 add test in import (2 years ago)
HELSON ea13a201bb [polish] polish code for get_static_torch_model (#2405) (2 years ago)
oahzxl 212b5b1b5f add comments (2 years ago)
oahzxl aafc3516a5 add available (2 years ago)
oahzxl d5c4f0bf95 code style (2 years ago)
oahzxl d106b271f8 add chunk search test (2 years ago)
oahzxl a005965d2d update codegen test (2 years ago)
oahzxl 3abbaf8bc6 update codegen test (2 years ago)
oahzxl 74b81395a2 update codegen test (2 years ago)
oahzxl 18a51c87fe rename test (2 years ago)
oahzxl cb68ee864a set benchmark (2 years ago)
Jiarui Fang 4e96039649 [device] find best logical mesh (2 years ago)
Frank Lee 40d376c566 [setup] support pre-build and jit-build of cuda kernels (#2374) (2 years ago)
oahzxl a6cdbf9161 seperate trace flow (2 years ago)
oahzxl da4076846d rename (2 years ago)
oahzxl fd87d78a28 rename ambiguous variable (2 years ago)
oahzxl 8a634af2f5 close mem and code print (2 years ago)
oahzxl 1a6d2a740b take apart chunk code gen (2 years ago)
HELSON 48d33b1b17 [gemini] add get static torch model (#2356) (2 years ago)
oahzxl d1f0773182 rename (2 years ago)
oahzxl 06a5355d98 update test (2 years ago)
oahzxl efb1c64c30 restruct dir (2 years ago)
YuliangLiu0306 b5a3a4a65f [device] find best logical mesh (2 years ago)
YuliangLiu0306 9c9246c0d9 [device] alpha beta profiler (#2311) (2 years ago)
Jiarui Fang db6eea3583 [builder] reconfig op_builder for pypi install (#2314) (2 years ago)
HELSON 5d3a2be3af [amp] add gradient clipping for unit tests (#2283) (2 years ago)
zbian e94c79f15b improved allgather & reducescatter for 3d (2 years ago)
YuliangLiu0306 fb87322773 [autoparallel] fix spelling error (#2270) (2 years ago)
YuliangLiu0306 8897b8f753 [autoparallel] autoparallel initialize (#2238) (2 years ago)
YuliangLiu0306 3b1b91eaf4 [autoparallel] record parameter attribute in colotracer (#2217) (2 years ago)
Boyuan Yao 24246f7aa5 [autoparallel] Attach input, buffer and output tensor to MetaInfo class (#2162) (2 years ago)
YuliangLiu0306 78509124d3 [autoparallel] update getitem handler (#2207) (2 years ago)
YuliangLiu0306 4851f2d607 [autoparallel] update_getattr_handler (#2193) (2 years ago)
YuliangLiu0306 f10ce01e31 [autoparallel] add gpt2 performance test code (#2194) (2 years ago)
HELSON a3100bd50d [testing] add beit model for unit testings (#2196) (2 years ago)
HELSON 2458659919 [zero] fix error for BEiT models (#2169) (2 years ago)
Jiarui Fang 355ffb386e [builder] unified cpu_optim fused_optim inferface (#2190) (2 years ago)
Jiarui Fang 9587b080ba [builder] use runtime builder for fused_optim (#2189) (2 years ago)
Jiarui Fang bc0e271e71 [buider] use builder() for cpu adam and fused optim in setup.py (#2187) (2 years ago)
Jiarui Fang d42afd30f8 [builder] runtime adam and fused_optim builder (#2184) (2 years ago)
YuliangLiu0306 550f8f8905 [autoparallel] integrate_gpt_related_tests (#2134) (2 years ago)
Jiarui Fang 27327a4c90 [example] add palm pytorch version (#2172) (2 years ago)
Jiarui Fang b87496a66b [hotfix] fix auto policy of test_sharded_optim_v2 (#2157) (2 years ago)
YuliangLiu0306 16335cb537 [hotfix] fix aten default bug (#2158) (2 years ago)
Jiarui Fang 2827f41898 [Gemini] GeminiDPP convert to PyTorch Module. (#2151) (2 years ago)
アマデウス 077a66dd81 updated attention kernel (#2133) (2 years ago)
YuliangLiu0306 536560ccc0 [autoparallel] implement softmax handler (#2132) (2 years ago)
Jiarui Fang c89c66a858 [Gemini] update API of the chunkmemstatscollector. (#2129) (2 years ago)
Jiarui Fang 2938edf446 [Gemini] update the non model data record method in runtime memory tracer (#2128) (2 years ago)
Jiarui Fang deee317b0f [Gemini] test step-tensor mapping using repeated_computed_layers.py (#2127) (2 years ago)
Jiarui Fang 8fac837679 [Gemini] update non model data calculation method (#2126) (2 years ago)
Jiarui Fang 5efda69735 [Gemini] hotfix the unittest bugs (#2125) (2 years ago)
Jiarui Fang 05bb28aacf [Gemini] mapping of preop timestep and param (#2124) (2 years ago)
YuliangLiu0306 cd0af9f7f6 [autoparallel] gpt2lp runtimee test (#2113) (2 years ago)
Jiarui Fang 9214d1fe28 [Gemini] chunk init using runtime visited param order (#2115) (2 years ago)
HELSON e7d3afc9cc [optimizer] add div_scale for optimizers (#2117) (2 years ago)
Jiarui Fang e5aa8333e4 [NFC] update chunk manager API (#2119) (2 years ago)
Jiarui Fang e99edfcb51 [NFC] polish comments for Chunk class (#2116) (2 years ago)
Ziyue Jiang 09d69e1c25 [PP Middleware] Add bwd and step for PP middleware (#2111) (2 years ago)
HELSON 63fbba3c19 [zero] add L2 gradient clipping for ZeRO (#2112) (2 years ago)
Jiarui Fang 70a8556946 [gemini] get the param visited order during runtime (#2108) (2 years ago)
YuliangLiu0306 d87baa85d9 [autoparallel] support linear function bias addition (#2104) (2 years ago)
YuliangLiu0306 0fecbb9e20 [autoparallel] support addbmm computation (#2102) (2 years ago)
YuliangLiu0306 d3d4630495 [autoparallel] add sum handler (#2101) (2 years ago)
Ziyue Jiang e4705ba4e2 [Pipeline Middleware] fix data race in Pipeline Scheduler for DAG (#2087) (2 years ago)
YuliangLiu0306 b175e6d58e [autoparallel] add bias addtion function class (#2098) (2 years ago)
YuliangLiu0306 3af7e65dea [autoparallel] complete gpt related module search (#2097) (2 years ago)
Jiarui Fang 85efb7ac2e [Gemini] gemini use the runtime memory tracer (RMT) (#2099) (2 years ago)
Jiarui Fang 978242326a [Gemini] remove eval in gemini unittests! (#2092) (2 years ago)
YuliangLiu0306 7f72eb0510 [autoparallel]add embedding handler (#2089) (2 years ago)
Jiarui Fang 1fca5d79ea [Gemini] remove GLOBAL_MODEL_DATA_TRACER (#2091) (2 years ago)
Jiarui Fang 25abae6d7f [Gemini] use MemStats in Runtime Memory tracer (#2088) (2 years ago)
Jiarui Fang 33f4412102 [Gemini] use MemStats to store the tracing data. Seperate it from Collector. (#2084) (2 years ago)
Jiarui Fang 1f99205827 [Gemini] remove static tracer (#2083) (2 years ago)
YuliangLiu0306 0e9db368ef [autoparallel] add tensor constructor handler (#2082) (2 years ago)
YuliangLiu0306 cdf537a648 [autoparallel] add non_split linear strategy (#2078) (2 years ago)
Boyuan Yao cf0268da93 [autoparallel] Add F.conv metainfo (#2069) (2 years ago)
YuliangLiu0306 f123476666 [autoparallel] complete gpt block searching (#2065) (2 years ago)
Ziyue Jiang 597cdd3006 [Pipeline Middleware] Adapt scheduler for Topo (#2066) (2 years ago)
Jiarui Fang 4f21c9e8d9 [Gemini] polish runtime tracer tests (#2077) (2 years ago)
Jiarui Fang a7adad9ccb [Gemini] rename hooks related to runtime mem tracer (#2076) (2 years ago)
Jiarui Fang 40b7d55bf3 [Gemini] add albert in test models. (#2075) (2 years ago)
Jiarui Fang 616ed91ecd [test] bert test in non-distributed way (#2074) (2 years ago)