Commit Graph

883 Commits (a88e92251df546dc71f2ec3cd351487319a53577)
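
A comparable listing can be produced from a local clone with plain git (a minimal sketch; this format string is one approximation of the Author / SHA1 / Message / Date columns shown below, assuming the commit above has been fetched):

    git log --format='%an  %h  %s  %ar' a88e92251df546dc71f2ec3cd351487319a53577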

Author  SHA1  Message  Date
Zihao  18dbe76cae  [auto-parallel] add auto-offload feature (#3154)  2 years ago
zbian  7bc0afc901  updated flash attention usage  2 years ago
Frank Lee  085e7f4eff  [test] fixed torchrec registration in model zoo (#3177)  2 years ago
Frank Lee  a9b8402d93  [booster] added the accelerator implementation (#3159)  2 years ago
Frank Lee  1ad3a636b1  [test] fixed torchrec model test (#3167)  2 years ago
ver217  6ae8ed0407  [lazyinit] add correctness verification (#3147)  2 years ago
Frank Lee  ed19290560  [booster] implemented mixed precision class (#3151)  2 years ago
YuliangLiu0306  ecd643f1e4  [test] add torchrec models to test model zoo (#3139)  2 years ago
ver217  14a115000b  [tests] model zoo add torchaudio models (#3138)  2 years ago
Frank Lee  6d48eb0560  [test] added transformers models to test model zoo (#3135)  2 years ago
Frank Lee  a674c63348  [test] added torchvision models to test model zoo (#3132)  2 years ago
HELSON  1216d1e7bd  [tests] diffuser models in model zoo (#3136)  2 years ago
YuliangLiu0306  2eca4cd376  [DTensor] refactor dtensor with new components (#3089)  2 years ago
Frank Lee  86ac782d7c  [test] added timm models to test model zoo (#3129)  2 years ago
Xuanlei Zhao  30dd13c450  [autochunk] support complete benchmark (#3121)  2 years ago
Super Daniel  fff98f06ed  [analyzer] a minimal implementation of static graph analyzer (#2852)  2 years ago
Xuanlei Zhao  10c61de2f7  [autochunk] support vit (#3084)  2 years ago
YuliangLiu0306  8e4e8601b7  [DTensor] implement layout converter (#3055)  2 years ago
Xuanlei Zhao  2ca9728cbb  [autochunk] refactor chunk memory estimation (#2762)  2 years ago
YuliangLiu0306  29386a54e6  [DTensor] refactor CommSpec (#3034)  2 years ago
YuliangLiu0306  4269196c79  [hotfix] skip auto checkpointing tests (#3029)  2 years ago
YuliangLiu0306  cd2b0eaa8d  [DTensor] refactor sharding spec (#2987)  2 years ago
YuliangLiu0306  e414e4092b  [DTensor] implementation of dtensor (#2946)  2 years ago
YuliangLiu0306  197d0bf4ed  [autoparallel] apply repeat block to reduce solving time (#2912)  2 years ago
YuliangLiu0306  819e25d8b1  [hotfix] fix autoparallel compatibility test issues (#2754)  2 years ago
YuliangLiu0306  0f392d7403  [autoparallel] find repeat blocks (#2854)  2 years ago
Boyuan Yao  c7764d3f22  [autoparallel] Patch meta information of `torch.where` (#2822)  2 years ago
Boyuan Yao  fcc4097efa  [autoparallel] Patch meta information of `torch.tanh()` and `torch.nn.Dropout` (#2773)  2 years ago
Boyuan Yao  7ea6bc7f69  [autoparallel] Patch tensor related operations meta information (#2789)  2 years ago
HELSON  56ddc9ca7a  [hotfix] add correct device for fake_param (#2796)  2 years ago
Boyuan Yao  a2b43e393d  [autoparallel] Patch meta information of `torch.nn.Embedding` (#2760)  2 years ago
YuliangLiu0306  1dc003c169  [autoparallel] distinguish different parallel strategies (#2699)  2 years ago
YuliangLiu0306  21d6a48f4d  [autoparallel] add shard option (#2696)  2 years ago
YuliangLiu0306  cb2c6a2415  [autoparallel] refactor runtime pass (#2644)  2 years ago
YuliangLiu0306  0b2a738393  [autoparallel] remove deprecated codes (#2664)  2 years ago
YuliangLiu0306  7fa6be49d2  [autoparallel] test compatibility for gemini and auto parallel (#2700)  2 years ago
Boyuan Yao  40c916b192  [autoparallel] Patch meta information of `torch.nn.functional.softmax` and `torch.nn.Softmax` (#2674)  2 years ago
HELSON  8213f89fd2  [gemini] add fake_release_chunk for keep-gathered chunk in the inference mode (#2671)  2 years ago
Boyuan Yao  0385b26ebf  [autoparallel] Patch meta information of `torch.nn.LayerNorm` (#2647)  2 years ago
YuliangLiu0306  37df666f38  [autoparallel] refactor handlers which reshape input tensors (#2615)  2 years ago
YuliangLiu0306  cb3d1bef62  [autoparallel] adapt autoparallel tests with latest api (#2626)  2 years ago
Boyuan Yao  90a9fdd91d  [autoparallel] Patch meta information of `torch.matmul` (#2584)  2 years ago
oahzxl  6ba8364881  [autochunk] support diffusion for autochunk (#2621)  2 years ago
oahzxl  c4b15661d7  [autochunk] add benchmark for transformer and alphafold (#2543)  2 years ago
oahzxl  05671fcb42  [autochunk] support multi outputs chunk search (#2538)  2 years ago
oahzxl  63199c6687  [autochunk] support transformer (#2526)  2 years ago
Frank Lee  b55deb0662  [workflow] only report coverage for changed files (#2524)  2 years ago
HELSON  b528eea0f0  [zero] add zero wrappers (#2523)  2 years ago
HELSON  077a5cdde4  [zero] fix gradient clipping in hybrid parallelism (#2521)  2 years ago
HELSON  707b11d4a0  [gemini] update ddp strict mode (#2518)  2 years ago
HELSON  2d1a7dfe5f  [zero] add strict ddp mode (#2508)  2 years ago
oahzxl  c04f183237  [autochunk] support parsing blocks (#2506)  2 years ago
oahzxl  72341e65f4  [auto-chunk] support extramsa (#3) (#2504)  2 years ago
oahzxl  ecccc91f21  [autochunk] support autochunk on evoformer (#2497)  2 years ago
HELSON  d565a24849  [zero] add unit testings for hybrid parallelism (#2486)  2 years ago
oahzxl  4953b4ace1  [autochunk] support evoformer tracer (#2485)  2 years ago
YuliangLiu0306  67e1912b59  [autoparallel] support origin activation ckpt on autoprallel system (#2468)  2 years ago
HELSON  21c88220ce  [zero] add unit test for low-level zero init (#2474)  2 years ago
HELSON  a5dc4253c6  [zero] polish low level optimizer (#2473)  2 years ago
Jiarui Fang  867c8c2d3a  [zero] low level optim supports ProcessGroup (#2464)  2 years ago
YuliangLiu0306  8221fd7485  [autoparallel] update binary elementwise handler (#2451)  2 years ago
HELSON  5521af7877  [zero] fix state_dict and load_state_dict for ddp ignored parameters (#2443)  2 years ago
YuliangLiu0306  41429b9b28  [autoparallel] add shard option (#2423)  2 years ago
HELSON  bb4e9a311a  [zero] add inference mode and its unit test (#2418)  2 years ago
oahzxl  61fdd3464a  update doc  2 years ago
oahzxl  36ab2cb783  change import  2 years ago
oahzxl  7ab2db206f  adapt new fx  2 years ago
oahzxl  e532679c95  Merge branch 'main' of https://github.com/oahzxl/ColossalAI into chunk  2 years ago
oahzxl  c1492e5013  add test in import  2 years ago
HELSON  ea13a201bb  [polish] polish code for get_static_torch_model (#2405)  2 years ago
oahzxl  212b5b1b5f  add comments  2 years ago
oahzxl  aafc3516a5  add available  2 years ago
oahzxl  d5c4f0bf95  code style  2 years ago
oahzxl  d106b271f8  add chunk search test  2 years ago
oahzxl  a005965d2d  update codegen test  2 years ago
oahzxl  3abbaf8bc6  update codegen test  2 years ago
oahzxl  74b81395a2  update codegen test  2 years ago
oahzxl  18a51c87fe  rename test  2 years ago
oahzxl  cb68ee864a  set benchmark  2 years ago
Jiarui Fang  4e96039649  [device] find best logical mesh  2 years ago
Frank Lee  40d376c566  [setup] support pre-build and jit-build of cuda kernels (#2374)  2 years ago
oahzxl  a6cdbf9161  seperate trace flow  2 years ago
oahzxl  da4076846d  rename  2 years ago
oahzxl  fd87d78a28  rename ambiguous variable  2 years ago
oahzxl  8a634af2f5  close mem and code print  2 years ago
oahzxl  1a6d2a740b  take apart chunk code gen  2 years ago
HELSON  48d33b1b17  [gemini] add get static torch model (#2356)  2 years ago
oahzxl  d1f0773182  rename  2 years ago
oahzxl  06a5355d98  update test  2 years ago
oahzxl  efb1c64c30  restruct dir  2 years ago
YuliangLiu0306  b5a3a4a65f  [device] find best logical mesh  2 years ago
YuliangLiu0306  9c9246c0d9  [device] alpha beta profiler (#2311)  2 years ago
Jiarui Fang  db6eea3583  [builder] reconfig op_builder for pypi install (#2314)  2 years ago
HELSON  5d3a2be3af  [amp] add gradient clipping for unit tests (#2283)  2 years ago
zbian  e94c79f15b  improved allgather & reducescatter for 3d  2 years ago
YuliangLiu0306  fb87322773  [autoparallel] fix spelling error (#2270)  2 years ago
YuliangLiu0306  8897b8f753  [autoparallel] autoparallel initialize (#2238)  2 years ago
YuliangLiu0306  3b1b91eaf4  [autoparallel] record parameter attribute in colotracer (#2217)  2 years ago
Boyuan Yao  24246f7aa5  [autoparallel] Attach input, buffer and output tensor to MetaInfo class (#2162)  2 years ago
YuliangLiu0306  78509124d3  [autoparallel] update getitem handler (#2207)  2 years ago
YuliangLiu0306  4851f2d607  [autoparallel] update_getattr_handler (#2193)  2 years ago
YuliangLiu0306  f10ce01e31  [autoparallel] add gpt2 performance test code (#2194)  2 years ago
HELSON  a3100bd50d  [testing] add beit model for unit testings (#2196)  2 years ago
HELSON  2458659919  [zero] fix error for BEiT models (#2169)  2 years ago
Jiarui Fang  355ffb386e  [builder] unified cpu_optim fused_optim inferface (#2190)  2 years ago
Jiarui Fang  9587b080ba  [builder] use runtime builder for fused_optim (#2189)  2 years ago
Jiarui Fang  bc0e271e71  [buider] use builder() for cpu adam and fused optim in setup.py (#2187)  2 years ago
Jiarui Fang  d42afd30f8  [builder] runtime adam and fused_optim builder (#2184)  2 years ago
YuliangLiu0306  550f8f8905  [autoparallel] integrate_gpt_related_tests (#2134)  2 years ago
Jiarui Fang  27327a4c90  [example] add palm pytorch version (#2172)  2 years ago
Jiarui Fang  b87496a66b  [hotfix] fix auto policy of test_sharded_optim_v2 (#2157)  2 years ago
YuliangLiu0306  16335cb537  [hotfix] fix aten default bug (#2158)  2 years ago
Jiarui Fang  2827f41898  [Gemini] GeminiDPP convert to PyTorch Module. (#2151)  2 years ago
アマデウス  077a66dd81  updated attention kernel (#2133)  2 years ago
YuliangLiu0306  536560ccc0  [autoparallel] implement softmax handler (#2132)  2 years ago
Jiarui Fang  c89c66a858  [Gemini] update API of the chunkmemstatscollector. (#2129)  2 years ago
Jiarui Fang  2938edf446  [Gemini] update the non model data record method in runtime memory tracer (#2128)  2 years ago
Jiarui Fang  deee317b0f  [Gemini] test step-tensor mapping using repeated_computed_layers.py (#2127)  2 years ago
Jiarui Fang  8fac837679  [Gemini] update non model data calculation method (#2126)  2 years ago
Jiarui Fang  5efda69735  [Gemini] hotfix the unittest bugs (#2125)  2 years ago
Jiarui Fang  05bb28aacf  [Gemini] mapping of preop timestep and param (#2124)  2 years ago
YuliangLiu0306  cd0af9f7f6  [autoparallel] gpt2lp runtimee test (#2113)  2 years ago
Jiarui Fang  9214d1fe28  [Gemini] chunk init using runtime visited param order (#2115)  2 years ago
HELSON  e7d3afc9cc  [optimizer] add div_scale for optimizers (#2117)  2 years ago
Jiarui Fang  e5aa8333e4  [NFC] update chunk manager API (#2119)  2 years ago
Jiarui Fang  e99edfcb51  [NFC] polish comments for Chunk class (#2116)  2 years ago
Ziyue Jiang  09d69e1c25  [PP Middleware] Add bwd and step for PP middleware (#2111)  2 years ago
HELSON  63fbba3c19  [zero] add L2 gradient clipping for ZeRO (#2112)  2 years ago
Jiarui Fang  70a8556946  [gemini] get the param visited order during runtime (#2108)  2 years ago
YuliangLiu0306  d87baa85d9  [autoparallel] support linear function bias addition (#2104)  2 years ago
YuliangLiu0306  0fecbb9e20  [autoparallel] support addbmm computation (#2102)  2 years ago
YuliangLiu0306  d3d4630495  [autoparallel] add sum handler (#2101)  2 years ago
Ziyue Jiang  e4705ba4e2  [Pipeline Middleware] fix data race in Pipeline Scheduler for DAG (#2087)  2 years ago
YuliangLiu0306  b175e6d58e  [autoparallel] add bias addtion function class (#2098)  2 years ago
YuliangLiu0306  3af7e65dea  [autoparallel] complete gpt related module search (#2097)  2 years ago
Jiarui Fang  85efb7ac2e  [Gemini] gemini use the runtime memory tracer (RMT) (#2099)  2 years ago
Jiarui Fang  978242326a  [Gemini] remove eval in gemini unittests! (#2092)  2 years ago
YuliangLiu0306  7f72eb0510  [autoparallel]add embedding handler (#2089)  2 years ago
Jiarui Fang  1fca5d79ea  [Gemini] remove GLOBAL_MODEL_DATA_TRACER (#2091)  2 years ago
Jiarui Fang  25abae6d7f  [Gemini] use MemStats in Runtime Memory tracer (#2088)  2 years ago
Jiarui Fang  33f4412102  [Gemini] use MemStats to store the tracing data. Seperate it from Collector. (#2084)  2 years ago
Jiarui Fang  1f99205827  [Gemini] remove static tracer (#2083)  2 years ago
YuliangLiu0306  0e9db368ef  [autoparallel] add tensor constructor handler (#2082)  2 years ago
YuliangLiu0306  cdf537a648  [autoparallel] add non_split linear strategy (#2078)  2 years ago
Boyuan Yao  cf0268da93  [autoparallel] Add F.conv metainfo (#2069)  2 years ago
YuliangLiu0306  f123476666  [autoparallel] complete gpt block searching (#2065)  2 years ago
Ziyue Jiang  597cdd3006  [Pipeline Middleware] Adapt scheduler for Topo (#2066)  2 years ago
Jiarui Fang  4f21c9e8d9  [Gemini] polish runtime tracer tests (#2077)  2 years ago
Jiarui Fang  a7adad9ccb  [Gemini] rename hooks related to runtime mem tracer (#2076)  2 years ago
Jiarui Fang  40b7d55bf3  [Gemini] add albert in test models. (#2075)  2 years ago
Jiarui Fang  616ed91ecd  [test] bert test in non-distributed way (#2074)  2 years ago
Jiarui Fang  223332ff7e  [Gemini] rename ParamTracerWrapper -> RuntimeMemTracer (#2073)  2 years ago
Jiarui Fang  9f828ef36f  [Gemini] remove not used MemtracerWrapper (#2072)  2 years ago
Boyuan Yao  616da17fab  [autoparallel] add binary elementwise metainfo for auto parallel (#2058)  2 years ago
Ziyue Jiang  44ea461890  [Pipeline] Add Topo Class (#2059)  2 years ago
YuliangLiu0306  e4293e5077  [hotfix] update test for latest version (#2060)  2 years ago
YuliangLiu0306  19438ea0ef  [hotfix] skip gpt tracing test (#2064)  2 years ago
Zihao  38ea4ba1bd  [Gemini] fix grad unreleased issue and param recovery issue (#2052)  2 years ago
YuliangLiu0306  1c1fe44305  [autoparallel] adapt solver with self attention (#2037)  2 years ago
HELSON  f6178728a0  [gemini] fix init bugs for modules (#2047)  2 years ago
Zihao  6a9158f1fa  [Gemini] free and allocate cuda memory by tensor.storage, add grad hook (#2040)  2 years ago
Jiarui Fang  1e885329f4  [test] align model name with the file name. (#2045)  2 years ago
Jiarui Fang  31c644027b  [hotfix] hotfix Gemini for no leaf modules bug (#2043)  2 years ago
HELSON  384cd26314  [zero] fix testing parameters (#2042)  2 years ago
HELSON  17a3c685b0  [zero] fix unit-tests (#2039)  2 years ago
Jiarui Fang  eb7742a4bb  [Gemini] more tests for Gemini (#2038)  2 years ago
HELSON  537e181705  [testing] fix testing models (#2036)  2 years ago
HELSON  a1ce02d740  [zero] test gradient accumulation (#1964)  2 years ago
Ziyue Jiang  b0936e4a44  [rpc] split with dag (#2028)  2 years ago
Jiarui Fang  96134e7be3  [hotfix] add bert test for gemini fwd bwd (#2035)  2 years ago
YuliangLiu0306  0dbcd4a6f5  [autoparallel] add split handler (#2032)  2 years ago
Jiarui Fang  28aa9a4294  [Gemini] more rigorous unit tests for run_fwd_bwd (#2034)  2 years ago
YuliangLiu0306  81330b0352  [autoparallel] add experimental permute handler (#2029)  2 years ago
Zihao  95c4532fff  [Gemini] paramWrapper paramTracerHook unitest (#2030)  2 years ago
Jiarui Fang  8daf1b4db1  [Gemini] patch for supporting orch.add_ function for ColoTensor (#2003)  2 years ago
Ziyue Jiang  632753abbc  [fx]Split partition with DAG information (#2025)  2 years ago
YuliangLiu0306  ea0f6b8df9  [autoparallel] add runtime pass and numerical test for view handler (#2018)  2 years ago
Jiarui Fang  2e9cbfca12  [Gemini] add unitests to check gemini correctness (#2015)  2 years ago
Jiarui Fang  0b0d8f9e17  [hotfix] revert bug PRs (#2016)  2 years ago
Zihao  0160a62a3c  [Gemini] param_tracer_wrapper and test case (#2009)  2 years ago
YuliangLiu0306  1438993113  [autoparallel] add experimental view handler (#2011)  2 years ago
Genghan Zhang  d655eea515  [autoparallel] mix gather (#1977)  2 years ago
Jiarui Fang  3d907faede  [Gemini] add an inline_op_module to common test models and polish unitests. (#2004)  2 years ago
Boyuan Yao  6cd784ffee  [autoparallel] Add metainfo support for F.linear (#1987)  2 years ago
YuliangLiu0306  35e6b9ec82  [autoparallel] adapt handlers with attention block (#1990)  2 years ago
Jiarui Fang  5bec3b2168  [Gemini] open grad checkpoint when model building (#1984)  2 years ago
Boyuan Yao  c26f21d365  [autoparallel] add pooling metainfo (#1968)  2 years ago
Jiarui Fang  3712ac7f90  [Gemini] add bert for MemtracerWrapper unintests (#1982)  2 years ago
Jiarui Fang  e481489aa6  [Gemini] MemtracerWrapper unittests (#1981)  2 years ago
YuliangLiu0306  0da1d00399  [autoparallel] support distributed dataloader option (#1906)  2 years ago
Genghan Zhang  6630d45546  [autoparallel] Add alpha beta (#1973)  2 years ago
ver217  f8a7148dec  [kernel] move all symlinks of kernel to `colossalai._C` (#1971)  2 years ago
Boyuan Yao  7c7921f71b  [autoparallel] add torch.nn.ReLU metainfo (#1868)  2 years ago
YuliangLiu0306  fea3cb661c  [autoparallel] support addmm in tracer and solver (#1961)  2 years ago
Jiarui Fang  f7e276fa71  [Gemini] add GeminiAdamOptimizer (#1960)  2 years ago
HELSON  7066dfbf82  [zero] fix memory leak for zero2 (#1955)  2 years ago
Jiarui Fang  52c6ad26e0  [ColoTensor] reconfig ColoInitContext, decouple default_pg and default_dist_spec. (#1953)  2 years ago
zbian  6877121377  updated flash attention api  2 years ago
Jiarui Fang  9f4fb3f28a  [ColoTensor] ColoInitContext initialize parameters in shard mode. (#1937)  2 years ago
HELSON  6e51d296f0  [zero] migrate zero1&2 (#1878)  2 years ago