Commit Graph

1228 Commits (4af31d263dd12c9238607fa48e5fd0488cd8cf25)

Author SHA1 Message Date
Tongping Liu 8e22c38b89
[hotfix] Fixing the bug related to IPv6 support
2 years ago
YuliangLiu0306 4851f2d607
[autoparallel] update_getattr_handler (#2193)
2 years ago
Jiarui Fang 5682e6d346
[hotfix] correct cpu_optim runtime compilation (#2197)
2 years ago
HELSON 2458659919
[zero] fix error for BEiT models (#2169)
2 years ago
Jiarui Fang 355ffb386e
[builder] unified cpu_optim fused_optim interface (#2190)
2 years ago
Jiarui Fang 9587b080ba
[builder] use runtime builder for fused_optim (#2189)
2 years ago
Jiarui Fang bc0e271e71
[builder] use builder() for cpu adam and fused optim in setup.py (#2187)
2 years ago
Jiarui Fang d42afd30f8
[builder] runtime adam and fused_optim builder (#2184)
2 years ago
YuliangLiu0306 550f8f8905
[autoparallel] integrate_gpt_related_tests (#2134)
2 years ago
Ziyue Jiang 59e343328d
[Pipeline Middleware] Fix deadlock when num_microbatch=num_stage (#2156)
2 years ago
Tongping Liu ab54fed292
[hotfix] add kwargs for colo_addmm (#2171)
2 years ago
アマデウス 622f863291
[hotfix] Jit type hint #2161 (#2164)
2 years ago
Zihao 12e7bcd720
register meta func for rnn (#2159)
2 years ago
Boyuan Yao cfe2a9bd90
[autoparallel] memory estimation for shape consistency (#2144)
2 years ago
Jiarui Fang b87496a66b
[hotfix] fix auto policy of test_sharded_optim_v2 (#2157)
2 years ago
YuliangLiu0306 16335cb537
[hotfix] fix aten default bug (#2158)
2 years ago
HELSON a7d95b7024
[example] add zero1, zero2 example in GPT examples (#2146)
2 years ago
YuliangLiu0306 1cce6e36ca
[autoparallel] use metainfo in handler (#2149)
2 years ago
Jiarui Fang 2827f41898
[Gemini] convert GeminiDDP to PyTorch Module. (#2151)
2 years ago
Jiarui Fang bdef9dfdbe
[NFC] remove useless graph node code (#2150)
2 years ago
BlueRum b3f73ce1c8
[Gemini] Update coloinit_ctx to support meta_tensor (#2147)
2 years ago
Zihao a128eec9d5
register aten._convolution.default (#2137)
2 years ago
Jiarui Fang ee287620f0
[Gemini] revert ZeROInitCtx related tracer (#2138)
2 years ago
アマデウス 077a66dd81
updated attention kernel (#2133)
2 years ago
YuliangLiu0306 a3c6924deb
[autoparallel] process size nodes in runtime pass (#2130)
2 years ago
YuliangLiu0306 536560ccc0
[autoparallel] implement softmax handler (#2132)
2 years ago
Jiarui Fang c89c66a858
[Gemini] update API of the ChunkMemStatsCollector. (#2129)
2 years ago
Jiarui Fang 2938edf446
[Gemini] update the non model data record method in runtime memory tracer (#2128)
2 years ago
Jiarui Fang 8fac837679
[Gemini] update non model data calculation method (#2126)
2 years ago
Jiarui Fang 5efda69735
[Gemini] hotfix the unittest bugs (#2125)
2 years ago
Jiarui Fang 05bb28aacf
[Gemini] mapping of preop timestep and param (#2124)
2 years ago
YuliangLiu0306 cd0af9f7f6
[autoparallel] gpt2lp runtime test (#2113)
2 years ago
Jiarui Fang 9214d1fe28
[Gemini] chunk init using runtime visited param order (#2115)
2 years ago
HELSON e7d3afc9cc
[optimizer] add div_scale for optimizers (#2117)
2 years ago
Jiarui Fang e5aa8333e4
[NFC] update chunk manager API (#2119)
2 years ago
Jiarui Fang e99edfcb51
[NFC] polish comments for Chunk class (#2116)
2 years ago
Ziyue Jiang 09d69e1c25
[PP Middleware] Add bwd and step for PP middleware (#2111)
2 years ago
Jiarui Fang 8afc001f4f
[Gemini] chunk init use OrderedParamGenerator (#2110)
2 years ago
HELSON 63fbba3c19
[zero] add L2 gradient clipping for ZeRO (#2112)
2 years ago
Jiarui Fang 70a8556946
[gemini] get the param visited order during runtime (#2108)
2 years ago
Jiarui Fang 61f31c3cf0
[Gemini] NFC, polish search_chunk_configuration (#2107)
2 years ago
Jiarui Fang 8e14344ec9
[hotfix] fix a type in ColoInitContext (#2106)
2 years ago
Jiarui Fang 05545bfee9
[ColoTensor] throw error when ColoInitContext meets meta parameter. (#2105)
2 years ago
YuliangLiu0306 d87baa85d9
[autoparallel] support linear function bias addition (#2104)
2 years ago
YuliangLiu0306 0fecbb9e20
[autoparallel] support addbmm computation (#2102)
2 years ago
YuliangLiu0306 d3d4630495
[autoparallel] add sum handler (#2101)
2 years ago
Ziyue Jiang e4705ba4e2
[Pipeline Middleware] fix data race in Pipeline Scheduler for DAG (#2087)
2 years ago
YuliangLiu0306 b175e6d58e
[autoparallel] add bias addition function class (#2098)
2 years ago
YuliangLiu0306 3af7e65dea
[autoparallel] complete gpt related module search (#2097)
2 years ago
Jiarui Fang 85efb7ac2e
[Gemini] gemini use the runtime memory tracer (RMT) (#2099)
2 years ago
Super Daniel 2bf2d1cd3b
[fx] An experimental version of ColoTracer. (#2002)
2 years ago
Jiarui Fang 4b055351b0
[Gemini] make RuntimeMemTracer work correctly (#2096)
2 years ago
YuliangLiu0306 7f72eb0510
[autoparallel] add embedding handler (#2089)
2 years ago
Jiarui Fang 1fca5d79ea
[Gemini] remove GLOBAL_MODEL_DATA_TRACER (#2091)
2 years ago
Jiarui Fang 28e55c2530
[Gemini] remove GLOBAL_CUDA_MEM_INFO (#2090)
2 years ago
Jiarui Fang 25abae6d7f
[Gemini] use MemStats in Runtime Memory tracer (#2088)
2 years ago
Jiarui Fang 33f4412102
[Gemini] use MemStats to store the tracing data. Separate it from Collector. (#2084)
2 years ago
Jiarui Fang 1f99205827
[Gemini] remove static tracer (#2083)
2 years ago
YuliangLiu0306 0e9db368ef
[autoparallel] add tensor constructor handler (#2082)
2 years ago
YuliangLiu0306 cdf537a648
[autoparallel] add non_split linear strategy (#2078)
2 years ago
Boyuan Yao cf0268da93
[autoparallel] Add F.conv metainfo (#2069)
2 years ago
YuliangLiu0306 f123476666
[autoparallel] complete gpt block searching (#2065)
2 years ago
Ziyue Jiang 597cdd3006
[Pipeline Middleware] Adapt scheduler for Topo (#2066)
2 years ago
Jiarui Fang b3b89865e2
[Gemini] ParamOpHook -> ColoParamOpHook (#2080)
2 years ago
YuliangLiu0306 677e1e20d4
[device] update flatten device mesh usage (#2079)
2 years ago
Jiarui Fang a7adad9ccb
[Gemini] rename hooks related to runtime mem tracer (#2076)
2 years ago
Jiarui Fang 223332ff7e
[Gemini] rename ParamTracerWrapper -> RuntimeMemTracer (#2073)
2 years ago
Jiarui Fang 9f828ef36f
[Gemini] remove not used MemtracerWrapper (#2072)
2 years ago
Boyuan Yao 616da17fab
[autoparallel] add binary elementwise metainfo for auto parallel (#2058)
2 years ago
Boyuan Yao 4b40fbd743
[autoparallel] fix forward memory calculation (#2062)
2 years ago
Ziyue Jiang 44ea461890
[Pipeline] Add Topo Class (#2059)
2 years ago
YuliangLiu0306 e4293e5077
[hotfix] update test for latest version (#2060)
2 years ago
Zihao 38ea4ba1bd
[Gemini] fix grad unreleased issue and param recovery issue (#2052)
2 years ago
YuliangLiu0306 1c1fe44305
[autoparallel] adapt solver with self attention (#2037)
2 years ago
Frank Lee ea74a3b9cc
[cli] updated installation check with more information (#2050)
2 years ago
HELSON f6178728a0
[gemini] fix init bugs for modules (#2047)
2 years ago
Frank Lee 81e0da7fa8
[setup] supported conda-installed torch (#2048)
2 years ago
HELSON e37f3db40c
[gemini] add arguments (#2046)
2 years ago
Zihao 6a9158f1fa
[Gemini] free and allocate cuda memory by tensor.storage, add grad hook (#2040)
2 years ago
Jiarui Fang 31c644027b
[hotfix] hotfix Gemini for no leaf modules bug (#2043)
2 years ago
HELSON a1ce02d740
[zero] test gradient accumulation (#1964)
2 years ago
Ziyue Jiang b0936e4a44
[rpc] split with DAG (#2028)
2 years ago
Jiarui Fang 96134e7be3
[hotfix] add bert test for gemini fwd bwd (#2035)
2 years ago
YuliangLiu0306 0dbcd4a6f5
[autoparallel] add split handler (#2032)
2 years ago
Jiarui Fang 28aa9a4294
[Gemini] more rigorous unit tests for run_fwd_bwd (#2034)
2 years ago
YuliangLiu0306 81330b0352
[autoparallel] add experimental permute handler (#2029)
2 years ago
Zihao 95c4532fff
[Gemini] paramWrapper paramTracerHook unit test (#2030)
2 years ago
Jiarui Fang 8daf1b4db1
[Gemini] patch for supporting torch.add_ function for ColoTensor (#2003)
2 years ago
Ziyue Jiang 632753abbc
[fx] Split partition with DAG information (#2025)
2 years ago
YuliangLiu0306 ea0f6b8df9
[autoparallel] add runtime pass and numerical test for view handler (#2018)
2 years ago
Zihao a719b89a41
[gemini] param_trace_hook (#2020)
2 years ago
Jiarui Fang 0b0d8f9e17
[hotfix] revert bug PRs (#2016)
2 years ago
Zihao aba3db464d
[Gemini] ParamMemHook (#2008)
2 years ago
Zihao 0160a62a3c
[Gemini] param_tracer_wrapper and test case (#2009)
2 years ago
YuliangLiu0306 1438993113
[autoparallel] add experimental view handler (#2011)
2 years ago
Genghan Zhang d655eea515
[autoparallel] mix gather (#1977)
2 years ago
Frank Lee 2bab6f512c
[release] release v0.1.11rc4 (#2007)
2 years ago
Boyuan Yao 6cd784ffee
[autoparallel] Add metainfo support for F.linear (#1987)
2 years ago
Super Daniel 2edbef13cc
[fx] add more meta_registry for MetaTensor execution. (#2000)
2 years ago
Jiarui Fang a2d3266648
[hotfix] make Gemini work for conv DNN (#1998)
2 years ago
YuliangLiu0306 155891113e
[autoparallel] use pytree map style to process data (#1989)
2 years ago
YuliangLiu0306 35e6b9ec82
[autoparallel] adapt handlers with attention block (#1990)
2 years ago
YuliangLiu0306 05020e50d0
[autoparallel] support more flexible data type (#1967)
2 years ago
Boyuan Yao c26f21d365
[autoparallel] add pooling metainfo (#1968)
2 years ago
Jiarui Fang 3712ac7f90
[Gemini] add bert for MemtracerWrapper unittests (#1982)
2 years ago
Jiarui Fang e481489aa6
[Gemini] MemtracerWrapper unittests (#1981)
2 years ago
Jiarui Fang 31922110ad
[Gemini] memory trace hook (#1978)
2 years ago
Jiarui Fang 0529fcde06
[Gemini] independent runtime tracer (#1974)
2 years ago
YuliangLiu0306 0da1d00399
[autoparallel] support distributed dataloader option (#1906)
2 years ago
Genghan Zhang 6630d45546
[autoparallel] Add alpha beta (#1973)
2 years ago
Jiarui Fang cc0ed7cf33
[Gemini] ZeROHookV2 -> GeminiZeROHook (#1972)
2 years ago
ver217 f8a7148dec
[kernel] move all symlinks of kernel to `colossalai._C` (#1971)
2 years ago
Jiarui Fang 7e24b9b9ee
[Gemini] clean no used MemTraceOp (#1970)
2 years ago
Boyuan Yao 7c7921f71b
[autoparallel] add torch.nn.ReLU metainfo (#1868)
2 years ago
Jiarui Fang 8c66a1d0aa
[polish] remove useless file _mem_tracer_hook.py (#1963)
2 years ago
Jiarui Fang c4739a725a
[Gemini] polish memstats collector (#1962)
2 years ago
YuliangLiu0306 fea3cb661c
[autoparallel] support addmm in tracer and solver (#1961)
2 years ago
Jiarui Fang f7e276fa71
[Gemini] add GeminiAdamOptimizer (#1960)
2 years ago
HELSON 7066dfbf82
[zero] fix memory leak for zero2 (#1955)
2 years ago
Jiarui Fang 52c6ad26e0
[ColoTensor] reconfig ColoInitContext, decouple default_pg and default_dist_spec. (#1953)
2 years ago
zbian 598d456d0e fixed logger
2 years ago
zbian 6877121377 updated flash attention api
2 years ago
YuliangLiu0306 36c0f3ea5b
[autoparallel] remove redundant comm node (#1893)
2 years ago
アマデウス e52f9d9109
[tensorparallel] fixed tp layers (#1938)
2 years ago
Jiarui Fang 9f4fb3f28a
[ColoTensor] ColoInitContext initialize parameters in shard mode. (#1937)
2 years ago
Boyuan Yao d5c5bc219e
[SC] add GPT example for auto checkpoint (#1889)
2 years ago
Junming Wu 14a0b18305
[NFC] polish colossalai/amp/naive_amp/__init__.py code style (#1905)
2 years ago
HELSON 6e51d296f0
[zero] migrate zero1&2 (#1878)
2 years ago
Super Daniel cc55ff0aa4
[autoparallel] user-friendly API for CheckpointSolver. (#1879)
2 years ago
Super Daniel 448248b27c
[fx] metainfo_trace as an API. (#1873)
2 years ago
Jiarui Fang 986f8cbaa7
[inference] overlap comm and compute in Linear1D_Row when stream_chunk_num > 1 (#1876)
2 years ago
YuliangLiu0306 1b494ad73c
[autoparallel] fix linear logical convert issue (#1857)
2 years ago
Jiarui Fang c2947dadf1
[inference] streaming Linear 1D Row inference (#1874)
2 years ago
Frank Lee e6ec99d389
[utils] fixed lazy init context (#1867)
2 years ago
zbian 653b0a620e added skip_bias_add for non-tp linear
2 years ago
LuGY 94329fc139
[NFC] polish colossalai/amp/apex_amp/__init__.py code style (#1853)
2 years ago
zbian 1559a09fb7 [NFC] polish amp.naive_amp.grad_scaler code style
2 years ago
HELSON 72c9448920 [NFC] polish colossalai/auto_parallel/tensor_shard/deprecated/op_handler/operator_handler.py code style (#1845)
2 years ago
Genghan Zhang b25030cc07 [NFC] polish ./colossalai/amp/torch_amp/__init__.py code style (#1836)
2 years ago
Sze-qq 95ac4f88ea [NFC] polish colossalai/auto_parallel/tensor_shard/deprecated/op_handler/conv_handler.py code style (#1829)
2 years ago
Ziyue Jiang 5da03c936d [NFC] polish colossalai/amp/torch_amp/_grad_scaler.py code style (#1823)
2 years ago
Fazzie-Maqianli 399f84d8f6 [NFC] polish colossalai/amp/naive_amp/_fp16_optimizer.py code style (#1819)
2 years ago
CsRic 9623ec1b02 [NFC] polish colossalai/amp/naive_amp/_utils.py code style (#1816)
2 years ago
binmakeswell 3c3714fc2a [NFC] polish strategies_constructor.py code style (#1806)
2 years ago
Jiarui Fang 3ce4463fe6
[utils] remove lazy_memory_allocate from ColoInitContext (#1844)
2 years ago
Jiarui Fang fba34efb5a
version to 0.1.11rc2 (#1832)
2 years ago
YuliangLiu0306 49216d7ab1
[autoparallel] fix bugs caused by negative dim key (#1808)
2 years ago
アマデウス 4268ae017b
[kernel] added jit warmup (#1792)
2 years ago
YuliangLiu0306 f6032ddb17
[autoparallel] fix bias addition module (#1800)
2 years ago
Jiarui Fang cd5a0d56fa
[Gemini] make gemini usage simple (#1821)
2 years ago
ver217 99870726b1
[CheckpointIO] a uniform checkpoint I/O module (#1689)
2 years ago
Boyuan Yao 629172b319
[autoparallel] add batch norm metainfo (#1815)
2 years ago
Super Daniel 441d584e4a
[fx] add a symbolic_trace api. (#1812)
2 years ago
xcnick e0da01ea71
[hotfix] fix build error when torch version >= 1.13 (#1803)
2 years ago
oahzxl 9639ea88fc
[kernel] more flexible flashatt interface (#1804)
2 years ago
Zihao 20e255d4e8
MemStatsCollectorStatic (#1765)
2 years ago
Boyuan Yao 327d07c44a
[autoparallel] add conv metainfo class for auto parallel (#1796)
2 years ago
oahzxl 501a9e9cd2
[hotfix] polish flash attention (#1802)
2 years ago
Jiarui Fang 218c75fd9d
[NFC] polish type hint for shape consistency (#1801)
2 years ago
Jiarui Fang c248800359
[kernel] skip tests of flash_attn and triton when they are not available (#1798)
2 years ago
YuliangLiu0306 e34e850a4c
[autoparallel] add essential CommActions for broadcast operands (#1793)
2 years ago
Boyuan Yao 05ce3d369f
[fx] Add linear metainfo class for auto parallel (#1783)
2 years ago
Super Daniel e8a9bebc87
[autoparallel] refactor and add rotorc. (#1789)
2 years ago
YuliangLiu0306 2c4c7b3618
[autoparallel] add getattr handler (#1767)
2 years ago
HELSON c6a1a62636
[hotfix] fix zero's incompatibility with checkpoint in torch-1.12 (#1786)
2 years ago
kurisusnowdeng 0b8161fab8 updated tp layers
2 years ago
Jiarui Fang cb5a587e9a
[hotfix] polish chunk import (#1787)
2 years ago
YuliangLiu0306 e859380bf7
[fx] support module with bias addition (#1780)
2 years ago
Frank Lee f3f19a5c47
[autoparallel] added matmul handler (#1763)
2 years ago
Ziyue Jiang 4df0194976
[Pipeline] Adapt to Pipelinable OPT (#1782)
2 years ago
YuliangLiu0306 27de252334
[autoparallel] fix conv handler numerical test (#1771)
2 years ago
Super Daniel 1e88811c7a
[autoparallel] move ckpt solvers to autoparallel folder / refactor code (#1764)
2 years ago
Jiarui Fang f34dab4270
[compatibility] ChunkMgr import error (#1772)
2 years ago
YuliangLiu0306 b0f7c8bde8
[autoparallel] update CommSpec to CommActions (#1768)
2 years ago
YuliangLiu0306 b4cc59b61e
[autoparallel] add numerical test for node strategies (#1760)
2 years ago
oahzxl 25952b67d7
[feat] add flash attention (#1762)
2 years ago
Super Daniel 0584654c79
[fx] refactor memory utils and extend shard utils. (#1754)
2 years ago
Ziyue Jiang 63f250bbd4
fix file name (#1759)
2 years ago
YuliangLiu0306 314d8c497f
[autoparallel] refactor the runtime apply pass and add docstring to passes (#1757)
2 years ago
Frank Lee f9a613d660
[autoparallel] added binary elementwise node handler (#1758)
2 years ago
YuliangLiu0306 d2fc067231
[autoparallel] fix param hook issue in transform pass (#1755)
2 years ago
Frank Lee 262652c8bc
[autoparallel] added addbmm handler (#1751)
2 years ago
YuliangLiu0306 980ed21723
[autoparallel] shard param and buffer as expected (#1753)
2 years ago
YuliangLiu0306 cdb7d5e7d2
[hotfix] autoparallel unit test (#1752)
2 years ago
YuliangLiu0306 a4ce180e85
[autoparallel] add sequential order to communication actions (#1735)
2 years ago
Frank Lee 474111ecb5
[autoparallel] fixed wrong sharding strategy in conv handler (#1747)
2 years ago
Frank Lee 8b8937d901
[autoparallel] fixed wrongly generated strategy for dot op (#1746)
2 years ago
Frank Lee 993b8875b6
[autoparallel] handled illegal sharding strategy in shape consistency (#1744)
2 years ago
Frank Lee 88a79814fb
[autoparallel] handled illegal strategy in node handler (#1743)
2 years ago
Super Daniel 30874f1692
[fx/profiler] debug the fx.profiler / add an example test script for fx.profiler (#1730)
2 years ago
Frank Lee eee84908d4
[autoparallel] handled illegal sharding strategy (#1728)
2 years ago
Sze-qq 23703c9dd6 [NFC] polish colossalai/nn/metric/_utils.py code style (#1727)
2 years ago
Ofey Chan 7e62af28a0 [NFC] polish accuracy_2d.py code style (#1719)
2 years ago
LuGY 730f88f8e1 [NFC] polish _checkpoint_hook.py code style (#1722)
2 years ago
CsRic ea961d8fd1 [NFC] polish colossalai/zero/sharded_param/__init__.py code style (#1717)
2 years ago
yuxuan-lou 2b49ca80a3 [NFC] polish colossalai/nn/lr_scheduler/linear.py code style (#1716)
2 years ago
shenggan e1d780030d [NFC] polish colossalai/nn/metric/accuracy_2p5d.py code style (#1714)
2 years ago
YuliangLiu0306 d373e67b99
[hotfix] resharding cost issue (#1742)
2 years ago
Jiarui Fang 24e84eba60
upgrade version to 0.1.11rc1 (#1739)
2 years ago
Frank Lee d2e0e39c9d
[release] update to v0.1.11 (#1736)
2 years ago