Jiarui Fang
9587b080ba
[builder] use runtime builder for fused_optim ( #2189 )
2 years ago
Jiarui Fang
d42afd30f8
[builder] runtime adam and fused_optim builder ( #2184 )
2 years ago
Tongping Liu
ab54fed292
[hotfix] add kwargs for colo_addmm ( #2171 )
2 years ago
アマデウス
622f863291
[hotfix] Jit type hint #2161 ( #2164 )
2 years ago
Jiarui Fang
2827f41898
[Gemini] GeminiDPP convert to PyTorch Module. ( #2151 )
2 years ago
Jiarui Fang
bdef9dfdbe
[NFC] remove useless graph node code ( #2150 )
2 years ago
Jiarui Fang
9214d1fe28
[Gemini] chunk init using runtime visited param order ( #2115 )
2 years ago
HELSON
e7d3afc9cc
[optimizer] add div_scale for optimizers ( #2117 )
* [optimizer] add div_scale for optimizers
* [zero] use div_scale in zero optimizer
* fix testing error
2 years ago
Jiarui Fang
e5aa8333e4
[NFC] update chunk manager API ( #2119 )
2 years ago
Jiarui Fang
e99edfcb51
[NFC] polish comments for Chunk class ( #2116 )
2 years ago
HELSON
63fbba3c19
[zero] add L2 gradient clipping for ZeRO ( #2112 )
* [zero] add L2 gradient clipping
* [testing] add MlpModel
* [zero] add unit test for grad clipping
* fix atol
2 years ago
Jiarui Fang
1f99205827
[Gemini] remove static tracer ( #2083 )
2 years ago
Jiarui Fang
b3b89865e2
[Gemini] ParamOpHook -> ColoParamOpHook ( #2080 )
2 years ago
HELSON
e37f3db40c
[gemini] add arguments ( #2046 )
* [zero] fix testing parameters
* [gemini] add arguments
* add docstrings
2 years ago
Jiarui Fang
96134e7be3
[hotfix] add bert test for gemini fwd bwd ( #2035 )
2 years ago
Jiarui Fang
8daf1b4db1
[Gemini] patch for supporting torch.add_ function for ColoTensor ( #2003 )
2 years ago
Jiarui Fang
a2d3266648
[hotfix] make Gemini work for conv DNN ( #1998 )
2 years ago
Jiarui Fang
cc0ed7cf33
[Gemini] ZeROHookV2 -> GeminiZeROHook ( #1972 )
2 years ago
ver217
f8a7148dec
[kernel] move all symlinks of kernel to `colossalai._C` ( #1971 )
2 years ago
Jiarui Fang
f7e276fa71
[Gemini] add GeminiAdamOptimizer ( #1960 )
2 years ago
アマデウス
e52f9d9109
[tensorparallel] fixed tp layers ( #1938 )
2 years ago
Jiarui Fang
986f8cbaa7
[inference] overlap comm and compute in Linear1D_Row when stream_chunk_num > 1 ( #1876 )
2 years ago
Jiarui Fang
c2947dadf1
[inference] streaming Linear 1D Row inference ( #1874 )
2 years ago
zbian
653b0a620e
added skip_bias_add for non-tp linear
2 years ago
アマデウス
4268ae017b
[kernel] added jit warmup ( #1792 )
2 years ago
Jiarui Fang
cd5a0d56fa
[Gemini] make gemini usage simple ( #1821 )
2 years ago
Zihao
20e255d4e8
MemStatsCollectorStatic ( #1765 )
2 years ago
HELSON
c6a1a62636
[hotfix] fix zero's incompatibility with checkpoint in torch-1.12 ( #1786 )
* [hotfix] fix zero's incompatibility with checkpoint in torch-1.12
* [zero] add cpu shard init
* [zero] add tiny example test
* [colo_tensor] fix bugs for torch-1.11
2 years ago
kurisusnowdeng
0b8161fab8
updated tp layers
2 years ago
Sze-qq
23703c9dd6
[NFC] polish colossalai/nn/metric/_utils.py code style ( #1727 )
2 years ago
Ofey Chan
7e62af28a0
[NFC] polish accuracy_2d.py code style ( #1719 )
2 years ago
yuxuan-lou
2b49ca80a3
[NFC] polish colossalai/nn/lr_scheduler/linear.py code style ( #1716 )
2 years ago
shenggan
e1d780030d
[NFC] polish colossalai/nn/metric/accuracy_2p5d.py code style ( #1714 )
2 years ago
HELSON
1468e4bcfc
[zero] add constant placement policy ( #1705 )
* fixes a memory leak when a parameter is in fp16 during ZeroDDP init.
* bans chunk release in CUDA; a chunk may be released only when it is about to be offloaded.
* adds a constant placement policy, which lets users allocate a reserved caching memory space for parameters.
2 years ago
binmakeswell
5f41463a76
add optimizer README for tutorials ( #1707 )
2 years ago
Jiarui Fang
21962e1593
[embedding] rename FreqAwareEmbedding -> CachedEmbedding ( #1699 )
2 years ago
Jiarui Fang
363fc2861a
[embeddings] more detailed timer ( #1692 )
2 years ago
jim
e5ab6be72e
[hotfix] fix colotensor.type() raising NotImplementedError ( #1682 )
2 years ago
HELSON
b28991dd0a
[feature] A new ZeRO implementation ( #1644 )
2 years ago
Jiarui Fang
c638bec028
[embedding] polish async copy ( #1657 )
2 years ago
Jiarui Fang
988570e4a6
[embedding] add more detail profiling ( #1656 )
2 years ago
Jiarui Fang
e1f97fd2b8
[embedding] print profiling results ( #1654 )
2 years ago
Jiarui Fang
04443605a5
[embedding] non-blocking cpu-gpu copy ( #1647 )
2 years ago
CsRic
0767f67a0f
[embedding] isolate cache_op from forward ( #1645 )
Co-authored-by: ric <mkkt_bkkt@mail.ustc.edu.cn>
2 years ago
Jiarui Fang
c5d39215f6
Revert "[feature] new zero implementation ( #1623 )" ( #1643 )
This reverts commit 5be118f405.
2 years ago
HELSON
5be118f405
[feature] new zero implementation ( #1623 )
2 years ago
Jiarui Fang
e57df80325
[embeddings] cache option ( #1635 )
2 years ago
HELSON
a088022efc
[moe] fix moe bugs ( #1633 )
2 years ago
HELSON
f7f2248771
[moe] fix MoE bugs ( #1628 )
...
* remove forced FP32 modules
* correct no_shard-contexts' positions
2 years ago
Jiarui Fang
38c68b5b9a
[embedding] rollback for better FAW performance ( #1625 )
2 years ago