Yuer867
4a0f8c2c50
fix format parallel_2p5d ( #357 )
3 years ago
Liang Bowen
7eb87f516d
flake8 style ( #352 )
3 years ago
Xu Kai
54ee8d1254
Fix/format colossalai/engine/paramhooks/ ( #350 )
3 years ago
Maruyama_Aya
e83970e3dc
fix format ColossalAI/colossalai/context/process_group_initializer
3 years ago
yuxuan-lou
3b88eb2259
Flake8 code restyle
3 years ago
xyupeng
af801cb4df
fix format setup.py ( #343 )
3 years ago
xuqifan897
148207048e
Qifan formatted file ColossalAI/colossalai/nn/layer/parallel_1d/layers.py ( #342 )
3 years ago
Cautiousss
3a51d909af
fix format ( #332 )
...
Co-authored-by: 何晓昕 <cautious@r-205-106-25-172.comp.nus.edu.sg>
3 years ago
DouJS
cbb6436ff0
fix format for dir-[parallel_3d] ( #333 )
3 years ago
ExtremeViscent
eaac03ae1d
[format] fixed format for kernel/cuda_native code ( #335 )
3 years ago
Jiarui Fang
00670c870e
[zero] bucketized tensor cpu gpu copy ( #368 )
3 years ago
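For context on the bucketized copy commit above, the general technique is to batch many small CPU tensors into one large flat transfer so each copy amortizes launch overhead. A minimal sketch in PyTorch follows; the function names and bucket size are assumptions for illustration, not the commit's actual API.

```python
import torch

def bucketized_copy_to_gpu(cpu_tensors, bucket_numel=2 ** 20, device="cuda"):
    """Copy many small CPU tensors to the GPU in large flat buckets."""
    gpu_tensors, bucket, numel_in_bucket = [], [], 0
    for t in cpu_tensors:
        bucket.append(t)
        numel_in_bucket += t.numel()
        if numel_in_bucket >= bucket_numel:
            gpu_tensors += _flush(bucket, device)
            bucket, numel_in_bucket = [], 0
    if bucket:
        gpu_tensors += _flush(bucket, device)
    return gpu_tensors

def _flush(bucket, device):
    # One large transfer instead of many small ones.
    flat = torch.cat([t.reshape(-1) for t in bucket]).to(device, non_blocking=True)
    out, offset = [], 0
    for t in bucket:
        out.append(flat[offset:offset + t.numel()].view_as(t))
        offset += t.numel()
    return out
```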
Jiarui Fang
44e4891f57
[zero] able to place params on cpu after zero init context ( #365 )
...
* place params on cpu after zero init context
* polish code
3 years ago
ver217
b66f3b994c
increase the timeout limit in CI temporarily
3 years ago
ver217
52d055119b
increase the timeout limit in CI temporarily
3 years ago
ver217
253e54d98a
fix grad shape
3 years ago
Jiarui Fang
ea2872073f
[zero] global model data memory tracer ( #360 )
3 years ago
Jiarui Fang
cb34cd384d
[test] polish zero-related unit tests ( #351 )
3 years ago
HELSON
534e0bb118
Fixed import bug for no-tensorboard environment ( #354 )
3 years ago
HELSON
c57e089824
[profile] added example for ProfilerContext ( #349 )
3 years ago
ver217
532ae79cb0
add test sharded optim with cpu adam ( #347 )
3 years ago
Jiarui Fang
10e2826426
move async memory to an individual directory ( #345 )
3 years ago
HELSON
425bb0df3f
Added Profiler Context to manage all profilers ( #340 )
3 years ago
ver217
d0ae0f2215
[zero] update sharded optim v2 ( #334 )
3 years ago
ver217
2b8cddd40e
skip bert in test engine
3 years ago
ver217
d41a9f12c6
install transformers in CI
3 years ago
ver217
f5f0ad266e
fix bert unit test
3 years ago
jiaruifang
5663616921
polish code
3 years ago
jiaruifang
d271f2596b
polish engine unit test
3 years ago
jiaruifang
354c0f9047
polish code
3 years ago
jiaruifang
4d94cd513e
adapting bert unit test interface
3 years ago
jiaruifang
7977422aeb
add bert unit test; sharded model is not able to pass the bert case
3 years ago
Frank Lee
3d5d64bd10
refactored grad scaler ( #338 )
3 years ago
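For context on the grad scaler refactor above, dynamic loss scaling is the standard technique for mixed-precision training; below is the generic usage pattern as exposed by PyTorch's torch.cuda.amp.GradScaler, shown only as a reference point for the concept, not as ColossalAI's own scaler interface.

```python
import torch

model = torch.nn.Linear(16, 16).cuda()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
scaler = torch.cuda.amp.GradScaler()  # scales the loss to avoid fp16 gradient underflow

for _ in range(3):
    optimizer.zero_grad()
    with torch.cuda.amp.autocast():
        loss = model(torch.randn(4, 16, device="cuda")).sum()
    scaler.scale(loss).backward()   # backprop on the scaled loss
    scaler.step(optimizer)          # unscales grads; skips the step if inf/nan is found
    scaler.update()                 # adjusts the scale factor for the next iteration
```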
Frank Lee
6a3188167c
set criterion as optional in colossalai initialize ( #336 )
3 years ago
Jie Zhu
3213554cc2
[profiler] add adaptive sampling to memory profiler ( #330 )
...
* fix merge conflict
modify unit test
remove unnecessary log info
reformat file
* remove unused module
* remove unnecessary sync function
* change doc string style from Google to Sphinx
3 years ago
ver217
1388671699
[zero] Update sharded model v2 using sharded param v2 ( #323 )
3 years ago
jiaruifang
799d105bb4
using pytest parametrize
3 years ago
jiaruifang
dec24561cf
show pytest parametrize
3 years ago
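The two commits above adopt pytest's parametrize decorator, which generates one independent test case per argument value. A minimal, generic example (the parameter name and values here are made up for illustration):

```python
import pytest

# Each value in the list produces a separate test case in the report.
@pytest.mark.parametrize("world_size", [1, 2, 4])
def test_runs_for_each_world_size(world_size):
    assert world_size >= 1
```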
Jiarui Fang
11bddb6e55
[zero] update zero context init with the updated test utils ( #327 )
3 years ago
Frank Lee
6268446b81
[test] refactored testing components ( #324 )
3 years ago
HELSON
4f26fabe4f
fixed strings in profiler outputs ( #325 )
3 years ago
Jiarui Fang
de0468c7a8
[zero] zero init context ( #321 )
...
* add zero init context
* add more flags for zero init context
fix bug of repeatedly converting param to ShardedParamV2
* polish code
3 years ago
1SAA
73bff11288
Added profiler communication operations
...
Fixed bug for learning rate scheduler
3 years ago
binmakeswell
d275b98b7d
add badge and contributor list
3 years ago
LuGY
a3269de5c9
[zero] cpu adam kernel ( #288 )
...
* Added CPU Adam
* finished the cpu adam
* updated the license
* delete useless parameters, removed resnet
* modified the method of cpu adam unittest
* deleted some useless code
* removed useless code
Co-authored-by: ver217 <lhx0217@gmail.com>
Co-authored-by: Frank Lee <somerlee.9@gmail.com>
Co-authored-by: jiaruifang <fangjiarui123@gmail.com>
3 years ago
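As background for the CPU Adam commit above, here is a plain PyTorch sketch of one textbook Adam update performed on CPU tensors; it is illustrative and unfused, not the optimized kernel the commit adds, and the hyperparameter defaults are the standard ones.

```python
import torch

def cpu_adam_step(param, grad, exp_avg, exp_avg_sq, step,
                  lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One textbook Adam update on CPU tensors (illustrative, unfused)."""
    exp_avg.mul_(beta1).add_(grad, alpha=1 - beta1)               # first moment
    exp_avg_sq.mul_(beta2).addcmul_(grad, grad, value=1 - beta2)  # second moment
    bias_corr1 = 1 - beta1 ** step
    bias_corr2 = 1 - beta2 ** step
    denom = (exp_avg_sq / bias_corr2).sqrt_().add_(eps)
    param.addcdiv_(exp_avg, denom, value=-lr / bias_corr1)
    return param
```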
Jiarui Fang
90d3aef62c
[zero] yet another improved sharded param ( #311 )
3 years ago
Jiarui Fang
c9e7d9582d
[zero] polish shard strategy ( #310 )
...
* init shard param from shape tuple
* add more unit tests for shard param
* add set_payload method for ShardedParam
* [zero] add sharded tensor class
* polish code
* add shard strategy
* move shard and gather logic to shard strategy from shard tensor.
* polish code
3 years ago
ver217
3092317b80
polish code
3 years ago
ver217
36f9a74ab2
fix sharded param hook and unit test
3 years ago
ver217
001ca624dd
impl shard optim v2 and add unit test
3 years ago
Jiarui Fang
74f77e314b
[zero] a shard strategy at tensor granularity ( #307 )
3 years ago