Liang Bowen
7eb87f516d
flake8 style ( #352 )
2022-03-11 15:50:28 +08:00
Xu Kai
54ee8d1254
Fix/format colossalai/engine/paramhooks/ ( #350 )
2022-03-11 15:50:28 +08:00
Maruyama_Aya
e83970e3dc
fix format ColossalAI/colossalai/context/process_group_initializer
2022-03-11 15:50:28 +08:00
yuxuan-lou
3b88eb2259
Flake8 code restyle
2022-03-11 15:50:28 +08:00
xyupeng
af801cb4df
fix format setup.py ( #343 )
2022-03-11 15:50:28 +08:00
xuqifan897
148207048e
Qifan formatted file ColossalAI/colossalai/nn/layer/parallel_1d/layers.py ( #342 )
2022-03-11 15:50:28 +08:00
Cautiousss
3a51d909af
fix format ( #332 )
...
Co-authored-by: 何晓昕 <cautious@r-205-106-25-172.comp.nus.edu.sg>
2022-03-11 15:50:28 +08:00
DouJS
cbb6436ff0
fix format for dir-[parallel_3d] ( #333 )
2022-03-11 15:50:28 +08:00
ExtremeViscent
eaac03ae1d
[format] fixed format for kernel/cuda_native codes ( #335 )
2022-03-11 15:50:28 +08:00
Jiarui Fang
00670c870e
[zero] bucketized tensor cpu gpu copy ( #368 )
2022-03-11 15:50:28 +08:00
Jiarui Fang
44e4891f57
[zero] able to place params on cpu after zero init context ( #365 )
...
* place params on cpu after zero init context
* polish code
2022-03-11 15:50:28 +08:00
ver217
b66f3b994c
increase the timeout limit in CI temporarily
2022-03-11 15:50:28 +08:00
ver217
52d055119b
increase the timeout limit in CI temporarily
2022-03-11 15:50:28 +08:00
ver217
253e54d98a
fix grad shape
2022-03-11 15:50:28 +08:00
Jiarui Fang
ea2872073f
[zero] global model data memory tracer ( #360 )
2022-03-11 15:50:28 +08:00
Jiarui Fang
cb34cd384d
[test] polish zero related unit tests ( #351 )
2022-03-11 15:50:28 +08:00
HELSON
534e0bb118
Fixed import bug for no-tensorboard environment ( #354 )
2022-03-11 15:50:28 +08:00
HELSON
c57e089824
[profile] added example for ProfilerContext ( #349 )
2022-03-11 15:50:28 +08:00
ver217
532ae79cb0
add test sharded optim with cpu adam ( #347 )
2022-03-11 15:50:28 +08:00
Jiarui Fang
10e2826426
move async memory to an individual directory ( #345 )
2022-03-11 15:50:28 +08:00
HELSON
425bb0df3f
Added Profiler Context to manage all profilers ( #340 )
2022-03-11 15:50:28 +08:00
ver217
d0ae0f2215
[zero] update sharded optim v2 ( #334 )
2022-03-11 15:50:28 +08:00
ver217
2b8cddd40e
skip bert in test engine
2022-03-11 15:50:28 +08:00
ver217
d41a9f12c6
install transformers in CI
2022-03-11 15:50:28 +08:00
ver217
f5f0ad266e
fix bert unit test
2022-03-11 15:50:28 +08:00
jiaruifang
5663616921
polish code
2022-03-11 15:50:28 +08:00
jiaruifang
d271f2596b
polish engine unit test
2022-03-11 15:50:28 +08:00
jiaruifang
354c0f9047
polish code
2022-03-11 15:50:28 +08:00
jiaruifang
4d94cd513e
adapting bert unit test interface
2022-03-11 15:50:28 +08:00
jiaruifang
7977422aeb
add bert for unit test; sharded model is not able to pass the bert case
2022-03-11 15:50:28 +08:00
Frank Lee
3d5d64bd10
refactored grad scaler ( #338 )
2022-03-11 15:50:28 +08:00
Frank Lee
6a3188167c
set criterion as optional in colossalai initialize ( #336 )
2022-03-11 15:50:28 +08:00
Jie Zhu
3213554cc2
[profiler] add adaptive sampling to memory profiler ( #330 )
...
* fix merge conflict
modify unit test
remove unnecessary log info
reformat file
* remove unused module
* remove unnecessary sync function
* change doc string style from Google to Sphinx
2022-03-11 15:50:28 +08:00
ver217
1388671699
[zero] Update sharded model v2 using sharded param v2 ( #323 )
2022-03-11 15:50:28 +08:00
jiaruifang
799d105bb4
using pytest parametrize
2022-03-11 15:50:28 +08:00
jiaruifang
dec24561cf
show pytest parametrize
2022-03-11 15:50:28 +08:00
Jiarui Fang
11bddb6e55
[zero] update zero context init with the updated test utils ( #327 )
2022-03-11 15:50:28 +08:00
Frank Lee
6268446b81
[test] refactored testing components ( #324 )
2022-03-11 15:50:28 +08:00
HELSON
4f26fabe4f
fixed strings in profiler outputs ( #325 )
2022-03-11 15:50:28 +08:00
Jiarui Fang
de0468c7a8
[zero] zero init context ( #321 )
...
* add zero init context
* add more flags for zero init context
fix bug of repeatedly converting param to ShardedParamV2
* polish code
2022-03-11 15:50:28 +08:00
1SAA
73bff11288
Added profiler communication operations
...
Fixed bug for learning rate scheduler
2022-03-11 15:50:28 +08:00
binmakeswell
d275b98b7d
add badge and contributor list
2022-03-11 15:50:28 +08:00
LuGY
a3269de5c9
[zero] cpu adam kernel ( #288 )
...
* Added CPU Adam
* finished the cpu adam
* updated the license
* delete useless parameters, removed resnet
* modified the method of cpu adam unittest
* deleted some useless codes
* removed useless codes
Co-authored-by: ver217 <lhx0217@gmail.com>
Co-authored-by: Frank Lee <somerlee.9@gmail.com>
Co-authored-by: jiaruifang <fangjiarui123@gmail.com>
2022-03-11 15:50:28 +08:00
Jiarui Fang
90d3aef62c
[zero] yet an improved sharded param ( #311 )
2022-03-11 15:50:28 +08:00
Jiarui Fang
c9e7d9582d
[zero] polish shard strategy ( #310 )
...
* init shard param from shape tuple
* add more unit tests for shard param
* add set_payload method for ShardedParam
* [zero] add sharded tensor class
* polish code
* add shard strategy
* move shard and gather logic to shard strategy from shard tensor.
* polish code
2022-03-11 15:50:28 +08:00
ver217
3092317b80
polish code
2022-03-11 15:50:28 +08:00
ver217
36f9a74ab2
fix sharded param hook and unit test
2022-03-11 15:50:28 +08:00
ver217
001ca624dd
impl shard optim v2 and add unit test
2022-03-11 15:50:28 +08:00
Jiarui Fang
74f77e314b
[zero] a shard strategy in granularity of tensor ( #307 )
2022-03-11 15:50:28 +08:00
Jiarui Fang
80364c7686
[zero] sharded tensor ( #305 )
...
* init shard param from shape tuple
* add more unit tests for shard param
* add set_payload method for ShardedParam
* [zero] add sharded tensor class
* polish code
2022-03-11 15:50:28 +08:00