Jiarui Fang
7db3ccc79b
[hotfix] remove duplicated param register to stateful tensor manager ( #728 )
3 years ago
binmakeswell
600e769a42
add video ( #732 )
3 years ago
Frank Lee
a5c3f072f6
[bug] removed zero installation requirements ( #731 )
3 years ago
HELSON
b9b469ea50
[moe] add checkpoint for moe zero test ( #729 )
3 years ago
Frank Lee
6f7d1362c9
[doc] removed outdated installation command ( #730 )
3 years ago
FrankLeeeee
e88a498c9c
[test] removed trivial outdated test
3 years ago
FrankLeeeee
62b4ce7326
[test] added missing decorators to model checkpointing tests
3 years ago
Frank Lee
1cb7bdad3b
[util] fixed communication API depth with PyTorch 1.9 ( #721 )
3 years ago
Frank Lee
2412429d54
[util] fixed activation checkpointing on torch 1.9 ( #719 )
3 years ago
Frank Lee
04ff5ea546
[utils] support detection of number of processes on current node ( #723 )
3 years ago
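The utility in #723 above detects the number of processes on the current node. As a hedged illustration of one common approach (reading environment variables that distributed launchers conventionally set; the function name and fallback behavior here are assumptions, not the API the commit added):

```python
import os

def local_process_count(default: int = 1) -> int:
    """Best-effort count of worker processes on this node.

    Checks environment variables commonly set by distributed launchers
    (torchrun sets LOCAL_WORLD_SIZE; SLURM sets SLURM_NTASKS_PER_NODE).
    Falls back to `default` when neither is present. Illustrative sketch
    only -- not the actual utility added in #723.
    """
    for var in ("LOCAL_WORLD_SIZE", "SLURM_NTASKS_PER_NODE"):
        value = os.environ.get(var)
        if value is not None and value.isdigit():
            return int(value)
    return default
```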
Jiarui Fang
4d90a7b513
[refactor] zero directory ( #724 )
3 years ago
Frank Lee
20ab1f5520
[bug] fixed broken test_found_inf ( #725 )
3 years ago
Jiarui Fang
193dc8dacb
[refactor] refactor the memory utils ( #715 )
3 years ago
HELSON
dbd96fe90a
[zero] check whether gradients have inf and nan in gpu ( #712 )
3 years ago
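Commit #712 above moves the gradient inf/nan check onto the GPU. As a hedged illustration of the check itself (a simplified CPU sketch over plain floats; the real implementation operates on CUDA tensors without a host round-trip, and this helper name is made up):

```python
import math

def has_inf_or_nan(grads) -> bool:
    """Return True if any gradient value is inf or NaN.

    Illustrative CPU sketch of the check that #712 performs on the GPU.
    math.isfinite is False for both inf and NaN, so one pass covers both.
    """
    return any(not math.isfinite(g) for g in grads)
```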
ver217
715b86eadd
[hotfix] fix stm cuda model data size ( #710 )
3 years ago
LuGY
140263a394
[hotfix] fixed bugs of assigning grad states to non leaf nodes ( #711 )
...
* fixed bugs of assigning grad states to non leaf nodes
* use detach()
3 years ago
Frank Lee
eda30a058e
[compatibility] fixed tensor parallel compatibility with torch 1.9 ( #700 )
3 years ago
HELSON
a9b8300d54
[zero] improve adaptability for not-shard parameters ( #708 )
...
* adapt post grad hooks for not-shard parameters
* adapt optimizer for not-shard parameters
* offload gradients for not-replicated parameters
3 years ago
ver217
ab8c6b4a0e
[zero] refactor memstats collector ( #706 )
...
* refactor memstats collector
* fix disposable
* polish code
3 years ago
アマデウス
3fc8a204dc
Corrected 3d vocab parallel embedding ( #707 )
3 years ago
HELSON
ee112fe1da
[zero] adapt zero hooks for unsharded module ( #699 )
3 years ago
binmakeswell
896ade15d6
add PaLM link ( #704 ) ( #705 )
3 years ago
binmakeswell
270157e9e7
add PaLM link ( #704 )
...
* add PaLM link
3 years ago
ver217
3c9cd5bb5e
[zero] stateful tensor manager ( #687 )
...
* [WIP] stateful tensor manager
* add eviction strategy
* polish code
* polish code
* polish comment
* add unit test
* fix sampler bug
* polish code
* fix max sampling cnt resetting bug
* fix sampler bug
* polish code
* fix bug
* fix unit test
Co-authored-by: jiaruifang <fangjiarui123@gmail.com>
3 years ago
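The commit body for #687 above mentions adding an eviction strategy to the stateful tensor manager. As a hedged sketch of one plausible policy (plain-Python LRU eviction under a fixed "device budget"; the class and method names are invented, and the actual manager's policy and tensor movement may differ):

```python
from collections import OrderedDict

class TinyEvictionManager:
    """Toy LRU eviction: keep at most `budget` entries "on device".

    Accessing an entry marks it most-recently-used; inserting past the
    budget evicts the least-recently-used entry (conceptually, moving
    it back to host memory). Illustrative only -- not the policy
    implemented in #687.
    """

    def __init__(self, budget: int):
        self.budget = budget
        self.on_device = OrderedDict()  # name -> payload
        self.evicted = []               # names "offloaded" to host

    def access(self, name, payload=None):
        if name in self.on_device:
            self.on_device.move_to_end(name)  # mark most-recently-used
            return
        self.on_device[name] = payload
        if len(self.on_device) > self.budget:
            victim, _ = self.on_device.popitem(last=False)  # evict LRU
            self.evicted.append(victim)
```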
ver217
70e8dd418b
[hotfix] update requirements-test ( #701 )
3 years ago
Frank Lee
1ae94ea85a
[ci] remove ipc config for rootless docker ( #694 )
3 years ago
github-actions[bot]
d878d843ad
Automated submodule synchronization ( #695 )
...
Co-authored-by: github-actions <github-actions@github.com>
3 years ago
github-actions[bot]
d50cdabbc9
Automated submodule synchronization ( #556 )
...
Co-authored-by: github-actions <github-actions@github.com>
3 years ago
Frank Lee
dbe8e030fb
[ci] added missing field in workflow ( #692 )
3 years ago
Frank Lee
0372ed7951
[ci] update workflow trigger condition and support options ( #691 )
3 years ago
HELSON
d7ecaf362b
[zero] fix init bugs in zero context ( #686 )
...
* adapt model weight initialization for methods in Pytorch nn.init
3 years ago
YuliangLiu0306
0ed7042f42
[pipeline] refactor pipeline ( #679 )
...
* refactor pipeline---put runtime schedule into engine.
* add type hint for schedule Optional[BaseSchedule]
* preprocess schedule during engine initializing
* infer pipeline schedule params from config
3 years ago
Frank Lee
eace69387d
[ci] fixed compatibility workflow ( #678 )
3 years ago
Jiarui Fang
59bf2dc590
[zero] initialize a stateful tensor manager ( #614 )
3 years ago
Frank Lee
cc236916c6
[ci] replace the ngc docker image with self-built pytorch image ( #672 )
3 years ago
ver217
03e1d35931
[release] update version ( #673 )
3 years ago
encmps
79ccfa4310
[NFC] polish colossalai/kernel/cuda_native/csrc/multi_tensor_adam.cu code style ( #667 )
3 years ago
lucasliunju
e4bcff9b0f
[NFC] polish colossalai/builder/builder.py code style ( #662 )
3 years ago
shenggan
331683bf82
[NFC] polish colossalai/kernel/cuda_native/csrc/layer_norm_cuda_kernel.cu code style ( #661 )
3 years ago
FredHuang99
c336cd3066
[NFC] polish colossalai/communication/utils.py code style ( #656 )
3 years ago
MaxT
5ab9a71299
[NFC] polish colossalai/kernel/cuda_native/csrc/moe_cuda.cpp code style ( #642 )
3 years ago
Xue Fuzhao
10afec728f
[NFC] polish colossalai/kernel/cuda_native/csrc/kernels/include/cuda_util.h code style ( #641 )
3 years ago
Cautiousss
055d0270c8
[NFC] polish colossalai/context/process_group_initializer/initializer_sequence.py colossalai/context/process_group_initializer/initializer_tensor.py code style ( #639 )
...
Co-authored-by: 何晓昕 <cautious@r-236-100-25-172.comp.nus.edu.sg>
3 years ago
Ziheng Qin
c7c224ee17
[NFC] polish colossalai/builder/pipeline.py code style ( #638 )
3 years ago
Sze-qq
10591ecdf9
[NFC] polish colossalai/kernel/cuda_native/csrc/cpu_adam.cpp code style ( #636 )
3 years ago
Wangbo Zhao
6fcb381801
[NFC] polish colossalai/kernel/cuda_native/csrc/multi_tensor_l2norm_kernel.cu code style ( #635 )
3 years ago
ExtremeViscent
8a5d526e95
[NFC] polish colossalai/kernel/cuda_native/csrc/kernels/dropout_kernels.cu and cross_entropy.cu code style ( #634 )
3 years ago
RichardoLuo
ad1e7ab2b2
[NFC] polish colossalai/engine/_base_engine.py code style ( #631 )
...
Co-authored-by: RichardoLuo <14049555596@qq.com>
3 years ago
Zangwei
2e11853d04
[NFC] polish colossalai/communication/ring.py code style ( #630 )
3 years ago
puck_WCR
01cc941e1d
[NFC] polish colossalai/kernel/cuda_native/csrc/kernels/transform_kernels.cu code style ( #629 )
3 years ago