8ea50d999e  Super Daniel  [hotfix] pass a parameter. (#2288)  (2 years ago)
    * [autockpt] make it work.
    * [autockpt] linearize / merge shape-consistency nodes.
    * [autockpt] considering parameter and optimizer weights.
    * [hotfix] pass a parameter.
df1d6dc553  ZijianYY  [examples] using args and combining two versions for PaLM (#2284)  (2 years ago)
e94c79f15b  zbian  improved allgather & reducescatter for 3d  (2 years ago)
c719798abe  binmakeswell  [doc] add feature diffusion v2, bloom, auto-parallel (#2282)  (2 years ago)
62c38e3330  HELSON  [zero] polish low level zero optimizer (#2275)  (2 years ago)
ac863a01d6  Ziyue Jiang  [example] add benchmark (#2276)  (2 years ago)
    * add benchmark
    * merge common func
    * add total and avg tflops
    Co-authored-by: Ziyue Jiang <ziyue.jiang@gmail.com>
22e947f982  Boyuan Yao  [autoparallel] fix runtime apply memory estimation (#2281)  (2 years ago)
    * [autoparallel] align the data_ptr with the old version of the auto activation checkpoint pipeline
    * [autoparallel] use fwd_time and bwd_time instead of fwd_flop and bwd_flop
    * [autoparallel] specify comm nodes' memory cost in construct chain
    * [autoparallel] fix wrong runtime apply calculation
1405b4381e  BlueRum  [example] fix save_load bug for dreambooth (#2280)  (2 years ago)
8e8900ff3f  Super Daniel  [autockpt] considering parameter and optimizer weights. (#2279)  (2 years ago)
    * [autockpt] make it work.
    * [autockpt] linearize / merge shape-consistency nodes.
    * [autockpt] considering parameter and optimizer weights.
f027ef7913  YuliangLiu0306  [hotfix] fix fp16 optimizer bug (#2273)  (2 years ago)
fb87322773  YuliangLiu0306  [autoparallel] fix spelling error (#2270)  (2 years ago)
af32022f74  Jiarui Fang  [Gemini] fix the convert_to_torch_module bug (#2269)  (2 years ago)
879df8b943  Jiarui Fang  [example] GPT polish readme (#2274)  (2 years ago)
9654df0e9a  Ziyue Jiang  Add GPT PP Example (#2272)  (2 years ago)
    Co-authored-by: Ziyue Jiang <ziyue.jiang@gmail.com>
b0d21d0c4f  Super Daniel  [autockpt] linearize / merge shape-consistency nodes. (#2271)  (2 years ago)
    * [autockpt] make it work.
    * [autockpt] linearize / merge shape-consistency nodes.
4b29112ab2  YuliangLiu0306  [autoparallel] gpt2 autoparallel examples (#2267)  (2 years ago)
    * [autoparallel] gpt2 autoparallel examples
    * polish code
8b045b3c1f  Ziyue Jiang  [Pipeline Middleware] Reduce comm redundancy by getting accurate output (#2232)  (2 years ago)
    * move to cpu to avoid deadlock
    * get output by offsets
    Co-authored-by: Ziyue Jiang <ziyue.jiang@gmail.com>
09c0102fe6  HELSON  [example] fix gpt example with 0.1.10 (#2265)  (2 years ago)
5c2ef9fc76  Boyuan Yao  [autoparallel] modify comm nodes' memory cost in construct chain (#2263)  (2 years ago)
    * [autoparallel] align the data_ptr with the old version of the auto activation checkpoint pipeline
    * [autoparallel] use fwd_time and bwd_time instead of fwd_flop and bwd_flop
    * [autoparallel] specify comm nodes' memory cost in construct chain
89f048a88a  Fazzie-Maqianli  [example] clear diffuser image (#2262)  (2 years ago)
1ea99b869e  Boyuan Yao  [autoparallel] align the data_ptr with the old version of auto activation checkpoint pipeline (#2261)  (2 years ago)
3ccf58aa76  Super Daniel  [autockpt] make it work. (#2257)  (2 years ago)
ac3739930d  Boyuan Yao  [autoparallel] modify construct chain in rotor solver (#2254)  (2 years ago)
ab38aebace  Boyuan Yao  [autoparallel] Hook all meta information on ResNet nodes for auto activation checkpoint (#2248)  (2 years ago)
    * [autoparallel] hook node meta on graph nodes for checkpoint solver
    * [autoparallel] polish code
    * [autoparallel] restore some node handlers
    * colossalai/auto_parallel/passes/meta_info_prop.py
    * [autoparallel] remove some unused import
    * [autoparallel] hook bwd_mem_out
c8c79102f0  Boyuan Yao  [autoparallel] patch torch.flatten metainfo for autoparallel (#2247)  (2 years ago)
    * [autoparallel] patch torch.flatten
9c5e028a62  oahzxl  fix bug again  (2 years ago)
7fd3b45af2  oahzxl  fix a bug in ones_like: don't generate a chunk if dim size is 1  (2 years ago)
5f24f4fd55  oahzxl  support ones_like, add prompt if fit mode search fails  (2 years ago)
80efd70c72  oahzxl  improve reorder efficiency  (2 years ago)
966e4ea0cb  oahzxl  add reorder in mem estimator  (2 years ago)
8897b8f753  YuliangLiu0306  [autoparallel] autoparallel initialize (#2238)  (2 years ago)
e5a5fbb8a9  oahzxl  update source add  (2 years ago)
85178a397a  xcnick  [hotfix] fix error for torch 2.0 (#2243)  (2 years ago)
b7d0990c61  Super Daniel  [autoparallel] fix construct meta info. (#2245)  (2 years ago)
89542ceb44  Frank Lee  [doc] updated the stable diffusion on docker usage (#2244)  (2 years ago)
    * [doc] updated the stable diffusion on docker usage
    * polish doc
50cdf5430e  Jiarui Fang  [example] diffusion install from docker (#2239)  (2 years ago)
    * [builder] builder for scaled_upper_triang_masked_softmax
    * add missing files
    * fix a bug
    * polish code
    * [example] diffusion install from docker
57929a6210  Ziyue Jiang  fix type of num_worker_threads (#2237)  (2 years ago)
    Co-authored-by: Ziyue Jiang <ziyue.jiang@gmail.com>
db4cbdc7fb  Jiarui Fang  [builder] builder for scaled_upper_triang_masked_softmax (#2234)  (2 years ago)
31fe84237b  HELSON  [example] fix benchmark.sh for gpt example (#2229)  (2 years ago)
78483a9fdd  Super Daniel  [logger] hotfix, missing _FORMAT (#2231)  (2 years ago)
f5515e9978  oahzxl  use max_mem to control strategy  (2 years ago)
f7d8092c84  oahzxl  align openfold  (2 years ago)
5c4df01af3  oahzxl  update openfold  (2 years ago)
289f3a45c2  oahzxl  init openfold  (2 years ago)
efe6fe3a33  oahzxl  code style  (2 years ago)
7a23deb584  oahzxl  code style  (2 years ago)
5a916c0adb  oahzxl  add print  (2 years ago)
1d7ca02301  oahzxl  add benchmark  (2 years ago)
2cdecc9f38  Jiarui Fang  [example] make palm + GeminiDPP work (#2227)  (2 years ago)
63cc77173b  ZijianYY  [example] Palm adding gemini, still has bugs (#2221)  (2 years ago)