Commit Graph

62 Commits (50e5602c2d6c8e25ad544cbecc38649e5257e7b8)

Author SHA1 Message Date
digger yu a9d1cadc49
fix typos in colossalai/trainer, utils and zero (#3908) 2023-06-07 16:08:37 +08:00
Hongxin Liu dbb32692d2
[lazy] refactor lazy init (#3891)
* [lazy] remove old lazy init

* [lazy] refactor lazy init folder structure

* [lazy] fix lazy tensor deepcopy

* [test] update lazy init test
2023-06-05 14:20:47 +08:00
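The refactor above (#3891) reorganized the lazy-init API; a minimal usage sketch follows, assuming the post-refactor `colossalai.lazy.LazyInitContext` entry point (the import path and `materialize` name are assumptions, not verified against this exact revision):

```python
import torch.nn as nn
# Assumed import path after the #3891 folder-structure refactor.
from colossalai.lazy import LazyInitContext

with LazyInitContext():
    # Inside the context, parameters are created as lazy tensors:
    # construction is recorded, but no real storage is allocated yet.
    model = nn.Sequential(nn.Linear(1024, 4096), nn.GELU(), nn.Linear(4096, 1024))

# Replay the recorded ops to allocate and initialize real tensors in place.
LazyInitContext.materialize(model)
```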
Hongxin Liu 4341f5e8e6
[lazyinit] fix clone and deepcopy (#3553) 2023-04-17 11:25:13 +08:00
Hongxin Liu 152239bbfa
[gemini] gemini supports lazy init (#3379)
* [gemini] fix nvme optimizer init

* [gemini] gemini supports lazy init

* [gemini] add init example

* [gemini] add fool model

* [zero] update gemini ddp

* [zero] update init example

* add chunk method

* add chunk method

* [lazyinit] fix lazy tensor tolist

* [gemini] fix buffer materialization

* [misc] remove useless file

* [booster] update gemini plugin

* [test] update gemini plugin test

* [test] fix gemini plugin test

* [gemini] fix import

* [gemini] fix import

* [lazyinit] use new metatensor

* [lazyinit] use new metatensor

* [lazyinit] fix __set__ method
2023-04-12 16:03:25 +08:00
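The #3379 entry above is where Gemini starts accepting lazily initialized models; below is a hedged sketch of the intended flow, assuming the Booster/GeminiPlugin API and that distributed init (e.g. `colossalai.launch_from_torch`) has already run — argument lists are illustrative, not verified:

```python
import torch.nn as nn
from torch.optim import Adam
from colossalai.booster import Booster
from colossalai.booster.plugin import GeminiPlugin
from colossalai.lazy import LazyInitContext  # assumed import path

with LazyInitContext():
    model = nn.Linear(4096, 4096)  # lazy: no real weight storage yet

optimizer = Adam(model.parameters())
booster = Booster(plugin=GeminiPlugin())
# boost() materializes the lazy parameters chunk by chunk and hands them to
# Gemini, so the full model never has to be resident on one device at once.
model, optimizer, *_ = booster.boost(model, optimizer)
```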
ver217 26b7aac0be
[zero] reorganize zero/gemini folder structure (#3424)
* [zero] refactor low-level zero folder structure

* [zero] fix legacy zero import path

* [zero] fix legacy zero import path

* [zero] remove useless import

* [zero] refactor gemini folder structure

* [zero] refactor gemini folder structure

* [zero] refactor legacy zero import path

* [zero] refactor gemini folder structure

* [zero] refactor gemini folder structure

* [zero] refactor gemini folder structure

* [zero] refactor legacy zero import path

* [zero] fix test import path

* [zero] fix test

* [zero] fix circular import

* [zero] update import
2023-04-04 13:48:16 +08:00
ver217 f8289d4221
[lazyinit] combine lazy tensor with dtensor (#3204)
* [lazyinit] lazy tensor add distribute

* [lazyinit] refactor distribute

* [lazyinit] add test dist lazy init

* [lazyinit] add verbose info for dist lazy init

* [lazyinit] fix rnn flatten weight op

* [lazyinit] polish test

* [lazyinit] polish test

* [lazyinit] fix lazy tensor data setter

* [lazyinit] polish test

* [lazyinit] fix clean

* [lazyinit] make materialize inplace

* [lazyinit] refactor materialize

* [lazyinit] refactor test distribute

* [lazyinit] fix requires_grad

* [lazyinit] fix tolist after materialization

* [lazyinit] refactor distribute module

* [lazyinit] polish docstr

* [lazyinit] polish lazy init context

* [lazyinit] temporarily skip test

* [lazyinit] polish test

* [lazyinit] add docstr
2023-03-23 10:53:06 +08:00
ver217 6ae8ed0407
[lazyinit] add correctness verification (#3147)
* [lazyinit] fix shared module

* [tests] add lazy init test utils

* [tests] add torchvision for lazy init

* [lazyinit] fix pre op fn

* [lazyinit] handle legacy constructor

* [tests] refactor lazy init test models

* [tests] refactor lazy init test utils

* [lazyinit] fix ops that don't support meta

* [tests] lazy init test timm models

* [lazyinit] fix set data

* [lazyinit] handle apex layers

* [tests] lazy init test transformers models

* [tests] lazy init test torchaudio models

* [lazyinit] fix import path

* [tests] lazy init test torchrec models

* [tests] update torch version in CI

* [tests] revert torch version in CI

* [tests] skip lazy init test
2023-03-17 13:49:04 +08:00
ver217 ed8f60b93b
[lazyinit] refactor lazy tensor and lazy init ctx (#3131)
* [lazyinit] refactor lazy tensor and lazy init ctx

* [lazyinit] polish docstr

* [lazyinit] polish docstr
2023-03-14 15:37:12 +08:00
ver217 823f3b9cf4
[doc] add deepspeed citation and copyright (#2996)
* [doc] add deepspeed citation and copyright

* [doc] add deepspeed citation and copyright

* [doc] add deepspeed citation and copyright
2023-03-04 20:08:11 +08:00
ver217 f0aa191f51
[gemini] fix colo_init_context (#2683) 2023-02-13 17:53:15 +08:00
HELSON 552183bb74
[polish] polish ColoTensor and its submodules (#2537) 2023-02-03 11:44:10 +08:00
Super Daniel 35c0c0006e
[utils] lazy init. (#2148)
* [utils] lazy init.

* [utils] remove description.

* [utils] complete.

* [utils] finalize.

* [utils] fix names.
2023-01-20 10:49:00 +08:00
BlueRum b3f73ce1c8
[Gemini] Update coloinit_ctx to support meta_tensor (#2147) 2022-12-19 22:37:07 +08:00
Jiarui Fang 8e14344ec9
[hotfix] fix a typo in ColoInitContext (#2106) 2022-12-09 11:44:39 +08:00
Jiarui Fang 05545bfee9
[ColoTensor] throw error when ColoInitContext meets meta parameter. (#2105) 2022-12-09 11:39:46 +08:00
HELSON f6178728a0
[gemini] fix init bugs for modules (#2047)
* [gemini] fix init bugs for modules

* fix bugs
2022-11-30 17:06:10 +08:00
Jiarui Fang 31c644027b
[hotfix] hotfix Gemini for no leaf modules bug (#2043) 2022-11-30 14:53:41 +08:00
Jiarui Fang 52c6ad26e0
[ColoTensor] reconfig ColoInitContext, decouple default_pg and default_dist_spec. (#1953) 2022-11-15 16:24:16 +08:00
Jiarui Fang 9f4fb3f28a
[ColoTensor] ColoInitContext initialize parameters in shard mode. (#1937) 2022-11-14 16:05:09 +08:00
Frank Lee e6ec99d389
[utils] fixed lazy init context (#1867) 2022-11-10 15:17:20 +08:00
Jiarui Fang 3ce4463fe6
[utils] remove lazy_memory_allocate from ColoInitContext (#1844) 2022-11-09 11:50:33 +08:00
HELSON 1468e4bcfc
[zero] add constant placement policy (#1705)
* fixes a memory leak in ZeroDDP init when a parameter is in fp16

* bans chunk release in CUDA; a chunk may be released only when it is about to be offloaded

* adds a constant placement policy, with which users can reserve a constant caching memory space for parameters
2022-10-14 17:53:16 +08:00
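The #1705 bullets above describe the policy's contract rather than its code; here is a purely hypothetical sketch of the idea, with all names invented for illustration:

```python
class ConstPlacementPolicy:
    """Hypothetical illustration: keep parameter chunks inside a fixed,
    user-reserved CUDA caching space and release a chunk only at offload time."""

    def __init__(self, reserved_cuda_mem: int) -> None:
        self.reserved_cuda_mem = reserved_cuda_mem  # constant budget in bytes
        self.used_cuda_mem = 0

    def can_cache(self, chunk_mem: int) -> bool:
        # A chunk stays resident in CUDA while it fits in the reserved space.
        return self.used_cuda_mem + chunk_mem <= self.reserved_cuda_mem

    def offload(self, chunk_mem: int) -> None:
        # Release is allowed only here, i.e. when the chunk is offloaded.
        self.used_cuda_mem -= chunk_mem
```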
ver217 a203b709d5
[hotfix] fix init context (#1543)
* fix init context

* fix lazy init ctx
2022-09-06 11:45:08 +08:00
Frank Lee 2cc1175c76
[fx] tested the complete workflow for auto-parallel (#1336)
* [fx] tested the complete workflow for auto-parallel

* polish code

* polish code

* polish code
2022-07-20 10:45:17 +08:00
Frank Lee 250be4d31e
[utils] integrated colotensor with lazy init context (#1324)
* [utils] integrated colotensor with lazy init context

* polish code

* polish code

* polish code
2022-07-15 17:47:12 +08:00
Jiarui Fang c92f84fcdb
[tensor] distributed checkpointing for parameters (#1240) 2022-07-12 15:51:06 +08:00
Jiarui Fang 9bcd2fd4af
[tensor] a shorter shard and replicate spec (#1245) 2022-07-11 15:51:48 +08:00
Jiarui Fang 3b500984b1
[tensor] fix some unittests (#1234) 2022-07-08 14:18:30 +08:00
Jiarui Fang f38006ea83
[checkpoint] checkpoint for ColoTensor Model (#1196) 2022-07-06 17:22:03 +08:00
Jiarui Fang ae7d3f4927
[refactor] move process group from _DistSpec to ColoTensor. (#1203) 2022-07-06 16:15:16 +08:00
YuliangLiu0306 63d2a93878
[context] support arbitrary module materialization. (#1193)
* [CLI] add CLI launcher

* Revert "[CLI] add CLI launcher"

This reverts commit df7e6506d4.

* [context] support arbitrary module materialization.

* [test] add numerical check for lazy init context.
2022-07-04 10:12:02 +08:00
YuliangLiu0306 2053e138a2
[context] use meta tensor to init model lazily. (#1187)
* [CLI] add CLI launcher

* Revert "[CLI] add CLI launcher"

This reverts commit df7e6506d4.

* [context] use meta tensor to init model lazily.

* polish

* make modules with device kwargs bypass the normal init.

* change unit tests to adapt to the updated context.
2022-06-29 21:02:30 +08:00
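The meta-tensor mechanism that #1187 builds on is plain PyTorch and can be shown directly (this is the stock `torch` API, not ColossalAI code):

```python
import torch.nn as nn

# A module built on the meta device records shapes and dtypes but allocates
# no storage, so even a very large model can be "constructed" for free.
layer = nn.Linear(4096, 4096, device="meta")
assert layer.weight.is_meta

# to_empty() allocates real but uninitialized storage on a concrete device;
# the weights must then be loaded from a checkpoint or re-initialized.
layer = layer.to_empty(device="cpu")
```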
Jiarui Fang 4b9bba8116
[ColoTensor] rename APIs and add output_replicate to ComputeSpec (#1168) 2022-06-24 13:08:54 +08:00
Frank Lee f8eec98ff5
[tensor] fixed non-serializable colo parameter during model checkpointing (#1153) 2022-06-22 11:43:38 +08:00
Frank Lee 73ad05fc8c
[zero] added error message to handle on-the-fly import of torch Module class (#1135)
* [zero] added error message to handle on-the-fly import of torch Module class

* polish code
2022-06-20 11:24:27 +08:00
Frank Lee 2b2dc1c86b
[pipeline] refactor the pipeline module (#1087)
* [pipeline] refactor the pipeline module

* polish code
2022-06-10 11:27:38 +08:00
Frank Lee bad5d4c0a1
[context] support lazy init of module (#1088)
* [context] support lazy init of module

* polish code
2022-06-10 10:09:48 +08:00
Frank Lee bfdc5ccb7b
[context] maintain the context object in with statement (#1073) 2022-06-07 10:48:45 +08:00
Jiarui Fang 49832b2344
[refactor] add nn.parallel module (#1068) 2022-06-06 15:34:41 +08:00
Jiarui Fang a00644079e
reorganize colotensor directory (#1062)
* reorganize colotensor directory

* polish code
2022-06-03 18:04:22 +08:00
Ziyue Jiang df9dcbbff6
[Tensor] add hybrid device demo and fix bugs (#1059) 2022-06-03 12:09:49 +08:00
Ziyue Jiang 7c530b9de2
[Tensor] add Parameter inheritance for ColoParameter (#1041)
* add Parameter inheritance for ColoParameter

* remove tricks

* remove tricks

* polish

* polish
2022-05-30 17:23:44 +08:00
Ziyue Jiang 6c5996a56e
[Tensor] add module check and bert test (#1031)
* add Embedding

* Add bert test

* polish

* add check module test

* polish

* polish

* polish

* polish
2022-05-26 18:15:42 +08:00
Ziyue Jiang 32291dd73f
[Tensor] add module handler for linear (#1021)
* add module spec for linear

* polish

* polish

* polish
2022-05-26 11:50:44 +08:00
ver217 007ca0df92
fix colo init context (#1026) 2022-05-25 20:41:58 +08:00
ver217 ad536e308e
[tensor] refactor colo-tensor (#992)
* refactor colo-tensor and update linear op

* polish code

* polish code

* update ops and unit tests

* update unit tests

* polish code

* rename dist_spec module

* polish code

* polish code

* remove unneeded import

* fix pipelinable
2022-05-19 12:44:59 +08:00
Ziyue Jiang d73c2b1d79
[Tensor] fix init context (#931)
* change torch.Parameter to ColoParameter

* fix post assignment for init context

* polish

* polish
2022-05-11 15:48:12 +08:00
Ziyue Jiang dfc88b85ea
[Tensor] simplify named param (#928)
* simplify ColoModulize

* simplify ColoModulize

* polish

* polish
2022-05-11 10:54:19 +08:00
YuliangLiu0306 32a45cd7ef
[pipelinable] use pipelinable to support GPT model. (#903)
* [CLI] add CLI launcher

* Revert "[CLI] add CLI launcher"

This reverts commit df7e6506d4.

* [pipelinable] use pipelinable to support GPT model.

* fix a bug caused by ShardedModel

* polish

* fix front func list
2022-05-11 09:23:58 +08:00
Ziyue Jiang c195d2814c
[Tensor] add from_pretrained support and bert pretrained test (#921)
* add from_pretrained support and test

* polish

* polish

* polish

* polish
2022-05-09 16:11:47 +08:00