Commit Graph

221 Commits (e5b1a0c9bee8ba6cc1fe5af99afe725aec2b6509)

Author SHA1 Message Date
Jiarui Fang 9feee6d06b
[FAW] LFU initialize with dataset freq (#1513) 2022-08-29 12:52:53 +08:00
CsRic 1b8fee8e9c
[FAW] shrink freq_cnter size (#1509) 2022-08-29 11:44:55 +08:00
Jiarui Fang ba61109b6c
[FAW] remove code related to chunk (#1501) 2022-08-26 14:23:30 +08:00
Jiarui Fang d5085bb317
[FAW] add more docs and fix a warning (#1500) 2022-08-26 14:10:21 +08:00
CsRic 0ed2f46131
[FAW] FAW embedding use LRU as eviction strategy intialized with dataset stats (#1494) 2022-08-26 11:24:12 +08:00
CsRic b8d0e39eaf
[FAW] LFU cache for the FAW 2022-08-25 13:08:46 +08:00
Jiarui Fang cde7b8a5b8
[FAW] init an LFU implementation for FAW (#1488) 2022-08-24 17:37:22 +08:00
Geng Zhang 0aad53c62b
[FCE] update interface for frequency statistics in FreqCacheEmbedding (#1462) 2022-08-23 17:38:24 +08:00
Jiarui Fang a1476ea882
[NFC] polish doc style for ColoTensor (#1457) 2022-08-16 09:21:05 +08:00
ver217 367c615818
fix nvme docstring (#1450) 2022-08-12 18:01:02 +08:00
Geng Zhang 9f3eed66eb
[FAW] reorganize the inheritance struct of FreqCacheEmbedding (#1448) 2022-08-12 15:55:46 +08:00
Frank Lee ae1b58cd16
[tensor] added linear implementation for the new sharding spec (#1416)
* [tensor] added linear implementation for the new sharding spec

* polish code
2022-08-12 11:33:09 +08:00
Jiarui Fang 30b4dd17c0
[FAW] export FAW in _ops (#1438) 2022-08-11 13:43:24 +08:00
Jiarui Fang c9427a323f
hotfix #1434 (#1437) 2022-08-11 13:14:25 +08:00
Jiarui Fang 10b3df65c8
[FAW] move coloparam setting in test code. (#1429) 2022-08-10 14:31:53 +08:00
Jiarui Fang cb98cf5558
[FAW] parallel FreqAwareEmbedding (#1424) 2022-08-10 13:44:30 +08:00
Jiarui Fang d209aff684
Add FreqAwareEmbeddingBag (#1421) 2022-08-09 16:26:12 +08:00
Jiarui Fang 504419d261
[FAW] add cache manager for the cached embedding (#1419) 2022-08-09 15:17:17 +08:00
ver217 12b4887097
[hotfix] fix CPUAdam kernel nullptr (#1410) 2022-08-05 19:45:45 +08:00
ver217 04c9a86af8
[zero] ZeroDDP supports controlling outputs' dtype (#1399) 2022-08-02 17:49:11 +08:00
HELSON 4e98e938ce
[zero] alleviate memory usage in ZeRODDP state_dict (#1398) 2022-08-02 15:49:13 +08:00
HELSON c7221cb2d4
[hotfix] adapt ProcessGroup and Optimizer to ColoTensor (#1388) 2022-07-29 19:33:24 +08:00
ver217 83328329dd
[hotfix] fix zero ddp buffer cast (#1376)
* fix zero ddp buffer cast

* fix zero ddp ignore params
2022-07-28 10:54:44 +08:00
ver217 5d5031e946
fix zero ddp state dict (#1378) 2022-07-28 09:31:42 +08:00
ver217 c415240db6
[nvme] CPUAdam and HybridAdam support NVMe offload (#1360)
* impl nvme optimizer

* update cpu adam

* add unit test

* update hybrid adam

* update docstr

* add TODOs

* update CI

* fix CI

* fix CI

* fix CI path

* fix CI path

* fix CI path

* fix install tensornvme

* fix CI

* fix CI path

* fix CI env variables

* test CI

* test CI

* fix CI

* fix nvme optim __del__

* fix adam __del__

* fix nvme optim

* fix CI env variables

* fix nvme optim import

* test CI

* test CI

* fix CI
2022-07-26 17:25:24 +08:00
HELSON 87775a0682
[colotensor] use cpu memory to store state_dict (#1367) 2022-07-26 14:13:38 +08:00
ver217 d068af81a3
[doc] update rst and docstring (#1351)
* update rst

* add zero docstr

* fix docstr

* remove fx.tracer.meta_patch

* fix docstr

* fix docstr

* update fx rst

* fix fx docstr

* remove useless rst
2022-07-21 15:54:53 +08:00
HELSON 7a8702c06d
[colotensor] add Tensor.view op and its unit test (#1343)
[colotensor] add megatron initialization for gpt2
2022-07-21 10:53:15 +08:00
ver217 0c51ff2c13
[hotfix] ZeroDDP use new process group (#1333)
* process group supports getting ranks in group

* chunk mgr receives a process group

* update unit test

* fix unit tests
2022-07-18 14:14:52 +08:00
HELSON 1b41686461
[hotfix] fix unit test test_module_spec (#1321) 2022-07-15 14:02:32 +08:00
Jiarui Fang 9e4c6449b0
[checkpoint] add ColoOptimizer checkpointing (#1316) 2022-07-15 09:52:55 +08:00
Jiarui Fang 85f933b58b
[Optimizer] Remove useless ColoOptimizer (#1312) 2022-07-14 16:57:48 +08:00
Jiarui Fang 9f10524313
[Optimizer] polish the init method of ColoOptimizer (#1310) 2022-07-14 16:37:33 +08:00
HELSON 260a55804a
[hotfix] fix shape error in backward when using ColoTensor (#1298) 2022-07-13 23:06:12 +08:00
runluo f83c4d6597
[NFC] polish colossalai/nn/layer/wrapper/pipeline_wrapper.py code style (#1303) 2022-07-13 19:01:07 +08:00
XYE e83b2ce853
[NFC] polish colossalai/nn/layer/vanilla/layers.py code style (#1295) 2022-07-13 12:08:21 +08:00
Liping233 1000a41fd5
[NFC] polish colossalai/nn/layer/vanilla/__init__.py code style (#1293) 2022-07-13 12:08:21 +08:00
Wangbo Zhao(黑色枷锁) 552667825b
[NFC] polish colossalai/nn/layer/parallel_1d/layers.py code style (#1290) 2022-07-13 12:08:21 +08:00
Jiatong Han 38e3ccd1e9
[NFC] polish colossalai/nn/layer/parallel_sequence/layers.py code style (#1280)
Co-authored-by: JThh <jiatong.han@u.nus.edu>
2022-07-13 12:08:21 +08:00
Boyuan Yao b414eaa5db
[NFC] polish colossalai/nn/optimizer/lamb.py code style (#1275) 2022-07-13 12:08:21 +08:00
Super Daniel 52d145a342
[NFC] polish colossalai/nn/lr_scheduler/onecycle.py code style (#1269) 2022-07-13 12:08:21 +08:00
Geng Zhang 0e06f62160
[NFC] polish colossalai/nn/layer/parallel_sequence/_operation.py code style (#1266) 2022-07-13 12:08:21 +08:00
superhao1995 f660152c73
[NFC] polish colossalai/nn/layer/parallel_3d/_operation.py code style (#1258)
Co-authored-by: Research <research@soccf-snr3-017.comp.nus.edu.sg>
2022-07-13 12:08:21 +08:00
Thunderbeee 9738fb0f78
[NFC] polish colossalai/nn/lr_scheduler/__init__.py (#1255)
code style
2022-07-13 12:08:21 +08:00
Ofey Chan 2dd4d556fb
[NFC] polish colossalai/nn/init.py code style (#1292) 2022-07-13 10:51:55 +08:00
HELSON abba4d84e1
[hotfix] fix bert model test in unitests (#1272) 2022-07-12 23:26:45 +08:00
oahzxl 0cf8e8e91c
[NFC] polish <colossalai/nn/lr_scheduler/poly.py> code style (#1267) 2022-07-12 18:18:14 +08:00
Jiarui Fang 1aad903c15
[tensor] redistribute among different process groups (#1247)
* make it faster

* [tensor] rename convert_to_dist -> redistribute

* [tensor] ShardSpec and ReplicaSpec

* [tensor] redistribute among diff pgs

* polish code
2022-07-12 10:24:05 +08:00
Jiarui Fang 9bcd2fd4af
[tensor] a shorter shard and replicate spec (#1245) 2022-07-11 15:51:48 +08:00
Jiarui Fang 2699dfbbfd
[rename] convert_to_dist -> redistribute (#1243) 2022-07-11 13:05:44 +08:00
Jiarui Fang 4a76084dc9
[tensor] add zero_like colo op, important for Optimizer (#1236) 2022-07-08 14:55:27 +08:00
Jiarui Fang 3b500984b1
[tensor] fix some unittests (#1234) 2022-07-08 14:18:30 +08:00
HELSON 0453776def
[tensor] fix a assertion in colo_tensor cross_entropy (#1232) 2022-07-08 11:18:00 +08:00
HELSON 42ab36b762
[tensor] add unitest for colo_tensor 1DTP cross_entropy (#1230) 2022-07-07 19:17:23 +08:00
Yi Zhao 04537bf83e
[checkpoint]support generalized scheduler (#1222) 2022-07-07 18:16:38 +08:00
Jiarui Fang a98319f023
[tensor] torch function return colotensor (#1229) 2022-07-07 18:09:18 +08:00
Jiarui Fang ae7d3f4927
[refactor] move process group from _DistSpec to ColoTensor. (#1203) 2022-07-06 16:15:16 +08:00
Jiarui Fang b5f25eb32a
[Tensor] add cpu group to ddp (#1200) 2022-07-05 14:58:28 +08:00
Jiarui Fang 060b917daf
[refactor] remove gpc dependency in colotensor's _ops (#1189) 2022-07-04 18:54:37 +08:00
Jiarui Fang 372f791444
[refactor] move chunk and chunkmgr to directory gemini (#1182) 2022-06-29 13:31:02 +08:00
ver217 6b2f2ab9bb
[ddp] ColoDDP uses bucket all-reduce (#1177)
* add reducer

* update colo ddp with reducer

* polish unit test

* polish unit test
2022-06-29 10:34:13 +08:00
Jiarui Fang 1b657f9ce1
[tensor] revert local view back (#1178) 2022-06-27 18:38:34 +08:00
Jiarui Fang 0dd4e2bbfb
[Tensor] rename some APIs in TensorSpec and Polish view unittest (#1176) 2022-06-27 15:56:11 +08:00
Ziyue Jiang dd0420909f
[Tensor] rename parallel_action (#1174)
* rename parallel_action

* polish
2022-06-27 10:04:45 +08:00
Jiarui Fang aa7bef73d4
[Tensor] distributed view supports inter-process hybrid parallel (#1169) 2022-06-27 09:45:26 +08:00
Jiarui Fang 4b9bba8116
[ColoTensor] rename APIs and add output_replicate to ComputeSpec (#1168) 2022-06-24 13:08:54 +08:00
Jiarui Fang f4ef224358
[Tensor] remove ParallelAction, use ComputeSpec instread (#1166) 2022-06-23 17:34:59 +08:00
Jiarui Fang 177c374401
remove gather out in parallel action (#1163) 2022-06-23 16:35:05 +08:00
Ziyue Jiang 955ac912de
remove log (#1160) 2022-06-23 10:32:42 +08:00
Jiarui Fang 07f9c781f9
[graph] improve the graph building. (#1157) 2022-06-22 16:47:20 +08:00
ver217 22717a856f
[tensor] add embedding bag op (#1156) 2022-06-22 15:54:03 +08:00
ver217 ae86151968
[tensor] add more element-wise ops (#1155)
* add more element-wise ops

* update test_op

* polish unit test
2022-06-22 15:16:47 +08:00
ver217 54aabb8da4
[gemini] refactor gemini mgr (#1151)
* refactor gemini mgr

* udpate __init__
2022-06-22 11:54:36 +08:00
ver217 8106d7b8c7
[ddp] refactor ColoDDP and ZeroDDP (#1146)
* ColoDDP supports overwriting default process group

* rename ColoDDPV2 to ZeroDDP

* add docstr for ZeroDDP

* polish docstr
2022-06-21 16:35:23 +08:00
ver217 ccf3c58c89
embedding op use gather_out (#1143) 2022-06-21 13:21:20 +08:00
Frank Lee 15aab1476e
[zero] avoid zero hook spam by changing log to debug level (#1137) 2022-06-21 10:44:01 +08:00
ver217 e4f555f29a
[optim] refactor fused sgd (#1134) 2022-06-20 11:19:38 +08:00
ver217 d26902645e
[ddp] add save/load state dict for ColoDDP (#1127)
* add save/load state dict for ColoDDP

* add unit test

* refactor unit test folder

* polish unit test

* rename unit test
2022-06-20 10:51:47 +08:00
ver217 f0a954f16d
[ddp] add set_params_to_ignore for ColoDDP (#1122)
* add set_params_to_ignore for ColoDDP

* polish code

* fix zero hook v2

* add unit test

* polish docstr
2022-06-16 12:54:46 +08:00
ver217 e127b4375b
cast colo ddp v2 inputs/outputs (#1120) 2022-06-15 15:57:04 +08:00
ver217 7d14b473f0
[gemini] gemini mgr supports "cpu" placement policy (#1118)
* update gemini mgr

* update chunk

* add docstr

* polish placement policy

* update test chunk

* update test zero

* polish unit test

* remove useless unit test
2022-06-15 15:05:19 +08:00
ver217 895c1c5ee7
[tensor] refactor param op hook (#1097)
* refactor param op hook

* add docstr

* fix bug
2022-06-13 16:11:53 +08:00
Frank Lee cb18922c47
[doc] added documentation to chunk and chunk manager (#1094)
* [doc] added documentation to chunk and chunk manager

* polish code

* polish code

* polish code
2022-06-10 15:33:06 +08:00
ver217 1f894e033f
[gemini] zero supports gemini (#1093)
* add placement policy

* add gemini mgr

* update mem stats collector

* update zero

* update zero optim

* fix bugs

* zero optim monitor os

* polish unit test

* polish unit test

* add assert
2022-06-10 14:48:28 +08:00
Frank Lee 2b2dc1c86b
[pipeline] refactor the pipeline module (#1087)
* [pipeline] refactor the pipeline module

* polish code
2022-06-10 11:27:38 +08:00
ver217 be01db37c8
[tensor] refactor chunk mgr and impl MemStatsCollectorV2 (#1077)
* polish chunk manager

* polish unit test

* impl add_extern_static_tensor for chunk mgr

* add mem stats collector v2

* polish code

* polish unit test

* polish code

* polish get chunks
2022-06-09 20:56:34 +08:00
Ziyue Jiang 0653c63eaa
[Tensor] 1d row embedding (#1075)
* Add CPU 1d row embedding

* polish
2022-06-08 12:04:59 +08:00
Ziyue Jiang 4fc748f69b
[Tensor] fix optimizer for CPU parallel (#1069) 2022-06-06 17:36:11 +08:00
Jiarui Fang 49832b2344
[refactory] add nn.parallel module (#1068) 2022-06-06 15:34:41 +08:00
Ziyue Jiang 6754f1b77f
fix module utils bug (#1066) 2022-06-06 12:11:48 +08:00
Jiarui Fang a00644079e
reorgnize colotensor directory (#1062)
* reorgnize colotensor directory

* polish code
2022-06-03 18:04:22 +08:00
Ziyue Jiang df9dcbbff6
[Tensor] add hybrid device demo and fix bugs (#1059) 2022-06-03 12:09:49 +08:00
ver217 51b9a49655
[zero] add zero optimizer for ColoTensor (#1046)
* add zero optimizer

* torch ok

* unit test ok

* polish code

* fix bugs

* polish unit test

* polish zero optim

* polish colo ddp v2

* refactor folder structure

* add comment

* polish unit test

* polish zero optim

* polish unit test
2022-06-02 12:13:15 +08:00
ver217 9492a561c3
[tensor] ColoTensor supports ZeRo (#1015)
* impl chunk manager

* impl param op hook

* add reduce_chunk

* add zero hook v2

* add zero dp

* fix TensorInfo

* impl load balancing when using zero without chunk

* fix zero hook

* polish chunk

* fix bugs

* ddp ok

* zero ok

* polish code

* fix bugs about load balancing

* polish code

* polish code

* add ene-to-end test

* polish code

* polish code

* polish code

* fix typo

* add test_chunk

* fix bugs

* fix bugs

* polish code
2022-05-31 12:00:12 +08:00
ver217 cefc29ff06
[tensor] impl ColoDDP for ColoTensor (#1009)
* impl ColoDDP for ColoTensor

* polish code
2022-05-21 13:52:04 +08:00
Ziheng Qin 571f12eff3
[NFC] polish colossalai/nn/layer/utils/common.py code style (#983) 2022-05-17 10:25:06 +08:00
shenggan 18542b47fc
[NFC] polish colossalai/nn/layer/parallel_2d/layers.py code style (#976) 2022-05-17 10:25:06 +08:00
Zirui Zhu 598cde4a0f
[NFC] polish colossalai/nn/layer/parallel_2p5d/layers.py code style (#972) 2022-05-17 10:25:06 +08:00
LuGY fb5bc6cb28
[NFC] polish colossalai/nn/layer/parallel_3d/layers.py code style (#966) 2022-05-17 10:25:06 +08:00
ver217 58580b50fe
Revert "[NFC] Hotfix/format (#984)" (#986)
This reverts commit 0772828fba.
2022-05-17 10:23:38 +08:00