Boyuan Yao
ac3a453a50
[fx] fix the discretize bug ( #1506 )
...
* [fx] fix wrong variable name in solver rotor
* [fx] fix wrong variable name in solver rotor
* code modification
* [fx] fix the discretize bug
2022-08-26 17:15:52 +08:00
Boyuan Yao
31fffd3fc5
[fx] fix wrong variable name in solver rotor ( #1502 )
...
* [fx] fix wrong variable name in solver rotor
* [fx] fix wrong variable name in solver rotor
* code modification
2022-08-26 15:47:08 +08:00
Jiarui Fang
ba61109b6c
[FAW] remove code related to chunk ( #1501 )
2022-08-26 14:23:30 +08:00
Jiarui Fang
d5085bb317
[FAW] add more docs and fix a warning ( #1500 )
2022-08-26 14:10:21 +08:00
Kirigaya Kazuto
5a6fd71f90
[pipeline/rpc] update outstanding mechanism | optimize dispatching strategy ( #1497 )
...
* support p2p communication with any type of object | pass test
* reconstruct pipeline schedule with p2p_v2.py (support communication with List[Any]) | pass test
* [engine/schedule] use p2p_v2 to reconstruct pipeline_schedule
* [pipeline/rpc] implement a demo for PP with cuda rpc framework
* [pipeline/rpc] support interleaving | fix checkpoint bug | change logic when dispatching data in work_list to ensure steady 1F1B
* [pipeline/rpc] implement distributed optimizer | test with assert_close
* [pipeline/rpc] implement distributed optimizer | test with assert_close
* [pipeline/rpc] update outstanding mechanism | optimize dispatching strategy
* [pipeline/rpc] update outstanding mechanism | optimize dispatching strategy
* [pipeline/rpc] update outstanding mechanism | optimize dispatching strategy
2022-08-26 14:04:23 +08:00
CsRic
0ed2f46131
[FAW] FAW embedding uses LRU as eviction strategy initialized with dataset stats ( #1494 )
2022-08-26 11:24:12 +08:00
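Aside: the entry above swaps in LRU eviction for the cached embedding, warm-started from dataset frequency statistics. Below is a minimal illustrative sketch of that policy; the class and method names are hypothetical, not the actual FreqAwareEmbedding API.

```python
# Hypothetical sketch of LRU eviction for a cached-embedding row index,
# warm-started from dataset frequency statistics. Not ColossalAI code.
from collections import OrderedDict

class LRURowCache:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.slots = OrderedDict()  # row id -> cache slot, ordered by recency

    def warmup(self, id_freq_pairs):
        # Pre-fill the cache with the most frequent ids from dataset stats.
        for row_id, _freq in sorted(id_freq_pairs, key=lambda p: p[1], reverse=True):
            if len(self.slots) >= self.capacity:
                break
            self.slots[row_id] = len(self.slots)

    def access(self, row_id) -> int:
        # Return the slot holding row_id, evicting the LRU row on a miss.
        if row_id in self.slots:
            self.slots.move_to_end(row_id)  # mark as most recently used
            return self.slots[row_id]
        if len(self.slots) >= self.capacity:
            _victim, slot = self.slots.popitem(last=False)  # LRU victim's slot is reused
        else:
            slot = len(self.slots)
        self.slots[row_id] = slot
        return slot
```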
YuliangLiu0306
8b7d6bd5be
[autoparallel] add more sharding strategies to conv ( #1487 )
2022-08-26 11:17:56 +08:00
Boyuan Yao
de1e716dc4
[fx] Add activation checkpoint solver rotor ( #1496 )
...
* [fx] fix defining ckpt functions inside forward
* [fx] Modify activation checkpoint codegen and add ColoGraphModule
* [fx] some modification
* some modifications
* some modifications
* some modifications
* some modifications
* some code modifications
* [automatic_parallel] ckpt solver rotor
* [fx] add ckpt_solver_rotor
* [fx] modification
* code refactor
* code refactor
2022-08-26 10:34:21 +08:00
Super Daniel
09c023bee2
[fx] add more op patches for profiler and error messages for unsupported ops. ( #1495 )
...
* [fx] modify the calculation of node_size in MetaInfoProp for activation checkpointing usages
* [fx] modify the calculation of node_size in MetaInfoProp for activation checkpointing usages
* [fx] modify the calculation of node_size in MetaInfoProp for activation checkpointing usages
* [fx] merge development into main (#1 )
* [fx] activation checkpointing using Chen strategies.
* [fx] add test for ckpt_solver_chen
* [fx] add vanilla activation checkpoint search with test on resnet and densenet
* [fx] add a namespace code for solver_chen.
* [fx] fix the false interpretation of algorithm 3 in https://arxiv.org/abs/1604.06174 .
* [fx] fix lowercase naming conventions.
* [fx] simplify test for ckpt.
* [fx] add rules to linearize computation graphs for searching. (#2 )
* [fx] modify the calculation of node_size in MetaInfoProp for activation checkpointing usages
* [fx] modify the calculation of node_size in MetaInfoProp for activation checkpointing usages
* [fx] modify the calculation of node_size in MetaInfoProp for activation checkpointing usages
* [fx] merge development into main (#1 )
* [fx] activation checkpointing using Chen strategies.
* [fx] add test for ckpt_solver_chen
* [fx] add vanilla activation checkpoint search with test on resnet and densenet
* [fx] add a namespace code for solver_chen.
* [fx] fix the false interpretation of algorithm 3 in https://arxiv.org/abs/1604.06174 .
* [fx] fix lowercase naming conventions.
* [fx] simplify test for ckpt.
* [fx] fix test and algorithm bugs in activation checkpointing.
* [fx] polish ckpt_test.
* [fx] add rules to linearize computation graphs for searching.
* [fx] remove chen_sqrt for sake of simplicity
* [fx] remove chen_sqrt for sake of simplicity
* [fx] remove chen_sqrt for sake of simplicity
* [fx] remove chen_sqrt for sake of simplicity
* [fx] fix inconsistencies.
* [fx] fix MetaInfoProp.
* [fx] fix MetaInfoProp.
* [fx] consider MetaInfoProp for inplace operands.
* [fx] consider MetaInfoProp for inplace operands.
* [fx] consider MetaInfoProp for inplace operands.
* [fx] consider MetaInfoProp for inplace operands.
* [fx] consider MetaInfoProp for inplace operands.
* [fx] add profiler for fx nodes.
* [fx] add profiler for fx nodes.
* [fx] add profiler for fx nodes.
* [fx] add profiler for fx nodes.
* [fx] add profiler for fx nodes.
* [fx] add profiler for fx nodes.
* [fx] add profiler for fx nodes.
* [fx] fix error in tests.
* [fx] unfix bug.
* [fx] unfix bug.
* [fx] patch more modules and functions.
* [fx] change name of utils.py to profiler.py
* [fx] add profiler for rnn.
* [fx] add profiler for rnn.
* [fx] polish and add more patch for profiler.
* [fx] polish and add more patch for profiler.
2022-08-25 23:11:13 +08:00
YuliangLiu0306
413c053453
[autoparallel] add cost graph class ( #1481 )
...
* [autoparallel] add cost graph class
* polish code
2022-08-25 17:19:59 +08:00
YuliangLiu0306
4b03c25f85
[tensor] add 1D device mesh ( #1492 )
2022-08-25 16:48:12 +08:00
CsRic
b8d0e39eaf
[FAW] LFU cache for the FAW
2022-08-25 13:08:46 +08:00
Kirigaya Kazuto
9145aef2b4
[pipeline/rpc] implement distributed optimizer | test with assert_close ( #1486 )
...
* support p2p communication with any type of object | pass test
* reconstruct pipeline schedule with p2p_v2.py (support communication with List[Any]) | pass test
* [engine/schedule] use p2p_v2 to reconstruct pipeline_schedule
* [pipeline/rpc] implement a demo for PP with cuda rpc framework
* [pipeline/rpc] support interleaving | fix checkpoint bug | change logic when dispatching data in work_list to ensure steady 1F1B
* [pipeline/rpc] implement distributed optimizer | test with assert_close
* [pipeline/rpc] implement distributed optimizer | test with assert_close
2022-08-25 10:49:01 +08:00
Frank Lee
3da68d6b1b
[fx] fixed adaptive pooling size concatenation error ( #1489 )
2022-08-25 09:05:07 +08:00
Jiarui Fang
cde7b8a5b8
[FAW] init an LFU implementation for FAW ( #1488 )
2022-08-24 17:37:22 +08:00
Super Daniel
32efe8e740
[fx] add profiler for fx nodes. ( #1480 )
...
* [fx] modify the calculation of node_size in MetaInfoProp for activation checkpointing usages
* [fx] modify the calculation of node_size in MetaInfoProp for activation checkpointing usages
* [fx] modify the calculation of node_size in MetaInfoProp for activation checkpointing usages
* [fx] merge development into main (#1 )
* [fx] activation checkpointing using Chen strategies.
* [fx] add test for ckpt_solver_chen
* [fx] add vanilla activation checkpoint search with test on resnet and densenet
* [fx] add a namespace code for solver_chen.
* [fx] fix the false interpretation of algorithm 3 in https://arxiv.org/abs/1604.06174 .
* [fx] fix lowercase naming conventions.
* [fx] simplify test for ckpt.
* [fx] add rules to linearize computation graphs for searching. (#2 )
* [fx] modify the calculation of node_size in MetaInfoProp for activation checkpointing usages
* [fx] modify the calculation of node_size in MetaInfoProp for activation checkpointing usages
* [fx] modify the calculation of node_size in MetaInfoProp for activation checkpointing usages
* [fx] merge development into main (#1 )
* [fx] activation checkpointing using Chen strategies.
* [fx] add test for ckpt_solver_chen
* [fx] add vanilla activation checkpoint search with test on resnet and densenet
* [fx] add a namespace code for solver_chen.
* [fx] fix the false interpretation of algorithm 3 in https://arxiv.org/abs/1604.06174 .
* [fx] fix lowercase naming conventions.
* [fx] simplify test for ckpt.
* [fx] fix test and algorithm bugs in activation checkpointing.
* [fx] polish ckpt_test.
* [fx] add rules to linearize computation graphs for searching.
* [fx] remove chen_sqrt for sake of simplicity
* [fx] remove chen_sqrt for sake of simplicity
* [fx] remove chen_sqrt for sake of simplicity
* [fx] remove chen_sqrt for sake of simplicity
* [fx] fix inconsistencies.
* [fx] fix MetaInfoProp.
* [fx] fix MetaInfoProp.
* [fx] consider MetaInfoProp for inplace operands.
* [fx] consider MetaInfoProp for inplace operands.
* [fx] consider MetaInfoProp for inplace operands.
* [fx] consider MetaInfoProp for inplace operands.
* [fx] consider MetaInfoProp for inplace operands.
* [fx] add profiler for fx nodes.
* [fx] add profiler for fx nodes.
* [fx] add profiler for fx nodes.
* [fx] add profiler for fx nodes.
* [fx] add profiler for fx nodes.
* [fx] add profiler for fx nodes.
* [fx] add profiler for fx nodes.
* [fx] fix error in tests.
* [fx] unfix bug.
* [fx] unfix bug.
2022-08-24 16:22:44 +08:00
Frank Lee
d39e11dffb
[autoparallel] added namespace constraints ( #1490 )
2022-08-24 15:44:07 +08:00
Kirigaya Kazuto
a6c8749198
[pipeline/rpc] support interleaving | fix checkpoint bug | change logic when dispatching data in work_list to ensure steady 1F1B ( #1483 )
...
* support p2p communication with any type of object | pass test
* reconstruct pipeline schedule with p2p_v2.py (support communication with List[Any]) | pass test
* [engine/schedule] use p2p_v2 to reconstruct pipeline_schedule
* [pipeline/rpc] implement a demo for PP with cuda rpc framework
* [pipeline/rpc] support interleaving | fix checkpoint bug | change logic when dispatching data in work_list to ensure steady 1F1B
2022-08-24 11:19:46 +08:00
Geng Zhang
0aad53c62b
[FCE] update interface for frequency statistics in FreqCacheEmbedding ( #1462 )
2022-08-23 17:38:24 +08:00
Frank Lee
ede326298b
[autoparallel] integrate auto parallel with torch fx ( #1479 )
2022-08-23 14:23:08 +08:00
Boyuan Yao
1f2e547f7a
[fx] Fix ckpt functions' definitions in forward ( #1476 )
...
* [fx] fix defining ckpt functions inside forward
* [fx] Modify activation checkpoint codegen and add ColoGraphModule
* [fx] some modification
* some modifications
* some modifications
* some modifications
* some modifications
* some code modifications
2022-08-22 16:59:54 +08:00
Kirigaya Kazuto
bb5f5289e0
[pipeline/rpc] implement a demo for PP with cuda rpc framework ( #1470 )
...
* support p2p communication with any type of object | pass test
* reconstruct pipeline schedule with p2p_v2.py (support communication with List[Any]) | pass test
* [engine/schedule] use p2p_v2 to reconstruct pipeline_schedule
* [pipeline/rpc] implement a demo for PP with cuda rpc framework
* Delete p2p_v2.py
* Delete _pipeline_schedule_v2.py
* Delete test_object_list_p2p_v2.py
* Delete test_boardcast_send_recv_v2.py
* Delete test_cifar_with_data_pipeline_tensor_v2.py
2022-08-22 10:50:51 +08:00
Frank Lee
628c7e3fc8
[autoparallel] added dot handler ( #1475 )
2022-08-22 10:32:17 +08:00
Frank Lee
9dae9bb2bc
[autoparallel] introduced baseclass for op handler and reduced code redundancy ( #1471 )
...
* [autoparallel] introduced baseclass for op handler and reduced code redundancy
* polish code
2022-08-19 16:51:38 +08:00
Frank Lee
3a54e1c9b7
[autoparallel] standardize the code structure ( #1469 )
2022-08-19 15:51:54 +08:00
YuliangLiu0306
26a37b5cd5
[autoparallel] Add conv handler to generate strategies and costs info for conv ( #1467 )
2022-08-19 14:57:23 +08:00
Jiarui Fang
1b491ad7de
[doc] update docstring in ProcessGroup ( #1468 )
2022-08-19 13:41:57 +08:00
YuliangLiu0306
b73fb7a077
[tensor] support runtime ShardingSpec apply ( #1453 )
...
* [tensor] support runtime ShardingSpec apply
* polish code
* polish code
2022-08-19 13:39:51 +08:00
Super Daniel
bbc58d881b
[fx] fix MetaInfoProp for incorrect calculations and add detection for inplace ops. ( #1466 )
...
* [fx] modify the calculation of node_size in MetaInfoProp for activation checkpointing usages
* [fx] modify the calculation of node_size in MetaInfoProp for activation checkpointing usages
* [fx] modify the calculation of node_size in MetaInfoProp for activation checkpointing usages
* [fx] merge development into main (#1 )
* [fx] activation checkpointing using Chen strategies.
* [fx] add test for ckpt_solver_chen
* [fx] add vanilla activation checkpoint search with test on resnet and densenet
* [fx] add a namespace code for solver_chen.
* [fx] fix the false interpretation of algorithm 3 in https://arxiv.org/abs/1604.06174 .
* [fx] fix lowercase naming conventions.
* [fx] simplify test for ckpt.
* [fx] add rules to linearize computation graphs for searching. (#2 )
* [fx] modify the calculation of node_size in MetaInfoProp for activation checkpointing usages
* [fx] modify the calculation of node_size in MetaInfoProp for activation checkpointing usages
* [fx] modify the calculation of node_size in MetaInfoProp for activation checkpointing usages
* [fx] merge development into main (#1 )
* [fx] activation checkpointing using Chen strategies.
* [fx] add test for ckpt_solver_chen
* [fx] add vanilla activation checkpoint search with test on resnet and densenet
* [fx] add a namespace code for solver_chen.
* [fx] fix the false interpretation of algorithm 3 in https://arxiv.org/abs/1604.06174 .
* [fx] fix lowercase naming conventions.
* [fx] simplify test for ckpt.
* [fx] fix test and algorithm bugs in activation checkpointing.
* [fx] polish ckpt_test.
* [fx] add rules to linearize computation graphs for searching.
* [fx] remove chen_sqrt for sake of simplicity
* [fx] remove chen_sqrt for sake of simplicity
* [fx] remove chen_sqrt for sake of simplicity
* [fx] remove chen_sqrt for sake of simplicity
* [fx] fix inconsistencies.
* [fx] fix MetaInfoProp.
* [fx] fix MetaInfoProp.
* [fx] consider MetaInfoProp for inplace operands.
* [fx] consider MetaInfoProp for inplace operands.
* [fx] consider MetaInfoProp for inplace operands.
* [fx] consider MetaInfoProp for inplace operands.
* [fx] consider MetaInfoProp for inplace operands.
2022-08-18 11:27:06 +08:00
Super Daniel
e7383f578b
[fx] add rules to linearize computation graphs for searching. ( #1461 )
...
* [fx] polish ckpt_test.
* [fx] add rules to linearize computation graphs for searching.
* [fx] remove chen_sqrt for sake of simplicity
* [fx] fix inconsistencies.
2022-08-17 14:47:12 +08:00
Boyuan Yao
092b9c8f49
[fx] Add use_reentrant=False to checkpoint in codegen ( #1463 )
...
* [utils] Add use_reentrant=False into colossalai checkpoint
* [utils] add some annotation in utils.activation_checkpoint
* [test] add reset_seed at the beginning of tests in test_activation_checkpointing.py
* [test] modify test_activation_checkpoint.py
* [test] modify test for reentrant=False
* [fx] Add use_reentrant=False to checkpoint in codegen
2022-08-17 10:34:50 +08:00
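Aside: the two entries above thread `use_reentrant=False` through the generated checkpoint calls. A minimal sketch of the underlying PyTorch API (real, available since torch 1.11) that such generated code boils down to:

```python
# Minimal sketch: activation checkpointing with the non-reentrant backend.
# The layer is recomputed during backward instead of storing its activations.
import torch
from torch.utils.checkpoint import checkpoint

layer = torch.nn.Linear(128, 128)
x = torch.randn(4, 128, requires_grad=True)

y = checkpoint(layer, x, use_reentrant=False)  # non-reentrant variant
y.sum().backward()
```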
Boyuan Yao
47fd8e4a02
[utils] Add use_reentrant=False in utils.activation_checkpoint ( #1460 )
...
* [utils] Add use_reentrant=False into colossalai checkpoint
* [utils] add some annotation in utils.activation_checkpoint
* [test] add reset_seed at the beginning of tests in test_activation_checkpointing.py
* [test] modify test_activation_checkpoint.py
* [test] modify test for reentrant=False
2022-08-16 15:39:20 +08:00
Jiarui Fang
36824a304c
[Doc] add more doc for ColoTensor. ( #1458 )
2022-08-16 10:38:41 +08:00
Jiarui Fang
a1476ea882
[NFC] polish doc style for ColoTensor ( #1457 )
2022-08-16 09:21:05 +08:00
Super Daniel
0dbd61c29b
[fx] fix test and algorithm bugs in activation checkpointing. ( #1451 )
...
* [fx] modify the calculation of node_size in MetaInfoProp for activation checkpointing usages
* [fx] modify the calculation of node_size in MetaInfoProp for activation checkpointing usages
* [fx] modify the calculation of node_size in MetaInfoProp for activation checkpointing usages
* [fx] merge development into main (#1 )
* [fx] activation checkpointing using Chen strategies.
* [fx] add test for ckpt_solver_chen
* [fx] add vanilla activation checkpoint search with test on resnet and densenet
* [fx] add a namespace code for solver_chen.
* [fx] fix the false interpretation of algorithm 3 in https://arxiv.org/abs/1604.06174 .
* [fx] fix lowercase naming conventions.
* [fx] simplify test for ckpt.
* [fx] fix test and algorithm bugs in activation checkpointing.
* mend
[fx] fix test and algorithm bugs in activation checkpointing.
* mend
[fx] fix test and algorithm bugs in activation checkpointing.
* mend
[fx] fix test and algorithm bugs in activation checkpointing.
* mend
[fx] fix test and algorithm bugs in activation checkpointing.
* [fx] polish ckpt_test.
* [fx] polish ckpt_test.
* [fx] polish ckpt_test.
2022-08-15 19:09:19 +08:00
Jiarui Fang
b1553fdf96
[NFC] global vars should be upper case ( #1456 )
2022-08-15 09:50:29 +08:00
ver217
367c615818
fix nvme docstring ( #1450 )
2022-08-12 18:01:02 +08:00
Geng Zhang
9f3eed66eb
[FAW] reorganize the inheritance struct of FreqCacheEmbedding ( #1448 )
2022-08-12 15:55:46 +08:00
Frank Lee
5a52e21fe3
[test] fixed the activation codegen test ( #1447 )
...
* [test] fixed the activation codegen test
* polish code
2022-08-12 14:52:31 +08:00
YuliangLiu0306
0f3042363c
[tensor] shape consistency generates transform path and communication cost ( #1435 )
...
* [tensor] shape consistency outputs transform path and communication cost
* polish code
2022-08-12 14:02:32 +08:00
Boyuan Yao
5774fe0270
[fx] Use colossalai checkpoint and add offload recognition in codegen ( #1439 )
...
* [fx] Use colossalai.utils.checkpoint to replace torch.utils.checkpoint for offload activation and add offload annotation recognition in codegen
* [fx] Use colossalai.utils.checkpoint to replace torch.utils.checkpoint for offload activation and add offload annotation recognition in codegen
* Modification of test and add TODO in codegen
* [fx] Modification of colossal ckpt usage
* [fx] add gpc.destroy() to test_codegen
2022-08-12 12:23:30 +08:00
Kirigaya Kazuto
e9460b45c8
[engine/schedule] use p2p_v2 to reconstruct pipeline_schedule ( #1408 )
...
* support p2p communication with any type of object | pass test
* reconstruct pipeline schedule with p2p_v2.py (support communication with List[Any]) | pass test
* [communication] add p2p_v2.py to support communication with List[Any]
* Delete _pipeline_schedule_v2.py
* Delete test_cifar_with_data_pipeline_tensor_v2.py
* [engine/schedule] use p2p_v2 to reconstruct pipeline_schedule
* [engine/schedule] use p2p_v2 to reconstruct pipeline_schedule
* [engine/schedule] use p2p_v2 to reconstruct pipeline_schedule
* [engine/schedule] use p2p_v2 to reconstruct pipeline_schedule
* [engine/schedule] use p2p_v2 to reconstruct pipeline_schedule
* Delete p2p_v2.py
* Delete test_boardcast_send_recv_v2.py
* Delete test_object_list_p2p_v2.py
* [engine/schedule] use p2p_v2 to reconstruct pipeline_schedule
* [communication] remove print code
* [communication] remove print code
* [engine/schedule] shorten the running time of testing file to prevent cancelling in CI
2022-08-12 11:33:26 +08:00
Frank Lee
ae1b58cd16
[tensor] added linear implementation for the new sharding spec ( #1416 )
...
* [tensor] added linear implementation for the new sharding spec
* polish code
2022-08-12 11:33:09 +08:00
Super Daniel
d40a9392ba
[fx] fix the false interpretation of algorithm 3 in https://arxiv.org/abs/1604.06174 . ( #1446 )
...
* [fx] modify the calculation of node_size in MetaInfoProp for activation checkpointing usages
* [fx] modify the calculation of node_size in MetaInfoProp for activation checkpointing usages
* [fx] modify the calculation of node_size in MetaInfoProp for activation checkpointing usages
* [fx] activation checkpointing using Chen strategies.
* [fx] add test for ckpt_solver_chen
* mend
* [fx] add vanilla activation checkpoint search with test on resnet and densenet
* [fx] add vanilla activation checkpoint search with test on resnet and densenet
* [fx] add a namespace code for solver_chen.
* [fx] fix the false interpretation of algorithm 3 in https://arxiv.org/abs/1604.06174 .
* [fx] fix lowercase naming conventions.
2022-08-12 11:28:50 +08:00
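Aside: the paper cited above (arXiv:1604.06174) is the source of the sqrt(n) strategy these solver commits build on: split an n-layer network into roughly sqrt(n) segments, keep only segment-boundary activations, and recompute inside a segment during backward, trading roughly O(sqrt(n)) memory for one extra forward. A hedged sketch using PyTorch's stock `checkpoint_sequential`, not the solver code from these PRs:

```python
# Sketch of sqrt(n) activation checkpointing on a sequential model.
import math
import torch
from torch.utils.checkpoint import checkpoint_sequential

n_layers = 16
model = torch.nn.Sequential(*[torch.nn.Linear(64, 64) for _ in range(n_layers)])
segments = max(1, int(math.sqrt(n_layers)))  # ~sqrt(n) checkpoint segments

x = torch.randn(8, 64, requires_grad=True)
out = checkpoint_sequential(model, segments, x)  # recompute within segments
out.sum().backward()
```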
ver217
821c6172e2
[utils] Impl clip_grad_norm for ColoTensor and ZeroOptimizer ( #1442 )
2022-08-11 22:58:58 +08:00
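Aside: clipping by global gradient norm under ZeRO needs an all-reduce, since each rank only holds a shard of the gradients. A hedged sketch of the usual recipe; `clip_grad_norm_sharded` is a hypothetical helper (it assumes an initialized `torch.distributed` process group), not the ColossalAI implementation:

```python
import torch
import torch.distributed as dist

def clip_grad_norm_sharded(local_grads, max_norm: float) -> float:
    # Sum of squares over the local shard of gradients ...
    sq = torch.zeros(1, device=local_grads[0].device)
    for g in local_grads:
        sq += g.float().pow(2).sum()
    # ... reduced across ranks to obtain the global L2 norm.
    dist.all_reduce(sq)
    total_norm = sq.sqrt().item()
    clip_coef = max_norm / (total_norm + 1e-6)
    if clip_coef < 1.0:
        for g in local_grads:
            g.mul_(clip_coef)  # scale every local shard by the same factor
    return total_norm
```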
HELSON
b80340168e
[zero] add chunk_managerV2 for all-gather chunk ( #1441 )
2022-08-11 19:17:24 +08:00
Super Daniel
3b26516c69
[fx] add vanilla activation checkpoint search with test on resnet and densenet ( #1433 )
...
* [fx] activation checkpointing using Chen strategies.
* [fx] add test for ckpt_solver_chen
* [fx] add vanilla activation checkpoint search with test on resnet and densenet
* [fx] add vanilla activation checkpoint search with test on resnet and densenet
* [fx] add a namespace code for solver_chen.
2022-08-11 15:46:39 +08:00
Jiarui Fang
30b4dd17c0
[FAW] export FAW in _ops ( #1438 )
2022-08-11 13:43:24 +08:00
HELSON
9056677b13
[zero] add chunk size searching algorithm for parameters in different groups ( #1436 )
2022-08-11 13:32:19 +08:00
Jiarui Fang
c9427a323f
hotfix #1434 ( #1437 )
2022-08-11 13:14:25 +08:00
HELSON
039b7ed3bc
[polish] add update directory in gemini; rename AgChunk to ChunkV2 ( #1432 )
2022-08-10 16:40:29 +08:00
Super Daniel
f20cb4e893
[fx] modify the calculation of node_size in MetaInfoProp for activation checkpointing usages ( #1425 )
...
* [fx] modify the calculation of node_size in MetaInfoProp for activation checkpointing usages
* [fx] modify the calculation of node_size in MetaInfoProp for activation checkpointing usages
* [fx] modify the calculation of node_size in MetaInfoProp for activation checkpointing usages
2022-08-10 16:36:35 +08:00
Jiarui Fang
10b3df65c8
[FAW] move coloparam setting in test code. ( #1429 )
2022-08-10 14:31:53 +08:00
Jiarui Fang
cb98cf5558
[FAW] parallel FreqAwareEmbedding ( #1424 )
2022-08-10 13:44:30 +08:00
HELSON
0d212183c4
[zero] add has_inf_or_nan in AgChunk; enhance the unit test of AgChunk ( #1426 )
2022-08-10 11:37:28 +08:00
YuliangLiu0306
33f0744d51
[tensor] add shape consistency feature to support auto spec transform ( #1418 )
...
* [tensor] add shape consistency feature to support auto sharding spec transform.
* [tensor] remove unused argument in simulator, add doc string for target pair.
2022-08-10 11:29:17 +08:00
HELSON
4fb3c52cf0
[zero] add unit test for AgChunk's append, close, access ( #1423 )
2022-08-09 18:03:10 +08:00
HELSON
c577ed016e
[zero] add AgChunk ( #1417 )
2022-08-09 16:39:48 +08:00
Jiarui Fang
d209aff684
Add FreqAwareEmbeddingBag ( #1421 )
2022-08-09 16:26:12 +08:00
ver217
6df3e19be9
[hotfix] zero optim prevents calling inner optim.zero_grad ( #1422 )
2022-08-09 16:08:12 +08:00
Jiarui Fang
504419d261
[FAW] add cache manager for the cached embedding ( #1419 )
2022-08-09 15:17:17 +08:00
Kirigaya Kazuto
44fd3c83ab
[communication] add p2p_v2.py to support communication with List[Any] ( #1407 )
...
* support p2p communication with any type of object | pass test
* reconstruct pipeline schedule with p2p_v2.py (support communication with List[Any]) | pass test
* [communication] add p2p_v2.py to support communication with List[Any]
* Delete _pipeline_schedule_v2.py
* Delete test_cifar_with_data_pipeline_tensor_v2.py
* [engine/schedule] use p2p_v2 to reconstruct pipeline_schedule
* [communication] remove print code
* [communication] remove print code
2022-08-09 11:40:04 +08:00
YuliangLiu0306
7c96055c68
[tensor] build sharding spec to replace distspec in the future. ( #1405 )
2022-08-08 11:15:57 +08:00
ver217
12b4887097
[hotfix] fix CPUAdam kernel nullptr ( #1410 )
2022-08-05 19:45:45 +08:00
YuliangLiu0306
0442f940f0
[device] add DeviceMesh class to support logical device layout ( #1394 )
...
* [device] add DeviceMesh class to support logical device layout
* polish code
* add doc string
2022-08-02 19:23:48 +08:00
ver217
04c9a86af8
[zero] ZeroDDP supports controlling outputs' dtype ( #1399 )
2022-08-02 17:49:11 +08:00
HELSON
4e98e938ce
[zero] alleviate memory usage in ZeroDDP state_dict ( #1398 )
2022-08-02 15:49:13 +08:00
ver217
56b8863b87
[zero] chunk manager allows filtering extra-large params ( #1393 )
2022-08-02 10:40:27 +08:00
Frank Lee
7d6293927f
[fx] patched torch.max and data movement operator ( #1391 )
...
* [fx] patched torch.max and data movement operator
* polish code
2022-08-01 15:31:50 +08:00
Frank Lee
89e60d1505
[fx] fixed indentation error in checkpointing codegen ( #1385 )
2022-07-30 00:27:12 +08:00
HELSON
c7221cb2d4
[hotfix] adapt ProcessGroup and Optimizer to ColoTensor ( #1388 )
2022-07-29 19:33:24 +08:00
Frank Lee
ad678921db
[fx] patched torch.full for huggingface opt ( #1386 )
2022-07-29 17:56:28 +08:00
HELSON
527758b2ae
[hotfix] fix a running error in test_colo_checkpoint.py ( #1387 )
2022-07-29 15:58:06 +08:00
Jiarui Fang
f792507ff3
[chunk] add PG check for tensor appending ( #1383 )
2022-07-29 13:27:05 +08:00
ver217
8dced41ad0
[zero] zero optim state_dict takes only_rank_0 ( #1384 )
...
* zero optim state_dict takes only_rank_0
* fix unit test
2022-07-29 13:22:50 +08:00
YuliangLiu0306
df54481473
[hotfix] fix some bugs during gpt2 testing ( #1379 )
2022-07-28 17:21:07 +08:00
ver217
828b9e5e0d
[hotfix] fix zero optim save/load state dict ( #1381 )
2022-07-28 17:19:39 +08:00
HELSON
b6fd165f66
[checkpoint] add kwargs for load_state_dict ( #1374 )
2022-07-28 15:56:52 +08:00
ver217
83328329dd
[hotfix] fix zero ddp buffer cast ( #1376 )
...
* fix zero ddp buffer cast
* fix zero ddp ignore params
2022-07-28 10:54:44 +08:00
ver217
5d5031e946
fix zero ddp state dict ( #1378 )
2022-07-28 09:31:42 +08:00
Frank Lee
0c1a16ea5b
[util] standard checkpoint function naming ( #1377 )
2022-07-28 09:29:30 +08:00
YuliangLiu0306
52bc2dc271
[fx] update split module pass and add customized policy ( #1373 )
...
* [CLI] add CLI launcher
* Revert "[CLI] add CLI launcher"
This reverts commit df7e6506d4.
* [fx] update split module pass and add customized policy
2022-07-27 13:40:54 +08:00
Super Daniel
be229217ce
[fx] add torchaudio test ( #1369 )
...
* [fx] add torchaudio test
* [fx] add torchaudio test
* [fx] add torchaudio test
* [fx] add torchaudio test
* [fx] add torchaudio test
* [fx] add torchaudio test
* [fx] add torchaudio test
* [fx] add torchaudio test and test patches
* Delete ~
* [fx] add patches and patches test
* [fx] add patches and patches test
* [fx] fix patches
* [fx] fix rnn patches
* [fx] fix rnn patches
* [fx] fix rnn patches
* [fx] fix rnn patches
* [fx] merge upstream
* [fx] fix import errors
2022-07-27 11:03:14 +08:00
ver217
c415240db6
[nvme] CPUAdam and HybridAdam support NVMe offload ( #1360 )
...
* impl nvme optimizer
* update cpu adam
* add unit test
* update hybrid adam
* update docstr
* add TODOs
* update CI
* fix CI
* fix CI
* fix CI path
* fix CI path
* fix CI path
* fix install tensornvme
* fix CI
* fix CI path
* fix CI env variables
* test CI
* test CI
* fix CI
* fix nvme optim __del__
* fix adam __del__
* fix nvme optim
* fix CI env variables
* fix nvme optim import
* test CI
* test CI
* fix CI
2022-07-26 17:25:24 +08:00
HELSON
8463290642
[checkpoint] use args, kwargs in save_checkpoint, load_checkpoint ( #1368 )
2022-07-26 14:41:53 +08:00
YuliangLiu0306
5542816690
[fx] add gpt2 passes for pipeline performance test ( #1366 )
...
* [CLI] add CLI launcher
* Revert "[CLI] add CLI launcher"
This reverts commit df7e6506d4.
* [fx] add gpt2 passes for pipeline performance test
2022-07-26 14:31:00 +08:00
HELSON
87775a0682
[colotensor] use cpu memory to store state_dict ( #1367 )
2022-07-26 14:13:38 +08:00
HELSON
943a96323e
[hotfix] fix no optimizer in save/load ( #1363 )
2022-07-26 10:53:53 +08:00
Frank Lee
cd063ac37f
[fx] added activation checkpoint codegen support for torch < 1.12 ( #1359 )
2022-07-25 23:35:31 +08:00
Frank Lee
644582eee9
[fx] added activation checkpoint codegen ( #1355 )
2022-07-25 09:39:10 +08:00
ver217
6b43c789fd
fix zero optim backward_by_grad and save/load ( #1353 )
2022-07-21 16:43:58 +08:00
ver217
d068af81a3
[doc] update rst and docstring ( #1351 )
...
* update rst
* add zero docstr
* fix docstr
* remove fx.tracer.meta_patch
* fix docstr
* fix docstr
* update fx rst
* fix fx docstr
* remove useless rst
2022-07-21 15:54:53 +08:00
Frank Lee
274c1a3b5f
[fx] fixed apex normalization patch exception ( #1352 )
2022-07-21 15:29:11 +08:00
ver217
ce470ba37e
[checkpoint] sharded optim save/load grad scaler ( #1350 )
2022-07-21 15:21:21 +08:00
Frank Lee
05fae1fd56
[fx] added activation checkpointing annotation ( #1349 )
...
* [fx] added activation checkpointing annotation
* polish code
* polish code
2022-07-21 11:14:28 +08:00
YuliangLiu0306
051592c64e
[fx] update MetaInfoProp pass to process more complex node.meta ( #1344 )
...
* [CLI] add CLI launcher
* Revert "[CLI] add CLI launcher"
This reverts commit df7e6506d4.
* [fx] update MetaInfoProp pass to process more complex node.meta
2022-07-21 10:57:52 +08:00
HELSON
7a8702c06d
[colotensor] add Tensor.view op and its unit test ( #1343 )
...
[colotensor] add megatron initialization for gpt2
2022-07-21 10:53:15 +08:00
YuliangLiu0306
942c8cd1fb
[fx] refactor tracer to trace complete graph ( #1342 )
...
* [CLI] add CLI launcher
* Revert "[CLI] add CLI launcher"
This reverts commit df7e6506d4.
* [fx] refactor tracer to trace complete graph
* add comments and solve conflicts.
2022-07-20 11:20:38 +08:00
Frank Lee
2cc1175c76
[fx] tested the complete workflow for auto-parallel ( #1336 )
...
* [fx] tested the complete workflow for auto-parallel
* polish code
* polish code
* polish code
2022-07-20 10:45:17 +08:00
YuliangLiu0306
4631fef8a0
[fx] refactor tracer ( #1335 )
2022-07-19 15:50:42 +08:00
HELSON
f92c100ddd
[checkpoint] use gather_tensor in checkpoint and update its unit test ( #1339 )
2022-07-19 14:15:28 +08:00
ver217
0c51ff2c13
[hotfix] ZeroDDP use new process group ( #1333 )
...
* process group supports getting ranks in group
* chunk mgr receives a process group
* update unit test
* fix unit tests
2022-07-18 14:14:52 +08:00
Frank Lee
75abc75c15
[fx] fixed compatibility issue with torch 1.10 ( #1331 )
2022-07-18 11:41:27 +08:00
ver217
7a05367101
[hotfix] shared model returns cpu state_dict ( #1328 )
2022-07-15 22:11:37 +08:00
Frank Lee
b2475d8c5c
[fx] fixed unit tests for torch 1.12 ( #1327 )
2022-07-15 18:22:15 +08:00
HELSON
d49708ae43
[hotfix] fix ddp for unit test test_gpt2 ( #1326 )
2022-07-15 18:19:52 +08:00
Frank Lee
250be4d31e
[utils] integrated colotensor with lazy init context ( #1324 )
...
* [utils] integrated colotensor with lazy init context
* polish code
* polish code
* polish code
2022-07-15 17:47:12 +08:00
YuliangLiu0306
e8acf55e8b
[fx] add balanced policy v2 ( #1251 )
...
* [CLI] add CLI launcher
* Revert "[CLI] add CLI launcher"
This reverts commit df7e6506d4.
* [fx] add balanced policy v2
* add unittest
2022-07-15 14:54:26 +08:00
XYE
ca2d3f284f
[fx] Add unit test and fix bugs for transform_mlp_pass ( #1299 )
...
* add test and fix bugs
* add functions back
* add comments
2022-07-15 14:37:58 +08:00
HELSON
1b41686461
[hotfix] fix unit test test_module_spec ( #1321 )
2022-07-15 14:02:32 +08:00
Jiarui Fang
9e4c6449b0
[checkpoint] add ColoOptimizer checkpointing ( #1316 )
2022-07-15 09:52:55 +08:00
ver217
7c70bfbefa
[hotfix] fix PipelineSharedModuleGradientHandler ( #1314 )
2022-07-14 17:31:13 +08:00
Jiarui Fang
85f933b58b
[Optimizer] Remove useless ColoOptimizer ( #1312 )
2022-07-14 16:57:48 +08:00
Jiarui Fang
9f10524313
[Optimizer] polish the init method of ColoOptimizer ( #1310 )
2022-07-14 16:37:33 +08:00
Jiarui Fang
3ef3791a3b
[checkpoint] add test for bert and hotfix save bugs ( #1297 )
2022-07-14 15:38:18 +08:00
Frank Lee
4f4d8c3656
[fx] added apex normalization to patched modules ( #1300 )
...
* [fx] added apex normalization to patched modules
* remove unused imports
2022-07-14 14:24:13 +08:00
Jiarui Fang
4165eabb1e
[hotfix] remove potential circular import ( #1307 )
...
* make it faster
* [hotfix] remove circular import
2022-07-14 13:44:26 +08:00
HELSON
260a55804a
[hotfix] fix shape error in backward when using ColoTensor ( #1298 )
2022-07-13 23:06:12 +08:00
runluo
f83c4d6597
[NFC] polish colossalai/nn/layer/wrapper/pipeline_wrapper.py code style ( #1303 )
2022-07-13 19:01:07 +08:00
binmakeswell
7696cead8d
Recover kernel files
2022-07-13 12:08:21 +08:00
XYE
e83b2ce853
[NFC] polish colossalai/nn/layer/vanilla/layers.py code style ( #1295 )
2022-07-13 12:08:21 +08:00
Liping233
1000a41fd5
[NFC] polish colossalai/nn/layer/vanilla/__init__.py code style ( #1293 )
2022-07-13 12:08:21 +08:00
Maruyama_Aya
87f679aeae
[NFC] polish colossalai/kernel/cuda_native/csrc/kernels/include/kernels.h code style ( #1291 )
2022-07-13 12:08:21 +08:00
Wangbo Zhao(黑色枷锁)
552667825b
[NFC] polish colossalai/nn/layer/parallel_1d/layers.py code style ( #1290 )
2022-07-13 12:08:21 +08:00
doubleHU
d6f5ef8860
[NFC] polish colossalai/kernel/cuda_native/csrc/kernels/transform_kernels.cu code style ( #1286 )
2022-07-13 12:08:21 +08:00
Ziheng Qin
6d6c01e94d
[NFC] polish colossalai/__init__.py code style ( #1285 )
2022-07-13 12:08:21 +08:00
Jiatong Han
38e3ccd1e9
[NFC] polish colossalai/nn/layer/parallel_sequence/layers.py code style ( #1280 )
...
Co-authored-by: JThh <jiatong.han@u.nus.edu>
2022-07-13 12:08:21 +08:00
Boyuan Yao
b414eaa5db
[NFC] polish colossalai/nn/optimizer/lamb.py code style ( #1275 )
2022-07-13 12:08:21 +08:00
yuxuan-lou
5f6ab35d25
Hotfix/format ( #1274 )
...
* [NFC] Polish colossalai/kernel/cuda_native/csrc/multi_tensor_lamb.cu code style. (#937 )
* [NFC] polish colossalai/kernel/cuda_native/csrc/kernels/include/cuda_util.h code style
* [NFC] polish colossalai/kernel/cuda_native/csrc/scaled_masked_softmax.cpp code style
Co-authored-by: BoxiangW <45734921+BoxiangW@users.noreply.github.com>
2022-07-13 12:08:21 +08:00
Super Daniel
52d145a342
[NFC] polish colossalai/nn/lr_scheduler/onecycle.py code style ( #1269 )
2022-07-13 12:08:21 +08:00
Geng Zhang
0e06f62160
[NFC] polish colossalai/nn/layer/parallel_sequence/_operation.py code style ( #1266 )
2022-07-13 12:08:21 +08:00
binmakeswell
c95e18cdb9
[NFC] polish colossalai/kernel/cuda_native/csrc/scaled_upper_triang_masked_softmax.h code style ( #1270 )
2022-07-13 12:08:21 +08:00
xyupeng
94bfd35184
[NFC] polish colossalai/builder/builder.py code style ( #1265 )
2022-07-13 12:08:21 +08:00
DouJS
db13f96333
[NFC] polish colossalai/kernel/cuda_native/csrc/multi_tensor_apply.cuh code style ( #1264 )
2022-07-13 12:08:21 +08:00
shenggan
5d7366b144
[NFC] polish colossalai/kernel/cuda_native/csrc/scaled_masked_softmax.h code style ( #1263 )
2022-07-13 12:08:21 +08:00
Zangwei Zheng
197a2c89e2
[NFC] polish colossalai/communication/collective.py ( #1262 )
2022-07-13 12:08:21 +08:00
ziyu huang
f1cafcc73a
[NFC] polish colossalai/kernel/cuda_native/csrc/kernels/dropout_kernels.cu code style ( #1261 )
...
Co-authored-by: Arsmart123 <202476410arsmart@gmail.com>
2022-07-13 12:08:21 +08:00
Sze-qq
f8b9aaef47
[NFC] polish colossalai/kernel/cuda_native/csrc/type_shim.h code style ( #1260 )
2022-07-13 12:08:21 +08:00
superhao1995
f660152c73
[NFC] polish colossalai/nn/layer/parallel_3d/_operation.py code style ( #1258 )
...
Co-authored-by: Research <research@soccf-snr3-017.comp.nus.edu.sg>
2022-07-13 12:08:21 +08:00
Thunderbeee
9738fb0f78
[NFC] polish colossalai/nn/lr_scheduler/__init__.py code style ( #1255 )
2022-07-13 12:08:21 +08:00
Kai Wang (Victor Kai)
50f2ad213f
[NFC] polish colossalai/engine/ophooks/utils.py code style ( #1256 )
2022-07-13 12:08:21 +08:00
Ofey Chan
2dd4d556fb
[NFC] polish colossalai/nn/init.py code style ( #1292 )
2022-07-13 10:51:55 +08:00
Jiarui Fang
556b9b7e1a
[hotfix] Dist Mgr gather torch version ( #1284 )
...
* make it faster
* [hotfix] torchvision fx tests
* [hotfix] rename duplicate-named test_gpt.py
* [hotfix] dist mgr torch version
2022-07-13 00:18:56 +08:00
HELSON
abba4d84e1
[hotfix] fix bert model test in unit tests ( #1272 )
2022-07-12 23:26:45 +08:00
ver217
7aadcbd070
hotfix colotensor _scan_for_pg_from_args ( #1276 )
2022-07-12 20:46:31 +08:00
oahzxl
0cf8e8e91c
[NFC] polish colossalai/nn/lr_scheduler/poly.py code style ( #1267 )
2022-07-12 18:18:14 +08:00
Jiarui Fang
c92f84fcdb
[tensor] distributed checkpointing for parameters ( #1240 )
2022-07-12 15:51:06 +08:00
Frank Lee
fb35460595
[fx] added ndim property to proxy ( #1253 )
2022-07-12 15:27:13 +08:00
Frank Lee
4a09fc0947
[fx] fixed tracing with apex-based T5 model ( #1252 )
...
* [fx] fixed tracing with apex-based T5 model
* polish code
* polish code
2022-07-12 15:19:25 +08:00
Frank Lee
7531c6271f
[fx] refactored the file structure of patched function and module ( #1238 )
...
* [fx] refactored the file structure of patched function and module
* polish code
2022-07-12 15:01:58 +08:00
YuliangLiu0306
17ed33350b
[hotfix] fix an assertion bug in base schedule. ( #1250 )
2022-07-12 14:20:02 +08:00
YuliangLiu0306
97d713855a
[fx] methods to get fx graph property. ( #1246 )
...
* [CLI] add CLI launcher
* Revert "[CLI] add CLI launcher"
This reverts commit df7e6506d4.
* manipulation
* [fx] add graph manipulation methods.
* [fx] methods to get fx graph property.
* add unit test
* add docstring to explain top node and leaf node in this context
2022-07-12 14:10:37 +08:00
YuliangLiu0306
30b4fc0eb0
[fx] add split module pass and unit test from pipeline passes ( #1242 )
...
* [CLI] add CLI launcher
* Revert "[CLI] add CLI launcher"
This reverts commit df7e6506d4.
* [fx] add split module pass and unit test from pipeline passes
* fix MNASNet bug
* polish
2022-07-12 13:45:01 +08:00
Jiarui Fang
1aad903c15
[tensor] redistribute among different process groups ( #1247 )
...
* make it faster
* [tensor] rename convert_to_dist -> redistribute
* [tensor] ShardSpec and ReplicaSpec
* [tensor] redistribute among diff pgs
* polish code
2022-07-12 10:24:05 +08:00
Jiarui Fang
9bcd2fd4af
[tensor] a shorter shard and replicate spec ( #1245 )
2022-07-11 15:51:48 +08:00
Jiarui Fang
2699dfbbfd
[rename] convert_to_dist -> redistribute ( #1243 )
2022-07-11 13:05:44 +08:00
HELSON
f6add9b720
[tensor] redirect .data.__get__ to a tensor instance ( #1239 )
2022-07-11 11:41:29 +08:00
Jiarui Fang
20da6e48c8
[checkpoint] save sharded optimizer states ( #1237 )
2022-07-08 16:33:13 +08:00
Jiarui Fang
4a76084dc9
[tensor] add zero_like colo op, important for Optimizer ( #1236 )
2022-07-08 14:55:27 +08:00
Jiarui Fang
3b500984b1
[tensor] fix some unittests ( #1234 )
2022-07-08 14:18:30 +08:00
ver217
a45ddf2d5f
[hotfix] fix sharded optim step and clip_grad_norm ( #1226 )
2022-07-08 13:34:48 +08:00
HELSON
f071b500b6
[polish] polish __repr__ for ColoTensor, DistSpec, ProcessGroup ( #1235 )
2022-07-08 13:25:57 +08:00
HELSON
0453776def
[tensor] fix an assertion in colo_tensor cross_entropy ( #1232 )
2022-07-08 11:18:00 +08:00
Jiarui Fang
0e199d71e8
[hotfix] fx get comm size bugs ( #1233 )
...
* init a checkpoint dir
* [checkpoint] support resume for cosinewarmuplr
* [checkpoint] add unit test
* fix some bugs but still not OK
* fix bugs
* make it faster
* [checkpoint] support generalized scheduler
* polish
* [tensor] torch function return colotensor
* polish
* fix bugs
* remove debug info
* polish
* polish
* [tensor] test_model pass unittests
* polish
* [hotfix] fx get comm size bug
Co-authored-by: ZhaoYi1222 <zhaoyi9499@gmail.com>
2022-07-08 10:54:41 +08:00
HELSON
42ab36b762
[tensor] add unit test for colo_tensor 1DTP cross_entropy ( #1230 )
2022-07-07 19:17:23 +08:00
Yi Zhao
04537bf83e
[checkpoint] support generalized scheduler ( #1222 )
2022-07-07 18:16:38 +08:00
Jiarui Fang
a98319f023
[tensor] torch function returns colotensor ( #1229 )
2022-07-07 18:09:18 +08:00
YuliangLiu0306
2b7dca44b5
[fx] get communication size between partitions ( #1224 )
...
* [CLI] add CLI launcher
* Revert "[CLI] add CLI launcher"
This reverts commit df7e6506d4.
* [fx] get communication size between partitions.
* polish
2022-07-07 16:22:00 +08:00
Frank Lee
84f2298a96
[fx] added patches for tracing swin transformer ( #1228 )
2022-07-07 15:20:13 +08:00
Frank Lee
b6cb5a47ad
[fx] added timm model tracing testing ( #1221 )
2022-07-07 14:02:17 +08:00
HELSON
280a81243d
[tensor] improve robustness of class 'ProcessGroup' ( #1223 )
2022-07-07 13:55:24 +08:00
Jiarui Fang
15d988f954
[tensor] sharded global process group ( #1219 )
2022-07-07 13:38:48 +08:00
Jiarui Fang
db1bef9032
[hotfix] fix fx shard 1d pass bug ( #1220 )
2022-07-07 13:37:31 +08:00
Frank Lee
11973d892d
[fx] added torchvision model tracing testing ( #1216 )
...
* [fx] added torchvision model tracing testing
* remove unused imports
2022-07-06 21:37:56 +08:00
Jiarui Fang
52736205d9
[checkpoint] make unit test faster ( #1217 )
2022-07-06 17:39:46 +08:00
Jiarui Fang
f38006ea83
[checkpoint] checkpoint for ColoTensor Model ( #1196 )
2022-07-06 17:22:03 +08:00
XYE
291e22aac6
[fx] temporarily used ( #1215 )
2022-07-06 17:19:26 +08:00
Jiarui Fang
ae7d3f4927
[refactor] move process group from _DistSpec to ColoTensor. ( #1203 )
2022-07-06 16:15:16 +08:00
Frank Lee
5da87ce35d
[fx] added testing for all albert variants ( #1211 )
2022-07-06 15:11:08 +08:00
Frank Lee
2d13a45a3b
[fx] added testing for all gpt variants ( #1210 )
...
* [fx] added testing for all gpt variants
* polish code
* polish code
2022-07-06 14:03:13 +08:00
YuliangLiu0306
189946c5c4
[fx] add uniform policy ( #1208 )
...
* [CLI] add CLI launcher
* Revert "[CLI] add CLI launcher"
This reverts commit df7e6506d4.
* [fx] add uniform policy
2022-07-06 13:48:11 +08:00
Frank Lee
426a279ce7
[fx] added testing for all bert variants ( #1207 )
...
* [fx] added testing for all bert variants
* polish code
2022-07-06 10:50:49 +08:00
Jiarui Fang
b5f25eb32a
[Tensor] add cpu group to ddp ( #1200 )
2022-07-05 14:58:28 +08:00
Frank Lee
f7878f465c
[fx] supported model tracing for huggingface bert ( #1201 )
...
* [fx] supported model tracing for huggingface bert
* polish test
2022-07-05 13:19:57 +08:00
Jiarui Fang
060b917daf
[refactor] remove gpc dependency in colotensor's _ops ( #1189 )
2022-07-04 18:54:37 +08:00
Frank Lee
abf6a262dc
[fx] added module patch for pooling layers ( #1197 )
2022-07-04 15:21:26 +08:00
YuliangLiu0306
63d2a93878
[context] support arbitrary module materialization. ( #1193 )
...
* [CLI] add CLI launcher
* Revert "[CLI] add CLI launcher"
This reverts commit df7e6506d4.
* [context] support arbitrary module materialization.
* [test] add numerical check for lazy init context.
2022-07-04 10:12:02 +08:00
Jiarui Fang
a444633d13
warmup ratio configuration ( #1192 )
2022-06-30 15:23:50 +08:00
ver217
dba7e0cfb4
make AutoPlacementPolicy configurable ( #1191 )
2022-06-30 15:18:30 +08:00
YuliangLiu0306
2053e138a2
[context] use meta tensor to init model lazily. ( #1187 )
...
* [CLI] add CLI launcher
* Revert "[CLI] add CLI launcher"
This reverts commit df7e6506d4.
* [context] use meta tensor to init model lazily.
* polish
* make module with device kwargs bypass the normal init.
* change unit test to adapt updated context.
2022-06-29 21:02:30 +08:00
Frank Lee
2c8c05675d
[fx] patched conv and normalization ( #1188 )
2022-06-29 18:58:38 +08:00
Frank Lee
6d86f1bc91
[fx] supported data-dependent control flow in model tracing ( #1185 )
...
* [fx] supported data-dependent control flow in model tracing
* polish code
2022-06-29 15:05:25 +08:00
Jiarui Fang
c463f8adf9
[tensor] remove gpc in tensor tests ( #1186 )
2022-06-29 14:08:40 +08:00
Jiarui Fang
372f791444
[refactor] move chunk and chunkmgr to directory gemini ( #1182 )
2022-06-29 13:31:02 +08:00
ver217
6b2f2ab9bb
[ddp] ColoDDP uses bucket all-reduce ( #1177 )
...
* add reducer
* update colo ddp with reducer
* polish unit test
* polish unit test
2022-06-29 10:34:13 +08:00
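Aside: the bucketing idea behind the ColoDDP entry above is to flatten many small gradients into one contiguous buffer so each collective moves a single large tensor. A hedged sketch with a hypothetical helper (assumes an initialized process group; `_flatten_dense_tensors` is a private but long-standing torch utility), not the ColoDDP reducer itself:

```python
import torch
import torch.distributed as dist
from torch._utils import _flatten_dense_tensors, _unflatten_dense_tensors

def all_reduce_bucket(grads, world_size: int):
    flat = _flatten_dense_tensors(grads)  # one contiguous buffer
    dist.all_reduce(flat)                 # single collective for the bucket
    flat.div_(world_size)                 # average across ranks
    for g, synced in zip(grads, _unflatten_dense_tensors(flat, grads)):
        g.copy_(synced)                   # scatter results back into grads
```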
Jiarui Fang
7487215b95
[ColoTensor] add independent process group ( #1179 )
2022-06-29 10:03:09 +08:00
YuliangLiu0306
26ba87272d
[hotfix] fixed p2p process send stuck ( #1181 )
...
* [CLI] add CLI launcher
* Revert "[CLI] add CLI launcher"
This reverts commit df7e6506d4.
* [hotfix] fixed p2p process send stuck
2022-06-28 14:41:11 +08:00
Jiarui Fang
1b657f9ce1
[tensor] revert local view back ( #1178 )
2022-06-27 18:38:34 +08:00
Jiarui Fang
0dd4e2bbfb
[Tensor] rename some APIs in TensorSpec and Polish view unittest ( #1176 )
2022-06-27 15:56:11 +08:00
Ziyue Jiang
dd0420909f
[Tensor] rename parallel_action ( #1174 )
...
* rename parallel_action
* polish
2022-06-27 10:04:45 +08:00
YuliangLiu0306
e27645376d
[hotfix] different overflow statuses cause communication to get stuck. ( #1175 )
...
* [CLI] add CLI launcher
* Revert "[CLI] add CLI launcher"
This reverts commit df7e6506d4.
* [hotfix] fix some bugs caused by refactored schedule.
* [hotfix] different overflow statuses cause communication to get stuck.
2022-06-27 09:53:57 +08:00
Jiarui Fang
aa7bef73d4
[Tensor] distributed view supports inter-process hybrid parallel ( #1169 )
2022-06-27 09:45:26 +08:00
ver217
9e1daa63d2
[zero] sharded optim supports loading local state dict ( #1170 )
...
* sharded optim supports loading local state dict
* polish code
* add unit test
2022-06-24 18:05:16 +08:00
ver217
561e90493f
[zero] zero optim supports loading local state dict ( #1171 )
...
* zero optim supports loading local state dict
* polish code
* add unit test
2022-06-24 17:25:57 +08:00
Jiarui Fang
4b9bba8116
[ColoTensor] rename APIs and add output_replicate to ComputeSpec ( #1168 )
2022-06-24 13:08:54 +08:00
Jiarui Fang
f4ef224358
[Tensor] remove ParallelAction, use ComputeSpec instead ( #1166 )
2022-06-23 17:34:59 +08:00
Jiarui Fang
177c374401
remove gather out in parallel action ( #1163 )
2022-06-23 16:35:05 +08:00
ver217
634eecb98e
mark sanity_check of dist_spec_mgr as staticmethod ( #1161 )
2022-06-23 11:35:25 +08:00
Ziyue Jiang
955ac912de
remove log ( #1160 )
2022-06-23 10:32:42 +08:00
ver217
4e67b2a890
fix chunk move device ( #1158 )
2022-06-22 18:07:10 +08:00
Jiarui Fang
07f9c781f9
[graph] improve the graph building. ( #1157 )
2022-06-22 16:47:20 +08:00
ver217
22717a856f
[tensor] add embedding bag op ( #1156 )
2022-06-22 15:54:03 +08:00
ver217
ae86151968
[tensor] add more element-wise ops ( #1155 )
...
* add more element-wise ops
* update test_op
* polish unit test
2022-06-22 15:16:47 +08:00
ver217
54aabb8da4
[gemini] refactor gemini mgr ( #1151 )
...
* refactor gemini mgr
* update __init__
2022-06-22 11:54:36 +08:00
Frank Lee
f8eec98ff5
[tensor] fixed non-serializable colo parameter during model checkpointing ( #1153 )
2022-06-22 11:43:38 +08:00
ver217
ffa025e120
[tensor] dist spec s2s uses all-to-all ( #1136 )
...
* dist spec s2s uses all-to-all
* update unit test
* add sanity check
* polish unit test with titans
* add sanity check for DistMgr
* add sanity check
Co-authored-by: jiaruifang <fangjiarui123@gmail.com>
2022-06-22 11:32:38 +08:00
YuliangLiu0306
f1f51990b9
[hotfix] fix some bugs caused by refactored schedule. ( #1148 )
...
* [CLI] add CLI launcher
* Revert "[CLI] add CLI launcher"
This reverts commit df7e6506d4.
* [hotfix] fix some bugs caused by refactored schedule.
2022-06-21 22:46:30 +08:00
Jiarui Fang
8cdce0399c
[ColoTensor] improves init functions. ( #1150 )
2022-06-21 18:28:38 +08:00
ver217
8106d7b8c7
[ddp] refactor ColoDDP and ZeroDDP ( #1146 )
...
* ColoDDP supports overwriting default process group
* rename ColoDDPV2 to ZeroDDP
* add docstr for ZeroDDP
* polish docstr
2022-06-21 16:35:23 +08:00
Frank Lee
0e4e62d30d
[tensor] added __repr__ to spec ( #1147 )
2022-06-21 15:38:05 +08:00
YuliangLiu0306
70dd88e2ee
[pipeline] add customized policy ( #1139 )
...
* [CLI] add CLI launcher
* Revert "[CLI] add CLI launcher"
This reverts commit df7e6506d4.
* [pipeline] add customized policy
2022-06-21 15:23:41 +08:00
YuliangLiu0306
18091581c0
[pipeline] support more flexible pipeline ( #1138 )
...
* [CLI] add CLI launcher
* Revert "[CLI] add CLI launcher"
This reverts commit df7e6506d4.
* [pipeline] support more flexible pipeline
2022-06-21 14:40:50 +08:00
ver217
ccf3c58c89
embedding op uses gather_out ( #1143 )
2022-06-21 13:21:20 +08:00
ver217
6690a61b4d
[hotfix] prevent nested ZeRO ( #1140 )
2022-06-21 11:33:53 +08:00
Frank Lee
15aab1476e
[zero] avoid zero hook spam by changing log to debug level ( #1137 )
2022-06-21 10:44:01 +08:00
Frank Lee
73ad05fc8c
[zero] added error message to handle on-the-fly import of torch Module class ( #1135 )
...
* [zero] added error message to handle on-the-fly import of torch Module class
* polish code
2022-06-20 11:24:27 +08:00
ver217
e4f555f29a
[optim] refactor fused sgd ( #1134 )
2022-06-20 11:19:38 +08:00
ver217
d26902645e
[ddp] add save/load state dict for ColoDDP ( #1127 )
...
* add save/load state dict for ColoDDP
* add unit test
* refactor unit test folder
* polish unit test
* rename unit test
2022-06-20 10:51:47 +08:00
YuliangLiu0306
946dbd629d
[hotfix] fix bugs caused by refactored pipeline ( #1133 )
...
* [CLI] add CLI launcher
* Revert "[CLI] add CLI launcher"
This reverts commit df7e6506d4.
* [hotfix] fix bugs caused by refactored pipeline
2022-06-17 17:54:15 +08:00
ver217
789cad301b
[hotfix] fix param op hook ( #1131 )
...
* fix param op hook
* update zero tp test
* fix bugs
2022-06-17 16:12:05 +08:00
ver217
a1a7899cae
[hotfix] fix zero init ctx numel ( #1128 )
2022-06-16 17:17:27 +08:00
ver217
f0a954f16d
[ddp] add set_params_to_ignore for ColoDDP ( #1122 )
...
* add set_params_to_ignore for ColoDDP
* polish code
* fix zero hook v2
* add unit test
* polish docstr
2022-06-16 12:54:46 +08:00
YuliangLiu0306
3175bcb4d8
[pipeline] support List of Dict data ( #1125 )
...
* [CLI] add CLI launcher
* Revert "[CLI] add CLI launcher"
This reverts commit df7e6506d4.
* [pipeline] support List of Dict data
* polish
2022-06-16 11:19:48 +08:00
Frank Lee
91a5999825
[ddp] supported customized torch ddp configuration ( #1123 )
2022-06-15 18:11:53 +08:00
YuliangLiu0306
fcf55777dd
[fx] add autoparallel passes ( #1121 )
...
* [CLI] add CLI launcher
* Revert "[CLI] add CLI launcher"
This reverts commit df7e6506d4.
* feature/add autoparallel passes
2022-06-15 16:36:46 +08:00
ver217
e127b4375b
cast colo ddp v2 inputs/outputs ( #1120 )
2022-06-15 15:57:04 +08:00
Frank Lee
16302a5359
[fx] added unit test for coloproxy ( #1119 )
...
* [fx] added unit test for coloproxy
* polish code
* polish code
2022-06-15 15:27:51 +08:00
ver217
7d14b473f0
[gemini] gemini mgr supports "cpu" placement policy ( #1118 )
...
* update gemini mgr
* update chunk
* add docstr
* polish placement policy
* update test chunk
* update test zero
* polish unit test
* remove useless unit test
2022-06-15 15:05:19 +08:00
ver217
f99f56dff4
fix colo parameter torch function ( #1117 )
2022-06-15 14:23:27 +08:00
Frank Lee
e1620ddac2
[fx] added coloproxy ( #1115 )
2022-06-15 10:47:57 +08:00
Frank Lee
6f82ac9bcb
[pipeline] supported more flexible dataflow control for pipeline parallel training ( #1108 )
...
* [pipeline] supported more flexible dataflow control for pipeline parallel training
* polish code
* polish code
* polish code
2022-06-15 10:41:28 +08:00
ver217
895c1c5ee7
[tensor] refactor param op hook ( #1097 )
...
* refactor param op hook
* add docstr
* fix bug
2022-06-13 16:11:53 +08:00
YuliangLiu0306
1e9f9c227f
[hotfix] change to fit latest p2p ( #1100 )
...
* [CLI] add CLI launcher
* Revert "[CLI] add CLI launcher"
This reverts commit df7e6506d4.
* [hotfix] change to fit latest p2p
* polish
* polish
2022-06-13 14:57:25 +08:00
Frank Lee
72bd7c696b
[amp] included dict for type casting of model output ( #1102 )
2022-06-13 14:18:04 +08:00
Frank Lee
7f2d2b2b5b
[engine] fixed empty op hook check ( #1096 )
...
* [engine] fixed empty op hook check
* polish code
2022-06-10 17:27:27 +08:00
Frank Lee
14e5b11d7f
[zero] fixed api consistency ( #1098 )
2022-06-10 16:59:59 +08:00
Frank Lee
cb18922c47
[doc] added documentation to chunk and chunk manager ( #1094 )
...
* [doc] added documentation to chunk and chunk manager
* polish code
* polish code
* polish code
2022-06-10 15:33:06 +08:00
ver217
1f894e033f
[gemini] zero supports gemini ( #1093 )
...
* add placement policy
* add gemini mgr
* update mem stats collector
* update zero
* update zero optim
* fix bugs
* zero optim monitor os
* polish unit test
* polish unit test
* add assert
2022-06-10 14:48:28 +08:00
Frank Lee
2b2dc1c86b
[pipeline] refactor the pipeline module ( #1087 )
...
* [pipeline] refactor the pipeline module
* polish code
2022-06-10 11:27:38 +08:00
Frank Lee
bad5d4c0a1
[context] support lazy init of module ( #1088 )
...
* [context] support lazy init of module
* polish code
2022-06-10 10:09:48 +08:00
ver217
be01db37c8
[tensor] refactor chunk mgr and impl MemStatsCollectorV2 ( #1077 )
...
* polish chunk manager
* polish unit test
* impl add_extern_static_tensor for chunk mgr
* add mem stats collector v2
* polish code
* polish unit test
* polish code
* polish get chunks
2022-06-09 20:56:34 +08:00
Frank Lee
50ec3a7e06
[test] skip tests when not enough GPUs are detected ( #1090 )
...
* [test] skip tests when not enough GPUs are detected
* polish code
* polish code
2022-06-09 17:19:13 +08:00
Ziyue Jiang
0653c63eaa
[Tensor] 1d row embedding ( #1075 )
...
* Add CPU 1d row embedding
* polish
2022-06-08 12:04:59 +08:00
junxu
d66ffb4df4
Remove duplicated registry ( #1078 )
2022-06-08 07:47:24 +08:00
Jiarui Fang
bcab249565
fix issue #1080 ( #1071 )
2022-06-07 17:21:11 +08:00
ver217
1b17859328
[tensor] chunk manager monitor mem usage ( #1076 )
2022-06-07 15:00:00 +08:00
ver217
98cdbf49c6
[hotfix] fix chunk comm src rank ( #1072 )
2022-06-07 11:54:56 +08:00
Frank Lee
bfdc5ccb7b
[context] maintain the context object in with statement ( #1073 )
2022-06-07 10:48:45 +08:00
ver217
c5cd3b0f35
[zero] zero optim copy chunk rather than copy tensor ( #1070 )
2022-06-07 10:30:46 +08:00
Ziyue Jiang
4fc748f69b
[Tensor] fix optimizer for CPU parallel ( #1069 )
2022-06-06 17:36:11 +08:00
Jiarui Fang
49832b2344
[refactor] add nn.parallel module ( #1068 )
2022-06-06 15:34:41 +08:00
Ziyue Jiang
6754f1b77f
fix module utils bug ( #1066 )
2022-06-06 12:11:48 +08:00
Jiarui Fang
a00644079e
reorganize colotensor directory ( #1062 )
...
* reorganize colotensor directory
* polish code
2022-06-03 18:04:22 +08:00
Frank Lee
3d10be33bd
[cudnn] set cudnn benchmark to False by default ( #1063 )
2022-06-03 17:58:06 +08:00
Ziyue Jiang
df9dcbbff6
[Tensor] add hybrid device demo and fix bugs ( #1059 )
2022-06-03 12:09:49 +08:00
YuliangLiu0306
b167258b6a
[pipeline] refactor ppschedule to support tensor list ( #1050 )
...
* [CLI] add CLI launcher
* Revert "[CLI] add CLI launcher"
This reverts commit df7e6506d4.
* refactor ppschedule to support tensor list
* polish
2022-06-02 13:48:59 +08:00
ver217
e3fde4ee6b
fix import error in sharded model v2 ( #1053 )
2022-06-02 13:48:22 +08:00
ver217
e1922ea4f6
[zero] add chunk size search for chunk manager ( #1052 )
2022-06-02 13:20:20 +08:00
アマデウス
2c42b230f3
updated collective ops api ( #1054 )
2022-06-02 12:52:27 +08:00
ver217
51b9a49655
[zero] add zero optimizer for ColoTensor ( #1046 )
...
* add zero optimizer
* torch ok
* unit test ok
* polish code
* fix bugs
* polish unit test
* polish zero optim
* polish colo ddp v2
* refactor folder structure
* add comment
* polish unit test
* polish zero optim
* polish unit test
2022-06-02 12:13:15 +08:00
ver217
7faef93326
fix dist spec mgr ( #1045 )
2022-05-31 12:14:39 +08:00
ver217
9492a561c3
[tensor] ColoTensor supports ZeRo ( #1015 )
...
* impl chunk manager
* impl param op hook
* add reduce_chunk
* add zero hook v2
* add zero dp
* fix TensorInfo
* impl load balancing when using zero without chunk
* fix zero hook
* polish chunk
* fix bugs
* ddp ok
* zero ok
* polish code
* fix bugs about load balancing
* polish code
* polish code
* add end-to-end test
* polish code
* polish code
* polish code
* fix typo
* add test_chunk
* fix bugs
* fix bugs
* polish code
2022-05-31 12:00:12 +08:00
Ziyue Jiang
7c530b9de2
[Tensor] add Parameter inheritance for ColoParameter ( #1041 )
...
* add Parameter inheritance for ColoParameter
* remove tricks
* remove tricks
* polish
* polish
2022-05-30 17:23:44 +08:00
ver217
7cfd6c827e
[zero] add load_state_dict for sharded model ( #894 )
...
* add load_state_dict for sharded model
* fix bug
* fix bug
* fix ckpt dtype and device
* support load state dict in zero init ctx
* fix bugs
2022-05-27 10:25:08 +08:00
Ziyue Jiang
6c5996a56e
[Tensor] add module check and bert test ( #1031 )
...
* add Embedding
* Add bert test
* polish
* add check module test
* polish
* polish
* polish
* polish
2022-05-26 18:15:42 +08:00
YuliangLiu0306
7106bd671d
[p2p] add object list send/recv ( #1024 )
...
* [CLI] add CLI launcher
* Revert "[CLI] add CLI launcher"
This reverts commit df7e6506d4.
* [p2p] add object list send recv
* refactor for code reusability
* polish
2022-05-26 14:28:46 +08:00
Frank Lee
e4685832f8
[engine] fixed bug in gradient accumulation dataloader to keep the last step ( #1030 )
2022-05-26 14:28:23 +08:00
Ziyue Jiang
32291dd73f
[Tensor] add module handler for linear ( #1021 )
...
* add module spec for linear
* polish
* polish
* polish
2022-05-26 11:50:44 +08:00
Ryan Russell
9b0c037027
fix typo in constants ( #1027 )
2022-05-26 08:45:08 +08:00
ver217
007ca0df92
fix colo init context ( #1026 )
2022-05-25 20:41:58 +08:00
YuliangLiu0306
d182b0bd47
[hotfix] fix some bugs caused by size mismatch. ( #1011 )
...
* [CLI] add CLI launcher
* Revert "[CLI] add CLI launcher"
This reverts commit df7e6506d4.
* [hotfix]fix some bugs caused by size mismatch.
* add warning logs
* polish
2022-05-23 14:02:28 +08:00
ver217
cefc29ff06
[tensor] impl ColoDDP for ColoTensor ( #1009 )
...
* impl ColoDDP for ColoTensor
* polish code
2022-05-21 13:52:04 +08:00
zhengzangw
ae7c338105
[NFC] polish colossalai/kernel/cuda_native/csrc/colossal_C_frontend.cpp code style
2022-05-20 23:57:38 +08:00
ver217
a3b66f6def
[tensor] refactor parallel action ( #1007 )
...
* refactor parallel action
* polish unit tests
2022-05-20 20:19:58 +08:00
ver217
ad536e308e
[tensor] refactor colo-tensor ( #992 )
...
* refactor colo-tensor and update linear op
* polish code
* polish code
* update ops and unit tests
* update unit tests
* polish code
* rename dist_spec module
* polish code
* polish code
* remove unneeded import
* fix pipelinable
2022-05-19 12:44:59 +08:00
Frank Lee
1467d83edf
[cli] remove unused imports ( #1001 )
2022-05-18 23:27:18 +08:00
Frank Lee
533d0c46d8
[kernel] fixed the include bug in dropout kernel ( #999 )
2022-05-18 21:43:18 +08:00
Jiarui Fang
802ac297cc
[Tensor] remove useless import in tensor dir ( #997 )
2022-05-18 14:54:51 +08:00
Ziheng Qin
571f12eff3
[NFC] polish colossalai/nn/layer/utils/common.py code style ( #983 )
2022-05-17 10:25:06 +08:00
puck_WCR
bda70b4b66
[NFC] polish colossalai/kernel/cuda_native/layer_norm.py code style ( #980 )
2022-05-17 10:25:06 +08:00
Kai Wang (Victor Kai)
c50c08dcbb
[NFC] polish colossalai/kernel/cuda_native/csrc/kernels/dropout_kernels.cu code style ( #979 )
2022-05-17 10:25:06 +08:00
binmakeswell
f28c021376
[NFC] polish colossalai/kernel/cuda_native/csrc/multi_tensor_sgd_kernel.cu code style ( #978 )
2022-05-17 10:25:06 +08:00
shenggan
18542b47fc
[NFC] polish colossalai/nn/layer/parallel_2d/layers.py code style ( #976 )
2022-05-17 10:25:06 +08:00
Jie Zhu
b67eebd20f
[NFC] polish colossalai/kernel/cuda_native/csrc/multi_tensor_scale_kernel.cu code style ( #977 )
2022-05-17 10:25:06 +08:00
DouJS
52705ec5c5
[NFC] polish colossalai/kernel/cuda_native/csrc/kernels/normalize_kernels.cu code style ( #974 )
2022-05-17 10:25:06 +08:00
Ofey Chan
136946422b
[NFC] polish colossalai/kernel/cuda_native/csrc/layer_norm_cuda.cpp code style ( #973 )
2022-05-17 10:25:06 +08:00
Zirui Zhu
598cde4a0f
[NFC] polish colossalai/nn/layer/parallel_2p5d/layers.py code style ( #972 )
2022-05-17 10:25:06 +08:00
Xu Kai
632e94abde
[NFC] polish colossalai/kernel/cuda_native/csrc/kernels/include/dropout.h code style ( #970 )
2022-05-17 10:25:06 +08:00
ExtremeViscent
22d1df224d
[NFC] polish colossalai/kernel/cuda_native/csrc/kernels/include/feed_forward.h ( #968 )
...
code style
2022-05-17 10:25:06 +08:00
LuGY
fb5bc6cb28
[NFC] polish colossalai/nn/layer/parallel_3d/layers.py code style ( #966 )
2022-05-17 10:25:06 +08:00
lucasliunju
955463e542
[NFC] polish __init__.py code style ( #965 )
2022-05-17 10:25:06 +08:00
Yuer867
7106a399fc
[NFC] polish colossalai/kernel/cuda_native/csrc/kernels/include/softmax.h code style ( #964 )
2022-05-17 10:25:06 +08:00
ziyu huang
5bd80b7dd1
[NFC] polish colossalai/kernel/cuda_native/csrc/kernels/general_kernels.cu code style ( #963 )
...
Co-authored-by: Arsmart123 <202476410arsmart@gmail.com>
2022-05-17 10:25:06 +08:00
superhao1995
48c4a180c7
[NFC] polish colossalai/kernel/cuda_native/csrc/scaled_upper_triang_masked_softmax.cpp code style ( #959 )
2022-05-17 10:25:06 +08:00
MaxT
442a2975ab
[NFC] polish colossalai/kernel/cuda_native/csrc/multihead_attention_1d.h code style ( #962 )
2022-05-17 10:25:06 +08:00
runluo
89e2767a92
[NFC] polish colossalai/kernel/cuda_native/csrc/multi_tensor_l2norm_kernel.cu code style ( #958 )
2022-05-17 10:25:06 +08:00
doubleHU
1dc1b6fa00
[NFC] polish colossalai/kernel/cuda_native/csrc/kernels/include/cross_entropy_layer.h code style ( #957 )
2022-05-17 10:25:06 +08:00
RichardoLuo
0e922da874
[NFC] polish colossalai/kernel/cuda_native/csrc/kernels/include/context.h code style ( #956 )
...
Co-authored-by: RichardoLuo <14049555596@qq.com>
2022-05-17 10:25:06 +08:00
Wangbo Zhao(黑色枷锁)
8ca2a85682
[NFC] polish colossalai/kernel/cuda_native/scaled_softmax.py code style ( #955 )
2022-05-17 10:25:06 +08:00
Luxios22
f6970ef8b1
[NFC] polish colossalai/kernel/cuda_native/csrc/kernels/softmax_kernels.cu code style ( #954 )
2022-05-17 10:25:06 +08:00
Cautiousss
0b86a6345e
[NFC] polish colossalai/kernel/cuda_native/csrc/kernels/cross_entropy.cu code style ( #953 )
...
Co-authored-by: 何晓昕 <cautious@hexiaoxins-MacBook-Pro.local>
2022-05-17 10:25:06 +08:00
Sze-qq
d8d07b0e2b
[NFC] polish colossalai/kernel/cuda_native/csrc/multihead_attention_1d.cpp code style ( #952 )
2022-05-17 10:25:06 +08:00
xyupeng
fa43bb216d
[NFC] polish colossalai/builder/pipeline.py code style ( #951 )
2022-05-17 10:25:06 +08:00
JT.Han
c3e423c8be
[NFC] polish colossalai/kernel/cuda_native/csrc/scaled_masked_softmax_cuda.cu code style ( #949 )
...
Co-authored-by: Jiatong <jiatong.han@u.nus.edu>
2022-05-17 10:25:06 +08:00
luoling-LC
72c71b67ec
[NFC] polish colossalai/kernel/jit/bias_gelu.py code style ( #946 )
...
Co-authored-by: jnbai <897086360@qq.com>
2022-05-17 10:25:06 +08:00
bajiaoyu517
eb9a81d72a
[NFC] polish colossalai/kernel/cuda_native/csrc/cpu_adam.h code style ( #945 )
2022-05-17 10:25:06 +08:00
wky
8ffdc38376
[NFC] polish colossalai/kernel/cuda_native/csrc/moe_cuda.cpp code style ( #942 )
2022-05-17 10:25:06 +08:00
HaoyuQin
c0f373db5d
[NFC] polish pre-commit run --files colossalai/kernel/cuda_native/csrc/scaled_upper_triang_masked_softmax_cuda.cu code style ( #943 )
2022-05-17 10:25:06 +08:00
XYE
5bbefeb06a
[NFC] polish moe_cuda_kernel.cu code style ( #940 )
...
Co-authored-by: Xiao Ye <xiaoye2@illinois.edu>
2022-05-17 10:25:06 +08:00
Maruyama_Aya
7aa35eae6a
[NFC] polish colossalai/kernel/cuda_native/csrc/kernels/include/block_reduce.h code style ( #938 )
2022-05-17 10:25:06 +08:00
Geng Zhang
b6cc9313ef
[NFC] polish colossalai/kernel/cuda_native/csrc/cpu_adam.cpp code style ( #936 )
2022-05-17 10:25:06 +08:00
yuxuan-lou
44b6f8947b
[NFC] polish colossalai/kernel/cuda_native/csrc/kernels/include/cuda_util.h code style ( #939 )
2022-05-17 10:25:06 +08:00
BoxiangW
872aa413c2
[NFC] Polish colossalai/kernel/cuda_native/csrc/multi_tensor_lamb.cu code style. ( #937 )
2022-05-17 10:25:06 +08:00
ver217
58580b50fe
Revert "[NFC] Hotfix/format ( #984 )" ( #986 )
...
This reverts commit 0772828fba.
2022-05-17 10:23:38 +08:00
binmakeswell
0772828fba
[NFC] Hotfix/format ( #984 )
...
* [NFC] Polish colossalai/kernel/cuda_native/csrc/multi_tensor_lamb.cu code style. (#937 )
* [NFC] polish colossalai/kernel/cuda_native/csrc/kernels/include/cuda_util.h code style (#939 )
* [NFC] polish colossalai/kernel/cuda_native/csrc/cpu_adam.cpp code style (#936 )
* [NFC] polish colossalai/kernel/cuda_native/csrc/kernels/include/block_reduce.h code style (#938 )
* [NFC] polish moe_cuda_kernel.cu code style (#940 )
Co-authored-by: Xiao Ye <xiaoye2@illinois.edu>
* [NFC] polish pre-commit run --files colossalai/kernel/cuda_native/csrc/scaled_upper_triang_masked_softmax_cuda.cu code style (#943 )
* [NFC] polish colossalai/kernel/cuda_native/csrc/moe_cuda.cpp code style (#942 )
* [NFC] polish colossalai/kernel/cuda_native/csrc/cpu_adam.h code style (#945 )
* [NFC] polish colossalai/kernel/jit/bias_gelu.py code style (#946 )
Co-authored-by: jnbai <897086360@qq.com>
* [NFC] polish colossalai/kernel/cuda_native/csrc/scaled_masked_softmax_cuda.cu code style (#949 )
Co-authored-by: Jiatong <jiatong.han@u.nus.edu>
* [NFC] polish colossalai/builder/pipeline.py code style (#951 )
* [NFC] polish colossalai/kernel/cuda_native/csrc/multihead_attention_1d.cpp code style (#952 )
* [NFC] polish colossalai/kernel/cuda_native/csrc/kernels/cross_entropy.cu code style (#953 )
Co-authored-by: 何晓昕 <cautious@hexiaoxins-MacBook-Pro.local>
* [NFC] polish colossalai/kernel/cuda_native/csrc/kernels/softmax_kernels.cu code style (#954 )
* [NFC] polish colossalai/kernel/cuda_native/scaled_softmax.py code style (#955 )
* [NFC] polish colossalai/kernel/cuda_native/csrc/kernels/include/context.h code style (#956 )
Co-authored-by: RichardoLuo <14049555596@qq.com>
* [NFC] polish colossalai/kernel/cuda_native/csrc/kernels/include/cross_entropy_layer.h code style (#957 )
* [NFC] polish colossalai/kernel/cuda_native/csrc/multi_tensor_l2norm_kernel.cu code style (#958 )
* [NFC] polish colossalai/kernel/cuda_native/csrc/multihead_attention_1d.h code style (#962 )
* [NFC] polish colossalai/kernel/cuda_native/csrc/scaled_upper_triang_masked_softmax.cpp code style (#959 )
* [NFC] polish colossalai/kernel/cuda_native/csrc/kernels/general_kernels.cu code style (#963 )
Co-authored-by: Arsmart123 <202476410arsmart@gmail.com>
* [NFC] polish colossalai/kernel/cuda_native/csrc/kernels/include/softmax.h code style (#964 )
* [NFC] polish __init__.py code style (#965 )
* [NFC] polish colossalai/nn/layer/parallel_3d/layers.py code style (#966 )
* [NFC] polish colossalai/kernel/cuda_native/csrc/kernels/include/feed_forward.h (#968 )
code style
* [NFC] polish colossalai/kernel/cuda_native/csrc/kernels/include/dropout.h code style (#970 )
* [NFC] polish colossalai/nn/layer/parallel_2p5d/layers.py code style (#972 )
* [NFC] polish colossalai/kernel/cuda_native/csrc/layer_norm_cuda.cpp code style (#973 )
* [NFC] polish colossalai/kernel/cuda_native/csrc/kernels/normalize_kernels.cu code style (#974 )
* [NFC] polish colossalai/kernel/cuda_native/csrc/multi_tensor_scale_kernel.cu code style (#977 )
* [NFC] polish colossalai/nn/layer/parallel_2d/layers.py code style (#976 )
* [NFC] polish colossalai/kernel/cuda_native/csrc/multi_tensor_sgd_kernel.cu code style (#978 )
* [NFC] polish colossalai/kernel/cuda_native/csrc/kernels/dropout_kernels.cu code style (#979 )
* [NFC] polish colossalai/kernel/cuda_native/layer_norm.py code style (#980 )
* [NFC] polish colossalai/nn/layer/utils/common.py code style (#983 )
Co-authored-by: BoxiangW <45734921+BoxiangW@users.noreply.github.com>
Co-authored-by: yuxuan-lou <83441848+yuxuan-lou@users.noreply.github.com>
Co-authored-by: Geng Zhang <34452939+zxgx@users.noreply.github.com>
Co-authored-by: Maruyama_Aya <38985202+MaruyamaAya@users.noreply.github.com>
Co-authored-by: XYE <92607131+Itok2000u@users.noreply.github.com>
Co-authored-by: Xiao Ye <xiaoye2@illinois.edu>
Co-authored-by: HaoyuQin <79465534+coder-chin@users.noreply.github.com>
Co-authored-by: wky <64853922+wangkuangyi@users.noreply.github.com>
Co-authored-by: bajiaoyu517 <59548007+bajiaoyu517@users.noreply.github.com>
Co-authored-by: luoling-LC <105470086+luoling-LC@users.noreply.github.com>
Co-authored-by: jnbai <897086360@qq.com>
Co-authored-by: JT.Han <59948448+JThh@users.noreply.github.com>
Co-authored-by: Jiatong <jiatong.han@u.nus.edu>
Co-authored-by: xyupeng <99191637+xyupeng@users.noreply.github.com>
Co-authored-by: Sze-qq <68757353+Sze-qq@users.noreply.github.com>
Co-authored-by: Cautiousss <48676630+Cautiousss@users.noreply.github.com>
Co-authored-by: 何晓昕 <cautious@hexiaoxins-MacBook-Pro.local>
Co-authored-by: Luxios22 <67457897+Luxios22@users.noreply.github.com>
Co-authored-by: Wangbo Zhao(黑色枷锁) <56866854+wangbo-zhao@users.noreply.github.com>
Co-authored-by: RichardoLuo <50363844+RichardoLuo@users.noreply.github.com>
Co-authored-by: RichardoLuo <14049555596@qq.com>
Co-authored-by: doubleHU <98150031+huxin711@users.noreply.github.com>
Co-authored-by: runluo <68489000+run-qiao@users.noreply.github.com>
Co-authored-by: MaxT <854721132@qq.com>
Co-authored-by: superhao1995 <804673818@qq.com>
Co-authored-by: ziyu huang <huang0ziyu@gmail.com>
Co-authored-by: Arsmart123 <202476410arsmart@gmail.com>
Co-authored-by: Yuer867 <62204893+Yuer867@users.noreply.github.com>
Co-authored-by: lucasliunju <lucasliunju@gmail.com>
Co-authored-by: LuGY <74758262+Gy-Lu@users.noreply.github.com>
Co-authored-by: ExtremeViscent <zhangyiqi55732@sina.com>
Co-authored-by: Xu Kai <xukai16@foxmail.com>
Co-authored-by: Zirui Zhu <zhuzr21@gmail.com>
Co-authored-by: Ofey Chan <ofey206@gmail.com>
Co-authored-by: DouJS <dujiangsu@163.com>
Co-authored-by: Jie Zhu <chore.08-protist@icloud.com>
Co-authored-by: shenggan <csg19971016@gmail.com>
Co-authored-by: Kai Wang (Victor Kai) <37533040+kaiwang960112@users.noreply.github.com>
Co-authored-by: puck_WCR <46049915+WANG-CR@users.noreply.github.com>
Co-authored-by: Ziheng Qin <37519855+henryqin1997@users.noreply.github.com>
2022-05-17 09:54:49 +08:00
ver217
c2fdc6a011
[tensor] derive compute pattern from dist spec ( #971 )
...
* derive compute pattern from dist spec
* polish code
2022-05-16 14:58:08 +08:00
Ziyue Jiang
797a9dc5a9
add DistSpec for loss and test_model ( #947 )
2022-05-13 20:29:50 +08:00
ver217
67c33f57eb
[tensor] design DistSpec and DistSpecManager for ColoTensor ( #934 )
...
* add dist spec
* update linear op
* polish code
* polish code
* update embedding op
* polish unit tests
* polish unit tests
* polish comments
* polish code
* add test_dist_spec_mgr
* polish code
* refactor folder structure
* polish unit tests
* add get_process_group() for TensorSpec
* polish code
2022-05-13 15:13:52 +08:00
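A distribution spec, as designed above, essentially records how a tensor is laid out across a process group and lets a manager convert between layouts. A minimal single-process sketch, assuming only replicate/shard placements (real conversions use collectives; names here are illustrative):

```python
from dataclasses import dataclass
import torch


@dataclass(frozen=True)
class DistSpec:
    placement: str          # "replicate" or "shard"
    dim: int = 0
    num_partitions: int = 1


class DistSpecManager:
    def __init__(self, rank: int):
        self.rank = rank

    def shard(self, full: torch.Tensor, spec: DistSpec) -> torch.Tensor:
        """Keep only this rank's partition of the full tensor."""
        return full.chunk(spec.num_partitions, dim=spec.dim)[self.rank]

    def gather(self, shards, spec: DistSpec) -> torch.Tensor:
        """Rebuild the full tensor from all partitions."""
        return torch.cat(shards, dim=spec.dim)


mgr = DistSpecManager(rank=0)
spec = DistSpec("shard", dim=0, num_partitions=2)
print(mgr.shard(torch.arange(8.0).reshape(4, 2), spec))  # rows 0-1
```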
Ziyue Jiang
d73c2b1d79
[Tensor] fix init context ( #931 )
...
* change torch.Parameter to ColoParameter
* fix post assignment for init context
* polish
* polish
2022-05-11 15:48:12 +08:00
Ziyue Jiang
dfc88b85ea
[Tensor] simplify named param ( #928 )
...
* simplify ColoModulize
* simplify ColoModulize
* polish
* polish
2022-05-11 10:54:19 +08:00
YuliangLiu0306
32a45cd7ef
[pipelinable]use pipelinable to support GPT model. ( #903 )
...
* [CLI] add CLI launcher
* Revert "[CLI] add CLI launcher"
This reverts commit df7e6506d4.
* [pipelinable]use pipelinable to support GPT model.
* fix a bug caused by ShardedModel
* polish
* fix front func list
2022-05-11 09:23:58 +08:00
ver217
4ca732349e
[tensor] colo tensor overrides mul ( #927 )
...
* colo tensor overrides mul
* polish code
2022-05-10 16:04:08 +08:00
ver217
45b9124df4
[tensor] hijack addmm for colo tensor ( #923 )
...
* hijack addmm for colo tensor
* fix bugs
* polish unit test
* polish comments
2022-05-09 18:55:49 +08:00
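Hijacking torch.addmm for a tensor subclass rests on PyTorch's documented __torch_function__ protocol: every torch-level call involving the subclass is routed through one classmethod, where a parallel implementation can be substituted. A runnable sketch (the class and the print are illustrative; a real override would dispatch to a sharded kernel):

```python
import torch


class ColoLikeTensor(torch.Tensor):
    @classmethod
    def __torch_function__(cls, func, types, args=(), kwargs=None):
        kwargs = kwargs or {}
        if func is torch.addmm:
            # a real implementation would run a tensor-parallel addmm here
            print("intercepted torch.addmm")
        return super().__torch_function__(func, types, args, kwargs)


bias = torch.zeros(3).as_subclass(ColoLikeTensor)
out = torch.addmm(bias, torch.randn(2, 4), torch.randn(4, 3))  # prints once
```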
Ziyue Jiang
c195d2814c
[Tensor] add from_pretrained support and bert pretrained test ( #921 )
...
* add from_pretrained support and test
* polish
* polish
* polish
* polish
2022-05-09 16:11:47 +08:00
Jiarui Fang
845856ea29
[Graph] building computing graph with ColoTensor, Linear only ( #917 )
2022-05-07 17:10:37 +08:00
Ziyue Jiang
75d221918a
[Tensor] add 1d vocab loss ( #918 )
...
* add 1d vocab loss
* polish
2022-05-07 15:49:14 +08:00
Jiarui Fang
ab95ec9aea
[Tensor] init ColoParameter ( #914 )
2022-05-06 12:57:14 +08:00
Ziyue Jiang
f593a5637e
[Tensor] add embedding tp1d row ( #904 )
2022-04-29 14:10:05 +08:00
Ziyue Jiang
2c0d19d755
[Tensor] add ColoTensor TP1Dcol Embedding ( #899 )
2022-04-28 17:45:06 +08:00
Jiarui Fang
d16671da75
[Tensor] initialize the ColoOptimizer ( #898 )
...
* [Tensor] activation is an attr of ColoTensor
* [Tensor] add optimizer
* only detach parameters in context
* polish code
2022-04-28 15:23:40 +08:00
Jiarui Fang
676f191532
[Tensor] activation is an attr of ColoTensor ( #897 )
2022-04-28 14:43:22 +08:00
Ziyue Jiang
cb182da7c5
[tensor] refine linear and add gather for laynorm ( #893 )
...
* refine linear and add function to ColoTensor
* add gather for layernorm
* polish
* polish
2022-04-28 10:55:40 +08:00
Jiarui Fang
26c49639d8
[Tensor] overriding parameters() for Module using ColoTensor ( #889 )
2022-04-27 15:28:59 +08:00
Ziyue Jiang
1d0aba4153
[tensor] add ColoTensor 1Dcol ( #888 )
2022-04-27 14:13:55 +08:00
Jiarui Fang
72cdc06875
[Tensor] make ColoTensor more robust for getattr ( #886 )
...
* [Tensor] make ColoTensor more robust for getattr
* polish
* polish
2022-04-27 10:57:49 +08:00
Ziyue Jiang
9bc5a77c31
[tensor] wrap function in the torch_tensor to ColoTensor ( #881 )
2022-04-26 20:13:56 +08:00
ver217
4df6471f5d
fix import error ( #880 )
2022-04-26 19:28:40 +08:00
Jiarui Fang
7f76517a85
[Tensor] make a simple net works with 1D row TP ( #879 )
2022-04-26 18:11:47 +08:00
ver217
c4d903e64a
[gemini] accelerate adjust_layout() ( #878 )
...
* add lru cache
* polish code
* update unit test
* fix sharded optim
2022-04-26 18:08:31 +08:00
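The first bullet above attributes the adjust_layout() speed-up to a cache. In plain Python, that idea is memoizing an expensive layout decision keyed on hashable inputs; function and argument names below are hypothetical:

```python
from functools import lru_cache


@lru_cache(maxsize=None)
def decide_layout(free_cuda_mem: int, tensor_numel: int) -> str:
    # stand-in for an expensive placement computation
    return "cuda" if tensor_numel * 4 <= free_cuda_mem else "cpu"


print(decide_layout(1 << 20, 1000))   # computed
print(decide_layout(1 << 20, 1000))   # served from the cache
print(decide_layout.cache_info())     # hits=1, misses=1, ...
```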
Jiarui Fang
909211453b
[Tensor] Add some attributes to ColoTensor ( #877 )
...
* [Tensor] add some function to ColoTensor
* torch.allclose
* rm torch.add
2022-04-26 15:10:47 +08:00
HELSON
425b4a96b8
[gemini] polish stateful_tensor_mgr ( #876 )
2022-04-26 15:05:03 +08:00
Jiarui Fang
e43f83aa5c
[Tensor] get named parameters for model using ColoTensors ( #874 )
2022-04-26 14:08:01 +08:00
Jiarui Fang
96211c2cc8
[tensor] customized op returns ColoTensor ( #875 )
...
* [tensor] customized op returns ColoTensor
* polish
* polish code
2022-04-26 13:23:59 +08:00
Ziyue Jiang
26d4ab8b03
[Tensor] Add function to spec and update linear 1Drow and unit tests ( #869 )
2022-04-26 10:15:26 +08:00
Frank Lee
11f54c7b6b
[doc] improved docstring and assertion messages for the engine module ( #871 )
2022-04-26 10:00:18 +08:00
Frank Lee
1c34382678
[doc] improved assertion messages in trainer ( #873 )
2022-04-26 10:00:12 +08:00
Frank Lee
7a64fae33a
[doc] improved error messages in initialize ( #872 )
2022-04-26 10:00:03 +08:00
Jiarui Fang
1190b2c4a4
[tensor] add cross_entrophy_loss ( #868 )
2022-04-25 16:01:52 +08:00
HELSON
3107817172
[gemini] add stateful tensor container ( #867 )
2022-04-25 14:58:16 +08:00
Jiarui Fang
d01d3b8cb0
colo init context add device attr. ( #866 )
2022-04-25 14:24:26 +08:00
Frank Lee
2238758c2e
[usability] improved error messages in the context module ( #856 )
2022-04-25 13:42:31 +08:00
Frank Lee
9fdebadd69
[doc] improved docstring in the amp module ( #857 )
2022-04-25 13:42:17 +08:00
Frank Lee
b862d89d00
[doc] improved docstring in the logging module ( #861 )
2022-04-25 13:42:00 +08:00
Frank Lee
8004c8e938
[doc] improved docstring in the communication module ( #863 )
2022-04-25 13:41:43 +08:00
Jiarui Fang
8af5f7423d
[tensor] an initial idea of tensor spec ( #865 )
...
* an initial idea of tensor spec
* polish
* polish
2022-04-25 13:33:52 +08:00
Jiarui Fang
126ba573a8
[Tensor] add layer norm Op ( #852 )
2022-04-25 11:49:20 +08:00
Frank Lee
a82da26f7e
[cli] refactored micro-benchmarking cli and added more metrics ( #858 )
2022-04-25 11:48:07 +08:00
Frank Lee
ee222dfbf3
[usability] added assertion message in registry ( #864 )
2022-04-25 11:45:15 +08:00
HELSON
f0e654558f
[gemini] polish code ( #855 )
2022-04-25 10:40:14 +08:00
Jiarui Fang
29159d9b5b
hotfix tensor unittest bugs ( #862 )
2022-04-25 10:06:53 +08:00
YuliangLiu0306
c6930d8ddf
[pipelinable]use ColoTensor to replace dummy tensor. ( #853 )
2022-04-24 18:31:22 +08:00
Ziyue Jiang
bcc8655021
[Tensor] Add 1Drow weight reshard by spec ( #854 )
2022-04-24 18:30:20 +08:00
ver217
d7e0303d1e
[zero] use GeminiMemoryManager when sampling model data ( #850 )
2022-04-24 17:17:22 +08:00
ver217
232142f402
[utils] refactor profiler ( #837 )
...
* add model data profiler
* add a subclass of torch.profiler.profile
* refactor folder structure
* remove redundant codes
* polish code
* use GeminiMemoryManager
* fix import path
* fix stm profiler ext
* polish comments
* remove useless file
2022-04-24 17:03:59 +08:00
Jiarui Fang
62f059251b
[Tensor] init a tp network training unittest ( #849 )
2022-04-24 16:43:44 +08:00
ver217
0dea140760
[hotfix] add deconstructor for stateful tensor ( #848 )
...
* add deconstructor for stateful tensor
* fix colo init context
2022-04-24 15:03:04 +08:00
ver217
0f7ed8c192
fix _post_init_method of zero init ctx ( #847 )
2022-04-24 14:16:50 +08:00
Ziyue Jiang
2a0a427e04
[tensor]add assert for colo_tensor 1Drow ( #846 )
2022-04-24 14:12:45 +08:00
Ziyue Jiang
05023ecfee
[Tensor] TP Linear 1D row ( #843 )
2022-04-24 13:43:12 +08:00
Frank Lee
cf6d1c9284
[CLI] refactored the launch CLI and fixed bugs in multi-node launching ( #844 )
...
* [cli] fixed multi-node job launching
* [cli] fixed a bug in version comparison
* [cli] support launching with env var
* [cli] fixed multi-node job launching
* [cli] fixed a bug in version comparison
* [cli] support launching with env var
* added docstring
* [cli] added extra launch arguments
* [cli] added default launch rdzv args
* [cli] fixed version comparison
* [cli] added docstring examples and requirement
* polish docstring
* polish code
* polish code
2022-04-24 13:26:26 +08:00
HELSON
e5ea3fdeef
[gemini] add GeminiMemoryManager ( #832 )
...
* refactor StatefulTensor, tensor utilities
* add unit test for GeminiMemoryManager
2022-04-24 13:08:48 +08:00
YuliangLiu0306
35ea6e1023
[pipelinable]use pipelinable context to initialize non-pipeline model ( #816 )
...
* [CLI] add CLI launcher
* Revert "[CLI] add CLI launcher"
This reverts commit df7e6506d4.
* [pipeline]add module lazy init feature to support large model initialization.
* [pipeline]add to_layer_list and partition method to support arbitrary non-pp model
* refactor the module structure
* polish
* [pipelinable]add unit test for pipelinable
* polish
* polish
* Fix CodeFactor issues.
2022-04-24 13:03:12 +08:00
Jiarui Fang
ea0a2ed25f
[hotfix] the bug of numel() in ColoTensor ( #845 )
2022-04-24 12:32:10 +08:00
LuGY
c1e8d2001e
modified the pp build for ckpt adaptation ( #803 )
2022-04-24 12:23:16 +08:00
Jiarui Fang
8789850eea
Init Context supports lazily allocating model memory ( #842 )
2022-04-22 18:03:35 +08:00
Jiarui Fang
4575a3298b
[hotfix] ColoTensor pin_memory ( #840 )
2022-04-22 17:07:46 +08:00
Frank Lee
01e9f834f5
[dependency] removed torchvision ( #833 )
...
* [dependency] removed torchvision
* fixed transforms
2022-04-22 15:24:35 +08:00
Jiarui Fang
cb5a4778e1
Revert "[WIP] Applying ColoTensor on TP-1D-row Linear. ( #831 )" ( #835 )
...
This reverts commit ac88de6dfc.
2022-04-22 14:45:57 +08:00
Jiarui Fang
ac88de6dfc
[WIP] Applying ColoTensor on TP-1D-row Linear. ( #831 )
...
* revert zero tensors back
* [tensor] init row 1d linear
2022-04-22 14:03:26 +08:00
Jiarui Fang
595bedf767
revert zero tensors back ( #829 )
2022-04-22 12:12:35 +08:00
Jiarui Fang
294a6060d0
[tensor] ZeRO use ColoTensor as the base class. ( #828 )
...
* [refactor] moving InsertPostInitMethodToModuleSubClasses to utils.
* [tensor] ZeRO use ColoTensor as the base class.
* polish
2022-04-22 12:00:48 +08:00
Ziyue Jiang
8e6fdb4f29
[tensor]fix test_linear ( #826 )
2022-04-21 17:18:56 +08:00
Ziyue Jiang
1a9e2c2dff
[tensor] fix kwargs in colo_tensor torch_funtion ( #825 )
2022-04-21 16:47:35 +08:00
Jiarui Fang
eb1b89908c
[refactor] moving InsertPostInitMethodToModuleSubClasses to utils. ( #824 )
2022-04-21 16:03:18 +08:00
Jiarui Fang
2ecc3d7a55
[tensor] lazy init ( #823 )
2022-04-21 15:40:23 +08:00
Jiarui Fang
68dcd51d41
[Tensor] update ColoTensor torch_function ( #822 )
...
* Revert "[zero] add ZeroTensorShardStrategy (#793 )"
This reverts commit 88759e289e.
* [gemini] set cpu memory capacity
* [log] local throughput collecting
* polish
* polish
* polish
* polish code
* polish
* polish code
* add a new tensor structure and override linear for it
* polish
* polish
* polish
* polish
* polish
* polish
* polish
* polish
* polish
* polish
* polish
* [tensor] renaming and reorganize directory structure.
* rm useless dir
* polish
* polish
* [tensor] handle functions that are not wrapped
* polish
2022-04-21 14:25:27 +08:00
Jiarui Fang
0ce8924ceb
[tensor] reorganize files ( #820 )
2022-04-21 14:15:48 +08:00
Jiarui Fang
ab962b9735
[gemini] a new tensor structure ( #818 )
...
* Revert "[zero] add ZeroTensorShardStrategy (#793 )"
This reverts commit 88759e289e.
* [gemini] set cpu memory capacity
* [log] local throughput collecting
* polish
* polish
* polish
* polish code
* polish
* polish code
* add a new tensor structure and override linear for it
* polish
* polish
* polish
* polish
* polish
* polish
* polish
* polish
* polish
* polish
* polish
2022-04-21 11:42:37 +08:00
FrankLeeeee
70ed11d07e
[cli] added check installation cli
2022-04-20 12:13:27 +08:00
YuliangLiu0306
c7eca40f51
Merge pull request #812 from FrankLeeeee/feature/cli
...
[cli] fixed single-node process launching
2022-04-20 11:40:07 +08:00
Jiarui Fang
3ddbd1bce1
[gemini] collect cpu-gpu moving volume in each iteration ( #813 )
2022-04-20 11:29:48 +08:00
FrankLeeeee
d522cb704e
[cli] fixed single-node process launching
2022-04-20 10:46:51 +08:00
Jiarui Fang
61c20b44bc
[log] local throughput metrics ( #811 )
...
* Revert "[zero] add ZeroTensorShardStrategy (#793 )"
This reverts commit 88759e289e.
* [gemini] set cpu memory capacity
* [log] local throughput collecting
* polish
* polish
* polish
* polish code
* polish
2022-04-20 10:05:39 +08:00
ver217
dd92b90a68
[DO NOT MERGE] [zero] init fp16 params directly in ZeroInitContext ( #808 )
...
* init fp16 param directly
* polish code
2022-04-19 16:16:48 +08:00
Jiarui Fang
227d1cd4b3
[gemini] APIs to set cpu memory capacity ( #809 )
2022-04-19 16:05:22 +08:00
FrankLeeeee
f63e91d280
[cli] fixed a bug in user args and refactored the module structure
2022-04-19 15:15:16 +08:00
Jiarui Fang
e761ad2cd7
Revert "[zero] add ZeroTensorShardStrategy ( #793 )" ( #806 )
2022-04-19 14:40:02 +08:00
HELSON
88759e289e
[zero] add ZeroTensorShardStrategy ( #793 )
2022-04-19 14:32:45 +08:00
Jiarui Fang
681addb512
[refactor] moving grad acc logic to engine ( #804 )
2022-04-19 14:03:21 +08:00
Frank Lee
05d9ae5999
[cli] add missing requirement ( #805 )
2022-04-19 13:56:59 +08:00
YuliangLiu0306
de2f581d43
[cli] added micro benchmarking for tp ( #789 )
...
* [CLI] add CLI launcher
* Revert "[CLI] add CLI launcher"
This reverts commit df7e6506d4.
* [CLI]add cli benchmark feature
* fix CodeFactor issues.
* refactor the module structure.
2022-04-19 12:08:28 +08:00
YuliangLiu0306
cfadc9df8e
[cli] added distributed launcher command ( #791 )
...
* [CLI] add CLI launcher
* Revert "[CLI] add CLI launcher"
This reverts commit df7e6506d4.
* [CLI]add cli launcher feature
* remove testing message used during developing
* refactor the module structure.
2022-04-19 10:59:44 +08:00
Jiarui Fang
4d9332b4c5
[refactor] moving memtracer to gemini ( #801 )
2022-04-19 10:13:08 +08:00
Jiarui Fang
8711c706f4
[hotfix] fix grad offload when enabling reuse_fp16_shard
2022-04-18 14:58:21 +08:00
ver217
f1fa1a675f
fix grad offload when enabling reuse_fp16_shard
2022-04-18 14:07:39 +08:00
HELSON
4c4388c46e
[hotfix] fix memory leak in zero ( #781 )
2022-04-18 13:57:03 +08:00
Ziyue Jiang
4b01da24cd
[TP] change the check assert in split batch 2d ( #772 )
2022-04-16 21:29:57 +08:00
ver217
846406a07a
[gemini] fix auto tensor placement policy ( #775 )
2022-04-16 21:29:31 +08:00
HELSON
a65cbb7e4e
[zero] refactor shard and gather operation ( #773 )
2022-04-15 14:41:31 +08:00
ver217
6e553748a7
polish sharded optim docstr and warning ( #770 )
2022-04-14 21:03:59 +08:00
LuGY
80e37eec42
fix the ckpt bugs when using DDP ( #769 )
2022-04-14 21:03:24 +08:00
Frank Lee
920fe31526
[compatibility] used backward-compatible API for global process group ( #758 )
2022-04-14 17:20:35 +08:00
Frank Lee
4ea49cb536
[test] added a decorator for address already in use error with backward compatibility ( #760 )
...
* [test] added a decorator for address already in use error with backward compatibility
* [test] added a decorator for address already in use error with backward compatibility
2022-04-14 16:48:44 +08:00
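A rerun-on-exception decorator like the one above is handy for distributed tests, where a just-finished test can leave a port briefly occupied. A self-contained sketch of the pattern (names are illustrative, not the actual colossalai.testing API):

```python
import functools
import re


def rerun_on_exception(pattern: str, max_try: int = 3):
    """Re-run the wrapped function when the raised error matches pattern."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            for attempt in range(max_try):
                try:
                    return fn(*args, **kwargs)
                except Exception as e:
                    if attempt == max_try - 1 or not re.search(pattern, str(e)):
                        raise
        return wrapper
    return decorator


@rerun_on_exception(pattern="address already in use")
def test_distributed():
    ...  # would init a process group on a port that may still be busy
```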
Jiarui Fang
10ef8afdd2
[gemini] init gemini individual directory ( #754 )
2022-04-14 16:40:26 +08:00
ver217
dcca614eee
[hotfix] fix test_stateful_tensor_mgr ( #762 )
2022-04-14 15:50:09 +08:00
ver217
a93a7d7364
[hotfix] fix reuse_fp16_shard of sharded model ( #756 )
...
* fix reuse_fp16_shard
* disable test stm
* polish code
2022-04-14 14:56:46 +08:00
ver217
8f7ce94b8e
[hotfix] fix auto tensor placement policy ( #753 )
2022-04-14 12:04:45 +08:00
HELSON
84c6700b2a
[zero] refactor memstats_collector ( #746 )
2022-04-14 12:01:12 +08:00
アマデウス
b8899e0905
[TP] allow layernorm without bias ( #750 )
2022-04-14 11:43:56 +08:00
Jiarui Fang
3d7dc46d33
[zero] use factory pattern for tensor_placement_policy ( #752 )
2022-04-14 11:07:29 +08:00
ver217
4b048a8728
fix prepare grads in sharded optim ( #749 )
2022-04-13 22:36:11 +08:00
ver217
097772546e
fix initialization of zero
2022-04-13 19:10:21 +08:00
ver217
e396bb71f2
[zero] add tensor placement policies ( #743 )
...
* add tensor placement policies
* polish comments
* polish comments
* update moe unit tests
2022-04-13 15:00:48 +08:00
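Tensor placement policies, as introduced above, answer one question per tensor: should its payload live on CPU or CUDA? A hypothetical sketch of a cpu/cuda/auto split, with a factory in the spirit of the follow-up commit #752 listed earlier (names and the fp32 assumption are illustrative):

```python
import torch


class CPUPolicy:
    def device(self, numel: int) -> torch.device:
        return torch.device("cpu")


class CUDAPolicy:
    def device(self, numel: int) -> torch.device:
        return torch.device("cuda")


class AutoPolicy:
    """Keep a tensor on GPU only while it fits in the remaining budget."""
    def __init__(self, cuda_budget_bytes: int):
        self.budget = cuda_budget_bytes

    def device(self, numel: int) -> torch.device:
        need = numel * 4                      # assume fp32 payloads
        if need <= self.budget:
            self.budget -= need
            return torch.device("cuda")
        return torch.device("cpu")


def make_policy(name: str, **kwargs):
    return {"cpu": CPUPolicy, "cuda": CUDAPolicy, "auto": AutoPolicy}[name](**kwargs)


policy = make_policy("auto", cuda_budget_bytes=1 << 30)
print(policy.device(1 << 20))  # cuda: a 4 MB tensor fits the 1 GB budget
```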
HELSON
22c4b88d56
[zero] refactor ShardedParamV2 for convenience ( #742 )
2022-04-13 14:54:26 +08:00
HELSON
340e59f968
[utils] add synchronized cuda memory monitor ( #740 )
2022-04-13 10:50:54 +08:00
ver217
e6212f56cd
[hotfix] fix memory leak in backward of sharded model ( #741 )
2022-04-13 09:59:05 +08:00
Frank Lee
a4e91bc87f
[bug] fixed grad scaler compatibility with torch 1.8 ( #735 )
2022-04-12 16:04:21 +08:00
Jiarui Fang
53cb584808
[utils] correct cpu memory used and capacity in the context of multi-process ( #726 )
2022-04-12 14:57:54 +08:00
Jiarui Fang
7db3ccc79b
[hotfix] remove duplicated param register to stateful tensor manager ( #728 )
2022-04-12 13:55:25 +08:00
Frank Lee
1cb7bdad3b
[util] fixed communication API depth with PyTorch 1.9 ( #721 )
2022-04-12 09:44:40 +08:00
Frank Lee
2412429d54
[util] fixed activation checkpointing on torch 1.9 ( #719 )
2022-04-12 09:35:45 +08:00
Frank Lee
04ff5ea546
[utils] support detection of number of processes on current node ( #723 )
2022-04-12 09:28:19 +08:00
Jiarui Fang
4d90a7b513
[refactor] zero directory ( #724 )
2022-04-11 23:13:02 +08:00
Jiarui Fang
193dc8dacb
[refactor] refactor the memory utils ( #715 )
2022-04-11 16:47:57 +08:00
HELSON
dbd96fe90a
[zero] check whether gradients have inf and nan in gpu ( #712 )
2022-04-11 15:40:13 +08:00
ver217
715b86eadd
[hotfix] fix stm cuda model data size ( #710 )
2022-04-11 15:10:39 +08:00
LuGY
140263a394
[hotfix]fixed bugs of assigning grad states to non leaf nodes ( #711 )
...
* fixed bugs of assigning grad states to non leaf nodes
* use detach()
2022-04-11 14:04:58 +08:00
Frank Lee
eda30a058e
[compatibility] fixed tensor parallel compatibility with torch 1.9 ( #700 )
2022-04-11 13:44:50 +08:00
HELSON
a9b8300d54
[zero] improve adaptability for non-sharded parameters ( #708 )
...
* adapt post grad hooks for non-sharded parameters
* adapt optimizer for non-sharded parameters
* offload gradients for non-replicated parameters
2022-04-11 13:38:51 +08:00
ver217
ab8c6b4a0e
[zero] refactor memstats collector ( #706 )
...
* refactor memstats collector
* fix disposable
* polish code
2022-04-11 10:46:08 +08:00
アマデウス
3fc8a204dc
Corrected 3d vocab parallel embedding ( #707 )
2022-04-11 10:17:55 +08:00
HELSON
ee112fe1da
[zero] adapt zero hooks for unsharded module ( #699 )
2022-04-08 20:23:26 +08:00
ver217
3c9cd5bb5e
[zero] stateful tensor manager ( #687 )
...
* [WIP] stateful tensor manager
* add eviction strategy
* polish code
* polish code
* polish comment
* add unit test
* fix sampler bug
* polish code
* fix max sampling cnt resetting bug
* fix sampler bug
* polish code
* fix bug
* fix unit test
Co-authored-by: jiaruifang <fangjiarui123@gmail.com>
2022-04-08 17:51:34 +08:00
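The bullets above pair the stateful tensor manager with an eviction strategy. A minimal sketch of that combination, assuming an LRU order and a fixed CUDA byte budget (single-process, hypothetical names; a real manager would actually migrate payloads between devices):

```python
from collections import OrderedDict
import torch


class StatefulTensorManager:
    """Track which tensors live on CUDA; evict the least recently used first."""

    def __init__(self, cuda_budget_bytes: int):
        self.budget = cuda_budget_bytes
        self.used = 0
        self.on_cuda = OrderedDict()          # id(tensor) -> tensor, LRU order

    def touch(self, t: torch.Tensor):
        """Mark t as needed on CUDA, evicting LRU tensors to make room."""
        nbytes = t.numel() * t.element_size()
        if id(t) in self.on_cuda:
            self.on_cuda.move_to_end(id(t))   # refresh recency
            return
        while self.on_cuda and self.used + nbytes > self.budget:
            _, victim = self.on_cuda.popitem(last=False)   # evict LRU
            victim.data = victim.data.to("cpu")            # offload its payload
            self.used -= victim.numel() * victim.element_size()
        # a real manager would move t's payload to CUDA at this point
        self.on_cuda[id(t)] = t
        self.used += nbytes


mgr = StatefulTensorManager(cuda_budget_bytes=16)
a, b = torch.zeros(2), torch.zeros(2)   # 8 bytes each in fp32
mgr.touch(a); mgr.touch(b)              # both fit
mgr.touch(torch.zeros(4))               # needs 16 bytes: evicts a, then b
```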
HELSON
d7ecaf362b
[zero] fix init bugs in zero context ( #686 )
...
* adapt model weight initialization for methods in PyTorch nn.init
2022-04-07 17:38:45 +08:00
YuliangLiu0306
0ed7042f42
[pipeline] refactor pipeline ( #679 )
...
* refactor pipeline---put runtime schedule into engine.
* add type hint for schedule Optional[BaseSchedule]
* preprocess schedule during engine initializing
* infer pipeline schedule params from config
2022-04-07 15:54:14 +08:00
Jiarui Fang
59bf2dc590
[zero] initialize a stateful tensor manager ( #614 )
2022-04-06 16:18:49 +08:00
encmps
79ccfa4310
[NFC] polish colossalai/kernel/cuda_native/csrc/multi_tensor_adam.cu code style ( #667 )
2022-04-06 11:40:59 +08:00
lucasliunju
e4bcff9b0f
[NFC] polish colossalai/builder/builder.py code style ( #662 )
2022-04-06 11:40:59 +08:00
shenggan
331683bf82
[NFC] polish colossalai/kernel/cuda_native/csrc/layer_norm_cuda_kernel.cu code style ( #661 )
2022-04-06 11:40:59 +08:00
FredHuang99
c336cd3066
[NFC] polish colossalai/communication/utils.py code style ( #656 )
2022-04-06 11:40:59 +08:00
MaxT
5ab9a71299
[NFC] polish colossalai/kernel/cuda_native/csrc/moe_cuda.cpp code style ( #642 )
2022-04-06 11:40:59 +08:00
Xue Fuzhao
10afec728f
[NFC] polish colossalai/kernel/cuda_native/csrc/kernels/include/cuda_util.h code style ( #641 )
2022-04-06 11:40:59 +08:00
Cautiousss
055d0270c8
[NFC] polish colossalai/context/process_group_initializer/initializer_sequence.py colossalai/context/process_group_initializer/initializer_tensor.py code style ( #639 )
...
Co-authored-by: 何晓昕 <cautious@r-236-100-25-172.comp.nus.edu.sg>
2022-04-06 11:40:59 +08:00
Ziheng Qin
c7c224ee17
[NFC] polish colossalai/builder/pipeline.py code style ( #638 )
2022-04-06 11:40:59 +08:00
Sze-qq
10591ecdf9
[NFC] polish colossalai/kernel/cuda_native/csrc/cpu_adam.cpp code style ( #636 )
2022-04-06 11:40:59 +08:00
Wangbo Zhao
6fcb381801
[NFC] polish colossalai/kernel/cuda_native/csrc/multi_tensor_l2norm_kernel.cu code style ( #635 )
2022-04-06 11:40:59 +08:00
ExtremeViscent
8a5d526e95
[NFC] polish colossalai/kernel/cuda_native/csrc/kernels/dropout_kernels.cu and cross_entropy.cu code style ( #634 )
2022-04-06 11:40:59 +08:00
RichardoLuo
ad1e7ab2b2
'[NFC] polish <colossalai/engine/_base_engine.py> code style' ( #631 )
...
Co-authored-by: RichardoLuo <14049555596@qq.com>
2022-04-06 11:40:59 +08:00
Zangwei
2e11853d04
[NFC] polish colossalai/communication/ring.py code style ( #630 )
2022-04-06 11:40:59 +08:00
puck_WCR
01cc941e1d
[NFC] polish colossalai/kernel/cuda_native/csrc/kernels/transform_kernels.cu code style ( #629 )
2022-04-06 11:40:59 +08:00
superhao1995
c1bed0d998
[NFC] polish colossalai/kernel/cuda_native/csrc/multi_tensor_lamb.cu code style ( #628 )
2022-04-06 11:40:59 +08:00
Jiang Zhuo
0a96338b13
[NFC] polish <colossalai/context/process_group_initializer/initializer_data.py> code style ( #626 )
...
Co-authored-by: 姜卓 <jiangzhuo@jiangzhuodeMacBook-Pro.local>
2022-04-06 11:40:59 +08:00
ziyu huang
701bad439b
[NFC] polish colossalai/context/process_group_initializer/process_group_initializer.py code style ( #617 )
...
Co-authored-by: Arsmart123 <202476410arsmart@gmail.com>
2022-04-06 11:40:59 +08:00
Shawn-Kong
db54419409
fix format ( #613 )
...
Co-authored-by: evin K <evink@evins-MacBook-Air.local>
2022-04-06 11:40:59 +08:00
Yuer867
5ecef13c16
fix format ( #611 )
2022-04-06 11:40:59 +08:00
xyupeng
d3d5bedc65
fix format ( #607 )
2022-04-06 11:40:59 +08:00
xuqifan897
f2d2a1597a
fix format ( #608 )
2022-04-06 11:40:59 +08:00
doubleHU
f2da21a827
fix format ( #586 )
2022-04-06 11:40:59 +08:00
fanjinfucool
ffad81e1d1
fix format ( #585 )
...
Co-authored-by: fanjifu <FAN>
2022-04-06 11:40:59 +08:00
binmakeswell
6582aedc94
fix format ( #583 )
2022-04-06 11:40:59 +08:00
DouJS
f08fc17f2b
block_reduce.h fix format ( #581 )
2022-04-06 11:40:59 +08:00
Maruyama_Aya
d2dc6049b5
fix format ( #580 )
2022-04-06 11:40:59 +08:00
wky
174b9c1d85
fix format ( #574 )
2022-04-06 11:40:59 +08:00
BoxiangW
dfe423ae42
fix format ( #572 )
2022-04-06 11:40:59 +08:00
yuxuan-lou
cfb41297ff
'fix/format' ( #573 )
2022-04-06 11:40:59 +08:00
Kai Wang (Victor Kai)
b0f708dfc1
fix format ( #570 )
2022-04-06 11:40:59 +08:00
Xu Kai
2a915a8b62
fix format ( #568 )
2022-04-06 11:40:59 +08:00
YuliangLiu0306
9420d3ae31
fix format ( #567 )
2022-04-06 11:40:59 +08:00
Jie Zhu
0f1da44e5e
[format]colossalai/kernel/cuda_native/csrc/layer_norm_cuda.cpp ( #566 )
2022-04-06 11:40:59 +08:00
coder-chin
5835631218
fix format ( #564 )
2022-04-06 11:40:59 +08:00
Luxios22
e014144c44
fix format ( #565 )
2022-04-06 11:40:59 +08:00
Ziyue Jiang
1762ba14ab
fix format ( #563 )
2022-04-06 11:40:59 +08:00
HELSON
17e73e62cc
[hotfix] fix bugs for unsharded parameters when restore data ( #664 )
2022-04-03 22:02:11 +08:00
Jiarui Fang
0aab52301e
[hotfix] fix a bug in model data stats tracing ( #655 )
2022-04-03 21:48:06 +08:00
YuliangLiu0306
ade05a5d83
[refactor] pipeline, put runtime schedule into engine. ( #627 )
2022-04-03 20:46:45 +08:00
HELSON
e5d615aeee
[hotfix] fix bugs in testing ( #659 )
...
* remove hybrid adam in test_moe_zero_optim
* fix activation checkpointing and its unit test
2022-04-02 21:58:47 +08:00
Jiarui Fang
036404ca8a
Revert "[zero] polish init context ( #645 )" ( #657 )
2022-04-02 18:30:06 +08:00
HELSON
b31daed4cf
fix bugs in CPU adam ( #633 )
...
* add cpu adam counter for all cpu adam
* fixed updating error in adam kernel
2022-04-02 17:04:05 +08:00
LuGY
1e2557e801
[zero] fixed the activation offload ( #647 )
...
* fixed the activation offload
* polish
2022-04-02 16:21:32 +08:00
Liang Bowen
828e465622
[hotfix] Raise messages for indivisible batch sizes with tensor parallelism ( #622 )
2022-04-02 16:12:04 +08:00
Jiarui Fang
67b4928244
[zero] polish init context ( #645 )
2022-04-02 15:52:04 +08:00
ver217
f5d3a9c2b0
polish checkpoint docstring ( #637 )
2022-04-02 13:34:33 +08:00
HELSON
055fbf5be6
[zero] adapt zero for unsharded parameters (Optimizer part) ( #601 )
2022-04-01 20:10:47 +08:00
KAIYUAN GAN
229382c844
[NFC] polish colossalai/kernel/cuda_native/csrc/kernels/cuda_util.cu code style ( #625 )
2022-04-01 17:45:53 +08:00
アマデウス
28b515d610
[model checkpoint] updated checkpoint hook ( #598 )
2022-04-01 16:53:03 +08:00
アマデウス
77ad24bf94
[model checkpoint] updated saving/loading for 3d layers ( #597 )
2022-04-01 16:52:47 +08:00
アマデウス
93089ed708
[model checkpoint] updated saving/loading for 2.5d layers ( #596 )
2022-04-01 16:52:33 +08:00
アマデウス
6302069c0e
[model checkpoint] updated communication ops for cpu tensors ( #590 )
2022-04-01 16:52:20 +08:00
アマデウス
c50bfb807b
[model checkpoint] updated saving/loading for 1d layers ( #594 )
2022-04-01 16:51:52 +08:00
アマデウス
7636d518e1
[model checkpoint] updated saving/loading for 2d layers ( #595 )
2022-04-01 16:50:34 +08:00
アマデウス
cd13b63832
[model checkpoint] reworked unified layers for ease of save/load states ( #593 )
2022-04-01 16:49:56 +08:00
アマデウス
acae68eb04
[model checkpoint] updated checkpoint save/load utils ( #592 )
2022-04-01 16:49:21 +08:00
Ziyue Jiang
1c40ee8749
[TP] add assert for tp1d ( #621 )
2022-04-01 16:44:23 +08:00
ver217
369a288bf3
polish utils docstring ( #620 )
2022-04-01 16:36:47 +08:00
ver217
e619a651fb
polish optimizer docstring ( #619 )
2022-04-01 16:27:03 +08:00
ver217
8432dc7080
polish moe docstring ( #618 )
2022-04-01 16:15:36 +08:00
ver217
c5b488edf8
polish amp docstring ( #616 )
2022-04-01 16:09:39 +08:00
ver217
0ef8819c67
polish docstring of zero ( #612 )
2022-04-01 14:50:56 +08:00
LuGY
02b187c14f
[zero] add sampling time for memstats collector ( #610 )
2022-04-01 14:03:00 +08:00
ver217
9bee119104
[hotfix] fix sharded optim zero grad ( #604 )
...
* fix sharded optim zero grad
* polish comments
2022-04-01 12:41:20 +08:00
アマデウス
297b8baae2
[model checkpoint] add gloo groups for cpu tensor communication ( #589 )
2022-04-01 10:15:52 +08:00
アマデウス
54e688b623
moved ensure_path_exists to utils.common ( #591 )
2022-04-01 09:46:33 +08:00
Jiarui Fang
e956d93ac2
[refactor] memory utils ( #577 )
2022-04-01 09:22:33 +08:00
ver217
104cbbb313
[hotfix] add hybrid adam to __init__ ( #584 )
2022-03-31 19:08:34 +08:00
HELSON
e6d50ec107
[zero] adapt zero for unsharded parameters ( #561 )
...
* support existing sharded and unsharded parameters in zero
* add unit test for moe-zero model init
* polish moe gradient handler
2022-03-31 18:34:11 +08:00
Wesley
46c9ba33da
update code format
2022-03-31 17:15:08 +08:00
Wesley
666cfd094a
fix parallel_input flag for Linear1D_Col gather_output
2022-03-31 17:15:08 +08:00
ver217
7c6c427db1
[zero] trace states of fp16/32 grad and fp32 param ( #571 )
2022-03-31 16:26:54 +08:00
Jiarui Fang
7675366fce
[polish] rename col_attr -> colo_attr ( #558 )
2022-03-31 12:25:45 +08:00
Liang Bowen
2c45efc398
html refactor ( #555 )
2022-03-31 11:36:56 +08:00
Jiarui Fang
d1211148a7
[utils] update colo tensor moving APIs ( #553 )
2022-03-30 23:13:24 +08:00
LuGY
c44d797072
[docs] updated docs of hybrid adam and cpu adam ( #552 )
2022-03-30 18:14:59 +08:00
ver217
014bac0c49
[zero] hijack p.grad in sharded model ( #554 )
...
* hijack p.grad in sharded model
* polish comments
* polish comments
2022-03-30 18:14:50 +08:00
Jiarui Fang
f552b11294
[zero] label state for param fp16 and grad ( #551 )
2022-03-30 15:57:46 +08:00
Jiarui Fang
214da761d4
[zero] add stateful tensor ( #549 )
2022-03-30 13:51:37 +08:00
Jiarui Fang
107b99ddb1
[zero] dump memory stats for sharded model ( #548 )
2022-03-30 09:38:44 +08:00
Ziyue Jiang
763dc325f1
[TP] Add gather_out arg to Linear ( #541 )
2022-03-30 09:35:46 +08:00
HELSON
8c90d4df54
[zero] add zero context manager to change config during initialization ( #546 )
2022-03-29 17:57:59 +08:00
Liang Bowen
ec5086c49c
Refactored docstring to google style
2022-03-29 17:17:47 +08:00
Jiarui Fang
53b1b6e340
[zero] non model data tracing ( #545 )
2022-03-29 15:45:48 +08:00
Jie Zhu
73d36618a6
[profiler] add MemProfiler ( #356 )
...
* add memory trainer hook
* fix bug
* add memory trainer hook
* fix import bug
* fix import bug
* add trainer hook
* fix #370 git log bug
* modify `to_tensorboard` function to support better output
* remove useless output
* change the name of `MemProfiler`
* complete memory profiler
* replace error with warning
* finish trainer hook
* modify interface of MemProfiler
* modify `__init__.py` in profiler
* remove unnecessary pass statement
* add usage to doc string
* add usage to trainer hook
* new location to store temp data file
2022-03-29 12:48:34 +08:00
ver217
fb841dd5c5
[zero] optimize grad offload ( #539 )
...
* optimize grad offload
* polish code
* polish code
2022-03-29 12:48:00 +08:00
Jiarui Fang
7d81b5b46e
[logging] polish logger format ( #543 )
2022-03-29 10:37:11 +08:00
ver217
1f90a3b129
[zero] polish ZeroInitContext ( #540 )
2022-03-29 09:09:04 +08:00
Jiarui Fang
c11ff81b15
[zero] get memory usage of sharded optim v2. ( #542 )
2022-03-29 09:08:18 +08:00
HELSON
a30e2b4c24
[zero] adapt for no-leaf module in zero ( #535 )
...
only process module's own parameters in Zero context
add zero hooks for all modules that contain parameters
gather parameters only belonging to module itself
2022-03-28 17:42:18 +08:00
Jiarui Fang
705f56107c
[zero] refactor model data tracing ( #537 )
2022-03-28 16:38:18 +08:00
Jiarui Fang
a590ed0ba3
[zero] improve the accuracy of get_memory_usage of sharded param ( #538 )
2022-03-28 16:19:19 +08:00
Jiarui Fang
37cb70feec
[zero] get memory usage for sharded param ( #536 )
2022-03-28 15:01:21 +08:00
Jiarui Fang
05e33b2578
[zero] fix grad offload ( #528 )
...
* [zero] fix grad offload
* polish code
2022-03-25 18:23:25 +08:00
LuGY
105c5301c3
[zero]added hybrid adam, removed loss scale in adam ( #527 )
...
* [zero]added hybrid adam, removed loss scale of adam
* remove useless code
2022-03-25 18:03:54 +08:00
Jiarui Fang
8d8c5407c0
[zero] refactor model data tracing ( #522 )
2022-03-25 18:03:32 +08:00
Frank Lee
3601b2bad0
[test] fixed rerun_on_exception and adapted test cases ( #487 )
2022-03-25 17:25:12 +08:00
Jiarui Fang
4d322b79da
[refactor] remove old zero code ( #517 )
2022-03-25 14:54:39 +08:00
LuGY
6a3f9fda83
[cuda] modify the fused adam, support hybrid of fp16 and fp32 ( #497 )
2022-03-25 14:15:53 +08:00
Jiarui Fang
920c5889a7
[zero] add colo move inline ( #521 )
2022-03-25 14:02:55 +08:00
ver217
7be397ca9c
[log] polish disable_existing_loggers ( #519 )
2022-03-25 12:30:55 +08:00
Jiarui Fang
0bebda6ea5
[zero] fix init device bug in zero init context unittest ( #516 )
2022-03-25 12:24:18 +08:00
Jiarui Fang
7ef3507ace
[zero] show model data cuda memory usage after zero context init. ( #515 )
2022-03-25 11:23:35 +08:00
ver217
a2e61d61d4
[zero] zero init ctx enable rm_torch_payload_on_the_fly ( #512 )
...
* enable rm_torch_payload_on_the_fly
* polish docstr
2022-03-24 23:44:00 +08:00
Jiarui Fang
81145208d1
[install] run without rich ( #513 )
2022-03-24 17:39:50 +08:00
Jiarui Fang
bca0c49a9d
[zero] use colo model data api in optimv2 ( #511 )
2022-03-24 17:19:34 +08:00
Jiarui Fang
9330be0f3c
[memory] set cuda mem frac ( #506 )
2022-03-24 16:57:13 +08:00
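Capping per-process CUDA memory is exposed by PyTorch itself (since 1.8) via torch.cuda.set_per_process_memory_fraction; a commit like the one above would typically wrap a call of this shape. Illustrative usage, guarded so it also runs on CPU-only machines:

```python
import torch

if torch.cuda.is_available():
    # let this process use at most 40% of device 0's memory
    torch.cuda.set_per_process_memory_fraction(0.4, device=0)
```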
Jiarui Fang
0035b7be07
[memory] add model data tensor moving api ( #503 )
2022-03-24 14:29:41 +08:00
Jiarui Fang
a445e118cf
[polish] polish singleton and global context ( #500 )
2022-03-23 18:03:39 +08:00
ver217
9ec1ce6ab1
[zero] sharded model supports the reuse of fp16 shard ( #495 )
...
* sharded model supports reuse fp16 shard
* rename variable
* polish code
* polish code
* polish code
2022-03-23 14:59:59 +08:00
HELSON
f24b5ed201
[MOE] remove old MoE legacy ( #493 )
2022-03-22 17:37:16 +08:00
ver217
c4c02424f3
[zero] sharded model manages ophooks individually ( #492 )
2022-03-22 17:33:20 +08:00
HELSON
c9023d4078
[MOE] support PR-MOE ( #488 )
2022-03-22 16:48:22 +08:00
ver217
a9ecb4b244
[zero] polish sharded optimizer v2 ( #490 )
2022-03-22 15:53:48 +08:00
ver217
62b0a8d644
[zero] sharded optim support hybrid cpu adam ( #486 )
...
* sharded optim support hybrid cpu adam
* update unit test
* polish docstring
2022-03-22 14:56:59 +08:00
Jiarui Fang
b334822163
[zero] polish sharded param name ( #484 )
...
* [zero] polish sharded param name
* polish code
* polish
* polish code
* polish
* polish
* polish
2022-03-22 14:36:16 +08:00
HELSON
d7ea63992b
[MOE] add FP32LinearGate for MOE in NaiveAMP context ( #480 )
2022-03-22 10:50:20 +08:00
Jiarui Fang
65c0f380c2
[format] polish name format for MOE ( #481 )
2022-03-21 23:19:47 +08:00
ver217
8d3250d74b
[zero] ZeRO supports pipeline parallel ( #477 )
2022-03-21 16:55:37 +08:00
Frank Lee
83a847d058
[test] added rerun on exception for testing ( #475 )
...
* [test] added rerun on exception function
* polish code
2022-03-21 15:51:57 +08:00
HELSON
7544347145
[MOE] add unit tests for MOE experts layout, gradient handler and kernel ( #469 )
2022-03-21 13:35:04 +08:00
ver217
3cb3fc275e
zero init ctx receives a dp process group ( #471 )
2022-03-21 11:18:55 +08:00
HELSON
aff9d354f7
[MOE] polish moe_env ( #467 )
2022-03-19 15:36:25 +08:00
HELSON
bccbc15861
[MOE] changed parallelmode to dist process group ( #460 )
2022-03-19 13:46:29 +08:00
ver217
fc8e6db005
[doc] Update docstring for ZeRO ( #459 )
...
* polish sharded model docstr
* polish sharded optim docstr
* polish zero docstr
* polish shard strategy docstr
2022-03-18 16:48:20 +08:00
HELSON
84fd7c1d4d
add moe context, moe utilities and refactor gradient handler ( #455 )
2022-03-18 16:38:32 +08:00
ver217
a241f61b34
[zero] Update initialize for ZeRO ( #458 )
...
* polish code
* shard strategy receive pg in shard() / gather()
* update zero engine
* polish code
2022-03-18 16:18:31 +08:00
ver217
642846d6f9
update sharded optim and fix zero init ctx ( #457 )
2022-03-18 15:44:47 +08:00
Jiarui Fang
e2e9f82588
Revert "[zero] update sharded optim and fix zero init ctx" ( #456 )
...
* Revert "polish code"
This reverts commit 8cf7ff08cf
.
* Revert "rename variables"
This reverts commit e99af94ab8
.
* Revert "remove surplus imports"
This reverts commit 46add4a5c5
.
* Revert "update sharded optim and fix zero init ctx"
This reverts commit 57567ee768
.
2022-03-18 15:22:43 +08:00
ver217
e99af94ab8
rename variables
2022-03-18 14:25:25 +08:00
ver217
57567ee768
update sharded optim and fix zero init ctx
2022-03-18 14:25:25 +08:00
Jiarui Fang
0fcfb1e00d
[test] make zero engine test really work ( #447 )
2022-03-17 17:24:25 +08:00
Jiarui Fang
237d08e7ee
[zero] hybrid cpu adam ( #445 )
2022-03-17 15:05:41 +08:00
Frank Lee
b72b8445c6
optimized context test time consumption ( #446 )
2022-03-17 14:40:52 +08:00
Jiarui Fang
496cbb0760
[hotfix] fix initialize bug with zero ( #442 )
2022-03-17 13:16:22 +08:00
Jiarui Fang
640a6cd304
[refactor] refactor the initialize method for the new zero design ( #431 )
2022-03-16 19:29:37 +08:00
Frank Lee
bffd85bf34
added testing module ( #435 )
2022-03-16 17:20:05 +08:00
HELSON
dbdc9a7783
added Multiply Jitter and capacity factor eval for MOE ( #434 )
2022-03-16 16:47:44 +08:00
Frank Lee
b03b3ae99c
fixed mem monitor device ( #433 )
...
fixed mem monitor device
2022-03-16 15:25:02 +08:00
Frank Lee
14a7094243
fixed fp16 optimizer none grad bug ( #432 )
2022-03-16 14:35:46 +08:00
ver217
fce9432f08
sync before creating empty grad
2022-03-16 14:24:09 +08:00
ver217
ea6905a898
free param.grad
2022-03-16 14:24:09 +08:00
ver217
9506a8beb2
use double buffer to handle grad
2022-03-16 14:24:09 +08:00
Jiarui Fang
54229cd33e
[log] better logging display with rich ( #426 )
...
* better logger using rich
* remove deepspeed in zero requirements
2022-03-16 09:51:15 +08:00
HELSON
3f70a2b12f
removed noisy function during evaluation of MoE router ( #419 )
2022-03-15 12:06:09 +08:00
Jiarui Fang
adebb3e041
[zero] cuda margin space for OS ( #418 )
2022-03-15 12:02:19 +08:00
Jiarui Fang
56bb412e72
[polish] use GLOBAL_MODEL_DATA_TRACER ( #417 )
2022-03-15 11:29:46 +08:00
Jiarui Fang
23ba3fc450
[zero] refactor ShardedOptimV2 init method ( #416 )
2022-03-15 10:45:55 +08:00
Frank Lee
e79ea44247
[fp16] refactored fp16 optimizer ( #392 )
2022-03-15 10:05:38 +08:00
Jiarui Fang
21dc54e019
[zero] memtracer to record cuda memory usage of model data and overall system ( #395 )
2022-03-14 22:05:30 +08:00
Jiarui Fang
370f567e7d
[zero] new interface for ShardedOptimv2 ( #406 )
2022-03-14 20:48:41 +08:00
LuGY
a9c27be42e
Added tensor detector ( #393 )
...
* Added tensor detector
* Added the - states
* Allowed change include_cpu when detect()
2022-03-14 18:01:46 +08:00
1SAA
907ac4a2dc
fixed error when no collective communication in CommProfiler
2022-03-14 17:21:00 +08:00
Frank Lee
2fe68b359a
Merge pull request #403 from ver217/feature/shard-strategy
...
[zero] Add bucket tensor shard strategy
2022-03-14 16:29:28 +08:00
HELSON
dfd0363f68
polished output format for communication profiler and pcie profiler ( #404 )
...
fixed typing error
2022-03-14 16:07:45 +08:00
ver217
63469c0f91
polish code
2022-03-14 15:48:55 +08:00
ver217
88804aee49
add bucket tensor shard strategy
2022-03-14 14:48:32 +08:00
HELSON
7c079d9c33
[hotfix] fixed bugs in ShardStrategy and PcieProfiler ( #394 )
2022-03-11 18:12:46 +08:00
Frank Lee
1e4bf85cdb
fixed bug in activation checkpointing test ( #387 )
2022-03-11 15:50:28 +08:00
Jiarui Fang
3af13a2c3e
[zero] polish ShardedOptimV2 unittest ( #385 )
...
* place params on cpu after zero init context
* polish code
* bucketized cpu gpu tensor transfer
* find a bug in sharded optim unittest
* add offload unittest for ShardedOptimV2.
* polish code and make it more robust
2022-03-11 15:50:28 +08:00
Jiang Zhuo
5a4a3b77d9
fix format ( #376 )
2022-03-11 15:50:28 +08:00
LuGY
de46450461
Added activation offload ( #331 )
...
* Added activation offload
* Fixed the import bug, used the pytest
2022-03-11 15:50:28 +08:00
Jiarui Fang
272ebfb57d
[bug] shard param during initializing the ShardedModelV2 ( #381 )
2022-03-11 15:50:28 +08:00
HELSON
8c18eb0998
[profiler] Fixed bugs in CommProfiler and PcieProfiler ( #377 )
2022-03-11 15:50:28 +08:00
Jiarui Fang
b5f43acee3
[zero] find miss code ( #378 )
2022-03-11 15:50:28 +08:00
Jiarui Fang
6b6002962a
[zero] zero init context collect numel of model ( #375 )
2022-03-11 15:50:28 +08:00
HELSON
1ed7c24c02
Added PCIE profiler to detect data transmission ( #373 )
2022-03-11 15:50:28 +08:00
jiaruifang
d9217e1960
Revert "[zero] bucketized tensor cpu gpu copy ( #368 )"
...
This reverts commit bef05489b6.
2022-03-11 15:50:28 +08:00
RichardoLuo
8539898ec6
flake8 style change ( #363 )
2022-03-11 15:50:28 +08:00
Kai Wang (Victor Kai)
53bb3bcc0a
fix format ( #362 )
2022-03-11 15:50:28 +08:00
ziyu huang
a77d73f22b
fix format parallel_context.py ( #359 )
...
Co-authored-by: huangziyu <202476410arsmart@gmail.com>
2022-03-11 15:50:28 +08:00
Zangwei
c695369af0
fix format constants.py ( #358 )
2022-03-11 15:50:28 +08:00
Yuer867
4a0f8c2c50
fix format parallel_2p5d ( #357 )
2022-03-11 15:50:28 +08:00
Liang Bowen
7eb87f516d
flake8 style ( #352 )
2022-03-11 15:50:28 +08:00
Xu Kai
54ee8d1254
Fix/format colossalai/engine/paramhooks/ ( #350 )
2022-03-11 15:50:28 +08:00
Maruyama_Aya
e83970e3dc
fix format ColossalAI\colossalai\context\process_group_initializer
2022-03-11 15:50:28 +08:00
yuxuan-lou
3b88eb2259
Flake8 code restyle
2022-03-11 15:50:28 +08:00
xuqifan897
148207048e
Qifan formatted file ColossalAI\colossalai\nn\layer\parallel_1d\layers.py ( #342 )
2022-03-11 15:50:28 +08:00
Cautiousss
3a51d909af
fix format ( #332 )
...
Co-authored-by: 何晓昕 <cautious@r-205-106-25-172.comp.nus.edu.sg>
2022-03-11 15:50:28 +08:00
DouJS
cbb6436ff0
fix format for dir-[parallel_3d] ( #333 )
2022-03-11 15:50:28 +08:00
ExtremeViscent
eaac03ae1d
[format] format fixed for kernel\cuda_native codes ( #335 )
2022-03-11 15:50:28 +08:00
Jiarui Fang
00670c870e
[zero] bucketized tensor cpu gpu copy ( #368 )
2022-03-11 15:50:28 +08:00
Jiarui Fang
44e4891f57
[zero] able to place params on cpu after zero init context ( #365 )
...
* place params on cpu after zero init context
* polish code
2022-03-11 15:50:28 +08:00
ver217
253e54d98a
fix grad shape
2022-03-11 15:50:28 +08:00
Jiarui Fang
ea2872073f
[zero] global model data memory tracer ( #360 )
2022-03-11 15:50:28 +08:00
Jiarui Fang
cb34cd384d
[test] polish zero related unit tests ( #351 )
2022-03-11 15:50:28 +08:00
HELSON
534e0bb118
Fixed import bug for no-tensorboard environment ( #354 )
2022-03-11 15:50:28 +08:00
HELSON
c57e089824
[profile] added example for ProfilerContext ( #349 )
2022-03-11 15:50:28 +08:00
Jiarui Fang
10e2826426
move async memory to an individual directory ( #345 )
2022-03-11 15:50:28 +08:00
HELSON
425bb0df3f
Added Profiler Context to manage all profilers ( #340 )
2022-03-11 15:50:28 +08:00
ver217
d0ae0f2215
[zero] update sharded optim v2 ( #334 )
2022-03-11 15:50:28 +08:00
jiaruifang
5663616921
polish code
2022-03-11 15:50:28 +08:00
jiaruifang
7977422aeb
add bert for unit test; sharded model is not able to pass the bert case
2022-03-11 15:50:28 +08:00
Frank Lee
3d5d64bd10
refactored grad scaler ( #338 )
2022-03-11 15:50:28 +08:00
Frank Lee
6a3188167c
set criterion as optional in colossalai initialize ( #336 )
2022-03-11 15:50:28 +08:00
Jie Zhu
3213554cc2
[profiler] add adaptive sampling to memory profiler ( #330 )
...
* fix merge conflict
modify unit test
remove unnecessary log info
reformat file
* remove unused module
* remove unnecessary sync function
* change doc string style from Google to Sphinx
2022-03-11 15:50:28 +08:00
ver217
1388671699
[zero] Update sharded model v2 using sharded param v2 ( #323 )
2022-03-11 15:50:28 +08:00
Jiarui Fang
11bddb6e55
[zero] update zero context init with the updated test utils ( #327 )
2022-03-11 15:50:28 +08:00
HELSON
4f26fabe4f
fixed strings in profiler outputs ( #325 )
2022-03-11 15:50:28 +08:00
Jiarui Fang
de0468c7a8
[zero] zero init context ( #321 )
...
* add zero init context
* add more flags for zero init context
fix bug of repeatedly converting param to ShardedParamV2
* polish code
2022-03-11 15:50:28 +08:00
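The zero init context relies on one trick: while the context is open, every nn.Module subclass runs a post-init function right after construction, so freshly created parameters can be sharded (or offloaded) before they ever exist in full on every rank. A simplified, hypothetical sketch of that mechanism, not the actual implementation:

```python
import torch.nn as nn


class PostInitContext:
    """Wrap __init__ of every nn.Module subclass with a post-init callback."""

    def __init__(self, post_init_fn):
        self.post_init_fn = post_init_fn
        self._originals = {}

    def _all_subclasses(self):
        stack, seen = [nn.Module], set()
        while stack:
            for sub in stack.pop().__subclasses__():
                if sub not in seen:
                    seen.add(sub)
                    stack.append(sub)
        return seen

    def __enter__(self):
        for cls in self._all_subclasses():
            orig = cls.__dict__.get("__init__")
            if orig is None:        # class inherits __init__; its parent covers it
                continue
            self._originals[cls] = orig

            def make_wrapper(orig):
                def new_init(module, *args, **kwargs):
                    orig(module, *args, **kwargs)
                    self.post_init_fn(module)   # e.g. shard the new params
                return new_init

            cls.__init__ = make_wrapper(orig)
        return self

    def __exit__(self, *exc):
        for cls, orig in self._originals.items():
            cls.__init__ = orig


with PostInitContext(lambda m: print("built", type(m).__name__)):
    layer = nn.Linear(4, 4)   # prints "built Linear"
```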
1SAA
73bff11288
Added profiler communication operations
...
Fixed bug for learning rate scheduler
2022-03-11 15:50:28 +08:00
LuGY
a3269de5c9
[zero] cpu adam kernel ( #288 )
...
* Added CPU Adam
* finished the cpu adam
* updated the license
* delete useless parameters, removed resnet
* modified the method of cpu adam unittest
* deleted some useless codes
* removed useless codes
Co-authored-by: ver217 <lhx0217@gmail.com>
Co-authored-by: Frank Lee <somerlee.9@gmail.com>
Co-authored-by: jiaruifang <fangjiarui123@gmail.com>
2022-03-11 15:50:28 +08:00
Jiarui Fang
90d3aef62c
[zero] yet another improved sharded param ( #311 )
2022-03-11 15:50:28 +08:00
Jiarui Fang
c9e7d9582d
[zero] polish shard strategy ( #310 )
...
* init shard param from shape tuple
* add more unit tests for shard param
* add set_payload method for ShardedParam
* [zero] add sharded tensor class
* polish code
* add shard strategy
* move shard and gather logic to shard strategy from shard tensor.
* polish code
2022-03-11 15:50:28 +08:00
ver217
3092317b80
polish code
2022-03-11 15:50:28 +08:00
ver217
36f9a74ab2
fix sharded param hook and unit test
2022-03-11 15:50:28 +08:00
ver217
001ca624dd
impl shard optim v2 and add unit test
2022-03-11 15:50:28 +08:00
Jiarui Fang
74f77e314b
[zero] a shard strategy in granularity of tensor ( #307 )
2022-03-11 15:50:28 +08:00
Jiarui Fang
80364c7686
[zero] sharded tensor ( #305 )
...
* init shard param from shape tuple
* add more unit tests for shard param
* add set_payload method for ShardedParam
* [zero] add sharded tensor class
* polish code
2022-03-11 15:50:28 +08:00
Jie Zhu
d344689274
[profiler] primary memory tracer
2022-03-11 15:50:28 +08:00
ver217
b105371ace
rename shared adam to sharded optim v2
2022-03-11 15:50:28 +08:00
ver217
70814dc22f
fix master params dtype
2022-03-11 15:50:28 +08:00
ver217
795210dd99
add fp32 master params in sharded adam
2022-03-11 15:50:28 +08:00
ver217
a109225bc2
add sharded adam
2022-03-11 15:50:28 +08:00
Jiarui Fang
e17e92c54d
Polish sharded parameter ( #297 )
...
* init shard param from shape tuple
* add more unit tests for shard param
* add more unit tests to sharded param
2022-03-11 15:50:28 +08:00
ver217
7aef75ca42
[zero] add sharded grad and refactor grad hooks for ShardedModel ( #287 )
2022-03-11 15:50:28 +08:00
Frank Lee
9afb5c8b2d
fixed typo in ShardParam ( #294 )
2022-03-11 15:50:28 +08:00
Frank Lee
e17e54e32a
added buffer sync to naive amp model wrapper ( #291 )
2022-03-11 15:50:28 +08:00
Jiarui Fang
8d653af408
add a common util for hooks registered on parameter. ( #292 )
2022-03-11 15:50:28 +08:00
Jie Zhu
f867365aba
bug fix: pass hook_list to engine ( #273 )
...
* bug fix: pass hook_list to engine
* change parameter name
2022-03-11 15:50:28 +08:00
Jiarui Fang
5a560a060a
Feature/zero ( #279 )
...
* add zero1 (#209 )
* add zero1
* add test zero1
* update zero stage 1 develop (#212 )
* Implement naive zero3 (#240 )
* naive zero3 works well
* add zero3 param manager
* add TODOs in comments
* add gather full param ctx
* fix sub module streams
* add offload
* fix bugs of hook and add unit tests
* fix bugs of hook and add unit tests (#252 )
* add gather full param ctx
* fix sub module streams
* add offload
* fix bugs of hook and add unit tests
* polish code and add state dict hook
* fix bug
* update unit test
* refactor reconstructed zero code
* clip_grad support zero3 and add unit test
* add unit test for Zero3ParameterManager
* [WIP] initialize the shard param class
* [WIP] Yet another sharded model implementation (#274 )
* [WIP] initialize the shard param class
* [WIP] Yet another implementation of shardModel, using a better hook method.
* torch.concat -> torch.cat
* fix test_zero_level_1.py::test_zero_level_1 unit test
* remove deepspeed implementation and refactor for the reconstructed zero module
* polish zero dp unittests
Co-authored-by: ver217 <lhx0217@gmail.com>
Co-authored-by: Frank Lee <somerlee.9@gmail.com>
2022-03-11 15:50:28 +08:00
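One detail in the Feature/zero merge worth spelling out is "clip_grad support zero3": with ZeRO-3 each rank holds only a shard of the gradients, so global-norm clipping has to combine local squared norms across ranks before scaling. A hedged sketch of that reduction, assuming an initialized torch.distributed process group (this is not the project's actual clip_grad implementation):

```python
import torch
import torch.distributed as dist

def clip_grad_norm_sharded(local_grads, max_norm: float, eps: float = 1e-6):
    """Global-norm clipping when every rank holds only its shard of the grads."""
    local_sq = torch.stack([g.pow(2).sum() for g in local_grads]).sum().reshape(1)
    dist.all_reduce(local_sq, op=dist.ReduceOp.SUM)  # sum of squares over all shards
    total_norm = local_sq.sqrt().item()
    clip_coef = max_norm / (total_norm + eps)
    if clip_coef < 1.0:
        for g in local_grads:
            g.mul_(clip_coef)   # every rank scales its shard by the same factor
    return total_norm
```

Because every rank computes the same `clip_coef` from the reduced norm, the sharded parameters stay consistent after scaling.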
1SAA
82023779bb
Added TPExpert for special situations
2022-03-11 15:50:28 +08:00
HELSON
36b8477228
Fixed parameter initialization in FFNExpert ( #251 )
2022-03-11 15:50:28 +08:00
アマデウス
e13293bb4c
fixed CI dataset directory; fixed import error of 2.5d accuracy ( #255 )
2022-03-11 15:50:28 +08:00
1SAA
219df6e685
Optimized MoE layer and fixed some bugs;
...
Decreased moe tests;
Added FFNExperts and ViTMoE model
2022-03-11 15:50:28 +08:00
zbian
3dba070580
fixed padding index issue for vocab parallel embedding layers; updated 3D linear to be compatible with examples in the tutorial
2022-03-11 15:50:28 +08:00
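For context on the padding-index fix above: in a vocab-parallel embedding each rank owns a contiguous slice of the vocabulary, so token ids outside the slice must be masked locally and their output rows zeroed before the cross-rank reduction. A simplified single-rank sketch of that masking (illustrative only, not the fixed layer itself; a real layer would all_reduce the result across tensor-parallel ranks):

```python
import torch
import torch.nn.functional as F

def vocab_parallel_embed(token_ids, weight_shard, vocab_start, vocab_end):
    """Look up only the ids this rank owns; zero the rows for all other ids."""
    mask = (token_ids < vocab_start) | (token_ids >= vocab_end)
    local_ids = (token_ids - vocab_start).clamp(0, vocab_end - vocab_start - 1)
    out = F.embedding(local_ids, weight_shard)
    out[mask] = 0.0  # ids owned by other ranks contribute nothing locally
    return out

weight = torch.randn(50, 8)              # this rank owns vocab ids [100, 150)
ids = torch.tensor([99, 100, 149, 150])
print(vocab_parallel_embed(ids, weight, 100, 150).abs().sum(dim=-1))
# first and last rows are zero: those ids belong to other ranks
```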
Frank Lee
f5ca88ec97
fixed apex import ( #227 )
2022-02-15 11:31:13 +08:00
Frank Lee
3a1a9820b0
fixed mkdir conflict and aligned yapf config with flake ( #220 )
2022-02-15 11:31:13 +08:00
アマデウス
9ee197d0e9
moved env variables to global variables; ( #215 )
...
added branch context;
added vocab parallel layers;
moved split_batch from load_batch to tensor parallel embedding layers;
updated gpt model;
updated unit test cases;
fixed a few collective communicator bugs
2022-02-15 11:31:13 +08:00
Frank Lee
812357d63c
fixed utils docstring and added example to readme ( #200 )
2022-02-03 11:37:17 +08:00
Frank Lee
765db512b5
fixed ddp bug on torch 1.8 ( #194 )
2022-01-28 15:14:04 +08:00
Jiarui Fang
569357fea0
add pytorch hooks ( #179 )
...
* add pytorch hooks
fix #175
* remove licenses in src code
* add gpu memory tracer
* replace print with logger in ophooks.
2022-01-25 22:20:54 +08:00
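The op-hook idea in #179 boils down to registering forward/backward hooks that record state (here, GPU memory) around each module and reporting through a logger rather than print. A minimal sketch using stock PyTorch APIs; the hook and logger names are illustrative, not the actual ophooks interface:

```python
import logging
import torch
import torch.nn as nn

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("mem_tracer")

def trace_memory(module, inputs, output):
    # torch.cuda.memory_allocated() needs a CUDA device; fall back to 0 on CPU
    used = torch.cuda.memory_allocated() if torch.cuda.is_available() else 0
    logger.info("%s: %d bytes allocated after forward",
                module.__class__.__name__, used)

model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4))
handles = [m.register_forward_hook(trace_memory) for m in model]
model(torch.randn(2, 16))
for h in handles:
    h.remove()  # always detach hooks when tracing is done
```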
ver217
708404d5f8
fix pipeline forward return tensors ( #176 )
2022-01-21 15:46:02 +08:00
HELSON
0f8c7f9804
Fixed docstring in colossalai ( #171 )
2022-01-21 10:44:30 +08:00
Frank Lee
e2089c5c15
adapted for sequence parallel ( #163 )
2022-01-20 13:44:51 +08:00
puck_WCR
9473a1b9c8
AMP docstring/markdown update ( #160 )
2022-01-18 18:33:36 +08:00
Frank Lee
f3802d6b06
fixed jit default setting ( #154 )
2022-01-18 13:37:20 +08:00
ver217
7bf1e98b97
pipeline last stage supports multi output ( #151 )
2022-01-17 15:57:47 +08:00
ver217
f68eddfb3d
refactor kernel ( #142 )
2022-01-13 16:47:17 +08:00
BoxiangW
4a3d3446b0
Update layer integration documentation ( #108 )
...
Update the documentation of layer integration
Update _log_hook.py
Update _operation.py
2022-01-10 18:05:58 +08:00
ver217
9ef05ed1fc
try import deepspeed when using zero ( #130 )
2022-01-07 17:24:57 +08:00
HELSON
dceae85195
Added MoE parallel ( #127 )
2022-01-07 15:08:36 +08:00
ver217
293fb40c42
add scatter/gather optim for pipeline ( #123 )
2022-01-07 13:22:22 +08:00
Jiarui Fang
2c0c85d3d3
fix a bug in timer ( #114 )
2022-01-05 16:07:06 +08:00
ver217
7904baf6e1
fix layers/schedule for hybrid parallelization ( #111 ) ( #112 )
2022-01-04 20:52:31 +08:00
ver217
a951bc6089
update default logger ( #100 ) ( #101 )
2022-01-04 20:03:26 +08:00
ver217
96780e6ee4
Optimize pipeline schedule ( #94 )
...
* add pipeline shared module wrapper and update load batch
* added model parallel process group for amp and clip grad (#86 )
* added model parallel process group for amp and clip grad
* update amp and clip with model parallel process group
* remove pipeline_prev/next group (#88 )
* micro batch offload
* optimize pipeline gpu memory usage
* pipeline can receive tensor shape (#93 )
* optimize pipeline gpu memory usage
* fix grad accumulation step counter
* rename classes and functions
Co-authored-by: Frank Lee <somerlee.9@gmail.com>
2021-12-30 15:56:46 +08:00
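The "pipeline can receive tensor shape" change in the schedule rework above relies on a standard trick: point-to-point recv needs a pre-allocated buffer, so the sender first communicates the shape, then the tensor. A hedged two-rank sketch with torch.distributed, assuming an initialized gloo/nccl process group (this is not the project's actual communication module):

```python
import torch
import torch.distributed as dist

MAX_DIMS = 8  # fixed-size header so the shape message itself has a known shape

def send_with_shape(tensor, dst):
    header = torch.full((MAX_DIMS + 1,), -1, dtype=torch.long)
    header[0] = tensor.dim()
    header[1:tensor.dim() + 1] = torch.tensor(tensor.shape)
    dist.send(header, dst)                # 1) shape first
    dist.send(tensor.contiguous(), dst)   # 2) then the payload

def recv_with_shape(src, dtype=torch.float32):
    header = torch.empty(MAX_DIMS + 1, dtype=torch.long)
    dist.recv(header, src)
    ndim = int(header[0])
    shape = header[1:ndim + 1].tolist()
    buf = torch.empty(shape, dtype=dtype)  # now the buffer can be sized correctly
    dist.recv(buf, src)
    return buf
```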
アマデウス
01a80cd86d
Hotfix/Colossalai layers ( #92 )
...
* optimized 1d layer apis; reorganized nn.layer modules; fixed tests
* fixed 2.5d runtime issue
* reworked split batch, now called in trainer.schedule.load_batch
Co-authored-by: BoxiangW <45734921+BoxiangW@users.noreply.github.com>
2021-12-29 23:32:10 +08:00
アマデウス
0fedef4f3c
Layer integration ( #83 )
...
* integrated parallel layers for ease of building models
* integrated 2.5d layers
* cleaned code and unit tests
* added log metric by step hook; updated imagenet benchmark; fixed some bugs
* reworked initialization; cleaned code
Co-authored-by: BoxiangW <45734921+BoxiangW@users.noreply.github.com>
2021-12-27 15:04:32 +08:00
shenggan
5c3843dc98
add colossalai kernel module ( #55 )
2021-12-21 12:19:52 +08:00
ver217
8f02a88db2
add interleaved pipeline, fix naive amp and update pipeline model initializer ( #80 )
2021-12-20 23:26:19 +08:00
Frank Lee
91c327cb44
fixed zero level 3 dtype bug ( #76 )
2021-12-20 17:00:53 +08:00
HELSON
632e622de8
overlap computation and communication in 2d operations ( #75 )
2021-12-16 16:05:15 +08:00
Frank Lee
cd9c28e055
added CI for unit testing ( #69 )
2021-12-16 10:32:08 +08:00
Frank Lee
35813ed3c4
update examples and sphinx docs for the new api ( #63 )
2021-12-13 22:07:01 +08:00
ver217
7d3711058f
fix zero3 fp16 and add zero3 model context ( #62 )
2021-12-10 17:48:50 +08:00
Frank Lee
9a0466534c
update markdown docs (english) ( #60 )
2021-12-10 14:37:33 +08:00
Frank Lee
da01c234e1
Develop/experiments ( #59 )
...
* Add gradient accumulation, fix lr scheduler
* fix FP16 optimizer and adapted torch amp with tensor parallel (#18 )
* fixed bugs in compatibility between torch amp and tensor parallel and performed some minor fixes
* fixed trainer
* Revert "fixed trainer"
This reverts commit 2e0b0b7699.
* improved consistency between trainer, engine and schedule (#23 )
Co-authored-by: 1SAA <c2h214748@gmail.com>
* Split conv2d, class token, positional embedding in 2d, Fix random number in ddp
Fix convergence in cifar10, Imagenet1000
* Integrate 1d tensor parallel in Colossal-AI (#39 )
* fixed 1D and 2D convergence (#38 )
* optimized 2D operations
* fixed 1D ViT convergence problem
* Feature/ddp (#49 )
* remove redundancy func in setup (#19 ) (#20 )
* use env to control the language of doc (#24 ) (#25 )
* Support TP-compatible Torch AMP and Update trainer API (#27 )
* Add gradient accumulation, fix lr scheduler
* fix FP16 optimizer and adapted torch amp with tensor parallel (#18 )
* fixed bugs in compatibility between torch amp and tensor parallel and performed some minor fixes
* fixed trainer
* Revert "fixed trainer"
This reverts commit 2e0b0b7699.
* improved consistency between trainer, engine and schedule (#23 )
Co-authored-by: 1SAA <c2h214748@gmail.com>
Co-authored-by: 1SAA <c2h214748@gmail.com>
Co-authored-by: ver217 <lhx0217@gmail.com>
* add an example of ViT-B/16 and remove w_norm clipping in LAMB (#29 )
* add explanation for ViT example (#35 ) (#36 )
* support torch ddp
* fix loss accumulation
* add log for ddp
* change seed
* modify timing hook
Co-authored-by: Frank Lee <somerlee.9@gmail.com>
Co-authored-by: 1SAA <c2h214748@gmail.com>
Co-authored-by: binmakeswell <binmakeswell@gmail.com>
* Feature/pipeline (#40 )
* remove redundancy func in setup (#19 ) (#20 )
* use env to control the language of doc (#24 ) (#25 )
* Support TP-compatible Torch AMP and Update trainer API (#27 )
* Add gradient accumulation, fix lr scheduler
* fix FP16 optimizer and adapted torch amp with tensor parallel (#18 )
* fixed bugs in compatibility between torch amp and tensor parallel and performed some minor fixes
* fixed trainer
* Revert "fixed trainer"
This reverts commit 2e0b0b7699.
* improved consistency between trainer, engine and schedule (#23 )
Co-authored-by: 1SAA <c2h214748@gmail.com>
Co-authored-by: 1SAA <c2h214748@gmail.com>
Co-authored-by: ver217 <lhx0217@gmail.com>
* add an example of ViT-B/16 and remove w_norm clipping in LAMB (#29 )
* add explanation for ViT example (#35 ) (#36 )
* optimize communication of pipeline parallel
* fix grad clip for pipeline
Co-authored-by: Frank Lee <somerlee.9@gmail.com>
Co-authored-by: 1SAA <c2h214748@gmail.com>
Co-authored-by: binmakeswell <binmakeswell@gmail.com>
* optimized 3d layer to fix slow computation; tested imagenet performance with 3d; reworked lr_scheduler config definition; fixed launch args; fixed some printing issues; simplified apis of 3d layers (#51 )
* Update 2.5d layer code to get a similar accuracy on imagenet-1k dataset
* update api for better usability (#58 )
Co-authored-by: 1SAA <c2h214748@gmail.com>
Co-authored-by: ver217 <lhx0217@gmail.com>
Co-authored-by: puck_WCR <46049915+WANG-CR@users.noreply.github.com>
Co-authored-by: binmakeswell <binmakeswell@gmail.com>
Co-authored-by: アマデウス <kurisusnowdeng@users.noreply.github.com>
Co-authored-by: BoxiangW <45734921+BoxiangW@users.noreply.github.com>
2021-12-09 15:08:29 +08:00
ver217
dbe62c67b8
add an example of ViT-B/16 and remove w_norm clipping in LAMB ( #29 )
2021-11-18 23:45:09 +08:00
Frank Lee
3defa32aee
Support TP-compatible Torch AMP and Update trainer API ( #27 )
...
* Add gradient accumulation, fix lr scheduler
* fix FP16 optimizer and adapted torch amp with tensor parallel (#18 )
* fixed bugs in compatibility between torch amp and tensor parallel and performed some minor fixes
* fixed trainer
* Revert "fixed trainer"
This reverts commit 2e0b0b7699.
* improved consistency between trainer, engine and schedule (#23 )
Co-authored-by: 1SAA <c2h214748@gmail.com>
Co-authored-by: 1SAA <c2h214748@gmail.com>
Co-authored-by: ver217 <lhx0217@gmail.com>
2021-11-18 19:45:06 +08:00
ver217
3c7604ba30
update documentation
2021-10-29 09:29:20 +08:00
zbian
404ecbdcc6
Migrated project
2021-10-28 18:21:23 +02:00