Commit Graph

116 Commits (fe0c342f9dfd99fe1ac7e828e68a45f6db68356e)

Author SHA1 Message Date
Wenwen Qu fe0c342f9d get moe setting from gpc 2024-01-09 15:26:13 +08:00
Wenwen Qu f5226b5152 refactor code 2024-01-08 16:23:53 +08:00
Wenwen Qu 41f8283a3e refactor code 2024-01-08 16:03:55 +08:00
Wenwen Qu c3854f924a refactor code 2024-01-08 14:33:19 +08:00
Wenwen Qu fdd60691d3 move all2all to utils 2024-01-08 13:16:17 +08:00
Wenwen Qu 07c98c4a39 remove suffix for gate key 2024-01-04 10:51:56 +08:00
Wenwen Qu 196514d87f refactor code 2024-01-03 17:39:37 +08:00
Yang Gao 5539f9db50 fix when resuming lr_scheduler without loading optimizer (#565) 2023-12-29 20:22:39 +08:00
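The pattern behind this fix, as a minimal PyTorch sketch (the checkpoint dict and variable names are illustrative, not the repo's actual API): only the scheduler state is restored, and the fresh optimizer's learning rate is re-synced from it.

```python
import torch
from torch.optim import AdamW
from torch.optim.lr_scheduler import CosineAnnealingLR

model = torch.nn.Linear(8, 8)
optimizer = AdamW(model.parameters(), lr=1e-4)   # fresh optimizer; its state is NOT restored
scheduler = CosineAnnealingLR(optimizer, T_max=1000)

ckpt = {"lr_scheduler": scheduler.state_dict()}  # stands in for torch.load(<ckpt_path>)

# Restore only the scheduler: last_epoch comes from the checkpoint, so the
# schedule continues from the right step even though optimizer.state is empty.
scheduler.load_state_dict(ckpt["lr_scheduler"])
for group in optimizer.param_groups:
    group["lr"] = scheduler.get_last_lr()[0]     # re-sync the fresh optimizer's lr
```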
Guoteng 220953d7e5 fix(metrics): remove redundant cuda memory in metric calculations (#557) 2023-12-29 20:21:24 +08:00
Guoteng c39d758a8a feat(logger): add tensorboard key value buffer (#549) 2023-12-29 16:23:47 +08:00
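A hedged sketch of what such a key-value buffer might look like: scalars accumulate in memory and are flushed to the `SummaryWriter` in batches, keeping `add_scalar` calls off the training hot path. The class name and flush policy are illustrative.

```python
from collections import defaultdict
from torch.utils.tensorboard import SummaryWriter

class TBKeyValueBuffer:
    """Buffer (key, value, step) triples and write them out in batches."""

    def __init__(self, writer: SummaryWriter, flush_every: int = 100):
        self.writer = writer
        self.flush_every = flush_every
        self.buffer = defaultdict(list)  # key -> [(step, value), ...]
        self.count = 0

    def push(self, key: str, value: float, step: int) -> None:
        self.buffer[key].append((step, value))
        self.count += 1
        if self.count >= self.flush_every:
            self.flush()

    def flush(self) -> None:
        for key, pairs in self.buffer.items():
            for step, value in pairs:
                self.writer.add_scalar(key, value, global_step=step)
        self.buffer.clear()
        self.count = 0

buf = TBKeyValueBuffer(SummaryWriter(log_dir="runs/demo"))
for step in range(250):
    buf.push("train/loss", 1.0 / (step + 1), step)
buf.flush()  # write out any remainder
```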
jiaxingli d418eba094 fix(model): add ckpt_type constraint when loading ckpts (#542) 2023-12-20 16:43:27 +08:00
* support hf llama
* importerror
* modeling
* fix bug
* add assert
jiaopenglong de53b17506 fix token grad norm with tp (#547) 2023-12-18 18:33:28 +08:00
Wenwen Qu 513ebb9c3a fix(moe): fix moe zero mode bug (#548) 2023-12-18 14:39:42 +08:00
* fix moe zero mode bugs
* update moe config to fit training on 8 GPUs
jiaxingli bbb5651582 fix(model): change model_type `LLAMA` to `LLAMA2` (#539) 2023-12-13 17:24:45 +08:00
* support hf llama
* importerror
* modeling
* fix bug
Guoteng 5ecb6aa712 fix(pp): fix no-packed dataset load micro batch error (#538) 2023-12-13 14:48:32 +08:00
* fix based on review comments
ytxiong 432bd5ee9f fix the bug so that the sequence parallel norm is all-reduced when overlap is False (#534) 2023-12-12 16:22:39 +08:00
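A minimal sketch of the idea, assuming norm layers are found by parameter name (the repo's actual marker mechanism, seen in #528 below, is not reproduced here): when gradient/communication overlap is off, norm gradients must still be summed once across the parallel group after backward, since sequence parallelism splits the tokens each norm sees.

```python
import torch
import torch.distributed as dist

def allreduce_norm_grads(model: torch.nn.Module, group=None, overlap: bool = False):
    """Sync norm-layer gradients across the parallel group after backward.

    With overlap enabled, a hook-based path would handle this during backward;
    without it, this explicit pass is required or norm grads silently diverge.
    """
    if overlap:
        return
    for name, param in model.named_parameters():
        if "norm" in name and param.grad is not None:
            dist.all_reduce(param.grad, op=dist.ReduceOp.SUM, group=group)
```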
jiaxingli d904730be7 feat(ckpt): support auto resume in Volc and Ali (#529) 2023-12-12 13:27:24 +08:00
* multipart upload
* upload
* storage
* change ak sk name
* auto resume
* bug
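A generic sketch of the auto-resume step, assuming one numbered directory per saved training step (the actual Volc/Ali storage layout will differ): on startup, scan the save directory and resume from the highest step found.

```python
import os

def find_latest_ckpt(save_dir: str):
    """Pick the highest-numbered step directory, e.g. save_dir/120/.
    The 'one directory per step' layout is an assumption for illustration."""
    steps = [int(d) for d in os.listdir(save_dir) if d.isdigit()]
    return os.path.join(save_dir, str(max(steps))) if steps else None

latest = find_latest_ckpt("checkpoints")  # None on a fresh run -> train from scratch
```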
Pryest cc5b15349d fix(metric): add metric dtype control (#533) 2023-12-11 19:36:31 +08:00
* fix demo config to avoid implicitness
* fix default behavior
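What dtype control over a metric can look like, as a sketch (function and parameter names are illustrative): the reduction dtype is an explicit argument rather than whatever the logits happen to be, since accumulating in bf16/fp16 drifts over many micro-batches.

```python
import torch

def accuracy(logits: torch.Tensor, labels: torch.Tensor,
             dtype: torch.dtype = torch.float32) -> torch.Tensor:
    # Cast before reducing so the metric is computed in a controlled dtype.
    correct = (logits.argmax(dim=-1) == labels).to(dtype)
    return correct.mean()

logits = torch.randn(4, 10, dtype=torch.bfloat16)
labels = torch.randint(0, 10, (4,))
print(accuracy(logits, labels))  # reduced in float32 regardless of logit dtype
```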
jiaxingli 6c0ff4820f feat(model): support llama model with checkpoint loading (#532) 2023-12-11 16:25:24 +08:00
* support hf llama
* importerror
* modeling
jiaxingli 828033aed5 fix(storage): unify the name of ak & sk (#527) 2023-12-06 15:31:44 +08:00
* multipart upload
* upload
* storage
* change ak sk name
ytxiong 809ad9ebc8 fix the type_ids when micro_num=1 and use_flash_attn=False (#516) 2023-12-06 14:38:28 +08:00
jiaopenglong 112c34ae09 feat(grad_norm): vocab grad norm profiling (#519) 2023-12-06 13:52:42 +08:00
* compute vocab grad norm && save pt
* add grad_norm profiling interval && refactor save grad norm
* fix ci test_pipeline
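A toy sketch of vocab grad norm profiling: take one L2 norm per vocabulary row of the output head's gradient and save it as a `.pt` file, matching the "compute vocab grad norm && save pt" bullet. The toy head stands in for the model's real vocab projection.

```python
import torch

# Toy output head standing in for the model's vocab projection.
vocab_size, hidden = 1000, 64
head = torch.nn.Linear(hidden, vocab_size, bias=False)
loss = head(torch.randn(8, hidden)).logsumexp(dim=-1).mean()
loss.backward()

# One L2 norm per vocabulary row: rows that never receive gradient show up
# as ~0, which is exactly the signal this kind of profiling is after.
vocab_grad_norm = head.weight.grad.norm(dim=1)  # shape: (vocab_size,)
torch.save(vocab_grad_norm, "vocab_grad_norm.pt")
```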
jiaopenglong 9fc252f40e add output embedding tf32 option (#523) 2023-12-06 13:50:59 +08:00
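For context, these are PyTorch's global TF32 switches; a per-module option like the one this commit adds would scope the faster TF32 matmul path to just the output embedding (the scoping mechanism itself is not shown here), trading a little precision for throughput on Ampere+ GPUs.

```python
import torch

# Global TF32 toggles in PyTorch; TF32 keeps fp32 range but rounds the
# mantissa to 10 bits inside tensor-core matmuls.
torch.backends.cuda.matmul.allow_tf32 = True
torch.backends.cudnn.allow_tf32 = True
```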
ytxiong c581cc4c02 fix(model): add IS_SEQUENCE_PARALLEL check for norm module (#528) 2023-12-06 12:06:22 +08:00
* fix lint
* remove comments
* replace named_children with named_modules
* fix the spelling bug and move the sequence-parallel check to training_internlm
jiaxingli 1738bee002 feat(storage): use multipart upload when using oss (#520) 2023-12-01 17:05:58 +08:00
* multipart upload
* upload
* storage
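A generic boto3-style sketch of multipart upload (Aliyun OSS also exposes an S3-compatible endpoint; the bucket, key, and 64 MB part size are illustrative, and S3-style APIs require all parts except the last to be at least 5 MB): a large checkpoint goes up in fixed-size parts instead of one giant PUT that fails atomically.

```python
import boto3

def multipart_upload(path: str, bucket: str, key: str, part_size: int = 64 * 2**20):
    """Upload a large file in fixed-size parts instead of one PUT."""
    s3 = boto3.client("s3")
    upload = s3.create_multipart_upload(Bucket=bucket, Key=key)
    parts, number = [], 1
    with open(path, "rb") as f:
        while chunk := f.read(part_size):
            resp = s3.upload_part(Bucket=bucket, Key=key, PartNumber=number,
                                  UploadId=upload["UploadId"], Body=chunk)
            parts.append({"PartNumber": number, "ETag": resp["ETag"]})
            number += 1
    s3.complete_multipart_upload(Bucket=bucket, Key=key, UploadId=upload["UploadId"],
                                 MultipartUpload={"Parts": parts})
```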
Guoteng 757e19e01a 1. fix(config): rampup_batch_size default value BC. (#515) 2023-11-28 19:33:46 +08:00
2. fix(config): standardize config parameter access.
3. feat(launch): add warmup_process_group
4. feat(memory): add cuda_memory_analyze
Shuo Zhang 0d3811c029 feat(model): add rope_base interface (#512) 2023-11-23 16:30:14 +08:00
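The interface presumably exposes the base of the standard RoPE frequency formula instead of hard-coding 10000; a sketch of that formula (the helper name is illustrative):

```python
import torch

def rope_inv_freq(dim: int, base: float = 10000.0) -> torch.Tensor:
    """Standard RoPE inverse frequencies: 1 / base^(2i/dim) for each pair of
    dimensions. Raising `base` slows the rotation, a common long-context knob."""
    return 1.0 / (base ** (torch.arange(0, dim, 2, dtype=torch.float32) / dim))

print(rope_inv_freq(8))                  # default base
print(rope_inv_freq(8, base=1000000.0))  # slower-rotating variant
```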
jiaopenglong f5aea7e08c fix(timeout): larger timeout (#495) 2023-11-21 19:19:22 +08:00
* larger initialize timeout
* unify time format
* update timeout thresholds
jiaxingli eba2b859fc feat(seed): set global seed for every model initialization (#496) 2023-11-17 14:42:50 +08:00
* bind seed
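A sketch of what a global-seed helper typically covers (the function name is illustrative): every RNG that model initialization might touch is seeded together, so repeated constructions, e.g. one per pipeline stage, are bit-for-bit identical.

```python
import random
import numpy as np
import torch

def set_global_seed(seed: int) -> None:
    """Seed Python, NumPy, and PyTorch RNGs in one place."""
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)           # seeds CPU and CUDA generators
    torch.cuda.manual_seed_all(seed)  # explicit, for older torch versions

set_global_seed(1024)
```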
Guoteng 0bfc86205e feat(train): support rampup_batch_size and fix bugs (#493) 2023-11-16 19:51:01 +08:00
YWMditto be5b9ea2fa feat(train): update get_train_data_loader to make logic clearer (#498) 2023-11-14 17:05:15 +08:00
* update get_train_data_loader, del old doc
Co-authored-by: YWMditto <862779238@qq.com>
jiaopenglong 626ed0fc5e fix(train): unify the exp paths (#492) 2023-11-11 20:15:59 +08:00
jiaopenglong 3418898cbe fix(alert): send exception of all ranks (#491) 2023-11-10 19:04:31 +08:00
* catch exception of all ranks
* monitor task only if DO_ALERT is True
Yang Gao 07026d1821 fix dataset types when using random dataset (#489) 2023-11-10 15:08:22 +08:00
Guoteng b7ecdba617 feat(ckpt): save ckpt when reaching the total step count (#486) 2023-11-09 21:07:16 +08:00
Pryest 5b67db33d0 fix(metric): use float32 to compute ppl (#481) 2023-11-09 20:26:46 +08:00
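The likely shape of the fix, as a sketch: upcast the logits before the loss so the perplexity metric is computed entirely in float32 and never rounds through bf16.

```python
import torch
import torch.nn.functional as F

def perplexity(logits: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    # exp() of a low-precision mean loss is noticeably lossy; cast first.
    loss = F.cross_entropy(logits.float(), labels)
    return torch.exp(loss)

logits = torch.randn(4, 32000, dtype=torch.bfloat16)
labels = torch.randint(0, 32000, (4,))
print(perplexity(logits, labels))
```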
jiaopenglong a435980e0c rename vars (#468) 2023-11-09 20:04:35 +08:00
jiaopenglong 0763bf3972 init light monitoring on all ranks (#462) 2023-11-09 20:04:21 +08:00
YWMditto 0218e3131c feat(tools): support the original internlm architecture in web_demo (#478) 2023-11-09 20:01:55 +08:00
* debug for web_demo_internlm
* support web_demo_internlm
* update readme.md
* update web_demo.py
* update InternLM/tools/load_internlm_model.py
* update apis/inference.py
* update tools/load_internlm_model
* del private info in load_internlm_model.py
* fix some info
Co-authored-by: YWMditto <862779238@qq.com>
Yang Gao 6f69bd2087 feat(data): walk folder to get dataset_type_ids_map (#477) 2023-11-07 19:21:10 +08:00
* fix a bug
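A hedged sketch of deriving such a map by walking the data tree (the `.bin`-shard layout and the helper name are assumptions for illustration): each dataset folder gets a stable integer id, ordered by path so every rank derives the same map.

```python
import os

def build_dataset_type_ids_map(root: str) -> dict:
    """Map each dataset folder (one containing .bin shards) to an integer id."""
    dirs = sorted(d for d, _, files in os.walk(root)
                  if any(f.endswith(".bin") for f in files))
    return {os.path.relpath(d, root): i for i, d in enumerate(dirs)}
```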
Yang Gao 4d1b1cd5f1 fix(data): broadcast list when walking folders (#475) 2023-11-07 13:12:35 +08:00
jiaopenglong debb7e77b9 refactor grad norm profiling (#466) 2023-11-03 10:55:26 +08:00
jiaopenglong d537e45456 send exception to light monitor only if the server is available (#465) 2023-11-03 10:55:16 +08:00
Wenwen Qu 21624f6f81 fix(moe): remove norm&gate force sync (#448) 2023-11-01 11:29:55 +08:00
* add zero broadcast_sync
* delete old sync logic
* fix merged error
* refactor code
* remove some unused functions (norm/gate group checks)
Yang Gao f77f376fd6 fix(os): fix FileNotFoundError in storage_manager (#455) 2023-10-27 22:32:46 +08:00
* use rank0 to makedirs
* use try-except to handle file error
* fix ci
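The rank-0-plus-barrier pattern from the bullets above, as a sketch (the helper name is illustrative): only one rank creates the directory and everyone else waits, which avoids the shared-storage race that raised `FileNotFoundError`.

```python
import os
import torch.distributed as dist

def rank0_makedirs(path: str) -> None:
    """Rank 0 creates the directory; all ranks sync before using it."""
    if not dist.is_initialized() or dist.get_rank() == 0:
        os.makedirs(path, exist_ok=True)
    if dist.is_initialized():
        dist.barrier()  # no rank proceeds until the directory exists
```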
jiaxingli 4995060d84 feat(storage): support ali oss ckpt saving (#439) 2023-10-27 22:32:10 +08:00
jiaxingli e6d8ebc3e5 volc_path (#454) 2023-10-27 18:53:06 +08:00
jiaopenglong 87a3c5c374 feat(optimizer): zero gradient count (#449) 2023-10-27 16:26:55 +08:00
* add zero grad count
* fix layer norm with pp
* add zero_grad_profiling option
* fix param_metrics is not a tensor
ytxiong ad70e323eb fix(optimizer): broadcast (#453) 2023-10-26 17:54:54 +08:00
* fix broadcast synchronize()
* fix synchronize
ytxiong aeee9fd2a9 fix broadcast synchronize() (#450) 2023-10-26 17:33:00 +08:00