Pretraining

  1. Pretrain RoBERTa by running the script below. Detailed parameter descriptions can be found in arguments.py. data_path_prefix is the absolute path to the output of the preprocessing step. You must modify the hostfile to match your cluster. A sketch of how the flags fit together follows the list below.
bash run_pretrain.sh
  • --hostfile: hostnames of the servers, as listed in /etc/hosts
  • --include: the servers to be used
  • --nproc_per_node: number of processes (GPUs) per server
  • --data_path_prefix: absolute path of the training data, e.g., /h5/0.h5
  • --eval_data_path_prefix: absolute path of the evaluation data
  • --tokenizer_path: path to the directory containing the Hugging Face tokenizer.json, e.g., /tokenizer/tokenizer.json
  • --bert_config: path to the config.json that defines the model
  • --mlm: model type of the backbone, bert or deberta_v2
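
For illustration, here is a minimal sketch of how these flags could be combined, assuming a `colossalai run`-style launcher; the hostnames and every path below are hypothetical placeholders, and the actual command used by this example lives in run_pretrain.sh:

```bash
# Hypothetical hostfile: one resolvable hostname per line
# (placeholder names; each must appear in /etc/hosts):
#   host1
#   host2

# Sketch of a launch command; all paths are placeholders.
colossalai run --hostfile ./hostfile \
    --include host1,host2 \
    --nproc_per_node 8 \
    run_pretraining.py \
    --data_path_prefix /h5/0.h5 \
    --eval_data_path_prefix /h5/eval/0.h5 \
    --tokenizer_path /tokenizer/tokenizer.json \
    --bert_config /config/config.json \
    --mlm bert
```

Note that --hostfile, --include, and --nproc_per_node are consumed by the launcher, while the remaining flags are parsed by run_pretraining.py via arguments.py.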
  2. To resume training from an earlier checkpoint, run the script below. The following flags are used in addition to those above (see the sketch after this list).
bash run_pretrain_resume.sh
  • --resume_train: whether to resume training from a checkpoint
  • --load_pretrain_model: absolute path of the model checkpoint
  • --load_optimizer_lr: absolute path of the optimizer checkpoint
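
Similarly, a hedged sketch of a resume run; the checkpoint paths are placeholders, and the exact form of --resume_train (boolean switch versus valued flag) is defined in arguments.py:

```bash
# Sketch of resuming from a checkpoint; all paths are placeholders.
colossalai run --hostfile ./hostfile \
    --include host1,host2 \
    --nproc_per_node 8 \
    run_pretraining.py \
    --data_path_prefix /h5/0.h5 \
    --tokenizer_path /tokenizer/tokenizer.json \
    --bert_config /config/config.json \
    --mlm bert \
    --resume_train \
    --load_pretrain_model /ckpt/model_checkpoint.pt \
    --load_optimizer_lr /ckpt/optimizer_checkpoint.pt
```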