mirror of https://github.com/hpcaitech/ColossalAI
## Pretraining

- Pretrain RoBERTa by running the script below. Detailed parameter descriptions can be found in `arguments.py`. `data_path_prefix` is the absolute path to the output of preprocessing. You also have to modify the `hostfile` according to your cluster.

```bash
bash run_pretrain.sh
```

- `--hostfile`: servers' host names, as listed in `/etc/hosts`
- `--include`: servers to be used
- `--nproc_per_node`: number of processes (GPUs) on each server
- `--data_path_prefix`: absolute location of the training data, e.g., `/h5/0.h5`
- `--eval_data_path_prefix`: absolute location of the evaluation data
- `--tokenizer_path`: path containing the Hugging Face tokenizer, e.g., `/tokenizer/tokenizer.json`
- `--bert_config`: the `config.json` describing the model
- `--mlm`: backbone model type, `bert` or `deberta_v2`
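The `hostfile` names the nodes the launcher should use. A minimal sketch (the hostnames below are placeholders; use names resolvable via `/etc/hosts`, and check your launcher's documentation for the exact format it expects):

```text
# one reachable node hostname per line
worker-0
worker-1
```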
- To resume training from an earlier checkpoint, run the script below.

```bash
bash run_pretrain_resume.sh
```

- `--resume_train`: whether to resume training
- `--load_pretrain_model`: absolute path to the model checkpoint
- `--load_optimizer_lr`: absolute path to the optimizer checkpoint
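As an illustrative sketch of what resuming from these two paths typically involves (the function name and checkpoint keys here are assumptions for illustration, not this example's actual code), the model weights and the optimizer/LR-scheduler state are loaded from separate files saved by the earlier run:

```python
import torch


def resume_from_checkpoint(model, optimizer, lr_scheduler, model_path, optim_path):
    """Restore model/optimizer/scheduler state; returns the saved epoch (sketch)."""
    # Model weights checkpoint (a plain state_dict in this sketch).
    model.load_state_dict(torch.load(model_path, map_location="cpu"))
    # Optimizer/LR checkpoint; key names are illustrative assumptions.
    state = torch.load(optim_path, map_location="cpu")
    optimizer.load_state_dict(state["optimizer"])
    lr_scheduler.load_state_dict(state["lr_scheduler"])
    return state.get("epoch", 0)
```

Saving the optimizer and scheduler state alongside the epoch counter is what makes the learning-rate schedule continue from where it stopped instead of restarting from step 0.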