mirror of https://github.com/hpcaitech/ColossalAI
## Pretraining

- Pretrain RoBERTa by running the script below. Detailed parameter descriptions can be found in `arguments.py`. `data_path_prefix` is the absolute path to the output of preprocessing. You have to modify the `hostfile` according to your cluster.
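As an illustration, a `hostfile` in the common launcher format lists one resolvable host name per line; the names below are placeholders for a two-node cluster (some launchers additionally accept a `slots=N` suffix per line):

```text
node1
node2
```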
```bash
bash run_pretrain.sh
```
- `--hostfile`: servers' host names from `/etc/hosts`
- `--include`: servers which will be used
- `--nproc_per_node`: number of processes (GPUs) on each server
- `--data_path_prefix`: absolute location of the training data, e.g., `/h5/0.h5`
- `--eval_data_path_prefix`: absolute location of the evaluation data
- `--tokenizer_path`: path containing the Hugging Face `tokenizer.json`, e.g., `/tokenizer/tokenizer.json`
- `--bert_config`: `config.json` describing the model
- `--mlm`: model type of the backbone, `bert` or `deberta_v2`
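For reference, an invocation with these flags might look like the sketch below. All paths, host names, and the number of GPUs are placeholder assumptions; in practice these values are set inside `run_pretrain.sh`, so edit that script rather than typing the command by hand:

```shell
# Sketch only -- placeholder paths and hosts; adjust to your cluster.
python run_pretraining.py \
    --hostfile ./hostfile \
    --include node1,node2 \
    --nproc_per_node 8 \
    --data_path_prefix /h5/0.h5 \
    --eval_data_path_prefix /h5/eval \
    --tokenizer_path /tokenizer/tokenizer.json \
    --bert_config ./config.json \
    --mlm bert
```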
- To resume training from an earlier checkpoint, run the script below.
```bash
bash run_pretrain_resume.sh
```
- `--resume_train`: whether to resume training
- `--load_pretrain_model`: absolute path containing the model checkpoint
- `--load_optimizer_lr`: absolute path containing the optimizer checkpoint
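A resumed run adds these three flags on top of the pretraining flags. The sketch below shows the shape of such an invocation; the checkpoint paths are placeholders, and as above the real values belong inside `run_pretrain_resume.sh`:

```shell
# Sketch only -- checkpoint paths are placeholders.
python run_pretraining.py \
    --hostfile ./hostfile \
    --nproc_per_node 8 \
    --data_path_prefix /h5/0.h5 \
    --tokenizer_path /tokenizer/tokenizer.json \
    --bert_config ./config.json \
    --mlm bert \
    --resume_train \
    --load_pretrain_model /ckpt/model_checkpoint.pt \
    --load_optimizer_lr /ckpt/optimizer_checkpoint.pt
```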