## Pretraining

1. Pretrain RoBERTa by running the script below. Detailed parameter descriptions can be found in `arguments.py`. `data_path_prefix` is the absolute path to the output of the preprocessing step. You have to modify the `hostfile` according to your cluster; a hedged example of the underlying launch command appears after the flag list below.
```bash
bash run_pretrain.sh
```
- `--hostfile`: server host names, taken from `/etc/hosts`
- `--include`: the servers to use for training
- `--nproc_per_node`: number of processes (GPUs) per server
- `--data_path_prefix`: absolute path to the training data, e.g., `/h5/0.h5`
- `--eval_data_path_prefix`: absolute path to the evaluation data
- `--tokenizer_path`: path to the Hugging Face `tokenizer.json`, e.g., `/tokenizer/tokenizer.json`
- `--bert_config`: path to the `config.json` that defines the model
- `--mlm`: backbone model type, `bert` or `deberta_v2`
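For reference, the launch command inside `run_pretrain.sh` might look like the following. This is a minimal sketch assuming the ColossalAI launcher (`colossalai run`); the host names and all paths are placeholders to replace for your cluster, and the actual script in this directory is authoritative.

```bash
# Sketch of a launch command combining the flags above.
# Assumes the ColossalAI launcher; host names and paths are placeholders.
#
# hostfile: one reachable host name per line, matching /etc/hosts, e.g.
#   GPU-server-1
#   GPU-server-2
colossalai run --hostfile ./hostfile \
  --include GPU-server-1,GPU-server-2 \
  --nproc_per_node 8 \
  run_pretraining.py \
  --data_path_prefix /h5/0.h5 \
  --eval_data_path_prefix /h5/eval/0.h5 \
  --tokenizer_path /tokenizer/tokenizer.json \
  --bert_config /config/config.json \
  --mlm bert
```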
2. To resume training from an earlier checkpoint, run the script below; a sketch of the combined command follows the flag list.
```bash
bash run_pretrain_resume.sh
```
- `--resume_train`: whether to resume training
- `--load_pretrain_model`: absolute path to the model checkpoint
- `--load_optimizer_lr`: absolute path to the optimizer checkpoint
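As a sketch, resuming amounts to the same launch command with the resume-related options appended. The checkpoint paths below are placeholders, and `--resume_train` is assumed here to be a store-true flag; `run_pretrain_resume.sh` in this directory shows the actual wiring.

```bash
# Sketch of a resume launch: same launcher flags as above, plus the
# resume-related options. Checkpoint paths are placeholders.
colossalai run --hostfile ./hostfile \
  --nproc_per_node 8 \
  run_pretraining.py \
  --data_path_prefix /h5/0.h5 \
  --eval_data_path_prefix /h5/eval/0.h5 \
  --tokenizer_path /tokenizer/tokenizer.json \
  --bert_config /config/config.json \
  --mlm bert \
  --resume_train \
  --load_pretrain_model /checkpoints/epoch_1/model_checkpoint.pt \
  --load_optimizer_lr /checkpoints/epoch_1/optim_checkpoint.pt
```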