OPT

Meta recently released Open Pretrained Transformer (OPT), a 175-billion-parameter AI language model that enables AI programmers to perform various downstream tasks and application deployments.

The following Colossal-AI example demonstrates low-cost fine-tuning for causal language modeling.

Our Modifications

We use the pre-trained OPT weights provided by the Hugging Face Hub and fine-tune on the raw WikiText-2 dataset (no tokens were replaced before tokenization).

We adapt the OPT training code to ColossalAI by leveraging the Booster API loaded with a chosen plugin, where each plugin corresponds to a specific kind of training strategy. This example supports the TorchDDPPlugin, LowLevelZeroPlugin, HybridParallelPlugin, and GeminiPlugin plugins.
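As a rough illustration of the "one plugin per training strategy" idea, the sketch below maps a plugin choice (as a command-line flag might express it) to the corresponding ColossalAI plugin class name. This is a hypothetical helper, not the repo's actual code; the class names are the real plugin names listed above, but the mapping keys and the function are illustrative.

```python
# Hypothetical sketch: resolve a user-facing strategy name to a ColossalAI
# plugin class name. The keys here are illustrative; the values are the
# plugin classes this example supports.
PLUGIN_CLASSES = {
    "torch_ddp": "TorchDDPPlugin",            # standard PyTorch DDP data parallelism
    "low_level_zero": "LowLevelZeroPlugin",   # ZeRO-style optimizer-state sharding
    "hybrid_parallel": "HybridParallelPlugin",  # mixed tensor/pipeline/data parallelism
    "gemini": "GeminiPlugin",                 # ZeRO-3 with heterogeneous memory management
}

def resolve_plugin(name: str) -> str:
    """Return the plugin class name for a strategy choice, or raise listing valid options."""
    try:
        return PLUGIN_CLASSES[name]
    except KeyError:
        raise ValueError(
            f"unknown plugin {name!r}; choose from {sorted(PLUGIN_CLASSES)}"
        ) from None
```

In the actual scripts the chosen plugin instance is passed to the Booster, which then wraps the model, optimizer, and dataloader according to that strategy.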

Run Demo

By running the following script:

bash run_demo.sh

You will fine-tune a facebook/opt-350m model on this dataset, which contains more than 8,000 comments on Netflix shows.

The script can be modified if you want to try another set of hyperparameters or switch to an OPT model of a different size.

The demo code is adapted from this blog and the Hugging Face language modeling examples.

Run Benchmark

You can benchmark the OPT model by running the following script:

bash run_benchmark.sh

The script tests performance (throughput and peak memory usage) for each combination of hyperparameters. You can also modify the script to test your own set of hyperparameters.
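For context on what the benchmark reports, throughput for a training step is typically derived from the batch size, sequence length, and measured step time. The function below is an illustrative sketch of that arithmetic, not the benchmark's actual code.

```python
# Illustrative sketch: deriving training throughput from per-step timing.
# Not the benchmark's actual implementation.
def throughput_tokens_per_sec(batch_size: int, seq_len: int, step_time_s: float) -> float:
    """Tokens processed per second for one training step."""
    return batch_size * seq_len / step_time_s

# Example: a global batch of 32 sequences of length 512 completing in 0.5 s
# corresponds to 32 * 512 / 0.5 = 32768 tokens/s.
```

Peak memory usage is the other axis the benchmark reports; together the two numbers show the speed/memory trade-off each plugin and hyperparameter combination makes.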