ColossalAI/examples/language/opt
flybird11111 7486ed7d3a
[shardformer] update llama2/opt finetune example and fix llama2 policy (#4645)
2023-09-09 22:45:36 +08:00
README.md [example] update opt example using booster api (#3918) 2023-06-08 11:27:05 +08:00
args.py [shardformer] update llama2/opt finetune example and fix llama2 policy (#4645) 2023-09-09 22:45:36 +08:00
data.py [example] update opt example using booster api (#3918) 2023-06-08 11:27:05 +08:00
opt_benchmark.py [gemini] improve compatibility and add static placement policy (#4479) 2023-08-24 09:29:25 +08:00
opt_train_demo.py [shardformer] update llama2/opt finetune example and fix llama2 policy (#4645) 2023-09-09 22:45:36 +08:00
requirements.txt [example] update opt example using booster api (#3918) 2023-06-08 11:27:05 +08:00
run_benchmark.sh [example] update opt example using booster api (#3918) 2023-06-08 11:27:05 +08:00
run_demo.sh [shardformer] update llama2/opt finetune example and fix llama2 policy (#4645) 2023-09-09 22:45:36 +08:00
test_ci.sh [example] update opt example using booster api (#3918) 2023-06-08 11:27:05 +08:00

README.md

OPT

Meta recently released Open Pretrained Transformer (OPT), a 175-billion-parameter AI language model, which enables AI programmers to build various downstream tasks and application deployments.

The following example of Colossal-AI demonstrates fine-tuning for Causal Language Modelling at low cost.

Our Modifications

We use the pre-trained weights of the OPT model provided by the Hugging Face Hub and train on the raw WikiText-2 dataset (no tokens were replaced before tokenization).
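As a hedged sketch (not the exact code in data.py), loading those pre-trained weights and the raw WikiText-2 split from the Hugging Face Hub could look like this; the model size and dataset identifiers below are assumptions based on the standard Hub names:

```python
# Hedged sketch: loading OPT pre-trained weights and the raw WikiText-2 split
# from the Hugging Face Hub. Identifiers are assumptions, not taken from data.py.
from datasets import load_dataset
from transformers import AutoTokenizer, OPTForCausalLM

tokenizer = AutoTokenizer.from_pretrained("facebook/opt-350m")
model = OPTForCausalLM.from_pretrained("facebook/opt-350m")

# The "raw" configuration keeps the original text, so no tokens are replaced
# before tokenization.
raw_dataset = load_dataset("wikitext", "wikitext-2-raw-v1")
tokenized = raw_dataset.map(
    lambda examples: tokenizer(examples["text"]),
    batched=True,
    remove_columns=["text"],
)
```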

We adapt the OPT training code to ColossalAI by leveraging the Booster API loaded with a chosen plugin, where each plugin corresponds to a specific kind of training strategy. This example supports plugins including TorchDDPPlugin, LowLevelZeroPlugin, and GeminiPlugin.
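As a rough illustration of that pattern (the exact plugin arguments, optimizer, and learning rate used by opt_train_demo.py may differ), boosting an OPT model with a chosen plugin looks roughly like this:

```python
# Minimal sketch of the Booster API pattern; plugin choice and hyperparameters
# here are illustrative, not the exact values used by the example scripts.
import colossalai
from colossalai.booster import Booster
from colossalai.booster.plugin import GeminiPlugin, LowLevelZeroPlugin, TorchDDPPlugin
from colossalai.nn.optimizer import HybridAdam
from transformers import OPTForCausalLM

# Launch distributed processes (e.g. via torchrun or the colossalai CLI).
colossalai.launch_from_torch(config={})

# Each plugin implements a different training strategy; pick one.
plugin = GeminiPlugin()  # or LowLevelZeroPlugin() / TorchDDPPlugin()
booster = Booster(plugin=plugin)

model = OPTForCausalLM.from_pretrained("facebook/opt-350m")
optimizer = HybridAdam(model.parameters(), lr=2e-5)

def criterion(outputs):
    return outputs.loss

# Booster wraps model, optimizer, and criterion according to the chosen plugin.
model, optimizer, criterion, _, _ = booster.boost(
    model=model, optimizer=optimizer, criterion=criterion
)
```

Switching strategies is then mostly a matter of constructing a different plugin object; the rest of the training code stays the same.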

Run Demo

By running the following script:

bash run_demo.sh

You will fine-tune a facebook/opt-350m model on this dataset, which contains more than 8,000 comments on Netflix shows.

The script can be modified if you want to try another set of hyperparameters or switch to another OPT model of a different size.

The demo code is adapted from this blog and the HuggingFace Language Modelling examples.
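For orientation, the fine-tuning step behind the demo follows the usual Booster pattern. The sketch below reuses the model, optimizer, and booster from the earlier snippet together with a hypothetical tokenized dataloader, so treat it as illustrative rather than the exact opt_train_demo.py loop:

```python
# Illustrative training loop over a tokenized dataloader (assumed to yield
# input_ids / attention_mask / labels tensors); not the exact demo code.
model.train()
for batch in dataloader:
    batch = {k: v.cuda() for k, v in batch.items()}
    outputs = model(**batch)
    loss = outputs.loss
    # booster.backward routes the backward pass through the chosen plugin
    # (e.g. ZeRO or Gemini) instead of calling loss.backward() directly.
    booster.backward(loss, optimizer)
    optimizer.step()
    optimizer.zero_grad()
```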

Run Benchmark

You can run a benchmark for the OPT model by running the following script:

bash run_benchmark.sh

The script will test performance (throughput and peak memory usage) for each combination of hyperparameters. You can also modify this script to configure your own set of hyperparameters for testing.
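If you want to reproduce a single data point outside the script, throughput and peak GPU memory for one run can be approximated along these lines (the step count and batch size below are placeholders, not flags that opt_benchmark.py necessarily accepts):

```python
# Rough sketch of measuring throughput and peak memory for one hyperparameter
# combination; step count and batch size are placeholder values.
import time
import torch

NUM_STEPS = 20          # placeholder: number of timed training steps
GLOBAL_BATCH_SIZE = 32  # placeholder: samples processed per step across all GPUs

torch.cuda.reset_peak_memory_stats()
start = time.time()
for _ in range(NUM_STEPS):
    pass  # one training step (forward, booster.backward, optimizer.step) goes here
elapsed = time.time() - start

throughput = NUM_STEPS * GLOBAL_BATCH_SIZE / elapsed
peak_mem_mb = torch.cuda.max_memory_allocated() / 1024 ** 2
print(f"throughput: {throughput:.2f} samples/s, peak memory: {peak_mem_mb:.0f} MB")
```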