## Overview
This directory contains two parts: fine-tuning HuggingFace BERT and ALBERT models with the Booster API, and benchmarking the BERT and ALBERT models with different Booster plugins.
## Finetune
```
bash test_ci.sh
```
### BERT Fine-tuning Results
| Plugin | Accuracy | F1-score | GPU number |
| -------------- | -------- | -------- | -------- |
| torch_ddp | 84.4% | 88.6% | 2 |
| torch_ddp_fp16 | 84.7% | 88.8% | 2 |
| gemini | 84.0% | 88.4% | 2 |
| hybrid_parallel | 84.5% | 88.6% | 4 |
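The F1-score reported above is the harmonic mean of precision and recall. As a minimal illustration (this helper is not part of the repo), it can be computed as:

```python
def f1_score(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# When precision equals recall, F1 equals that common value.
print(round(f1_score(0.886, 0.886), 3))  # 0.886
```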
## Benchmark
```
bash benchmark.sh
```
The benchmark currently reports the following metrics: peak CUDA memory usage, throughput, and the number of model parameters. Custom metrics can be added in benchmark_util.
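As a sketch of what such a custom metric might look like, the helpers below compute throughput and format a parameter count the way the tables in this readme do. The function names (`throughput`, `format_param_count`) are illustrative, not existing benchmark_util APIs:

```python
def throughput(num_samples: int, elapsed_seconds: float) -> float:
    """Samples processed per second over the measured window."""
    return num_samples / elapsed_seconds

def format_param_count(n: int) -> str:
    """Render a parameter count in millions, e.g. 82M."""
    return f"{round(n / 1e6)}M"

print(throughput(1290, 100.0))         # 12.9 samples/s
print(format_param_count(82_000_000))  # 82M
```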
### Results
#### BERT
| | max CUDA mem | throughput (sample/s) | params |
| :------------- | -----------: | :------: | :----: |
| ddp | 21.44 GB | 3.0 | 82M |
| ddp_fp16 | 16.26 GB | 11.3 | 82M |
| gemini | 11.0 GB | 12.9 | 82M |
| low_level_zero | 11.29 GB | 14.7 | 82M |
#### ALBERT
| | max CUDA mem | throughput (sample/s) | params |
| :------------- | -----------: | :------: | :----: |
| ddp | OOM | - | - |
| ddp_fp16 | OOM | - | - |
| gemini | 69.39 GB | 1.3 | 208M |
| low_level_zero | 56.89 GB | 1.4 | 208M |
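As a quick back-of-the-envelope reading of the BERT numbers above (values copied from the table; the helper name is illustrative), gemini's peak memory relative to plain DDP is:

```python
def mem_ratio(plugin_gb: float, baseline_gb: float) -> float:
    """Fraction of the baseline's peak CUDA memory that a plugin uses."""
    return plugin_gb / baseline_gb

# gemini vs ddp on BERT: 11.0 GB vs 21.44 GB
print(round(mem_ratio(11.0, 21.44), 2))  # 0.51 -> roughly half the memory
```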