Mirror of https://github.com/InternLM/InternLM

Commit 2e11edfc97 (parent ad035eb8bd): update llamafactory part
@ -42,10 +42,10 @@ This is a guide to using Ascend NPU to train and infer the InternLM series model
## Model Zoo

### InternLM3

| Model | Transformers(HF) | ModelScope(HF) | Modelers(HF) | Release Date |
| ------------------------- | ---------------- | -------------- | ------------ | ------------ |
| **InternLM3-8B-Instruct** | [🤗internlm3-8b-instruct](https://huggingface.co/internlm/internlm3-8b-instruct) | [<img src="./assets/modelscope_logo.png" width="20px" /> internlm3-8b-instruct](https://www.modelscope.cn/models/Shanghai_AI_Laboratory/internlm3-8b-instruct/summary) | [internlm3-8b-instruct](https://modelers.cn/models/Intern/internlm3-8b-instruct) | 2025-01-15 |
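The inference and fine-tuning configurations later in this guide only support loading the model from a local path, so download the weights to local storage first. A minimal sketch using the Hugging Face CLI (the target directory is just an example; the ModelScope and Modelers mirrors listed above work equally well):

```shell
# Install the Hugging Face Hub CLI and fetch InternLM3-8B-Instruct to a local directory
pip install -U "huggingface_hub[cli]"
huggingface-cli download internlm/internlm3-8b-instruct --local-dir ./models/internlm3-8b-instruct
```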
## Environment Setup

### Installing Ascend CANN Toolkit and Kernels
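The detailed installation steps are unchanged and fall outside the hunks shown here. Once the Toolkit and Kernels are installed, remember to load the CANN environment in every new shell before running the commands below; a minimal sketch, assuming the default installation prefix:

```shell
# Make the CANN toolchain visible to the current shell (default install location assumed)
source /usr/local/Ascend/ascend-toolkit/set_env.sh
```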
@ -156,6 +156,9 @@ NPROC_PER_NODE=8 xtuner train internlm3_8b_instruct_lora_oasst1_e10.py --deepspe
```shell
NPROC_PER_NODE=8 xtuner train internlm3_8b_instruct_lora_oasst1_e10.py --deepspeed deepspeed_zero2
```

The fine-tuning results are saved in the directory `./work_dirs/internlm3_8b_instruct_lora_oasst1_e10/iter_xxx.pth`.

The comparison of loss between NPU and GPU is as follows:

![xtuner_training_loss](assets/xtuner_loss.png)
### Model Convert
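The body of this section is not part of the hunks shown. For context, the `iter_xxx.pth` checkpoint produced by xtuner above is typically converted to HuggingFace format for downstream use; a minimal sketch, assuming the standard `xtuner convert pth_to_hf` command and example paths:

```shell
# Convert the xtuner checkpoint to HuggingFace format (replace iter_xxx.pth with the real iteration file)
xtuner convert pth_to_hf internlm3_8b_instruct_lora_oasst1_e10.py \
    ./work_dirs/internlm3_8b_instruct_lora_oasst1_e10/iter_xxx.pth \
    ./work_dirs/internlm3_8b_instruct_lora_oasst1_e10/hf
```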
@ -198,75 +201,82 @@ pip install -e ".[torch-npu,metrics]"
### Inference

Create the `examples/inference/internlm3_8b_instruct.yaml` inference configuration file in the LLaMA-Factory directory:

```yaml
model_name_or_path: xxx  # Only local loading is supported. Set this to the local weight path of InternLM3-8B-Instruct.
trust_remote_code: true
template: intern3
```
Run the following command to interact with the model:

```shell
llamafactory-cli chat examples/inference/internlm3_8b_instruct.yaml
```
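Beyond the interactive terminal session, the same configuration file can also be served; a minimal sketch, assuming the `api` subcommand of `llamafactory-cli`, which exposes an OpenAI-style endpoint:

```shell
# Serve the same inference configuration as an OpenAI-compatible API endpoint
llamafactory-cli api examples/inference/internlm3_8b_instruct.yaml
```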
### Fine-tuning

Create the `examples/train_full/internlm3_8b_instruct_full_sft.yaml` fine-tuning configuration file in the LLaMA-Factory directory with the following contents:

```yaml
### model
model_name_or_path: xxx  # Only local loading is supported. Set this to the local weight path of InternLM3-8B-Instruct.
trust_remote_code: true

### method
stage: sft
do_train: true
finetuning_type: full
deepspeed: examples/deepspeed/ds_z3_config.json  # choices: [ds_z0_config.json, ds_z2_config.json, ds_z3_config.json]

### dataset
dataset: alpaca_data
template: intern3
cutoff_len: 4096
max_samples: 10000
overwrite_cache: true
preprocessing_num_workers: 16

### output
output_dir: saves/internlm3/full/sft
logging_steps: 10
save_steps: 500
plot_loss: true
overwrite_output_dir: true

### train
per_device_train_batch_size: 1
gradient_accumulation_steps: 1
learning_rate: 1.0e-6
num_train_epochs: 1.0
lr_scheduler_type: cosine
warmup_ratio: 0.1
bf16: true
ddp_timeout: 180000000

### eval
val_size: 0.1
per_device_eval_batch_size: 1
eval_strategy: steps
eval_steps: 5000000000
```
Run the following command to start fine-tuning:

```shell
llamafactory-cli train examples/train_full/internlm3_8b_instruct_full_sft.yaml
```
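Full-parameter SFT with the DeepSpeed ZeRO-3 configuration above is a multi-card workload. If only part of the node should be used, the Ascend runtime honours the `ASCEND_RT_VISIBLE_DEVICES` variable; a sketch, assuming an 8-NPU selection (adjust to your hardware):

```shell
# Restrict training to specific NPUs before launching (example: cards 0-7)
export ASCEND_RT_VISIBLE_DEVICES=0,1,2,3,4,5,6,7
llamafactory-cli train examples/train_full/internlm3_8b_instruct_full_sft.yaml
```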
### Accuracy

The loss curve obtained after fine-tuning is as follows:

![training_loss](assets/lf_training_loss_npu.png)

The loss curve compared with GPU is as follows:

![training_loss_compare](assets/lf_training_loss_compare.png)

## Transformers

The corresponding Chinese version of the guide is updated in the same way:
@ -42,11 +42,12 @@
## Model Zoo

### InternLM3

| Model | Transformers(HF) | ModelScope(HF) | Modelers(HF) | Release Date |
| ------------------------- | ---------------- | -------------- | ------------ | ------------ |
| **InternLM3-8B-Instruct** | [🤗internlm3-8b-instruct](https://huggingface.co/internlm/internlm3-8b-instruct) | [<img src="./assets/modelscope_logo.png" width="20px" /> internlm3-8b-instruct](https://www.modelscope.cn/models/Shanghai_AI_Laboratory/internlm3-8b-instruct/summary) | [internlm3-8b-instruct](https://modelers.cn/models/Intern/internlm3-8b-instruct) | 2025-01-15 |

## Environment Setup

### Installing Ascend CANN Toolkit and Kernels
@ -155,7 +156,9 @@ randomness = dict(seed=123, deterministic=True)
```shell
NPROC_PER_NODE=8 xtuner train internlm3_8b_instruct_lora_oasst1_e10.py --deepspeed deepspeed_zero2
```

The fine-tuning results are saved under `./work_dirs/internlm3_8b_instruct_lora_oasst1_e10/iter_xxx.pth`; the comparison of loss between NPU and GPU is as follows:

![xtuner_training_loss](assets/xtuner_loss.png)

### Model Convert
@ -195,75 +198,82 @@ pip install -e ".[torch-npu,metrics]"
### Inference

Create the `examples/inference/internlm3_8b_instruct.yaml` inference configuration file in the LLaMA-Factory directory with the following contents:

```yaml
model_name_or_path: xxx  # Only local loading is supported. Set this to the local weight path of InternLM3-8B-Instruct.
trust_remote_code: true
template: intern3
```
Run the following command to interact with the model:

```shell
llamafactory-cli chat examples/inference/internlm3_8b_instruct.yaml
```
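If a browser UI is preferred over the terminal, the same configuration file should also work with the web chat front end; a minimal sketch, assuming the `webchat` subcommand of `llamafactory-cli`:

```shell
# Launch a Gradio-based web chat using the same inference configuration
llamafactory-cli webchat examples/inference/internlm3_8b_instruct.yaml
```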
### Fine-tuning

Create the `examples/train_full/internlm3_8b_instruct_full_sft.yaml` fine-tuning configuration file in the LLaMA-Factory directory with the following contents:

```yaml
### model
model_name_or_path: xxx  # Only local loading is supported. Set this to the local weight path of InternLM3-8B-Instruct.
trust_remote_code: true

### method
stage: sft
do_train: true
finetuning_type: full
deepspeed: examples/deepspeed/ds_z3_config.json  # choices: [ds_z0_config.json, ds_z2_config.json, ds_z3_config.json]

### dataset
dataset: alpaca_data
template: intern3
cutoff_len: 4096
max_samples: 10000
overwrite_cache: true
preprocessing_num_workers: 16

### output
output_dir: saves/internlm3/full/sft
logging_steps: 10
save_steps: 500
plot_loss: true
overwrite_output_dir: true

### train
per_device_train_batch_size: 1
gradient_accumulation_steps: 1
learning_rate: 1.0e-6
num_train_epochs: 1.0
lr_scheduler_type: cosine
warmup_ratio: 0.1
bf16: true
ddp_timeout: 180000000

### eval
val_size: 0.1
per_device_eval_batch_size: 1
eval_strategy: steps
eval_steps: 5000000000
```
Run the following command to start fine-tuning:

```shell
llamafactory-cli train examples/train_full/internlm3_8b_instruct_full_sft.yaml
```
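Because this is a full-parameter fine-tune, the checkpoint written to `output_dir` is a complete model and can be loaded for inference directly, with no adapter merging step. A hypothetical follow-up configuration (file name and path are examples that mirror the settings above):

```shell
# Point a new inference config at the fine-tuned checkpoint and chat with it
cat > examples/inference/internlm3_8b_instruct_sft.yaml <<'EOF'
model_name_or_path: saves/internlm3/full/sft
trust_remote_code: true
template: intern3
EOF
llamafactory-cli chat examples/inference/internlm3_8b_instruct_sft.yaml
```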
### Accuracy

The loss curve obtained after fine-tuning is as follows:

![training_loss](assets/lf_training_loss_npu.png)

The loss curve compared with GPU is as follows:

![training_loss_compare](assets/lf_training_loss_compare.png)

## Transformers
The commit also adds three binary image assets (363 KiB, 41 KiB, and 315 KiB), which are not shown here.