Update directory structure for NPU-related files

- Create a new `npu` directory under `assets` to store NPU-related images.
- Move NPU-related `README` files to the `ecosystem` directory.
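
The restructuring described above can be sketched as plain file moves. This is an illustrative reconstruction, not the PR's actual commands: the image filenames are taken from the diff below, and it assumes the NPU READMEs previously sat at the repository root (in a real repo, `git mv` would be used to preserve history).

```shell
set -e
cd "$(mktemp -d)"

# Assumed original layout: NPU images directly under assets/, NPU READMEs at the root
mkdir -p assets ecosystem
touch assets/xtuner_training_loss_compare.png assets/lf_training_loss_npu.png
touch README_npu.md README_npu_zh-CN.md

# 1. Create assets/npu and move the NPU-related images into it
mkdir -p assets/npu
mv assets/xtuner_training_loss_compare.png assets/lf_training_loss_npu.png assets/npu/

# 2. Move the NPU READMEs into ecosystem/
mv README_npu.md README_npu_zh-CN.md ecosystem/

ls assets/npu ecosystem
```

Because the READMEs end up one directory deeper, every relative link inside them must gain a leading `../`, which is exactly what the README diffs below do.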
Branch: pull/816/head
Author: liutongtong27
Date: 2025-01-20 11:44:10 +08:00
Parent: 2958e62164
Commit: 15f70457d6
8 changed files with 12 additions and 12 deletions

Six binary image files moved into `assets/npu/`; sizes unchanged (363 KiB, 41 KiB, 8.9 KiB, 212 KiB, 6.9 KiB, 315 KiB).

@@ -14,8 +14,8 @@
 <div> </div>
 </div>
-[![license](./assets/license.svg)](./LICENSE)
-[![evaluation](./assets/compass_support.svg)](https://github.com/internLM/OpenCompass/)
+[![license](../assets/license.svg)](../LICENSE)
+[![evaluation](../assets/compass_support.svg)](https://github.com/internLM/OpenCompass/)
 <!-- [![Documentation Status](https://readthedocs.org/projects/internlm/badge/?version=latest)](https://internlm.readthedocs.io/zh_CN/latest/?badge=latest) -->
@@ -28,8 +28,8 @@
 [🔗API](https://internlm.intern-ai.org.cn/api/document) |
 [🧩Modelers](https://modelers.cn/spaces/MindSpore-Lab/INTERNLM2-20B-PLAN)
-[English](./README_npu.md) |
-[简体中文](./README_npu_zh-CN.md)
+[English](README_npu.md) |
+[简体中文](README_npu_zh-CN.md)
 </div>
@@ -140,7 +140,7 @@ NPROC_PER_NODE=8 xtuner train internlm3_8b_instruct_lora_oasst1_e10.py --deepspe
 The fine-tuning results are saved in the directory `./work_dirs/internlm3_8b_instruct_lora_oasst1_e10/iter_xxx.pth`.
 The comparison of loss between NPU and GPU is as follows:
-![xtuner_training_loss](assets/xtuner_training_loss_compare.png)
+![xtuner_training_loss](../assets/npu/xtuner_training_loss_compare.png)
 ### Model Convert
@@ -254,11 +254,11 @@ llamafactory-cli train examples/train_full/internlm3_8b_instruct_full_sft.yaml
 The loss curve obtained after finetuning is as follows:
-![training_loss](assets/lf_training_loss_npu.png)
+![training_loss](../assets/npu/lf_training_loss_npu.png)
 The loss curve compared with GPU is as follows:
-![training_loss_compare](assets/lf_training_loss_compare.png)
+![training_loss_compare](../assets/npu/lf_training_loss_compare.png)
 ## Transformers
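
The image-link edits in this README follow one mechanical pattern: a link target under the old `assets/` directory becomes `../assets/npu/...` once the README moves one directory deeper and the NPU images move into `assets/npu/`. A hypothetical helper (not part of this PR) for that rewrite might look like this; note it deliberately covers only the NPU images — shared assets such as `license.svg` only gained the `../` prefix in the actual diff:

```python
import re

def retarget_asset_links(markdown: str) -> str:
    """Rewrite Markdown link targets like "assets/foo.png" or "./assets/foo.png"
    into "../assets/npu/foo.png", mirroring the hand edits in this diff."""
    pattern = re.compile(r"\((?:\./)?assets/([^)]+)\)")
    return pattern.sub(r"(../assets/npu/\1)", markdown)

# One of the lines changed in this diff:
line = "![xtuner_training_loss](assets/xtuner_training_loss_compare.png)"
print(retarget_asset_links(line))
# → ![xtuner_training_loss](../assets/npu/xtuner_training_loss_compare.png)
```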

@@ -28,8 +28,8 @@
 [🔗API](https://internlm.intern-ai.org.cn/api/document) |
 [🧩魔乐社区](https://modelers.cn/spaces/MindSpore-Lab/INTERNLM2-20B-PLAN)
-[English](./README_npu.md) |
-[简体中文](./README_npu_zh-CN.md)
+[English](README_npu.md) |
+[简体中文](README_npu_zh-CN.md)
 </div>
@@ -139,7 +139,7 @@ NPROC_PER_NODE=8 xtuner train internlm3_8b_instruct_lora_oasst1_e10.py --deepspe
 微调后结果保存在`./work_dirs/internlm3_8b_instruct_lora_oasst1_e10/iter_xxx.pth`NPU与GPU的loss对比如下
-![xtuner_training_loss](assets/xtuner_training_loss_compare.png)
+![xtuner_training_loss](../assets/npu/xtuner_training_loss_compare.png)
 ### 模型转换
@@ -250,11 +250,11 @@ llamafactory-cli train examples/train_full/internlm3_8b_instruct_full_sft.yaml
 微调后得到的loss曲线如下
-![training_loss](assets/lf_training_loss_npu.png)
+![training_loss](../assets/npu/lf_training_loss_npu.png)
 与GPU对比的loss曲线如下
-![training_loss_compare](assets/lf_training_loss_compare.png)
+![training_loss_compare](../assets/npu/lf_training_loss_compare.png)
 ## Transformers