diff --git a/assets/lf_training_loss_compare.png b/assets/npu/lf_training_loss_compare.png
similarity index 100%
rename from assets/lf_training_loss_compare.png
rename to assets/npu/lf_training_loss_compare.png
diff --git a/assets/lf_training_loss_npu.png b/assets/npu/lf_training_loss_npu.png
similarity index 100%
rename from assets/lf_training_loss_npu.png
rename to assets/npu/lf_training_loss_npu.png
diff --git a/assets/openmind_fused_ops.png b/assets/npu/openmind_fused_ops.png
similarity index 100%
rename from assets/openmind_fused_ops.png
rename to assets/npu/openmind_fused_ops.png
diff --git a/assets/openmind_train_loss_compare.png b/assets/npu/openmind_train_loss_compare.png
similarity index 100%
rename from assets/openmind_train_loss_compare.png
rename to assets/npu/openmind_train_loss_compare.png
diff --git a/assets/openmind_train_memory.png b/assets/npu/openmind_train_memory.png
similarity index 100%
rename from assets/openmind_train_memory.png
rename to assets/npu/openmind_train_memory.png
diff --git a/assets/xtuner_training_loss_compare.png b/assets/npu/xtuner_training_loss_compare.png
similarity index 100%
rename from assets/xtuner_training_loss_compare.png
rename to assets/npu/xtuner_training_loss_compare.png
diff --git a/README_npu.md b/ecosystem/README_npu.md
similarity index 97%
rename from README_npu.md
rename to ecosystem/README_npu.md
index 74a5690..a314643 100644
--- a/README_npu.md
+++ b/ecosystem/README_npu.md
@@ -14,8 +14,8 @@
-[](./LICENSE)
-[](https://github.com/internLM/OpenCompass/)
+[](../LICENSE)
+[](https://github.com/internLM/OpenCompass/)
@@ -28,8 +28,8 @@
[🔗API](https://internlm.intern-ai.org.cn/api/document) |
[🧩Modelers](https://modelers.cn/spaces/MindSpore-Lab/INTERNLM2-20B-PLAN)
-[English](./README_npu.md) |
-[简体中文](./README_npu_zh-CN.md)
+[English](README_npu.md) |
+[简体中文](README_npu_zh-CN.md)
@@ -140,7 +140,7 @@ NPROC_PER_NODE=8 xtuner train internlm3_8b_instruct_lora_oasst1_e10.py --deepspe
The fine-tuning results are saved in the directory `./work_dirs/internlm3_8b_instruct_lora_oasst1_e10/iter_xxx.pth`.
The comparison of loss between NPU and GPU is as follows:
-![](assets/xtuner_training_loss_compare.png)
+![](../assets/npu/xtuner_training_loss_compare.png)
### Model Convert
@@ -254,11 +254,11 @@ llamafactory-cli train examples/train_full/internlm3_8b_instruct_full_sft.yaml
The loss curve obtained after finetuning is as follows:
-![](assets/lf_training_loss_npu.png)
+![](../assets/npu/lf_training_loss_npu.png)
The loss curve compared with GPU is as follows:
-![](assets/lf_training_loss_compare.png)
+![](../assets/npu/lf_training_loss_compare.png)
## Transformers
diff --git a/README_npu_zh-CN.md b/ecosystem/README_npu_zh-CN.md
similarity index 98%
rename from README_npu_zh-CN.md
rename to ecosystem/README_npu_zh-CN.md
index 6ce1a37..cf8e223 100644
--- a/README_npu_zh-CN.md
+++ b/ecosystem/README_npu_zh-CN.md
@@ -28,8 +28,8 @@
[🔗API](https://internlm.intern-ai.org.cn/api/document) |
[🧩魔乐社区](https://modelers.cn/spaces/MindSpore-Lab/INTERNLM2-20B-PLAN)
-[English](./README_npu.md) |
-[简体中文](./README_npu_zh-CN.md)
+[English](README_npu.md) |
+[简体中文](README_npu_zh-CN.md)
@@ -139,7 +139,7 @@ NPROC_PER_NODE=8 xtuner train internlm3_8b_instruct_lora_oasst1_e10.py --deepspe
微调后结果保存在`./work_dirs/internlm3_8b_instruct_lora_oasst1_e10/iter_xxx.pth`,NPU与GPU的loss对比如下:
-![](assets/xtuner_training_loss_compare.png)
+![](../assets/npu/xtuner_training_loss_compare.png)
### 模型转换
@@ -250,11 +250,11 @@ llamafactory-cli train examples/train_full/internlm3_8b_instruct_full_sft.yaml
微调后得到的loss曲线如下:
-![](assets/lf_training_loss_npu.png)
+![](../assets/npu/lf_training_loss_npu.png)
与GPU对比的loss曲线如下:
-![](assets/lf_training_loss_compare.png)
+![](../assets/npu/lf_training_loss_compare.png)
## Transformers