Fix invalid urls of InternEvo (#635)

pull/638/head
Kai Chen 2024-01-19 14:12:21 +08:00 committed by GitHub
parent 56939d7589
commit 4fd9391594
2 changed files with 12 additions and 7 deletions


@@ -6,8 +6,7 @@ We recommend two projects to fine-tune InternLM.
 1. [XTuner](https://github.com/InternLM/xtuner) is an efficient, flexible and full-featured toolkit for fine-tuning large models.
-2. [InternLM-Train](): brief introduction
+2. [InternEvo](https://github.com/InternLM/InternEvo/) is a powerful training framework that supports large-scale pre-training and finetuning.
 ## XTuner
@@ -95,3 +94,7 @@ LLaVA-InternLM2-7B:
 ```shell
 xtuner chat internlm/internlm2-chat-7b --visual-encoder openai/clip-vit-large-patch14-336 --llava xtuner/llava-internlm2-7b --prompt-template internlm2_chat --image $IMAGE_PATH
 ```
+
+## InternEvo
+
+[TODO]
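As background for the XTuner commands referenced in this diff, the following is a minimal sketch of a full QLoRA fine-tune-then-chat workflow. The pip extra, the config name `internlm2_chat_7b_qlora_oasst1_e3`, and all output paths are assumptions drawn from XTuner's general usage, not content of this commit.

```shell
# Assumed setup (not part of this diff): install XTuner with its DeepSpeed extra
pip install -U 'xtuner[deepspeed]'

# Fine-tune InternLM2-Chat-7B with QLoRA using one of XTuner's preset configs
xtuner train internlm2_chat_7b_qlora_oasst1_e3 --deepspeed deepspeed_zero2

# Convert the saved .pth checkpoint into a Hugging Face adapter (paths are placeholders)
xtuner convert pth_to_hf internlm2_chat_7b_qlora_oasst1_e3 \
    ./work_dirs/internlm2_chat_7b_qlora_oasst1_e3/epoch_3.pth \
    ./hf_adapter

# Chat with the resulting adapter, mirroring the chat commands shown in the README
xtuner chat internlm/internlm2-chat-7b --adapter ./hf_adapter --prompt-template internlm2_chat
```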


@@ -2,12 +2,11 @@
 [English](./README.md) | Simplified Chinese
 We recommend the following two frameworks for fine-tuning InternLM:
 1. [XTuner](https://github.com/InternLM/xtuner) is an efficient, flexible and full-featured lightweight toolkit for fine-tuning large models.
-2. [InternLM-Train](): brief introduction
+2. [InternEvo](https://github.com/InternLM/InternEvo/) is a training framework that supports large-scale pre-training and fine-tuning.
 ## XTuner
@@ -18,7 +17,6 @@
 3. Compatible with [DeepSpeed](https://github.com/microsoft/DeepSpeed) 🚀, making it easy to apply various ZeRO training optimization strategies.
 4. The trained models can be seamlessly fed into the deployment toolkit [LMDeploy](https://github.com/InternLM/lmdeploy) and the large-scale evaluation toolkits [OpenCompass](https://github.com/open-compass/opencompass) and [VLMEvalKit](https://github.com/open-compass/VLMEvalKit).
 ### Installation
 - Prepare a virtual environment with conda
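The installation bullet at the end of the hunk above only mentions preparing a conda environment. A minimal sketch of that step, assuming the environment name `xtuner-env` and Python 3.10 (neither is specified in this commit):

```shell
# Assumed environment name and Python version; adjust to your setup
conda create -n xtuner-env python=3.10 -y
conda activate xtuner-env

# Install XTuner; the deepspeed extra enables the ZeRO strategies mentioned above
pip install -U 'xtuner[deepspeed]'
```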
@@ -36,7 +34,6 @@
 ### Fine-tuning
 - **Step 0**: prepare a config file. XTuner provides many out-of-the-box config files; users can list all preset InternLM2 configs with the following command:
 ```shell
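The hunk above is cut off at the opening code fence, so the listing command itself is not visible in this diff. Based on XTuner's CLI, it is presumably along these lines (both commands are assumptions, not content of this commit):

```shell
# List XTuner's preset configs whose names match "internlm2"
xtuner list-cfg -p internlm2

# Optionally copy one preset config into the current directory for editing
xtuner copy-cfg internlm2_chat_7b_qlora_oasst1_e3 .
```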
@@ -91,6 +88,11 @@ xtuner chat internlm/internlm2-chat-7b --adapter xtuner/internlm2-chat-7b-qlora-
 ```
 Chat with LLaVA-InternLM2-7B:
 ```shell
 xtuner chat internlm/internlm2-chat-7b --visual-encoder openai/clip-vit-large-patch14-336 --llava xtuner/llava-internlm2-7b --prompt-template internlm2_chat --image $IMAGE_PATH
 ```
+
+## InternEvo
+
+[TODO]
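Both READMEs leave the new InternEvo section as [TODO]. For orientation only, here is a hedged sketch of how an InternEvo training run is typically launched according to the InternEvo repository; the script name, config path, and flags are assumptions and not part of this commit:

```shell
# Assumed invocation based on InternEvo's usage docs (not part of this diff):
# launch an 8-GPU fine-tuning run with torchrun using an example config
torchrun --nnodes=1 --nproc_per_node=8 train.py --config ./configs/7B_sft.py --launcher "torch"
```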