mirror of https://github.com/hpcaitech/ColossalAI
[doc] put individual plugin explanation in front
parent 10513f203c
commit a04337bfc3
@@ -19,15 +19,6 @@ We currently provide the following plugins:
More plugins are coming soon.
## Choosing Your Plugin
Generally, only one plugin is used to train a model. Our recommended use case for each plugin is as follows.

- [Torch DDP Plugin](#torch-ddp-plugin): It is suitable for models with fewer than 2 billion parameters.
- [Torch FSDP Plugin](#torch-fsdp-plugin) / [Low Level Zero Plugin](#low-level-zero-plugin): It is suitable for models with fewer than 10 billion parameters.
- [Gemini Plugin](#gemini-plugin): It is suitable for models with more than 10 billion parameters and is ideal for scenarios with high cross-node bandwidth and small to medium-scale clusters (below a thousand cards).
- [Hybrid Parallel Plugin](#hybrid-parallel-plugin): It is suitable for models with more than 60 billion parameters, or for special models such as those with exceptionally long sequences or very large vocabularies, and is best suited for scenarios with low cross-node bandwidth and large-scale clusters (a thousand cards or more).
## Plugins
### Torch DDP Plugin
@@ -96,4 +87,13 @@ This plugin implements the combination of various parallel training strategies a
{{ autodoc:colossalai.booster.plugin.HybridParallelPlugin }}
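As a rough illustration of how this plugin composes parallelism strategies, the hedged sketch below constructs a `HybridParallelPlugin` with example tensor- and pipeline-parallel sizes. The specific values and the `precision` setting are placeholders rather than recommendations; the API reference above is authoritative for the full argument list.

```python
# Minimal sketch, not part of the original docs. Assumes the distributed
# environment has already been initialised (e.g. with colossalai.launch_from_torch)
# and that at least tp_size * pp_size ranks are available.
from colossalai.booster import Booster
from colossalai.booster.plugin import HybridParallelPlugin

# Example values only: 2-way tensor parallelism x 2-way pipeline parallelism with
# mixed-precision training; remaining ranks are used for data parallelism.
plugin = HybridParallelPlugin(tp_size=2, pp_size=2, precision="fp16")
booster = Booster(plugin=plugin)

# booster.boost(...) would then wrap the model, optimizer, criterion, dataloader
# and lr_scheduler so they run under this combined strategy.
```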
## Choosing Your Plugin
Generally, only one plugin is used to train a model. Our recommended use case for each plugin is as follows, and a minimal usage sketch follows the list.

- [Torch DDP Plugin](#torch-ddp-plugin): It is suitable for models with fewer than 2 billion parameters.
- [Torch FSDP Plugin](#torch-fsdp-plugin) / [Low Level Zero Plugin](#low-level-zero-plugin): It is suitable for models with fewer than 10 billion parameters.
- [Gemini Plugin](#gemini-plugin): It is suitable for models with more than 10 billion parameters and is ideal for scenarios with high cross-node bandwidth and small to medium-scale clusters (below a thousand cards).
- [Hybrid Parallel Plugin](#hybrid-parallel-plugin): It is suitable for models with more than 60 billion parameters, or for special models such as those with exceptionally long sequences or very large vocabularies, and is best suited for scenarios with low cross-node bandwidth and large-scale clusters (a thousand cards or more).
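To make these recommendations concrete, here is a minimal, hedged sketch of passing one of the plugins to the Booster API. The toy model, optimizer, and hyperparameters are illustrative placeholders, and the exact `launch_from_torch` signature varies across ColossalAI releases.

```python
# Illustrative sketch only: the chosen plugin is the main thing that changes
# between small-model (TorchDDPPlugin) and larger-model (e.g. GeminiPlugin) setups.
import torch
import torch.nn as nn

import colossalai
from colossalai.booster import Booster
from colossalai.booster.plugin import TorchDDPPlugin

# Initialise the distributed environment when launched with torchrun
# (newer releases drop the `config` argument).
colossalai.launch_from_torch(config={})

model = nn.Linear(1024, 1024)                     # toy stand-in for a real model
optimizer = torch.optim.Adam(model.parameters())  # toy optimizer
criterion = nn.MSELoss()

plugin = TorchDDPPlugin()  # per the list above, suited to models under ~2B parameters
booster = Booster(plugin=plugin)

# boost() returns wrapped versions of the training objects.
model, optimizer, criterion, _, _ = booster.boost(model, optimizer, criterion)
```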
<!-- doc-test-command: echo -->
@@ -19,12 +19,6 @@
More plugins are coming soon.
## Choosing Your Plugin

- [Torch DDP Plugin](#torch-ddp-plugin): Suitable for models with fewer than 2 billion parameters.
- [Torch FSDP Plugin](#torch-fsdp-plugin) / [Low Level Zero Plugin](#low-level-zero-plugin): Suitable for models with fewer than 10 billion parameters.
- [Gemini Plugin](#gemini-plugin): Suitable for models with more than 10 billion parameters, in scenarios with high cross-node bandwidth and small to medium-scale clusters (below a thousand cards).
- [Hybrid Parallel Plugin](#hybrid-parallel-plugin): Suitable for models with more than 60 billion parameters, as well as special models with exceptionally long sequences or very large vocabularies, in scenarios with low cross-node bandwidth and large-scale clusters (a thousand cards or more).
## Plugins
### Torch DDP Plugin
@@ -93,4 +87,10 @@ Zero-2 does not support local gradient accumulation. If you insist on using it, although you can accumulate
{{ autodoc:colossalai.booster.plugin.HybridParallelPlugin }}
## Choosing Your Plugin

- [Torch DDP Plugin](#torch-ddp-plugin): Suitable for models with fewer than 2 billion parameters.
- [Torch FSDP Plugin](#torch-fsdp-plugin) / [Low Level Zero Plugin](#low-level-zero-plugin): Suitable for models with fewer than 10 billion parameters.
- [Gemini Plugin](#gemini-plugin): Suitable for models with more than 10 billion parameters, in scenarios with high cross-node bandwidth and small to medium-scale clusters (below a thousand cards); see the sketch after this list.
- [Hybrid Parallel Plugin](#hybrid-parallel-plugin): Suitable for models with more than 60 billion parameters, as well as special models with exceptionally long sequences or very large vocabularies, in scenarios with low cross-node bandwidth and large-scale clusters (a thousand cards or more).
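As a companion to the Gemini recommendation above, the following hedged sketch shows the plugin being constructed with default settings; any tuning (for example the placement policy or precision) should follow the Gemini plugin's API reference rather than this placeholder.

```python
# Minimal sketch, not from the original docs: selecting the Gemini plugin for a
# 10B+ parameter model. Defaults are used here; assumes the distributed environment
# has already been initialised (e.g. via colossalai.launch_from_torch).
from colossalai.booster import Booster
from colossalai.booster.plugin import GeminiPlugin

plugin = GeminiPlugin()           # heterogeneous memory management with default settings
booster = Booster(plugin=plugin)
# model, optimizer, criterion, _, _ = booster.boost(model, optimizer, criterion)
```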
<!-- doc-test-command: echo -->