From c425a69d52c714423bbc5a55f6f3c609723993d9 Mon Sep 17 00:00:00 2001
From: jiangmingyan <1829166702@qq.com>
Date: Tue, 23 May 2023 16:42:36 +0800
Subject: [PATCH] [doc] add removed change of config.py

---
 docs/source/en/basics/define_your_config.md   | 20 +++++++++++---------
 .../zh-Hans/basics/define_your_config.md      | 19 +++++++++----------
 2 files changed, 20 insertions(+), 19 deletions(-)

diff --git a/docs/source/en/basics/define_your_config.md b/docs/source/en/basics/define_your_config.md
index 46b7112b7..048ffcacb 100644
--- a/docs/source/en/basics/define_your_config.md
+++ b/docs/source/en/basics/define_your_config.md
@@ -2,7 +2,8 @@
 
 Author: Guangyang Lu, Shenggui Li, Siqi Mai
 
-- > ⚠️ The information on this page is outdated and will be deprecated. Please check [Booster API](../basics/booster_api.md) for more information.
+> ⚠️ The information on this page is outdated and will be deprecated. Please check [Booster API](../basics/booster_api.md) for more information.
+
 **Prerequisite:**
 - [Distributed Training](../concepts/distributed_training.md)
 
@@ -23,7 +24,8 @@ In this tutorial, we will cover how to define your configuration file.
 
 ## Configuration Definition
 
 In a configuration file, there are two types of variables. One serves as feature specification and the other serves
-as hyper-parameters. All feature-related variables are reserved keywords. For example, if you want to use 1D tensor parallelism, you need to use the variable name `parallel` in the config file and follow a pre-defined format.
+as hyper-parameters. All feature-related variables are reserved keywords. For example, if you want to use mixed precision
+training, you need to use the variable name `fp16` in the config file and follow a pre-defined format.
 
 ### Feature Specification
@@ -35,13 +37,14 @@ To illustrate the use of config file, we use mixed precision training as an exam
 follow the steps below.
 
 1. create a configuration file (e.g. `config.py`, the file name can be anything)
-2. define the hybrid parallelism configuration in the config file. For example, in order to use 1D tensor parallel, you can just write these lines of code below into your config file.
+2. define the mixed precision configuration in the config file. For example, in order to use mixed precision training
+natively provided by PyTorch, you can just write these lines of code below into your config file.
 
    ```python
-   parallel = dict(
-       data=1,
-       pipeline=1,
-       tensor=dict(size=2, mode='1d'),
+   from colossalai.amp import AMP_TYPE
+
+   fp16 = dict(
+       mode=AMP_TYPE.TORCH
    )
    ```
 
@@ -54,7 +57,7 @@ the current directory.
 colossalai.launch(config='./config.py', ...)
 ```
 
-In this way, Colossal-AI knows what features you want to use and will inject this feature.
+In this way, Colossal-AI knows what features you want to use and will inject this feature during `colossalai.initialize`.
 
 ### Global Hyper-parameters
 
@@ -80,4 +83,3 @@ colossalai.launch(config='./config.py', ...)
 
 print(gpc.config.BATCH_SIZE)
 ```
-
diff --git a/docs/source/zh-Hans/basics/define_your_config.md b/docs/source/zh-Hans/basics/define_your_config.md
index d1de085e3..720e75805 100644
--- a/docs/source/zh-Hans/basics/define_your_config.md
+++ b/docs/source/zh-Hans/basics/define_your_config.md
@@ -2,7 +2,7 @@
 
 Author: Guangyang Lu, Shenggui Li, Siqi Mai
 
-- > ⚠️ The information on this page is outdated and will be deprecated. Please check the [Booster API](../basics/booster_api.md) page for updates.
+> ⚠️ The information on this page is outdated and will be deprecated. Please check the [Booster API](../basics/booster_api.md) page for updates.
 **Prerequisite:**
 - [Distributed Training](../concepts/distributed_training.md)
 
@@ -20,7 +20,7 @@
 
 ## Configuration Definition
 
-In a configuration file, there are two types of variables. One serves as feature specification and the other as hyper-parameters. All feature-related variables are reserved keywords. For example, if you want to use `1D` tensor parallelism, you need to use the variable name `fp16` in the config file and follow a pre-defined format.
+In a configuration file, there are two types of variables. One serves as feature specification and the other as hyper-parameters. All feature-related variables are reserved keywords. For example, if you want to use mixed precision training, you need to use the variable name `fp16` in the config file and follow a pre-defined format.
 
 ### Feature Specification
 
@@ -29,13 +29,13 @@ Colossal-AI provides a series of features to speed up training. Each feature is
 To illustrate the use of a config file, we use mixed precision training as an example here. You need to follow the steps below.
 
 1. Create a configuration file (e.g. `config.py`; you can specify any file name).
-2. Define the hybrid parallelism configuration in the config file. For example, to use `1D` tensor parallelism, you only need to write the following lines of code into your config file.
+2. Define the mixed precision configuration in the config file. For example, to use the native mixed precision training provided by PyTorch, you only need to write the following lines of code into your config file.
 
-   ```python
-   parallel = dict(
-       data=1,
-       pipeline=1,
-       tensor=dict(size=2, mode='1d'),
+   ```python
+   from colossalai.amp import AMP_TYPE
+
+   fp16 = dict(
+       mode=AMP_TYPE.TORCH
    )
    ```
 
@@ -47,7 +47,7 @@ Colossal-AI provides a series of features to speed up training. Each feature is
 colossalai.launch(config='./config.py', ...)
 ```
 
-This way, Colossal-AI knows what features you want to use and injects the features you need.
+This way, Colossal-AI knows what features you want to use and will inject the features you need during `colossalai.initialize`.
 
 ### Global Hyper-parameters
 
@@ -71,4 +71,3 @@ colossalai.launch(config='./config.py', ...)
 
 print(gpc.config.BATCH_SIZE)
 ```
-
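
Putting the two halves of the updated documentation together, a complete `config.py` combines the reserved `fp16` feature key with user-defined hyper-parameters. The sketch below only illustrates that layout; `NUM_EPOCHS` and the value `32` for `BATCH_SIZE` are illustrative placeholders, not values taken from the docs.

```python
# config.py -- minimal sketch of a configuration file following the patched docs.
from colossalai.amp import AMP_TYPE

# Feature specification: `fp16` is a reserved keyword that enables
# PyTorch-native mixed precision training.
fp16 = dict(
    mode=AMP_TYPE.TORCH
)

# Global hyper-parameters: any non-reserved variable name can be defined here.
BATCH_SIZE = 32   # illustrative value
NUM_EPOCHS = 100  # illustrative value
```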
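For completeness, here is a sketch of how a training script would consume such a config at runtime. The docs show the generic `colossalai.launch(config='./config.py', ...)` call; `launch_from_torch` is assumed here as a convenience wrapper from the legacy Colossal-AI API so the snippet can run without spelling out rank, world size, host and port.

```python
# train.py -- sketch of loading the config at runtime (legacy Colossal-AI API).
import colossalai
from colossalai.core import global_context as gpc

# Assumes the script is started via torchrun so that the distributed
# environment variables expected by launch_from_torch are already set.
colossalai.launch_from_torch(config='./config.py')

# Reserved keys such as `fp16` are consumed during initialization, while
# user-defined hyper-parameters stay accessible on gpc.config.
print(gpc.config.BATCH_SIZE)
```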