diff --git a/doc/code-docs/locales/en/LC_MESSAGES/parallel.po b/doc/code-docs/locales/en/LC_MESSAGES/parallel.po
index 15a8d23..5e359c9 100644
--- a/doc/code-docs/locales/en/LC_MESSAGES/parallel.po
+++ b/doc/code-docs/locales/en/LC_MESSAGES/parallel.po
@@ -455,3 +455,51 @@ msgstr ""
 msgid "Whether the gradient is success updated, and the gradient."
 msgstr ""
 
+#: ../../source/parallel.rst:155
+msgid "混合精度"
+msgstr "Mixed Precision"
+
+#: ../../source/parallel.rst:156
+msgid ""
+"混合精度是指在模型训练的过程中同时使用16位和32位浮点类型,是一种在最小化精度损失的前提下加速模型训练的方法。 "
+"混合精度通过让模型的某些部分使用32位浮点数以保持数值稳定性,并在其余部分利用半精度浮点数加速训练并可以减少内存使用,在评估指标(如准确率)方面仍可以获得同等的训练效果。"
+msgstr ""
+"Mixed precision means using both 16-bit and 32-bit floating-point types during model training, which speeds up training while minimizing the loss of accuracy. "
+"Mixed precision keeps certain parts of the model in 32-bit floating point for numerical stability, while the remaining parts use half-precision floating point to accelerate training and reduce memory usage; it can still achieve the same training quality in terms of evaluation metrics such as accuracy."
+
+#: internlm.core.naive_amp.NaiveAMPModel:1 of
+msgid ""
+"This is a wrapper class for a model that automatically casts the model, "
+"its inputs, and outputs into fp16. It also provides options to cast the "
+"output back to fp32 and to synchronize buffers."
+msgstr ""
+
+#: internlm.core.naive_amp.NaiveAMPModel:4 of
+msgid "The model to be wrapped and cast into fp16."
+msgstr ""
+
+#: internlm.core.naive_amp.NaiveAMPModel:6 of
+msgid "If True, the output of this module is cast into fp32. Defaults to True."
+msgstr ""
+
+#: internlm.core.naive_amp.NaiveAMPModel:8 of
+msgid ""
+"The parallel group mode used in this module. Defaults to "
+"``ParallelMode.DATA``."
+msgstr ""
+
+#: internlm.core.naive_amp.NaiveAMPModel:11 of
+msgid "If True, the buffers are synchronized. Defaults to True."
+msgstr ""
+
+#: ../../source/parallel.rst:161
+msgid "InternLM默认将模型转换为16位精度进行训练(在配置文件中可以设置默认类型为其他数据类型)。在使用混合精度时,需要在构建模型时使用"
+msgstr "By default, InternLM casts the model to 16-bit precision for training (the default dtype can be set to another data type in the configuration file). When using mixed precision, you need, while building the model, to use "
+
+#: ../../source/parallel.rst:167
+msgid "将模型的某个子模块设置为32精度进行训练,InternLM会在模型训练时自动将数据类型转换成需要的精度。"
+msgstr "to set a sub-module of the model to 32-bit precision for training; InternLM will then automatically convert the data type to the required precision during model training."
+
+#: ../../source/parallel.rst:169
+msgid "例如:"
+msgstr "For example:"
diff --git a/doc/code-docs/source/parallel.rst b/doc/code-docs/source/parallel.rst
index 8a1b942..8b6121e 100644
--- a/doc/code-docs/source/parallel.rst
+++ b/doc/code-docs/source/parallel.rst
@@ -154,7 +154,7 @@ ZeRO1.5 的实现使用了分层分片的概念,通过配置值 ``parallel.zer
 
 混合精度
 -----------------
 混合精度是指在模型训练的过程中同时使用16位和32位浮点类型,是一种在最小化精度损失的前提下加速模型训练的方法。
-混合精度通过让模型的某些部分使用32位浮点数以保持数值稳定性,并在其余部分利用半精度浮点数加速训练并减少内存使用,在评估指标(如准确率)方面仍可以获得同等的训练效果。
+混合精度通过让模型的某些部分使用32位浮点数以保持数值稳定性,并在其余部分利用半精度浮点数加速训练并可以减少内存使用,在评估指标(如准确率)方面仍可以获得同等的训练效果。
 
 .. autoclass:: internlm.core.naive_amp.NaiveAMPModel
@@ -177,10 +177,10 @@ InternLM默认将模型转换为16位精度进行训练(在配置文件中可
             self.linear2 = nn.Linear(1, 4, bias=False)
 
     model = MlpModel()
-    # 将model.linear2设置为fp32模块
+    # mark model.linear2 as an fp32 module
     set_fp32_attr_to_module(model.linear2)
 
-    # 混合精度模型
+    # wrap the model for mixed-precision training
     model = NaiveAMPModel(
         model=model,
         output_to_fp32=True,
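
For reference, the example touched by the second hunk reads end to end as the sketch below. This is a minimal sketch under stated assumptions, not part of the diff: the ``forward`` method and the dummy input are added here for illustration, the import path for ``NaiveAMPModel`` and ``set_fp32_attr_to_module`` follows the ``internlm.core.naive_amp`` references in the docs, and an initialized InternLM distributed context is assumed, since the wrapper resolves its parallel group (``ParallelMode.DATA`` by default) from that context.

    import torch
    from torch import nn

    # import path assumed from the autoclass reference in the docs
    from internlm.core.naive_amp import NaiveAMPModel, set_fp32_attr_to_module


    class MlpModel(nn.Module):
        def __init__(self):
            super().__init__()
            self.linear1 = nn.Linear(4, 1, bias=False)
            self.linear2 = nn.Linear(1, 4, bias=False)

        def forward(self, x):  # hypothetical; the docs only show the layers
            # linear1 runs in fp16; linear2 is marked fp32 below, so the
            # wrapper casts its input up to fp32 automatically
            return self.linear2(self.linear1(x))


    model = MlpModel()

    # keep model.linear2 in fp32 for numerical stability
    set_fp32_attr_to_module(model.linear2)

    # wrap the model: the remaining sub-modules, their inputs, and outputs
    # are cast to fp16; output_to_fp32=True casts the final result back
    model = NaiveAMPModel(
        model=model,
        output_to_fp32=True,
    )

    out = model(torch.rand(2, 4))  # fp32 in, fp16 compute, fp32 out

The usual pattern is to keep only numerically sensitive pieces of the network (normalization layers, losses, output heads) in fp32 and let the bulk of the compute run in half precision.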