mirror of https://github.com/InternLM/InternLM
Update mixed_precision.po
parent 9d0c41e85b
commit 4f40b43b09
@@ -25,7 +25,7 @@ msgstr "Mixed Precision"
 
 #: ../../source/mixed_precision.rst:3
 msgid ""
-"混合精度是指在模型训练的过程中同时使用16位和32位浮点类型,是一种在最小化精度损失的前提下加速模型训练的方法。 "
+"混合精度是指在模型训练的过程中同时使用16位和32位浮点数类型,是一种在最小化精度损失的前提下加速模型训练的方法。"
 "混合精度通过让模型的某些部分使用32位浮点数以保持数值稳定性,并在其余部分利用半精度浮点数加速训练并可以减少内存使用,在评估指标(如准确率)方面仍可以获得同等的训练效果。"
 msgstr ""
 "Mixed precision refers to using both 16-bit and 32-bit floating-point types to train a model, which can accelerate model training while minimizing the accuracy loss. "
@@ -61,11 +61,11 @@ msgid "If True, the buffers are synchronized. Defaults to True."
 msgstr ""
 
 #: ../../source/mixed_precision.rst:8
-msgid "InternLM默认将模型转换为16位精度进行训练(在配置文件中可以设置默认类型为其他数据类型)。在使用混合精度时,需要在构建模型时使用"
+msgid "InternLM默认将模型转换为16位浮点数类型进行训练(在配置文件中可以设置默认类型为其他数据类型)。在使用混合精度时,需要在构建模型时使用"
 msgstr "InternLM converts the model to 16-bit floating-point types for model training by default (the default type can be set to other data types in the configuration file). When using mixed precision, it is necessary to use "
 
 #: ../../source/mixed_precision.rst:14
-msgid "将模型的某个子模块设置为32精度进行训练,InternLM会在模型训练时自动将数据类型转换成需要的精度。"
+msgid "将模型的某个子模块设置为32位浮点数类型进行训练,InternLM会在模型训练时自动将数据类型转换成需要的精度。"
 msgstr "to set a sub-module of the model to 32-bit floating-point type for training, and InternLM will automatically convert the data type to the required precision during model training."
 
 #: ../../source/mixed_precision.rst:16
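
The strings above describe InternLM's mixed-precision scheme: the model is cast to a 16-bit type by default, while selected sub-modules can be kept in 32-bit for numerical stability. The InternLM helper that the msgid at mixed_precision.rst:8 refers to is elided in this hunk, so the following is only a minimal sketch of the same idea in plain PyTorch autocast, not InternLM's actual API; TinyModel and every name in it are hypothetical, and a CUDA device is assumed.

# Sketch only: plain PyTorch AMP standing in for InternLM's own helper,
# which is not shown in this diff. All names here are hypothetical.
import torch
import torch.nn as nn

class TinyModel(nn.Module):
    def __init__(self, dim=64):
        super().__init__()
        self.proj = nn.Linear(dim, dim)  # runs in fp16 under autocast
        self.norm = nn.LayerNorm(dim)    # kept in fp32 for numerical stability

    def forward(self, x):
        x = self.proj(x)
        # Disable autocast so this sub-module computes in full 32-bit
        # precision, mirroring "set a sub-module to 32-bit" above.
        with torch.autocast(device_type="cuda", enabled=False):
            x = self.norm(x.float())
        return x

model = TinyModel().cuda()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
scaler = torch.cuda.amp.GradScaler()  # rescales fp16 gradients to avoid underflow

for _ in range(3):
    batch = torch.randn(8, 64, device="cuda")
    with torch.autocast(device_type="cuda", dtype=torch.float16):
        loss = model(batch).pow(2).mean()
    optimizer.zero_grad()
    scaler.scale(loss).backward()  # backward pass on the scaled loss
    scaler.step(optimizer)         # unscales gradients, then steps
    scaler.update()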