# SOME DESCRIPTIVE TITLE.
# Copyright (C) 2023, InternLM Team
# This file is distributed under the same license as the InternLM package.
# FIRST AUTHOR <EMAIL@ADDRESS>, 2023.
#
msgid ""
msgstr ""
"Project-Id-Version: InternLM \n"
"Report-Msgid-Bugs-To: \n"
"POT-Creation-Date: 2023-09-27 10:59+0800\n"
"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language: en\n"
"Language-Team: en <LL@li.org>\n"
"Plural-Forms: nplurals=2; plural=(n != 1);\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=utf-8\n"
"Content-Transfer-Encoding: 8bit\n"
"Generated-By: Babel 2.12.1\n"

#: ../../source/mixed_precision.rst:2
msgid "混合精度"
msgstr "Mixed Precision"

#: ../../source/mixed_precision.rst:3
msgid ""
"混合精度是指在模型训练的过程中同时使用16位和32位浮点数类型,是一种在最小化精度损失的前提下加速模型训练的方法。 "
"混合精度通过让模型的某些部分使用32位浮点数以保持数值稳定性,并在其余部分利用半精度浮点数加速训练并可以减少内存使用,在评估指标(如准确率)方面仍可以获得同等的训练效果。"
msgstr ""
"Mixed precision refers to using both 16-bit and 32-bit floating-point "
"types during model training, a technique that accelerates training while "
"minimizing the loss of accuracy. Mixed precision keeps certain parts of "
"the model in 32-bit floating point to maintain numerical stability, and "
"uses half-precision floating point in the remaining parts to speed up "
"training and reduce memory usage, while still achieving equivalent "
"results on evaluation metrics such as accuracy."

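# NOTE: a minimal sketch of the mixed-precision recipe described above, using
# plain PyTorch autocast rather than InternLM's own AMP wrapper; `model`,
# `optimizer`, `data`, and `target` are hypothetical placeholders:
#
#     import torch
#
#     scaler = torch.cuda.amp.GradScaler()   # rescales the loss so that fp16
#                                            # gradients do not underflow
#     with torch.cuda.amp.autocast():        # fp16 where safe, fp32 elsewhere
#         loss = model(data, target)
#     scaler.scale(loss).backward()
#     scaler.step(optimizer)                 # unscales, then optimizer.step()
#     scaler.update()
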
#: internlm.core.naive_amp.NaiveAMPModel:1 of
msgid ""
"This is a wrapper class for a model that automatically casts the model, "
"its inputs, and outputs into fp16. It also provides options to cast the "
"output back to fp32 and to synchronize buffers."
msgstr ""

#: internlm.core.naive_amp.NaiveAMPModel of
msgid "参数"
msgstr "Parameters"

#: internlm.core.naive_amp.NaiveAMPModel:4 of
msgid "The model to be wrapped and cast into fp16."
msgstr ""

#: internlm.core.naive_amp.NaiveAMPModel:6 of
msgid "If True, the output of this module is cast into fp32. Defaults to True."
msgstr ""

#: internlm.core.naive_amp.NaiveAMPModel:8 of
msgid ""
"The parallel group mode used in this module. Defaults to "
"``ParallelMode.DATA``."
msgstr ""

#: internlm.core.naive_amp.NaiveAMPModel:11 of
msgid "If True, the buffers are synchronized. Defaults to True."
msgstr ""

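# NOTE: a hypothetical construction of the wrapper documented above; the
# keyword names are inferred from the parameter descriptions and are
# assumptions, not a verbatim API reference:
#
#     from internlm.core.context import ParallelMode   # import path assumed
#     from internlm.core.naive_amp import NaiveAMPModel
#
#     wrapped = NaiveAMPModel(
#         model=model,                      # module to wrap and cast into fp16
#         output_to_fp32=True,              # cast outputs back to fp32 (default)
#         parallel_mode=ParallelMode.DATA,  # parallel group mode (default)
#         sync_buffer=True,                 # synchronize buffers (default)
#     )
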
#: ../../source/mixed_precision.rst:8
msgid "InternLM默认将模型转换为16位浮点数类型进行训练(在配置文件中可以设置默认类型为其他数据类型)。在使用混合精度时,需要在构建模型时使用"
msgstr ""
"InternLM converts the model to 16-bit floating-point types for training "
"by default (the default type can be set to other data types in the "
"configuration file). When using mixed precision, it is necessary, when "
"building the model, to use "

#: ../../source/mixed_precision.rst:14
msgid "将模型的某个子模块设置为32位浮点数类型进行训练,InternLM会在模型训练时自动将数据类型转换成需要的精度。"
msgstr ""
"to set a sub-module of the model to 32-bit floating-point type for "
"training; InternLM will automatically convert the data type to the "
"required precision during training."

#: ../../source/mixed_precision.rst:16
msgid "例如:"
msgstr "For example:"

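# NOTE: the code block that follows "For example:" in mixed_precision.rst is
# not extracted into this PO file. A hypothetical sketch of pinning one
# sub-module to fp32 at construction time; the helper name
# `set_fp32_attr_to_module` is an assumption, not confirmed by this file:
#
#     import torch
#     from internlm.core.naive_amp import set_fp32_attr_to_module  # assumed
#
#     class MlpModel(torch.nn.Module):
#         def __init__(self):
#             super().__init__()
#             self.linear1 = torch.nn.Linear(4, 1, bias=False)
#             self.linear2 = torch.nn.Linear(1, 4, bias=False)
#             # keep this sub-module in 32-bit floating point for stability
#             set_fp32_attr_to_module(self.linear2)
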
#: ../../source/mixed_precision.rst:40
msgid "TF32训练"
msgstr "TF32 Training"

#: ../../source/mixed_precision.rst:41
msgid "TensorFloat-32(TF32)是Nvidia在Ampere架构GPU上推出的专门运用于TensorCore的一种计算格式。其与其他常用数据格式的比较如下图:"
msgstr "TensorFloat-32 (TF32) is a computation format introduced by Nvidia on Ampere architecture GPUs, designed specifically for TensorCore. A comparison with other common data formats is shown in the figure below:"

#: ../../source/mixed_precision.rst:47
msgid "使用TF32的前置条件:"
msgstr "Prerequisites for using TF32:"

#: ../../source/mixed_precision.rst:49
msgid "输入数据类型为FP32,且计算为矩阵乘法及卷积相关运算,才可以使用TF32作为TensorCore的中间计算类型。"
msgstr "TF32 can be used as the intermediate computation type of TensorCore only when the input data type is FP32 and the computation consists of matrix multiplication or convolution-related operations."

#: ../../source/mixed_precision.rst:51
msgid "Ampere架构的GPU。"
msgstr "An Ampere architecture GPU."

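# NOTE: a small sketch for checking the hardware prerequisite above; Ampere
# (and newer) GPUs report a CUDA compute capability major version of 8 or
# more:
#
#     import torch
#
#     major, _minor = torch.cuda.get_device_capability()
#     assert major >= 8, "TF32 TensorCore math requires an Ampere-class GPU"
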
#: ../../source/mixed_precision.rst:53
msgid "InternLM支持使用TF32训练模型,允许用户在config文件中将 ``dtype`` 设置为 ``torch.tf32``。"
msgstr "InternLM supports training models in TF32 and allows users to set ``dtype`` to ``torch.tf32`` in the config file."

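# NOTE: a hypothetical config fragment in the style of InternLM's example
# configs; the surrounding model keys are omitted, only `dtype` matters here:
#
#     model = dict(
#         dtype="torch.tf32",  # alternatives: "torch.float16",
#                              # "torch.bfloat16", "torch.float32"
#     )
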
#: ../../source/mixed_precision.rst:75
msgid ""
"值得注意的是,TF32仅仅是在使用TensorCore时的一种中间计算格式,并不是一个完全的数据类型。因此,在InternLM中,尽管用户将 "
"``dtype`` 设置成了 ``torch.tf32``,模型的数据类型依旧是 ``torch.float32``。InternLM会针对 "
"``dtype`` 为 ``torch.tf32`` 的情况,设置以下变量来开启TF32训练。"
msgstr ""
"Note that TF32 is only an intermediate computation format used with "
"TensorCore, not a complete data type. Therefore, in InternLM, even if the "
"user sets ``dtype`` to ``torch.tf32``, the data type of the model remains "
"``torch.float32``. When ``dtype`` is ``torch.tf32``, InternLM sets the "
"following variables to enable TF32 training."

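# NOTE: the concrete variables follow this paragraph in mixed_precision.rst
# and are not extracted into this PO file. The standard PyTorch switches for
# TF32, which InternLM presumably sets in this case, are:
#
#     import torch
#
#     torch.backends.cudnn.allow_tf32 = True        # TF32 for cuDNN convolutions
#     torch.backends.cuda.matmul.allow_tf32 = True  # TF32 for CUDA matmul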