add docs for tf32 in mixed precision

pull/374/head
yingtongxiong 2023-09-27 11:17:03 +08:00
parent 27930174ae
commit e57a99a810
4 changed files with 101 additions and 54 deletions


@ -8,7 +8,7 @@ msgid ""
msgstr ""
"Project-Id-Version: InternLM \n"
"Report-Msgid-Bugs-To: \n"
"POT-Creation-Date: 2023-09-26 17:04+0800\n"
"POT-Creation-Date: 2023-09-27 10:59+0800\n"
"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language: en\n"
@ -83,3 +83,35 @@ msgstr ""
#: ../../source/mixed_precision.rst:16
msgid "例如:"
msgstr "For example:"
#: ../../source/mixed_precision.rst:40
msgid "TF32训练"
msgstr "TF32 Training"
#: ../../source/mixed_precision.rst:41
msgid "TensorFloat-32(TF32)是Nvidia在Ampere架构GPU上推出的专门运用于TensorCore的一种计算格式。其与其他常用数据格式的比较如下图:"
msgstr "TensorFloat-32 (TF32) is a computation format introduced by Nvidia for Tensor Cores on Ampere architecture GPUs. A comparison with other common data formats is shown below:"
#: ../../source/mixed_precision.rst:47
msgid "使用TF32的前置条件"
msgstr "Prerequisites for using TF32"
#: ../../source/mixed_precision.rst:49
msgid "输入数据类型为FP32,且计算为矩阵乘法及卷积相关运算,才可以使用TF32作为TensorCore的中间计算类型。"
msgstr "The input data type must be FP32, and the computation must be matrix multiplication, convolution, or a related operation, for TF32 to be used as the TensorCore intermediate computation type."
#: ../../source/mixed_precision.rst:51
msgid "Ampere架构的GPU。"
msgstr "An Ampere architecture GPU."
#: ../../source/mixed_precision.rst:53
msgid "InternLM支持使用TF32训练模型,允许用户在config文件中将 ``dtype`` 设置为 ``torch.tf32``。"
msgstr "InternLM supports training models in TF32 and allows users to set ``dtype`` to ``torch.tf32`` in the config file."
#: ../../source/mixed_precision.rst:75
msgid ""
"值得注意的是,TF32仅仅是在使用TensorCore时的一种中间计算格式,并不是一个完全的数据类型。因此,在InternLM中,尽管用户将 "
"``dtype`` 设置成了 ``torch.tf32``,模型的数据类型依旧是 ``torch.float32``。InternLM会针对 "
"``dtype`` 为 ``torch.tf32`` 的情况,设置以下变量来开启TF32训练。"
msgstr "Note that TF32 is only an intermediate computation format used by Tensor Cores, not a full data type. Therefore, in InternLM, even though the user sets ``dtype`` to ``torch.tf32``, the model's data type remains ``torch.float32``. When ``dtype`` is ``torch.tf32``, InternLM sets the following variables to enable TF32 training."


@ -8,7 +8,7 @@ msgid ""
msgstr ""
"Project-Id-Version: InternLM \n"
"Report-Msgid-Bugs-To: \n"
"POT-Creation-Date: 2023-09-11 14:25+0800\n"
"POT-Creation-Date: 2023-09-27 10:59+0800\n"
"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language: en\n"
@ -360,6 +360,28 @@ msgstr ""
"Taking the configuration of the demo training on a single machine with 8 "
"GPUs on slurm as an example, the training result log is shown below:"
#: ../../../usage.md:373
msgid "长文本生成"
msgstr "Long Text Generation"
#: ../../../usage.md:375
msgid ""
"在推理阶段,您可以在模型配置中通过设置 `use_dynamic_ntk_rope=True` 开启 RoPE 的 Dynamic NTK "
"选项,从而使得模型适应长文本输入输出,达到 16K 的外推效果:"
msgstr "During the inference phase, you can enable the Dynamic NTK option of RoPE by setting `use_dynamic_ntk_rope=True` in the model configuration, so that the model can adapt to long text input and output and achieve extrapolation up to 16K:"
#: ../../../usage.md:401
msgid "关于 Dyanmic NTK 的原理,详细请参考"
msgstr "For the principle of Dynamic NTK, please refer to:"
#: ../../../usage.md:403
msgid "https://www.reddit.com/r/LocalLLaMA/comments/14mrgpr/dynamically_scaled_rope_further_increases"
msgstr ""
#: ../../../usage.md:404
msgid "https://kexue.fm/archives/9675"
msgstr ""
#~ msgid "`load_model_only_folder`与`load_ckpt_folder`不能同时设置"
#~ msgstr ""
#~ "`load_model_only_folder` and `load_ckpt_folder` "
#~ "cannot be set at the same time"


@ -34,3 +34,48 @@ InternLM默认将模型转换为16位浮点数类型进行训练在配置文
dtype=torch.bfloat16(),
sync_buffer=False,
)

TF32训练
-----------------

TensorFloat-32(TF32)是Nvidia在Ampere架构GPU上推出的专门运用于TensorCore的一种计算格式。其与其他常用数据格式的比较如下图:

.. figure:: ../../imgs/tf32.png
  :scale: 50%
  :class: with-border

使用TF32的前置条件:

1. 输入数据类型为FP32,且计算为矩阵乘法及卷积相关运算,才可以使用TF32作为TensorCore的中间计算类型。
2. Ampere架构的GPU。
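The two prerequisites above can be expressed as a simple check. This is a minimal sketch, not part of InternLM: the helper `tf32_available` is hypothetical, and it relies on the fact that Ampere and newer GPUs report a CUDA compute capability with major version >= 8.

```python
def tf32_available(compute_capability, input_dtype):
    """Return True if TF32 can serve as the TensorCore intermediate format.

    compute_capability: (major, minor) tuple, e.g. (8, 0) for A100 (Ampere).
    input_dtype: string name of the input tensor dtype, e.g. "float32".
    """
    major, _minor = compute_capability
    # Prerequisite 2: Ampere (sm_80) or a newer architecture.
    if major < 8:
        return False
    # Prerequisite 1: inputs must be FP32; TF32 only replaces the intermediate
    # math in matmul/convolution-style operations, not the storage dtype.
    return input_dtype == "float32"
```

For example, `tf32_available((8, 0), "float32")` is true for an A100 with FP32 inputs, while a Volta GPU such as `(7, 0)` fails the architecture check.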
InternLM支持使用TF32训练模型,允许用户在config文件中将 ``dtype`` 设置为 ``torch.tf32``。

.. code-block:: python

    model = dict(
        checkpoint=False,  # The proportion of layers for activation checkpointing; optional values are True/False/[0-1]
        num_attention_heads=NUM_ATTENTION_HEAD,
        embed_split_hidden=True,
        vocab_size=VOCAB_SIZE,
        embed_grad_scale=1,
        parallel_output=True,
        hidden_size=HIDDEN_SIZE,
        num_layers=NUM_LAYER,
        mlp_ratio=MLP_RATIO,
        apply_post_layer_norm=False,
        dtype="torch.tf32",  # Support: "torch.float16", "torch.half", "torch.bfloat16", "torch.float32", "torch.tf32"
        norm_type="rmsnorm",
        layer_norm_epsilon=1e-5,
        use_flash_attn=True,
        num_chunks=1,  # if num_chunks > 1, interleaved pipeline scheduler is used.
    )
值得注意的是,TF32仅仅是在使用TensorCore时的一种中间计算格式,并不是一个完全的数据类型。因此,在InternLM中,尽管用户将 ``dtype`` 设置成了 ``torch.tf32``,模型的数据类型依旧是 ``torch.float32``。InternLM会针对 ``dtype`` 为 ``torch.tf32`` 的情况,设置以下变量来开启TF32训练。
.. code-block:: python

    torch.backends.cudnn.allow_tf32 = True
    torch.backends.cuda.matmul.allow_tf32 = True
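This dispatch can be illustrated without a GPU. The `resolve_dtype` helper below is a hypothetical sketch, not InternLM's actual implementation: the string ``"torch.tf32"`` resolves to a real dtype of ``torch.float32`` plus the two backend flags shown above, while every other supported string already names a real dtype.

```python
def resolve_dtype(dtype_str):
    """Map a config dtype string to (real_dtype_name, backend_flags).

    TF32 is not a real storage dtype: "torch.tf32" resolves to float32
    with the cuDNN / cuBLAS TF32 flags turned on.
    """
    if dtype_str == "torch.tf32":
        flags = {
            "torch.backends.cudnn.allow_tf32": True,
            "torch.backends.cuda.matmul.allow_tf32": True,
        }
        return "torch.float32", flags
    # Every other supported string is already a real dtype; no flags needed.
    return dtype_str, {}
```

For instance, `resolve_dtype("torch.tf32")` yields `"torch.float32"` with both flags set, whereas `resolve_dtype("torch.bfloat16")` passes the dtype through unchanged.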


@ -1,52 +0,0 @@
TF32训练
==================

InternLM支持使用TF32训练模型。TensorFloat-32(TF32)是Nvidia在Ampere架构GPU上推出的专门运用于TensorCore的一种计算格式。其与其他常用数据格式的比较如下图:

InternLM supports training models using TF32. TensorFloat-32 (TF32) is a computation format introduced by Nvidia for Tensor Cores on Ampere architecture GPUs. Here's a comparison of TF32 with other data formats:

.. figure:: ../../imgs/tf32.png
  :scale: 50%
  :class: with-border

使用TF32的前置条件:

Prerequisites for using TF32:

1. 输入数据类型为FP32,且计算为矩阵乘法及卷积相关运算,才可以使用TF32作为TensorCore的中间计算类型。

   The input data must be of type FP32, and the computation must be matrix multiplication, convolution, or a related operation.

2. Ampere架构的GPU。

   An Ampere architecture GPU.

值得注意的是,TF32仅仅是在使用TensorCore时的一种中间计算格式,并不是一个完全的数据类型。因此,为了区分不同的精度与计算格式( ``BF16``、``FP16``、``FP32``、``TF32`` ),InternLM支持用户在 ``model config`` 中传入 ``torch.tf32`` 来表示想要使用TF32加速运算,本质上数据类型依旧为 ``FP32``。

Note that TF32 is only an intermediate calculation format used by Tensor Cores, not a full data type. To distinguish the different precisions and computation formats ( ``BF16``, ``FP16``, ``FP32``, ``TF32`` ), InternLM allows users to specify ``torch.tf32`` in the model config to use TF32 acceleration, while the underlying dtype is still ``torch.float32``.
.. code-block:: python

    model = dict(
        checkpoint=False,  # The proportion of layers for activation checkpointing; optional values are True/False/[0-1]
        num_attention_heads=NUM_ATTENTION_HEAD,
        embed_split_hidden=True,
        vocab_size=VOCAB_SIZE,
        embed_grad_scale=1,
        parallel_output=True,
        hidden_size=HIDDEN_SIZE,
        num_layers=NUM_LAYER,
        mlp_ratio=MLP_RATIO,
        apply_post_layer_norm=False,
        dtype="torch.tf32",  # Support: "torch.float16", "torch.half", "torch.bfloat16", "torch.float32", "torch.tf32"
        norm_type="rmsnorm",
        layer_norm_epsilon=1e-5,
        use_flash_attn=True,
        num_chunks=1,  # if num_chunks > 1, interleaved pipeline scheduler is used.
    )
InternLM会根据 ``model config`` 中的 ``dtype`` 字符串来判断真正的数据类型。InternLM通过设置以下变量来开启TF32训练。

InternLM determines the real data type from the ``dtype`` string in the ``model config`` and enables TF32 training by setting the following variables:

.. code-block:: python

    torch.backends.cudnn.allow_tf32 = True
    torch.backends.cuda.matmul.allow_tf32 = True