diff --git a/README.md b/README.md
index 13757eece..3963fe2fb 100644
--- a/README.md
+++ b/README.md
@@ -398,10 +398,10 @@ pip install colossalai
 
 **Note: only Linux is supported for now.**
 
-However, if you want to build the PyTorch extensions during installation, you can set `CUDA_EXT=1`.
+However, if you want to build the PyTorch extensions during installation, you can set `BUILD_EXT=1`.
 
 ```bash
-CUDA_EXT=1 pip install colossalai
+BUILD_EXT=1 pip install colossalai
 ```
 
 **Otherwise, CUDA kernels will be built during runtime when you actually need them.**
@@ -429,7 +429,7 @@ By default, we do not compile CUDA/C++ kernels. ColossalAI will build them durin
 If you want to install and enable CUDA kernel fusion (compulsory installation when using fused optimizer):
 
 ```shell
-CUDA_EXT=1 pip install .
+BUILD_EXT=1 pip install .
 ```
 
 For Users with CUDA 10.2, you can still build ColossalAI from source. However, you need to manually download the cub library and copy it to the corresponding directory.
@@ -445,7 +445,7 @@ unzip 1.8.0.zip
 cp -r cub-1.8.0/cub/ colossalai/kernel/cuda_native/csrc/kernels/include/
 
 # install
-CUDA_EXT=1 pip install .
+BUILD_EXT=1 pip install .
 ```
diff --git a/docs/source/en/get_started/installation.md b/docs/source/en/get_started/installation.md
index 18607a34c..f9c8fe475 100644
--- a/docs/source/en/get_started/installation.md
+++ b/docs/source/en/get_started/installation.md
@@ -23,7 +23,7 @@ pip install colossalai
 If you want to build PyTorch extensions during installation, you can use the command below. Otherwise, the PyTorch extensions will be built during runtime.
 
 ```shell
-CUDA_EXT=1 pip install colossalai
+BUILD_EXT=1 pip install colossalai
 ```
@@ -39,7 +39,7 @@ cd ColossalAI
 pip install -r requirements/requirements.txt
 
 # install colossalai
-CUDA_EXT=1 pip install .
+BUILD_EXT=1 pip install .
 ```
 
-If you don't want to install and enable CUDA kernel fusion (compulsory installation when using fused optimizer), just don't specify the `CUDA_EXT`:
+If you don't want to install and enable CUDA kernel fusion (compulsory installation when using fused optimizer), just don't specify `BUILD_EXT=1`:
 
 ```shell
 pip install .
 ```
@@ -61,7 +61,7 @@ unzip 1.8.0.zip
 cp -r cub-1.8.0/cub/ colossalai/kernel/cuda_native/csrc/kernels/include/
 
 # install
-CUDA_EXT=1 pip install .
+BUILD_EXT=1 pip install .
 ```
diff --git a/docs/source/zh-Hans/get_started/installation.md b/docs/source/zh-Hans/get_started/installation.md
index e75e42530..9e4f34707 100755
--- a/docs/source/zh-Hans/get_started/installation.md
+++ b/docs/source/zh-Hans/get_started/installation.md
@@ -20,10 +20,10 @@ pip install colossalai
 
 **注:现在只支持Linux。**
 
-如果你想同时安装PyTorch扩展的话,可以添加`CUDA_EXT=1`。如果不添加的话,PyTorch扩展会在运行时自动安装。
+如果你想同时安装PyTorch扩展的话,可以添加`BUILD_EXT=1`。如果不添加的话,PyTorch扩展会在运行时自动安装。
 
 ```shell
-CUDA_EXT=1 pip install colossalai
+BUILD_EXT=1 pip install colossalai
 ```
 
 ## 从源安装
@@ -38,10 +38,10 @@ cd ColossalAI
 pip install -r requirements/requirements.txt
 
 # install colossalai
-CUDA_EXT=1 pip install .
+BUILD_EXT=1 pip install .
 ```
 
-如果您不想安装和启用 CUDA 内核融合(使用融合优化器时强制安装),您可以不添加`CUDA_EXT=1`:
+如果您不想安装和启用 CUDA 内核融合(使用融合优化器时强制安装),您可以不添加`BUILD_EXT=1`:
 
 ```shell
 pip install .
 ```
@@ -60,7 +60,7 @@ unzip 1.8.0.zip
 cp -r cub-1.8.0/cub/ colossalai/kernel/cuda_native/csrc/kernels/include/
 
 # install
-CUDA_EXT=1 pip install .
+BUILD_EXT=1 pip install .
 ```
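Both the old and the new flag are ordinary environment variables consumed at `pip install` time; the rename changes the name only, not the mechanism. A minimal sketch of that install-time-switch pattern (illustrative only — this is not ColossalAI's actual `setup.py` logic, and the echoed messages are invented for the example):

```shell
#!/bin/sh
# Illustrative sketch: how an install-time switch such as BUILD_EXT is
# typically read. Unset or "0" means the work is deferred to runtime.
if [ "${BUILD_EXT:-0}" = "1" ]; then
    echo "building PyTorch extensions at install time"
else
    echo "deferring CUDA kernel build to first use"
fi
```

Run as `BUILD_EXT=1 sh check.sh` the first branch fires; run without the variable, the build is deferred — the same way `BUILD_EXT=1 pip install .` versus plain `pip install .` behaves in the docs above.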