
[doc] updated installation command (#5389)

Frank Lee committed 9 months ago via GitHub, commit 705a62a565
Changed files:
1. README.md (8 changed lines)
2. docs/source/en/get_started/installation.md (6 changed lines)
3. docs/source/zh-Hans/get_started/installation.md (10 changed lines)

README.md

@@ -398,10 +398,10 @@ pip install colossalai
 **Note: only Linux is supported for now.**
 
-However, if you want to build the PyTorch extensions during installation, you can set `CUDA_EXT=1`.
+However, if you want to build the PyTorch extensions during installation, you can set `BUILD_EXT=1`.
 
 ```bash
-CUDA_EXT=1 pip install colossalai
+BUILD_EXT=1 pip install colossalai
 ```
 
 **Otherwise, CUDA kernels will be built during runtime when you actually need them.**
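
The renamed variable is an ordinary environment variable, so both usual shell forms apply; a minimal sketch of the two install modes described in the hunk above (standard shell behavior, nothing beyond the diff is assumed):

```bash
# One-off: set BUILD_EXT only for this command, building the CUDA/C++
# extensions eagerly at install time (Linux only).
BUILD_EXT=1 pip install colossalai

# Default: install without the flag; CUDA kernels are then compiled
# lazily at runtime, the first time they are actually needed.
pip install colossalai
```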
@@ -429,7 +429,7 @@ By default, we do not compile CUDA/C++ kernels. ColossalAI will build them durin
 If you want to install and enable CUDA kernel fusion (compulsory installation when using fused optimizer):
 
 ```shell
-CUDA_EXT=1 pip install .
+BUILD_EXT=1 pip install .
 ```
 
 For Users with CUDA 10.2, you can still build ColossalAI from source. However, you need to manually download the cub library and copy it to the corresponding directory.
@@ -445,7 +445,7 @@ unzip 1.8.0.zip
 cp -r cub-1.8.0/cub/ colossalai/kernel/cuda_native/csrc/kernels/include/
 
 # install
-CUDA_EXT=1 pip install .
+BUILD_EXT=1 pip install .
 ```
 
 <p align="right">(<a href="#top">back to top</a>)</p>
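
The CUDA 10.2 hunk shows only the tail of the workaround; a consolidated sketch of that path follows. The cub download URL is an assumption (the fetch step is elided from the hunk above); the remaining steps are taken from the diff:

```bash
# Assumption: cub 1.8.0 fetched from the NVIDIA/cub GitHub tag archive;
# the diff above only shows the unzip/copy/install steps.
wget https://github.com/NVIDIA/cub/archive/1.8.0.zip
unzip 1.8.0.zip
cp -r cub-1.8.0/cub/ colossalai/kernel/cuda_native/csrc/kernels/include/

# install with extensions built at install time
BUILD_EXT=1 pip install .
```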

docs/source/en/get_started/installation.md

@@ -23,7 +23,7 @@ pip install colossalai
 If you want to build PyTorch extensions during installation, you can use the command below. Otherwise, the PyTorch extensions will be built during runtime.
 
 ```shell
-CUDA_EXT=1 pip install colossalai
+BUILD_EXT=1 pip install colossalai
 ```
@@ -39,7 +39,7 @@ cd ColossalAI
 pip install -r requirements/requirements.txt
 
 # install colossalai
-CUDA_EXT=1 pip install .
+BUILD_EXT=1 pip install .
 ```
 
 If you don't want to install and enable CUDA kernel fusion (compulsory installation when using fused optimizer), just don't specify the `CUDA_EXT`:
@@ -61,7 +61,7 @@ unzip 1.8.0.zip
 cp -r cub-1.8.0/cub/ colossalai/kernel/cuda_native/csrc/kernels/include/
 
 # install
-CUDA_EXT=1 pip install .
+BUILD_EXT=1 pip install .
 ```
 
 <!-- doc-test-command: echo "installation.md does not need test" -->
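
Because the commit renames the variable rather than aliasing it, scripts that still export `CUDA_EXT=1` presumably lose their effect and fall back to the runtime build. A hypothetical migration one-liner (the `scripts/` path is a placeholder; review matches before bulk-editing):

```bash
# Find local shell scripts that still use the old flag name and rename
# it in place (hypothetical directory; sed -i edits files directly).
grep -rl 'CUDA_EXT=1' scripts/ | xargs sed -i 's/CUDA_EXT=1/BUILD_EXT=1/g'
```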

docs/source/zh-Hans/get_started/installation.md

@@ -20,10 +20,10 @@ pip install colossalai
 **注:现在只支持Linux。**
 
-如果你想同时安装PyTorch扩展的话,可以添加`CUDA_EXT=1`。如果不添加的话,PyTorch扩展会在运行时自动安装。
+如果你想同时安装PyTorch扩展的话,可以添加`BUILD_EXT=1`。如果不添加的话,PyTorch扩展会在运行时自动安装。
 
 ```shell
-CUDA_EXT=1 pip install colossalai
+BUILD_EXT=1 pip install colossalai
 ```
 
 ## 从源安装
@@ -38,10 +38,10 @@ cd ColossalAI
 pip install -r requirements/requirements.txt
 
 # install colossalai
-CUDA_EXT=1 pip install .
+BUILD_EXT=1 pip install .
 ```
 
-如果您不想安装和启用 CUDA 内核融合(使用融合优化器时强制安装),您可以不添加`CUDA_EXT=1`:
+如果您不想安装和启用 CUDA 内核融合(使用融合优化器时强制安装),您可以不添加`BUILD_EXT=1`:
 
 ```shell
 pip install .
@@ -60,7 +60,7 @@ unzip 1.8.0.zip
 cp -r cub-1.8.0/cub/ colossalai/kernel/cuda_native/csrc/kernels/include/
 
 # install
-CUDA_EXT=1 pip install .
+BUILD_EXT=1 pip install .
 ```
 
 <!-- doc-test-command: echo "installation.md does not need test" -->
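
For reference, the changed Chinese lines mirror the English ones: "If you also want to build the PyTorch extensions at install time, add `BUILD_EXT=1`; otherwise the PyTorch extensions are built automatically at runtime", and "If you do not want to install and enable CUDA kernel fusion (compulsory when using the fused optimizer), simply omit `BUILD_EXT=1`". To decide whether the CUDA 10.2 cub workaround applies, a quick toolkit check (newer CUDA toolkits bundle cub, so the manual copy is only needed on older ones):

```bash
# Print the installed CUDA toolkit release; if it reports 10.2,
# perform the cub download/copy steps shown in the hunk above first.
nvcc --version | grep -i release
```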
