[doc] moved doc test command to bottom (#3075)

pull/3080/head
Frank Lee 2023-03-09 18:10:45 +08:00 committed by GitHub
parent 91ccf97514
commit 416a50dbd7
5 changed files with 14 additions and 7 deletions


@@ -72,7 +72,7 @@ Meanwhile, you need to ensure the `sidebars.json` is updated such that it contai
### 🧹 Doc Testing
Every documentation page is tested to ensure it works. You need to add the following line to the top of your file and replace `$command` with the actual command. Note that the markdown file will be converted into a Python file: if you have a `demo.md` file, the generated test file will be `demo.py`, so you should use `demo.py` in your command, e.g. `python demo.py`.
Every documentation page is tested to ensure it works. You need to add the following line to the **bottom of your file** and replace `$command` with the actual command. Note that the markdown file will be converted into a Python file: if you have a `demo.md` file, the generated test file will be `demo.py`, so you should use `demo.py` in your command, e.g. `python demo.py`.
```markdown
<!-- doc-test-command: $command -->
```
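
For illustration, a minimal doc following this convention might end as sketched below; `demo.md`, its contents, and the `python demo.py` command are the placeholder example from the paragraph above, not a file touched by this commit:

```markdown
# Demo

Tutorial prose and runnable Python snippets go here; the doc test converts
this whole file into demo.py before executing the command below.

<!-- doc-test-command: python demo.py -->
```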


@@ -1,4 +1,3 @@
<!-- doc-test-command: torchrun --standalone --nproc_per_node=1 nvme_offload.py -->
# NVMe offload
Author: Hongxin Liu
@@ -259,3 +258,6 @@ NVME offload saves about 294 MB memory. Note that enabling `pin_memory` of Gemin
{{ autodoc:colossalai.nn.optimizer.HybridAdam }}
{{ autodoc:colossalai.nn.optimizer.CPUAdam }}
<!-- doc-test-command: torchrun --standalone --nproc_per_node=1 nvme_offload.py -->


@@ -1,12 +1,10 @@
<!-- doc-test-command: echo "installation.md does not need test" -->
# Setup
Requirements:
- PyTorch >= 1.11 (PyTorch 2.x in progress)
- Python >= 3.7
- CUDA >= 11.0
If you encounter any problem about installation, you may want to raise an [issue](https://github.com/hpcaitech/ColossalAI/issues/new/choose) in this repository.
@@ -47,3 +45,6 @@ If you don't want to install and enable CUDA kernel fusion (compulsory installat
```shell
CUDA_EXT=1 pip install .
```
<!-- doc-test-command: echo "installation.md does not need test" -->
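
Taken together, the hunks in this commit illustrate the two variants of the bottom-of-file convention. The sketch below uses hypothetical file names (`my_feature.md`, `faq.md`) purely for illustration; the `torchrun` and `echo` commands mirror the ones shown in the diffs above:

```markdown
<!-- bottom of a runnable tutorial: my_feature.md is converted to my_feature.py -->
<!-- doc-test-command: torchrun --standalone --nproc_per_node=1 my_feature.py -->

<!-- bottom of a page with nothing to execute -->
<!-- doc-test-command: echo "faq.md does not need test" -->
```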


@@ -1,4 +1,3 @@
<!-- doc-test-command: torchrun --standalone --nproc_per_node=1 nvme_offload.py -->
# NVMe offload
Author: Hongxin Liu
@@ -247,3 +246,6 @@ NVME offload saves about 294 MB of memory. Note that using Gemini's `pin_memory`
{{ autodoc:colossalai.nn.optimizer.HybridAdam }}
{{ autodoc:colossalai.nn.optimizer.CPUAdam }}
<!-- doc-test-command: torchrun --standalone --nproc_per_node=1 nvme_offload.py -->


@@ -5,7 +5,7 @@
- PyTorch >= 1.11 (PyTorch 2.x support in progress)
- Python >= 3.7
- CUDA >= 11.0
If you encounter any installation problems, you can raise an [issue](https://github.com/hpcaitech/ColossalAI/issues/new/choose) in this repository.
## Install from PyPI
@@ -44,3 +44,5 @@ pip install .
```shell
NO_CUDA_EXT=1 pip install .
```
<!-- doc-test-command: echo "installation.md does not need test" -->