fix typo docs/ (#4033)

digger yu 2023-06-28 15:30:30 +08:00 committed by GitHub
parent 2d40759a53
commit 769cddcb2c
4 changed files with 4 additions and 4 deletions


@@ -9,7 +9,7 @@ When you only have a few GPUs for large model training tasks, **heterogeneous tr
 ## Usage
-At present, Gemini supports compatibility with ZeRO parallel mode, and it is really simple to use Gemini: Inject the feathures of `GeminiPlugin` into training components with `booster`. More instructions of `booster` please refer to [**usage of booster**](../basics/booster_api.md).
+At present, Gemini supports compatibility with ZeRO parallel mode, and it is really simple to use Gemini: Inject the features of `GeminiPlugin` into training components with `booster`. More instructions of `booster` please refer to [**usage of booster**](../basics/booster_api.md).
 ```python
 from torchvision.models import resnet18
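
To make the corrected sentence concrete, here is a minimal sketch of injecting `GeminiPlugin` into training components with `booster`. The `resnet18` model matches the snippet visible in the diff; the `HybridAdam` optimizer, the launch call, and the hyperparameters are assumptions that may vary across Colossal-AI versions.

```python
import torch
import colossalai
from colossalai.booster import Booster
from colossalai.booster.plugin import GeminiPlugin
from colossalai.nn.optimizer import HybridAdam
from torchvision.models import resnet18

# Set up the distributed environment first (assumed torchrun-style launch).
colossalai.launch_from_torch(config={})

model = resnet18()
criterion = torch.nn.CrossEntropyLoss()
optimizer = HybridAdam(model.parameters(), lr=1e-3)  # Gemini is typically paired with HybridAdam

# booster.boost wraps the components so Gemini manages their memory placement.
booster = Booster(plugin=GeminiPlugin())
model, optimizer, criterion, _, _ = booster.boost(model, optimizer, criterion=criterion)
```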


@@ -150,7 +150,7 @@ Colossal-AI provides its own optimizers, loss functions, and learning rate schedulers. Py
 optimizer = colossalai.nn.Lamb(model.parameters(), lr=1.8e-2, weight_decay=0.1)
 # build loss
 criterion = torch.nn.CrossEntropyLoss()
-# lr_scheduelr
+# lr_scheduler
 lr_scheduler = LinearWarmupLR(optimizer, warmup_steps=50, total_steps=gpc.config.NUM_EPOCHS)
 ```


@@ -303,7 +303,7 @@ colossalai.launch_from_torch(config=args.config)
 # build loss
 criterion = torch.nn.CrossEntropyLoss()
-# lr_scheduelr
+# lr_scheduler
 lr_scheduler = LinearWarmupLR(optimizer, warmup_steps=50, total_steps=gpc.config.NUM_EPOCHS)
 ```


@@ -181,7 +181,7 @@ optimizer = torch.optim.SGD(model.parameters(), lr=1e-2, weight_decay=0.1)
 # build loss
 criterion = torch.nn.CrossEntropyLoss()
-# lr_scheduelr
+# lr_scheduler
 lr_scheduler = LinearWarmupLR(optimizer, warmup_steps=50, total_steps=NUM_EPOCHS)
 ```
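
All three corrected comments sit above the same `LinearWarmupLR` call, so one sketch covers them. Below is a minimal, hedged example of how that scheduler is typically driven; the toy model and the `NUM_EPOCHS` value are hypothetical stand-ins for what the surrounding docs define.

```python
import torch
from colossalai.nn.lr_scheduler import LinearWarmupLR

NUM_EPOCHS = 100  # hypothetical value; the docs read it from gpc.config or a constant
model = torch.nn.Linear(8, 2)  # toy stand-in for the real model in the docs
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2, weight_decay=0.1)
lr_scheduler = LinearWarmupLR(optimizer, warmup_steps=50, total_steps=NUM_EPOCHS)

for epoch in range(NUM_EPOCHS):
    # ... run one training epoch, calling optimizer.step() per batch ...
    lr_scheduler.step()  # ramps the lr up linearly for 50 steps, then decays it linearly
```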