mirror of https://github.com/hpcaitech/ColossalAI
fix typo docs/ (#4033)
parent 2d40759a53
commit 769cddcb2c
@@ -9,7 +9,7 @@ When you only have a few GPUs for large model training tasks, **heterogeneous tr
## Usage
-At present, Gemini supports compatibility with ZeRO parallel mode, and it is really simple to use Gemini: Inject the feathures of `GeminiPlugin` into training components with `booster`. More instructions of `booster` please refer to [**usage of booster**](../basics/booster_api.md).
+At present, Gemini supports compatibility with ZeRO parallel mode, and it is really simple to use Gemini: Inject the features of `GeminiPlugin` into training components with `booster`. More instructions of `booster` please refer to [**usage of booster**](../basics/booster_api.md).
```python
from torchvision.models import resnet18
@@ -150,7 +150,7 @@ Colossal-AI provides its own optimizers, loss functions and learning rate schedulers. Py
optimizer = colossalai.nn.Lamb(model.parameters(), lr=1.8e-2, weight_decay=0.1)
# build loss
criterion = torch.nn.CrossEntropyLoss()
-# lr_scheduelr
+# lr_scheduler
lr_scheduler = LinearWarmupLR(optimizer, warmup_steps=50, total_steps=gpc.config.NUM_EPOCHS)
```
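A note on the first hunk above: the corrected sentence says to inject `GeminiPlugin` into the training components through `booster`. For readers who only see this commit and not the full page, a minimal sketch of that wiring might look like the block below. The `HybridAdam` optimizer and the default `GeminiPlugin()` arguments are illustrative assumptions, not part of this diff; the booster usage page linked in the sentence remains the authoritative reference.

```python
import torch
from torchvision.models import resnet18

import colossalai
from colossalai.booster import Booster
from colossalai.booster.plugin import GeminiPlugin
from colossalai.nn.optimizer import HybridAdam

# Launch the distributed environment first (config handling as in the docs being edited).
colossalai.launch_from_torch(config={})

# Plain PyTorch components.
model = resnet18(num_classes=10)
optimizer = HybridAdam(model.parameters(), lr=1e-3)  # assumption: HybridAdam, commonly paired with Gemini
criterion = torch.nn.CrossEntropyLoss()

# Inject Gemini by boosting the components with a GeminiPlugin-backed Booster.
plugin = GeminiPlugin()  # default arguments assumed
booster = Booster(plugin=plugin)
model, optimizer, criterion, _, _ = booster.boost(model, optimizer, criterion)
```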
@@ -303,7 +303,7 @@ colossalai.launch_from_torch(config=args.config)
# build loss
criterion = torch.nn.CrossEntropyLoss()
-# lr_scheduelr
+# lr_scheduler
lr_scheduler = LinearWarmupLR(optimizer, warmup_steps=50, total_steps=gpc.config.NUM_EPOCHS)
```
@@ -181,7 +181,7 @@ optimizer = torch.optim.SGD(model.parameters(), lr=1e-2, weight_decay=0.1)
# build loss
criterion = torch.nn.CrossEntropyLoss()
-# lr_scheduelr
+# lr_scheduler
lr_scheduler = LinearWarmupLR(optimizer, warmup_steps=50, total_steps=NUM_EPOCHS)
```
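The three `lr_scheduler` hunks above build the same optimizer/criterion/scheduler trio, but as truncated excerpts they omit the imports they rely on. Below is a hedged, self-contained version of that snippet, assuming `Lamb` and `LinearWarmupLR` are importable from `colossalai.nn` and `colossalai.nn.lr_scheduler` as in the docs this commit edits, and using a local `NUM_EPOCHS` constant in place of `gpc.config.NUM_EPOCHS`:

```python
import torch
from torchvision.models import resnet18

from colossalai.nn import Lamb                          # assumption: Colossal-AI's own Lamb optimizer
from colossalai.nn.lr_scheduler import LinearWarmupLR   # assumption: import path for LinearWarmupLR

NUM_EPOCHS = 60  # stands in for gpc.config.NUM_EPOCHS from the diffed docs

# build model
model = resnet18(num_classes=10)

# build optimizer
optimizer = Lamb(model.parameters(), lr=1.8e-2, weight_decay=0.1)

# build loss
criterion = torch.nn.CrossEntropyLoss()

# lr_scheduler
lr_scheduler = LinearWarmupLR(optimizer, warmup_steps=50, total_steps=NUM_EPOCHS)
```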