Name | Last commit message | Last commit date
--- | --- | ---
_C | Clean up | 2024-06-07 09:09:29 +00:00
_analyzer | [test] Fix/fix testcase (#5770) | 2024-06-03 15:26:01 +08:00
accelerator | [hotfix] fix typo change MoECheckpintIO to MoECheckpointIO (#5335) | 2024-03-05 21:52:30 +08:00
amp | [npu] change device to accelerator api (#5239) | 2024-01-09 10:20:05 +08:00
auto_parallel | [pre-commit.ci] pre-commit autoupdate (#5572) | 2024-07-01 17:16:41 +08:00
autochunk | [hotfix] Fix examples no pad token & auto parallel codegen bug; (#5606) | 2024-04-18 18:15:50 +08:00
booster | [plugin] hotfix zero plugin (#6036) | 2024-08-28 10:16:48 +08:00
checkpoint_io | [fp8] Merge feature/fp8_comm to main branch of Colossalai (#6016) | 2024-08-22 09:21:34 +08:00
cli | [devops] fix extention building (#5427) | 2024-03-05 15:35:54 +08:00
cluster | [FP8] rebase main (#5963) | 2024-08-06 16:29:37 +08:00
context | [Fix]: implement thread-safety singleton to avoid deadlock for very large-scale training scenarios (#5625) | 2024-04-25 14:45:52 +08:00
device | [Feature] Distributed optimizers: Lamb, Galore, CAME and Adafactor (#5694) | 2024-05-14 13:52:45 +08:00
fx | [test] Fix/fix testcase (#5770) | 2024-06-03 15:26:01 +08:00
inference | [FP8] rebase main (#5963) | 2024-08-06 16:29:37 +08:00
interface | [Feature] Distributed optimizers: Lamb, Galore, CAME and Adafactor (#5694) | 2024-05-14 13:52:45 +08:00
kernel | [NFC] Fix code factors on inference triton kernels (#5743) | 2024-05-21 22:12:15 +08:00
lazy | [fp8] Merge feature/fp8_comm to main branch of Colossalai (#6016) | 2024-08-22 09:21:34 +08:00
legacy | [fp8] Merge feature/fp8_comm to main branch of Colossalai (#6016) | 2024-08-22 09:21:34 +08:00
logging | [fp8] Merge feature/fp8_comm to main branch of Colossalai (#6016) | 2024-08-22 09:21:34 +08:00
moe | [fp8] Moe support fp8 communication (#5977) | 2024-08-09 18:26:02 +08:00
nn | [misc] fix dist logger (#5782) | 2024-06-05 15:04:22 +08:00
pipeline | [fp8] Merge feature/fp8_comm to main branch of Colossalai (#6016) | 2024-08-22 09:21:34 +08:00
quantization | [FP8] unsqueeze scale to make it compatible with torch.compile (#6040) | 2024-08-29 14:49:23 +08:00
shardformer | Merge pull request #6012 from hpcaitech/feature/fp8_comm | 2024-08-27 10:09:43 +08:00
tensor | [fp8] support fp8 amp for hybrid parallel plugin (#5975) | 2024-08-07 18:21:08 +08:00
testing | [fp8] Merge feature/fp8_comm to main branch of Colossalai (#6016) | 2024-08-22 09:21:34 +08:00
utils | Merge pull request #5310 from hpcaitech/feature/npu | 2024-01-29 13:49:39 +08:00
zero | [fp8] Merge feature/fp8_comm to main branch of Colossalai (#6016) | 2024-08-22 09:21:34 +08:00
__init__.py | [devops] remove post commit ci (#5566) | 2024-04-08 15:09:40 +08:00
initialize.py | [FP8] rebase main (#5963) | 2024-08-06 16:29:37 +08:00