# Change Log
All notable changes to this project will be documented in this file.
## v0.0.2 | 2022-02
### Added
- Unified distributed layers
- MoE support
- DevOps tools such as GitHub Actions and code review automation
- New official project website
### Changed
- Refactored the APIs for usability, flexibility, and modularity
- Adapted PyTorch AMP for tensor parallelism
- Refactored utilities for tensor parallelism and pipeline parallelism
- Separated benchmarks and examples into independent repositories
- Updated pipeline parallelism to support both non-interleaved and interleaved schedules
- Refactored installation scripts for convenience
### Fixed
- ZeRO level 3 runtime error
- Incorrect calculation in gradient clipping
## v0.0.1 beta | 2021-10
The first beta version of Colossal-AI. Thanks to all contributors for their efforts in implementing the system.
### Added
- Initial architecture of the system
- Features such as tensor parallelism, gradient clipping, and gradient accumulation