Community Examples
Community-driven Examples is an initiative that allows users to share their own examples with the Colossal-AI community, fostering a sense of community and making it easy for others to access and benefit from shared work. The primary goal of community-driven examples is to build a community-maintained collection of diverse and exotic functionalities on top of the Colossal-AI package.
If a community example doesn't work as expected, you can open an issue and @-mention the author to report it.
Example | Description | Code Example | Colab | Author |
---|---|---|---|---|
RoBERTa | RoBERTa for SFT and prompt-based model training | RoBERTa | - | YY Lin (Moore Threads) |
TransformerEngine FP8 | FP8 training with NVIDIA TransformerEngine | TransformerEngine FP8 | - | Kirthi Shankar Sivamani (NVIDIA) |
... | ... | ... | ... | ... |
Looking for Examples
You are welcome to open an issue to share your insights and needs.
How to get involved
To join our community-driven initiative, please visit the Colossal-AI examples directory, review the provided information, and explore the codebase.
To contribute, create a new issue outlining your proposed feature or enhancement, and our team will review it and provide feedback. If you are confident in your changes, you can also submit a PR directly. We look forward to collaborating with you on this exciting project!