ColossalAI/colossalai
flybird11111 79718fae04
[shardformer] llama support DistCrossEntropy (#5176)
* llama support dist-cross

* fix

* test ci

* fix ci

* [Colossal-Llama-2] Add finetuning Colossal-Llama-2 example (#4878)

* Add finetuning Colossal-Llama-2 example

* Add finetuning Colossal-Llama-2 example and support NEFTuning

* Add inference example and refine NEFTune

* Modify readme file

* Update the imports

---------

Co-authored-by: Xu Yuanchen <yuanchen.xu00@gmail.com>
Co-authored-by: Camille Zhong <44392324+Camille7777@users.noreply.github.com>

---------

Co-authored-by: Yuanchen <70520919+chengeharrison@users.noreply.github.com>
Co-authored-by: Xu Yuanchen <yuanchen.xu00@gmail.com>
Co-authored-by: Camille Zhong <44392324+Camille7777@users.noreply.github.com>
2023-12-13 01:39:14 +08:00
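
The headline change above, DistCrossEntropy for LLaMA, computes the cross-entropy loss while the vocabulary dimension of the logits stays sharded across tensor-parallel ranks, so no rank has to materialize the full-vocab logits. Below is a minimal PyTorch sketch of that general idea (Megatron-style vocab-parallel cross entropy); it is illustrative only, not the ColossalAI implementation, and the function name `vocab_parallel_cross_entropy` and its arguments are assumptions.

```python
# A minimal sketch of vocab-parallel cross entropy, assuming torch.distributed
# is initialized and each rank holds one contiguous slice of the vocabulary.
# Not the ColossalAI DistCrossEntropy implementation; ignore_index handling is omitted.
import torch
import torch.distributed as dist


def vocab_parallel_cross_entropy(local_logits, targets, vocab_start, vocab_end, group=None):
    """local_logits: [num_tokens, local_vocab]; targets: [num_tokens] of global vocab ids."""
    # 1) Stabilize with the max over the *full* (sharded) vocabulary.
    logits_max = local_logits.max(dim=-1).values
    dist.all_reduce(logits_max, op=dist.ReduceOp.MAX, group=group)
    shifted = local_logits - logits_max.unsqueeze(-1)

    # 2) Denominator: sum of exp over the full vocabulary.
    sum_exp = shifted.exp().sum(dim=-1)
    dist.all_reduce(sum_exp, op=dist.ReduceOp.SUM, group=group)

    # 3) Numerator: only the rank that owns the target id contributes its logit.
    in_range = (targets >= vocab_start) & (targets < vocab_end)
    local_idx = (targets - vocab_start).clamp(0, local_logits.size(-1) - 1)
    target_logits = shifted.gather(-1, local_idx.unsqueeze(-1)).squeeze(-1)
    target_logits = torch.where(in_range, target_logits, torch.zeros_like(target_logits))
    dist.all_reduce(target_logits, op=dist.ReduceOp.SUM, group=group)

    # 4) Per-token loss: log(sum exp) - target logit; averaged over tokens.
    return (sum_exp.log() - target_logits).mean()
```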
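
The nested #4878 commit also mentions NEFTune support in the Colossal-LLaMA-2 finetuning example. NEFTune adds uniform noise to the input embeddings during training, scaled by alpha / sqrt(seq_len * hidden_dim). A hedged sketch under that assumption, not the example's actual code:

```python
# A minimal sketch of NEFTune (noisy embedding finetuning) as a forward hook on
# a HuggingFace-style embedding layer; not the Colossal-LLaMA-2 example's code.
import torch


def neftune_hook(module, inputs, output, alpha=5.0):
    # Only perturb during training; output is the embedding tensor [batch, seq, hidden].
    if module.training:
        scale = alpha / (output.size(1) * output.size(2)) ** 0.5
        output = output + torch.zeros_like(output).uniform_(-scale, scale)
    return output


# Hypothetical usage: attach to the input embedding layer, remove after finetuning.
# handle = model.get_input_embeddings().register_forward_hook(neftune_hook)
# ... training loop ...
# handle.remove()
```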
_C [setup] support pre-build and jit-build of cuda kernels (#2374) 2023-01-06 20:50:26 +08:00
_analyzer [misc] update pre-commit and run all files (#4752) 2023-09-19 14:20:26 +08:00
amp [npu] add npu support for gemini and zero (#5067) 2023-11-20 16:12:41 +08:00
auto_parallel [npu] add npu support for gemini and zero (#5067) 2023-11-20 16:12:41 +08:00
autochunk [misc] update pre-commit and run all files (#4752) 2023-09-19 14:20:26 +08:00
booster [gemini] hotfix NaN loss while using Gemini + tensor_parallel (#5150) 2023-12-08 11:10:51 +08:00
checkpoint_io [pipeline,shardformer] Fix p2p efficiency in pipeline, allow skipping loading weight not in weight_map when `strict=False`, fix llama flash attention forward, add flop estimation by megatron in llama benchmark (#5017) 2023-11-16 20:15:59 +08:00
cli [bug] Fix the version check bug in colossalai run when generating the cmd. (#4713) 2023-09-22 10:50:47 +08:00
cluster [gemini] gemini support tensor parallelism. (#4942) 2023-11-10 10:15:16 +08:00
context [moe] merge moe into main (#4978) 2023-11-02 02:21:24 +00:00
device [npu] add npu support for hybrid plugin and llama (#5090) 2023-11-22 19:23:21 +08:00
fx [misc] update pre-commit and run all files (#4752) 2023-09-19 14:20:26 +08:00
inference [Hotfix] Fix model policy matching strategy in ShardFormer (#5064) 2023-11-22 11:19:39 +08:00
interface [lazy] support from_pretrained (#4801) 2023-09-26 11:04:11 +08:00
kernel fix thrust-transform-reduce error (#5078) 2023-11-21 15:09:35 +08:00
lazy [doc] add lazy init docs (#4808) 2023-09-27 10:24:04 +08:00
legacy [shardformer]: support gpt-j, falcon, Mistral and add interleaved pipeline for bert (#5088) 2023-11-28 16:54:42 +08:00
logging [misc] update pre-commit and run all files (#4752) 2023-09-19 14:20:26 +08:00
moe [hotfix]: modify create_ep_hierarchical_group and add test (#5032) 2023-11-17 10:53:00 +08:00
nn [npu] add npu support for gemini and zero (#5067) 2023-11-20 16:12:41 +08:00
pipeline [shardformer]: support gpt-j, falcon, Mistral and add interleaved pipeline for bert (#5088) 2023-11-28 16:54:42 +08:00
shardformer [shardformer] llama support DistCrossEntropy (#5176) 2023-12-13 01:39:14 +08:00
tensor fix (#5158) 2023-12-05 14:28:36 +08:00
testing [npu] add npu support for hybrid plugin and llama (#5090) 2023-11-22 19:23:21 +08:00
utils [shardformer]: support gpt-j, falcon, Mistral and add interleaved pipeline for bert (#5088) 2023-11-28 16:54:42 +08:00
zero [shardformer]: support gpt-j, falcon, Mistral and add interleaved pipeline for bert (#5088) 2023-11-28 16:54:42 +08:00
__init__.py [misc] update pre-commit and run all files (#4752) 2023-09-19 14:20:26 +08:00
initialize.py [npu] add npu support for gemini and zero (#5067) 2023-11-20 16:12:41 +08:00