ColossalAI/colossalai/shardformer/layer
Latest commit 8554585a5f by Li Xingjian: [Inference] Fix flash-attn import and add model test (#5794)
* Fix torch int32 dtype
* Fix flash-attn import
* Add generalized model test
* Remove exposed path to model
* Add default value for use_flash_attn
* Rename model test

Signed-off-by: char-1ee <xingjianli59@gmail.com>
2024-06-12 14:13:50 +08:00
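The commit bullets mention guarding the flash-attn import and giving use_flash_attn a default value. The sketch below illustrates that general pattern only; it is not the code in attn.py, and the helper name attention_forward and the fallback to torch.nn.functional.scaled_dot_product_attention are illustrative assumptions.

```python
# Minimal sketch (assumed, not the repository's actual implementation):
# treat flash-attn as an optional dependency and fall back to a plain
# PyTorch path, with `use_flash_attn` defaulting to False as in the
# "Add default value for use_flash_attn" bullet above.
import torch

try:
    from flash_attn import flash_attn_func  # provided by the flash-attn package
    HAS_FLASH_ATTN = True
except ImportError:
    HAS_FLASH_ATTN = False


def attention_forward(q, k, v, use_flash_attn: bool = False):
    """Dispatch to flash-attn when requested and available; otherwise fall
    back to torch.nn.functional.scaled_dot_product_attention."""
    if use_flash_attn and HAS_FLASH_ATTN:
        # flash_attn_func expects (batch, seqlen, nheads, headdim) tensors
        return flash_attn_func(q, k, v)
    # The fallback expects (batch, nheads, seqlen, headdim); transpose accordingly
    q, k, v = (t.transpose(1, 2) for t in (q, k, v))
    out = torch.nn.functional.scaled_dot_product_attention(q, k, v)
    return out.transpose(1, 2)
```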
__init__.py [shardformer] refactor embedding resize (#5603) 2024-04-18 16:10:18 +08:00
_operation.py [shardformer] Sequence Parallelism Optimization (#5533) 2024-04-03 17:15:47 +08:00
attn.py [Inference] Fix flash-attn import and add model test (#5794) 2024-06-12 14:13:50 +08:00
dropout.py [misc] update pre-commit and run all files (#4752) 2023-09-19 14:20:26 +08:00
embedding.py [Inference] Fix bugs and docs for feat/online-server (#5598) 2024-05-08 15:20:53 +00:00
linear.py [shardformer] refactor embedding resize (#5603) 2024-04-18 16:10:18 +08:00
loss.py [Shardformer] Add parallel output for shardformer models(bloom, falcon) (#5702) 2024-05-21 11:07:13 +08:00
normalization.py [shardformer] fix chatglm implementation (#5644) 2024-04-25 14:41:17 +08:00
parallel_module.py [shardformer] refactor embedding resize (#5603) 2024-04-18 16:10:18 +08:00
qkv_fused_linear.py [shardformer] Sequence Parallelism Optimization (#5533) 2024-04-03 17:15:47 +08:00
utils.py [shardformer] Sequence Parallelism Optimization (#5533) 2024-04-03 17:15:47 +08:00