Mirror of https://github.com/hpcaitech/ColossalAI
Latest commit:

* Fix torch int32 dtype
* Fix flash-attn import
* Add generalized model test
* Remove exposed path to model
* Add default value for use_flash_attn
* Rename model test

Signed-off-by: char-1ee <xingjianli59@gmail.com>
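The commit notes above mention fixing the flash-attn import and adding a default value for `use_flash_attn`. Below is a minimal sketch of that general pattern, assuming a guarded optional import with a plain PyTorch fallback; the function name `attention_forward` and the flag handling are illustrative, not the actual ColossalAI code.

```python
# Hypothetical sketch, not the ColossalAI implementation: guard the optional
# flash-attn import and give use_flash_attn a safe default.
import torch
import torch.nn.functional as F

try:
    from flash_attn import flash_attn_func  # optional dependency
    HAS_FLASH_ATTN = True
except ImportError:
    HAS_FLASH_ATTN = False


def attention_forward(q, k, v, use_flash_attn: bool = False):
    """Run attention, falling back to PyTorch SDPA when flash-attn is unavailable."""
    if use_flash_attn and HAS_FLASH_ATTN:
        # flash_attn_func expects (batch, seqlen, nheads, headdim) tensors.
        return flash_attn_func(q, k, v)
    # Fallback path expects (batch, nheads, seqlen, headdim) tensors.
    return F.scaled_dot_product_attention(q, k, v)
```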
Files in this directory:

* __init__.py
* _operation.py
* attn.py
* dropout.py
* embedding.py
* linear.py
* loss.py
* normalization.py
* parallel_module.py
* qkv_fused_linear.py
* utils.py