ColossalAI/colossalai/shardformer/layer
Latest commit: 3e2b6132b7 ([moe] clean legacy code) by hxwang, 2024-08-01 10:06:59 +08:00
| File | Last commit | Date |
| --- | --- | --- |
| `__init__.py` | [Feature] Enable PP + SP for llama (#5868) | 2024-07-09 18:05:20 +08:00 |
| `_operation.py` | [ShardFormer] Add Ulysses Sequence Parallelism support for Command-R, Qwen2 and ChatGLM (#5897) | 2024-07-10 11:34:25 +08:00 |
| `attn.py` | [shardformer] hotfix attn mask (#5947) | 2024-07-29 19:10:06 +08:00 |
| `dropout.py` | [misc] update pre-commit and run all files (#4752) | 2023-09-19 14:20:26 +08:00 |
| `embedding.py` | [Inference] Fix bugs and docs for feat/online-server (#5598) | 2024-05-08 15:20:53 +00:00 |
| `linear.py` | [shardformer] refactor embedding resize (#5603) | 2024-04-18 16:10:18 +08:00 |
| `loss.py` | [Feature] Enable PP + SP for llama (#5868) | 2024-07-09 18:05:20 +08:00 |
| `normalization.py` | Remove CohereLayerNorm and use existing layernorm | 2024-06-18 02:32:41 +00:00 |
| `parallel_module.py` | [shardformer] refactor embedding resize (#5603) | 2024-04-18 16:10:18 +08:00 |
| `qkv_fused_linear.py` | Add n_fused as an input from native_module (#5894) | 2024-07-23 23:15:39 +08:00 |
| `utils.py` | [shardformer] Sequence Parallelism Optimization (#5533) | 2024-04-03 17:15:47 +08:00 |
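
These files hold the parallel building blocks that shardformer substitutes for native PyTorch layers. As a minimal sketch of how they are consumed, the snippet below replaces the projections of a two-layer MLP with the column-/row-parallel linears from `linear.py` via `ParallelModule.from_native_module`; the attribute names (`up_proj`, `down_proj`) are hypothetical, and the exact keyword arguments of `from_native_module` are assumptions based on common Megatron-style splits, so consult `linear.py` for the real signatures.

```python
# Sketch only: shows the intended usage pattern of the layers in this
# directory, not a verbatim ColossalAI recipe. Attribute names and the
# from_native_module keyword arguments are assumptions.
import torch.nn as nn
import torch.distributed as dist

from colossalai.shardformer.layer import Linear1D_Col, Linear1D_Row


def shard_mlp(mlp: nn.Module, process_group: dist.ProcessGroup) -> None:
    """Swap an MLP's linears for their tensor-parallel counterparts in place.

    Column-parallel for the up-projection and row-parallel for the
    down-projection is the standard Megatron-style split, so the two
    shards compose without an intermediate all-gather.
    """
    # Hypothetical attribute names; a real policy would match the model's
    # module tree (e.g. LlamaMLP.gate_proj / up_proj / down_proj).
    mlp.up_proj = Linear1D_Col.from_native_module(
        mlp.up_proj, process_group=process_group
    )
    mlp.down_proj = Linear1D_Row.from_native_module(
        mlp.down_proj, process_group=process_group
    )
```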