ColossalAI/tests/test_shardformer/test_layer
Kun Lin 8af29ee47a [shardformer] support vision transformer (#4096)
Commit message:
* first v of vit shardformer
* keep vit
* update
* vit shard add vitattention vitlayer
* update num head shard para
* finish test for vit
* add new_model_class & postprocess
* add vit readme
* delete old files & fix the conflict
* fix sth
2023-07-04 16:05:01 +08:00
File                                  Last commit                                                        Date
test_dist_crossentropy.py             [shardformer] refactored the shardformer layer structure (#4053)   2023-07-04 16:05:01 +08:00
test_dropout.py                       [shardformer] refactored the shardformer layer structure (#4053)   2023-07-04 16:05:01 +08:00
test_embedding.py                     [shardformer] support module saving and loading (#4062)            2023-07-04 16:05:01 +08:00
test_layernorm.py                     [shardformer] support vision transformer (#4096)                   2023-07-04 16:05:01 +08:00
test_linear_1d.py                     [shardformer] supported fused qkv checkpoint (#4073)               2023-07-04 16:05:01 +08:00
test_linearconv_1d.py                 [shardformer] Add layernorm (#4072)                                2023-07-04 16:05:01 +08:00
test_vocab_parallel_embedding_1d.py   [shardformer] support module saving and loading (#4062)            2023-07-04 16:05:01 +08:00