InternLM/internlm/model
jiaxingli bbb5651582
fix(model): change model_type `LLAMA` to `LLAMA2` (#539)
* support hf llama
* importerror
* modeling
* fix bug
2023-12-13 17:24:45 +08:00
__init__.py feat(model): support llama model with checkpoint loading (#532) 2023-12-11 16:25:24 +08:00
embedding.py fix(model/embedding.py): ci lint check error (#345) 2023-09-21 14:46:22 +08:00
linear.py feat(linear): optimize mlp by using jit (#321) 2023-09-19 14:57:43 +08:00
loss.py initial commit 2023-07-06 12:55:23 +08:00
metrics.py fix(metric): add metric dtype control (#533) 2023-12-11 19:36:31 +08:00
modeling_internlm.py add output embedding tf32 option (#523) 2023-12-06 13:50:59 +08:00
modeling_llama.py fix(model): change model_type `LLAMA` to `LLAMA2` (#539) 2023-12-13 17:24:45 +08:00
modeling_moe.py fix(moe): remove norm&gate force sync (#448) 2023-11-01 11:29:55 +08:00
moe.py Doc(moe): add documentation for moe training (#411) 2023-10-19 10:01:12 +08:00
multi_head_attention.py feat(model): add rope_base interface (#512) 2023-11-23 16:30:14 +08:00
norm.py Merge develop to main (#233) 2023-08-24 22:03:04 +08:00
utils.py fix(moe): remove norm&gate force sync (#448) 2023-11-01 11:29:55 +08:00
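
The latest commit renames the model_type registered for modeling_llama.py from LLAMA to LLAMA2. As a rough illustration of why that string matters to a training config, the sketch below shows a minimal, hypothetical model_type registry; the names MODEL_REGISTRY, register_model, build_model, and build_llama2 are assumptions made for illustration and are not the actual InternLM API. Only the LLAMA2 string comes from the commit above.

```python
# Minimal, hypothetical sketch of a model_type registry (not the InternLM API).
MODEL_REGISTRY = {}


def register_model(model_type):
    """Return a decorator that maps a model_type string to a model builder."""
    def wrapper(builder):
        MODEL_REGISTRY[model_type] = builder
        return builder
    return wrapper


@register_model("LLAMA2")  # string renamed from "LLAMA" per the latest commit
def build_llama2(**kwargs):
    # Stand-in for the real entry point in modeling_llama.py.
    return ("llama2", kwargs)


def build_model(model_type, **kwargs):
    """Construct a model from the model_type string a config would provide."""
    try:
        builder = MODEL_REGISTRY[model_type]
    except KeyError:
        raise ValueError(f"unknown model_type {model_type!r}") from None
    return builder(**kwargs)


if __name__ == "__main__":
    # A config still using model_type = "LLAMA" would now raise ValueError.
    print(build_model("LLAMA2", num_layers=32, hidden_size=4096))
```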