InternLM/internlm/model
Latest commit: 853becfb6e by ytxiong
feat(*): support fp32 training (#155)
* support float32 training

* fix lint

* add adaptation in model/utils.py

* remove some unnecessary code

* fix lint

* feat(optim): add support for fp32 zero

* Revert "Merge pull request #2 from SolenoidWGT/fp32_zero"

This reverts commit 53fc50b0e5, reversing
changes made to 40f24d0a73.

revert commit

* merge develop

* Update utils.py

* support fp32 in zero optimizer

* modify the dtype

---------

Co-authored-by: wangguoteng.p <wangguoteng925@qq.com>
2023-08-04 16:05:30 +08:00
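Taken together, the commits above route a configurable parameter dtype through the model utilities and the ZeRO optimizer so training can run in full fp32 instead of mixed precision. The sketch below is a rough illustration only: the `cfg` dict, `DTYPE_MAP`, and `resolve_dtype` helper are hypothetical, not the repository's actual API.

import torch

# Minimal sketch of dtype selection for fp32 training, assuming an
# InternLM-style config where the parameter dtype is given as a string.
# DTYPE_MAP and resolve_dtype are hypothetical helpers for illustration.
DTYPE_MAP = {
    "torch.float32": torch.float32,
    "torch.float16": torch.float16,
    "torch.bfloat16": torch.bfloat16,
}

def resolve_dtype(name: str) -> torch.dtype:
    # Default to full precision when the string is unrecognized.
    return DTYPE_MAP.get(name, torch.float32)

cfg = {"dtype": "torch.float32"}  # fp32 training: skip fp16/bf16 casting

model = torch.nn.Linear(8, 8).to(dtype=resolve_dtype(cfg["dtype"]))
print(model.weight.dtype)  # torch.float32

Defaulting to torch.float32 makes an unrecognized dtype string fail safe toward full precision rather than silently dropping to a half-precision format.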
File                      Last commit                                                                     Date
__init__.py               feat(model/metrics.py): support calculating accuracy and perplexity m… (#91)   2023-07-26 16:22:10 +08:00
embedding.py              [Dev] Pull Main (#139)                                                          2023-07-27 10:20:21 +08:00
linear.py                 feat(*): support fp32 training (#155)                                           2023-08-04 16:05:30 +08:00
loss.py                   initial commit                                                                  2023-07-06 12:55:23 +08:00
metrics.py                feat(*): support not-flash-attn for pp and no-pp (#145)                         2023-07-28 16:13:04 +08:00
modeling_internlm.py      refactor(*): refactor the code with no-apex (#170)                              2023-08-03 11:24:12 +08:00
multi_head_attention.py   feat(*): support fp32 training (#155)                                           2023-08-04 16:05:30 +08:00
norm.py                   feat(*): support no apex (#166)                                                 2023-08-02 20:32:38 +08:00
utils.py                  feat(*): support fp32 training (#155)                                           2023-08-04 16:05:30 +08:00