mirror of https://github.com/InternLM/InternLM
Latest commit:

* use fp16 in instruction (#80)
* delete torch_dtype of README's example code (#100)
* refactor the forward for rotary embedding

Co-authored-by: WRH <12756472+wangruohui@users.noreply.github.com>
Co-authored-by: x54-729 <45304952+x54-729@users.noreply.github.com>
* __init__.py
* embedding.py
* linear.py
* loss.py
* modeling_internlm.py
* multi_head_attention.py
* utils.py
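The commit above mentions refactoring the forward pass for rotary embedding (presumably the RoPE code in embedding.py). As a rough illustration of what a rotary-embedding forward computes, here is a minimal NumPy sketch; the function name and array layout are assumptions for illustration, not the repository's actual implementation:

```python
import numpy as np

def rotary_embedding(x, base=10000.0):
    """Apply rotary position embedding (RoPE) to x of shape (seq_len, dim).

    Illustrative sketch only -- not the code from embedding.py.
    Each even/odd pair of channels is rotated by an angle that
    depends on the token position and the channel's frequency.
    """
    seq_len, dim = x.shape
    # One frequency per channel pair, decaying geometrically with depth.
    inv_freq = 1.0 / (base ** (np.arange(0, dim, 2) / dim))
    # Rotation angle for every (position, frequency) combination.
    angles = np.outer(np.arange(seq_len), inv_freq)  # (seq_len, dim // 2)
    cos, sin = np.cos(angles), np.sin(angles)
    x1, x2 = x[:, 0::2], x[:, 1::2]
    # 2-D rotation applied pairwise; positions 0 are left unchanged.
    out = np.empty_like(x)
    out[:, 0::2] = x1 * cos - x2 * sin
    out[:, 1::2] = x1 * sin + x2 * cos
    return out
```

In attention code this transform is typically applied to the query and key projections before the dot product, so that relative positions enter the attention scores through the rotation angles. Because each pair is only rotated, the norm of every token vector is preserved.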