# InternLM Transformers
This folder contains the InternLM model in `transformers` format.
## Weight Conversion
`convert2hf.py` can convert saved training weights into the `transformers` format with a single command. Execute the command in the root directory of the repository:
```bash
python tools/transformers/convert2hf.py --src_folder origin_ckpt/ --tgt_folder hf_ckpt/ --tokenizer ./tools/V7_sft.model
```
Then, you can load it using the `from_pretrained` interface:
```python
>>> from transformers import AutoTokenizer, AutoModel
>>> model = AutoModel.from_pretrained("hf_ckpt/", trust_remote_code=True).cuda()
```
`intern_moss_example.py` demonstrates how to use LoRA to fine-tune the model on the `fnlp/moss-moon-002-sft` dataset; a rough sketch of the setup follows.
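The sketch below illustrates the core of such a LoRA setup. It is an illustration rather than a copy of `intern_moss_example.py`: it assumes the `peft` library, and the target module names (`q_proj`, `v_proj`) are an assumption based on the model's LLaMA-style attention layers.

```python
from peft import LoraConfig, TaskType, get_peft_model
from transformers import AutoModelForCausalLM

# Assumption: a converted InternLM checkpoint in hf_ckpt/.
model = AutoModelForCausalLM.from_pretrained("hf_ckpt/", trust_remote_code=True)

# Attach low-rank adapters to the attention projections. The module names
# (q_proj, v_proj) are assumed from the model's LLaMA-style layers.
lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,               # rank of the low-rank update
    lora_alpha=32,     # scaling factor applied to the update
    lora_dropout=0.1,
    target_modules=["q_proj", "v_proj"],
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the adapter weights are trainable
```

With the adapters attached, the model can be passed to a standard `transformers` `Trainer` for fine-tuning.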