# InternLM Transformers

English | [简体中文](./README-zh-Hans.md)

This folder contains the InternLM model in the `transformers` format.

## Weight Conversion

`convert2hf.py` converts saved training weights into the `transformers` format with a single command. Run it from the root directory of the repository:

```bash
python tools/transformers/convert2hf.py --src_folder origin_ckpt/ --tgt_folder hf_ckpt/ --tokenizer ./tools/V7_sft.model
```

Then you can load the converted model and tokenizer with the `from_pretrained` interface:

```python
>>> from transformers import AutoTokenizer, AutoModel
>>> tokenizer = AutoTokenizer.from_pretrained("hf_ckpt/", trust_remote_code=True)
>>> model = AutoModel.from_pretrained("hf_ckpt/", trust_remote_code=True).cuda()
```
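Once loaded, the model can be used for inference. The following is a minimal sketch, assuming the converted checkpoint exposes the `chat` interface defined in `modeling_internlm.py`; the prompt is illustrative:

```python
>>> model = model.eval()
>>> # chat() returns the reply text and the updated conversation history
>>> response, history = model.chat(tokenizer, "Hello! Please introduce yourself.", history=[])
>>> print(response)
```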

`intern_moss_example.py` shows how to fine-tune the model with LoRA on the `fnlp/moss-moon-002-sft` dataset.
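As a rough illustration of that setup, the sketch below wraps the converted model with a LoRA adapter using the `peft` library; the rank, alpha, dropout, and target module names are illustrative assumptions, not the script's exact settings:

```python
from peft import LoraConfig, TaskType, get_peft_model
from transformers import AutoModelForCausalLM

# Load the converted InternLM checkpoint (path is illustrative)
model = AutoModelForCausalLM.from_pretrained("hf_ckpt/", trust_remote_code=True)

# LoRA hyperparameters here are assumptions for illustration only
lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,
    lora_alpha=32,
    lora_dropout=0.1,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)

# Only the injected LoRA adapter weights remain trainable
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```

Training then proceeds with a standard causal-language-modeling loop or `transformers.Trainer` over the tokenized dataset.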