# InternLM Transformers

English | [简体中文](./README-zh-Hans.md)

This folder contains the InternLM model in the Hugging Face `transformers` format.

## Weight Conversion

`convert2hf.py` can convert saved training weights into the `transformers` format with a single command. Execute the command in the root directory of the repository:

```bash
python tools/transformers/convert2hf.py --src origin_ckpt/ --tgt hf_ckpt/ --tokenizer ./tools/V7_sft.model --max_pose 4096
```

Then, you can load it using the `from_pretrained` interface:

```python
>>> from transformers import AutoTokenizer, AutoModel
>>> tokenizer = AutoTokenizer.from_pretrained("hf_ckpt/", trust_remote_code=True)
>>> model = AutoModel.from_pretrained("hf_ckpt/", trust_remote_code=True).cuda()
```
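Once loaded, the model can generate text through the standard `transformers` generation API. Below is a minimal sketch; the prompt text and sampling parameters are illustrative assumptions, and `AutoModelForCausalLM` is used because `generate` requires a language-modeling head:

```python
>>> from transformers import AutoTokenizer, AutoModelForCausalLM
>>> tokenizer = AutoTokenizer.from_pretrained("hf_ckpt/", trust_remote_code=True)
>>> model = AutoModelForCausalLM.from_pretrained("hf_ckpt/", trust_remote_code=True).cuda()
>>> # Illustrative prompt, not taken from this repository
>>> inputs = tokenizer("Hello! Please introduce yourself.", return_tensors="pt").to(model.device)
>>> output_ids = model.generate(**inputs, max_new_tokens=64, do_sample=True, top_p=0.8)
>>> print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```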

`intern_moss_example.py` demonstrates how to fine-tune the model with LoRA on the `fnlp/moss-moon-002-sft` dataset, along the lines of the sketch below.
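For reference, the following is a minimal sketch of how LoRA fine-tuning is typically set up with the `peft` library. The hyperparameters and `target_modules` names are assumptions for illustration and may differ from what `intern_moss_example.py` actually uses:

```python
from peft import LoraConfig, TaskType, get_peft_model
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("hf_ckpt/", trust_remote_code=True)

# Hypothetical LoRA configuration for illustration; the module names
# ("q_proj", "v_proj") and rank values are assumptions, not taken from this repo.
lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,                      # low-rank dimension of the adapter matrices
    lora_alpha=32,            # scaling factor applied to the adapter output
    lora_dropout=0.1,
    target_modules=["q_proj", "v_proj"],
)

# Wrap the base model; only the injected LoRA adapters remain trainable,
# so fine-tuning updates a small fraction of the total parameters.
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```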