# InternLM Transformers
This folder contains the InternLM model in transformers format.
## Weight Conversion
`convert2hf.py` converts saved training weights into the transformers format with a single command. Run it from the root directory of the repository:

```bash
python tools/transformers/convert2hf.py --src_folder origin_ckpt/ --tgt_folder hf_ckpt/ --tokenizer ./tools/V7_sft.model
```
Then, you can load the converted model with the `from_pretrained` interface:

```python
>>> from transformers import AutoTokenizer, AutoModel
>>> model = AutoModel.from_pretrained("hf_ckpt/", trust_remote_code=True).cuda()
```
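The model code loaded via `trust_remote_code` also provides chat helpers, including `stream_chat` (see `interface.py`). A minimal interactive sketch, assuming the converted checkpoint in `hf_ckpt/` and the usual InternLM `chat`/`stream_chat` signatures; the exact keyword arguments may differ in your checkpoint's remote code:

```python
>>> from transformers import AutoTokenizer, AutoModel
>>> tokenizer = AutoTokenizer.from_pretrained("hf_ckpt/", trust_remote_code=True)
>>> model = AutoModel.from_pretrained("hf_ckpt/", trust_remote_code=True).cuda()
>>> model = model.eval()
>>> # Single-turn chat: returns the reply and the updated history.
>>> response, history = model.chat(tokenizer, "Hello", history=[])
>>> print(response)
>>> # Streaming variant: yields partial replies as they are generated.
>>> for response, history in model.stream_chat(tokenizer, "Hello", history=[]):
...     print(response)
```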
`intern_moss_example.py` shows how to fine-tune the model with LoRA on the `fnlp/moss-moon-002-sft` dataset.
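For orientation, here is a minimal sketch of the LoRA setup using the `peft` library. This is not the script's actual configuration; the rank, alpha, and `target_modules` names are assumptions about InternLM's attention projection layers:

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Load the converted checkpoint (path from the conversion step above).
model = AutoModelForCausalLM.from_pretrained("hf_ckpt/", trust_remote_code=True)

# Hypothetical LoRA configuration; target_modules names are an assumption,
# not taken from intern_moss_example.py itself.
lora_config = LoraConfig(
    r=8,
    lora_alpha=32,
    target_modules=["q_proj", "v_proj"],
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

# Wrap the base model so only the low-rank adapter weights are trainable.
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```

The wrapped model can then be passed to a standard `transformers` `Trainer` for supervised fine-tuning; see `intern_moss_example.py` for the full pipeline.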