# InternLM
👋 join us on Discord and WeChat
## Introduction

InternLM is an open-sourced, lightweight training framework that aims to support model pre-training without extensive dependencies. With a single codebase, it supports pre-training on large-scale clusters with thousands of GPUs, and fine-tuning on a single GPU, while achieving remarkable performance optimizations. InternLM achieves nearly 90% acceleration efficiency when training on 1024 GPUs.

Based on the InternLM training framework, we have released two open-sourced pretrained models: InternLM-7B and InternLM-20B.

## News

[2023.09.20] InternLM-20B is released with base and chat versions.

[2023.08.22] InternLM-7B-Chat v1.1 is released with code interpreter and function calling capabilities. You can try it with [Lagent](https://github.com/InternLM/lagent).

## Model Zoo

Our models are released on three platforms: Transformers, ModelScope, and OpenXLab.

| Model | Transformers | ModelScope | OpenXLab | Release Date |
|---|---|---|---|---|
| **InternLM Chat 20B** | [🤗internlm/internlm-chat-20b](https://huggingface.co/internlm/internlm-20b-chat) | [
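
As a minimal sketch of loading one of these checkpoints through the Transformers platform (assuming the `internlm/internlm-chat-20b` repository ID from the table above and the `chat` helper that InternLM's custom modeling code ships with the weights):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Repository ID taken from the Model Zoo table above.
# trust_remote_code is required because InternLM ships custom
# modeling code alongside the weights.
model_id = "internlm/internlm-chat-20b"

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    trust_remote_code=True,
    device_map="auto",  # requires the `accelerate` package
)
model = model.eval()

# The chat checkpoints expose a `chat` helper in their custom code.
response, history = model.chat(tokenizer, "Hello! Who are you?", history=[])
print(response)
```

The same checkpoints can be pulled from ModelScope or OpenXLab instead; only the download path differs, not the model itself.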