# InternLM
👋 Join us on Discord and WeChat
## Introduction

- **200K context window**: Both base and chat models can work with contexts longer than 200K tokens after being sufficiently trained on 32K-context data. Try 200K-context inference with [LMDeploy](./inference/).
- **Outstanding comprehensive performance**: Significantly better than the previous generation across all dimensions, including reasoning, code, chat experience, instruction following, and creative writing.
- **Code interpreter & data analysis**: New state-of-the-art results in using a code interpreter for math problems; also strong at data analysis.
- **Stronger tool use**: Excellent zero-shot and multi-step tool-calling capabilities; works better with the [streaming](docs/chat_format.md#streaming-style) format and also supports the [ReAct](docs/chat_format.md#react-style) format. Try it with [Lagent](./agent/).

## News

[2024.01.17] We released InternLM2-7B and InternLM2-20B along with their corresponding chat models, with stronger capabilities in all dimensions. See the [model zoo below](#model-zoo) for downloads or the [model cards](./model_cards/) for more details.

[2023.12.13] The InternLM-7B-Chat and InternLM-20B-Chat checkpoints have been updated. With an improved finetuning strategy, the new chat models can generate higher-quality responses with greater stylistic diversity.

[2023.09.20] InternLM-20B was released with base and chat versions.

## Model Zoo

| Model | Transformers(HF) | ModelScope(HF) | OpenXLab(HF) | Release Date |
|---|---|---|---|---|
| **InternLM2 Chat 20B** | [🤗internlm/internlm2-chat-20b](https://huggingface.co/internlm/internlm2-chat-20b) | [