From 58c3d98d5d5a019551c1b90533df8960c569fb82 Mon Sep 17 00:00:00 2001
From: wangzhihong
Date: Thu, 4 Jul 2024 20:15:07 +0800
Subject: [PATCH] Update README.md

---
 ecosystem/README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/ecosystem/README.md b/ecosystem/README.md
index 8128814..1c31bd9 100644
--- a/ecosystem/README.md
+++ b/ecosystem/README.md
@@ -272,7 +272,7 @@ It is worth mentioning that regardless of which model in the InternLM series you
 
 If you want to build your own RAG application, you don't need to first start the inference service and then configure the IP and port to launch the application like you would with LangChain. Refer to the code below, and with LazyLLM, you can use the internLM series models to build a highly customized RAG application in just ten lines of code, along with document management services:
 
-点击获取import和prompt
+Click here to get imports and prompts
 
 ```python