diff --git a/ecosystem/README.md b/ecosystem/README.md
index 8128814..1c31bd9 100644
--- a/ecosystem/README.md
+++ b/ecosystem/README.md
@@ -272,7 +272,7 @@ It is worth mentioning that regardless of which model in the InternLM series you
 If you want to build your own RAG application, you don't need to first start the inference service and then configure the IP and port to launch the application like you would with LangChain. Refer to the code below, and with LazyLLM, you can use the internLM series models to build a highly customized RAG application in just ten lines of code, along with document management services:
-点击获取import和prompt
+Click here to get imports and prompts
 ```python
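
The code inside the README's fenced block is not included in this hunk. As a rough sketch of what such a ten-line LazyLLM RAG application can look like, adapted from LazyLLM's public examples (the dataset path, model name, prompt text, and port below are illustrative placeholders, not the README's actual values):

```python
import lazyllm
from lazyllm import bind

# Index a local document folder (path is a placeholder).
documents = lazyllm.Document(dataset_path="/path/to/your/docs")
prompt = ("You are an AI Q&A assistant. Answer the question based on the "
          "given context.")

with lazyllm.pipeline() as ppl:
    # Retrieve the top-3 coarse chunks matching the incoming query.
    ppl.retriever = lazyllm.Retriever(doc=documents, group_name="CoarseChunk",
                                      similarity="bm25_chinese", topk=3)
    # Pack the retrieved nodes and the original query for the prompter.
    ppl.formatter = (lambda nodes, query: dict(context_str=nodes, query=query)) | bind(query=ppl.input)
    # An InternLM-series chat model wrapped with the RAG prompt.
    ppl.llm = lazyllm.TrainableModule("internlm2-chat-7b").prompt(
        lazyllm.ChatPrompter(prompt, extro_keys=["context_str"]))

# Serve the pipeline as a web app, which also exposes document management.
lazyllm.WebModule(ppl, port=23456).start().wait()
```

Note that no separate inference service needs to be launched first: `TrainableModule` handles model deployment itself, and `WebModule` wraps the whole pipeline as a service, which is the contrast with LangChain that the README paragraph draws.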