# Integrating with Ollama

Ollama is a powerful framework for running large language models (LLMs) locally, with support for models such as Llama 2, Mistral, and more. LobeChat now integrates with Ollama, so you can easily use the language models provided by Ollama within LobeChat.

This document will guide you through configuring and deploying LobeChat to use Ollama:

## Running Ollama Locally

First, you need to install Ollama. For detailed steps on installing and configuring Ollama, please refer to the [Ollama Website](https://ollama.com).
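
Once Ollama is installed, you may want to pull a model and confirm that the local service is responding before connecting LobeChat. A minimal check might look like the following sketch; `llama2` is only an example model name, and `11434` is Ollama's default port:

```bash
# Download an example model (any model available on ollama.com works here)
ollama pull llama2

# Verify the Ollama API is reachable on its default port;
# this returns the locally available models as JSON
curl http://localhost:11434/api/tags
```
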
## Running LobeChat Locally

Assuming you have already started the Ollama service locally on port `11434`, run the following Docker command to start LobeChat locally:

```bash
docker run -d -p 3210:3210 -e OLLAMA_PROXY_URL=http://host.docker.internal:11434/v1 lobehub/lobe-chat
```
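
Note that `host.docker.internal` resolves automatically on Docker Desktop for macOS and Windows, but it may not exist by default on Linux. A common workaround (assuming Docker Engine 20.10 or later) is to map it to the host gateway explicitly, for example:

```bash
# On Linux, explicitly map host.docker.internal to the host gateway
# so the container can reach the Ollama service running on the host
docker run -d -p 3210:3210 \
  --add-host=host.docker.internal:host-gateway \
  -e OLLAMA_PROXY_URL=http://host.docker.internal:11434/v1 \
  lobehub/lobe-chat
```
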
Now, you can use LobeChat to converse with the local LLM.

For more information on using Ollama in LobeChat, please refer to [Ollama Usage](/en/usage/providers/ollama).