import { Callout, Steps, Tabs } from 'nextra/components';

# Using Ollama in LobeChat

Ollama is a powerful framework for running large language models (LLMs) locally, supporting various language models including Llama 2, Mistral, and more. LobeChat now supports integration with Ollama, meaning you can easily use the language models provided by Ollama to enhance your application within LobeChat.

This document will guide you on how to use Ollama in LobeChat:

<Steps>

### Local Installation of Ollama

First, you need to install Ollama, which supports macOS, Windows, and Linux. Depending on your operating system, choose one of the following installation methods:

<Tabs items={['macOS', 'Linux', 'Windows', 'Docker']}>
  <Tabs.Tab>
    [Download Ollama for macOS](https://ollama.com/download) and unzip it.
  </Tabs.Tab>
  <Tabs.Tab>
    Install using the following command:

    ```bash
    curl -fsSL https://ollama.com/install.sh | sh
    ```

    Alternatively, you can refer to the [Linux manual installation guide](https://github.com/jmorganca/ollama/blob/main/docs/linux.md).
  </Tabs.Tab>
  <Tabs.Tab>
    [Download Ollama for Windows](https://ollama.com/download) and install it.
  </Tabs.Tab>
  <Tabs.Tab>
    If you prefer using Docker, Ollama also provides an official Docker image, which you can pull using the following command (note that pulling the image does not start the service; see the container start sketch at the end of this page):

    ```bash
    docker pull ollama/ollama
    ```
  </Tabs.Tab>
</Tabs>

### Pulling Models to Local with Ollama

After installing Ollama, you can pull models to your local machine, for example llama2:

```bash
ollama pull llama2
```

Ollama supports various models; you can view the available model list in the [Ollama Library](https://ollama.com/library) and choose the appropriate model based on your needs.

### Conversing with the Local LLM in LobeChat

Next, you can start conversing with the local LLM using LobeChat.

</Steps>
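If you installed Ollama through Docker, keep in mind that pulling the image only downloads the runtime; a container still has to be running before you can pull models or connect from LobeChat. A minimal sketch, assuming the default CPU-only image, the standard port 11434, and a named volume for model storage, might look like this:

```bash
# Start the Ollama container in the background, persisting models in the "ollama" volume
docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama

# Pull a model inside the running container, e.g. llama2
docker exec -it ollama ollama pull llama2
```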
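If LobeChat cannot reach your model, you can first confirm that the local Ollama service itself is responding. A minimal sketch, assuming Ollama is running with its default settings on port 11434 and that llama2 has been pulled:

```bash
# List locally installed models; a JSON response confirms the service is up
curl http://localhost:11434/api/tags

# Send a single non-streaming test message to llama2
curl http://localhost:11434/api/chat -d '{
  "model": "llama2",
  "messages": [{ "role": "user", "content": "Hello" }],
  "stream": false
}'
```

If both requests succeed, the service LobeChat connects to is working, and any remaining issues are likely in the LobeChat-side configuration.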