import { Callout, Steps } from 'nextra/components';

# Using Google Gemma Model

[Gemma](https://blog.google/technology/developers/gemma-open-models/) is an open large language model (LLM) from Google, designed for a wide range of natural language processing tasks. Now, with the integration of LobeChat and [Ollama](https://ollama.com/), you can easily use Google Gemma in LobeChat.

This document will guide you through using Google Gemma in LobeChat:

<Steps>
### Install Ollama locally

First, you need to install Ollama. For the installation process, please refer to the [Ollama usage documentation](/en/usage/providers/ollama).

### Pull the Google Gemma model to local using Ollama

After installing Ollama, you can pull the Google Gemma model with the following command (the default tag, which corresponds to the 7B model, is used here as an example):

```bash
ollama pull gemma
```

### Select the Gemma model

On the session page, open the model panel and select the Gemma model.

<Callout type={'info'}>
  If you do not see the Ollama provider in the model selection panel, please refer to
  [Integrating with Ollama](/en/self-hosting/examples/ollama) to learn how to enable the Ollama
  provider in LobeChat.
</Callout>
</Steps>

Now you can start conversing with the local Gemma model in LobeChat.
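Before opening LobeChat, you may want to confirm that Ollama is actually serving the model. A minimal sanity check from the command line, assuming Ollama is running on its default local endpoint `http://localhost:11434`:

```bash
# List the models Ollama has pulled locally; "gemma" should appear in the output
ollama list

# Send a single non-streaming prompt to the local Gemma model via Ollama's REST API
# (assumes the default Ollama endpoint http://localhost:11434)
curl http://localhost:11434/api/generate -d '{
  "model": "gemma",
  "prompt": "Say hello in one sentence.",
  "stream": false
}'
```

If the `curl` call returns a JSON response containing generated text, the model is ready and LobeChat should be able to reach it through the same endpoint.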
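If you self-host LobeChat with Docker and the Ollama provider still does not appear, a common cause is that the container cannot reach Ollama on the host machine. The sketch below shows one typical setup; it assumes the `OLLAMA_PROXY_URL` environment variable and the `lobehub/lobe-chat` image described in the Ollama integration guide linked above:

```bash
# Run LobeChat in Docker and point it at the Ollama instance on the host
# (OLLAMA_PROXY_URL is LobeChat's setting for a custom Ollama endpoint;
# see the linked Ollama integration guide for the authoritative configuration)
docker run -d -p 3210:3210 \
  -e OLLAMA_PROXY_URL=http://host.docker.internal:11434 \
  lobehub/lobe-chat
```

Note that `host.docker.internal` resolves to the host from inside the container on Docker Desktop; on Linux you may need to add `--add-host=host.docker.internal:host-gateway` to the `docker run` command.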