import { Callout, Steps } from 'nextra/components';

# Using Google Gemma Model

<Image
  alt={'Using Gemma in LobeChat'}
  src={'https://github.com/lobehub/lobe-chat/assets/28616219/e636cb41-5b7f-4949-a236-1cc1633bd223'}
  cover
  rounded
/>

[Gemma](https://blog.google/technology/developers/gemma-open-models/) is a family of open-source large language models (LLMs) from Google, designed as a general-purpose, flexible foundation for a wide range of natural language processing tasks. Thanks to LobeChat's integration with [Ollama](https://ollama.com/), you can now easily use Google Gemma in LobeChat.

This document will guide you through using Google Gemma in LobeChat:

<Steps>

### Install Ollama locally

First, you need to install Ollama. For the installation process, please refer to the [Ollama usage documentation](/en/usage/providers/ollama).
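
For reference, a common way to do this on Linux is the official install script from the Ollama website; on macOS and Windows, use the installers from [ollama.com](https://ollama.com/) instead:

```bash
# Official Ollama install script for Linux (see the Ollama docs for other platforms)
curl -fsSL https://ollama.com/install.sh | sh
```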

### Pull the Google Gemma model locally using Ollama

After installing Ollama, you can pull the Google Gemma model to your machine with the following command, using the 7B model as an example:

```bash
ollama pull gemma
```
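
The bare `gemma` name pulls the default tag from the Ollama model library. If you want a specific model size, you can pull a tagged variant instead; for example, the lighter 2B model (exact tag names follow the Ollama model library and may change over time):

```bash
# Pull the smaller 2B variant of Gemma instead of the default tag
ollama pull gemma:2b
```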

<Image
  alt={'Pulling Gemma model using Ollama'}
  src={'https://github.com/lobehub/lobe-chat/assets/28616219/7049a811-a08b-45d3-8491-970f579c2ebd'}
  width={791}
  height={473}
/>
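
Once the pull completes, you can confirm that the model is available locally:

```bash
# List models installed locally; gemma should appear in the output
ollama list
```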

### Select Gemma model

On the session page, open the model selection panel and select the Gemma model.

<Image
  alt={'Selecting Gemma model in the model selection panel'}
  src={'https://github.com/lobehub/lobe-chat/assets/28616219/c91d0c18-a21f-41f6-b5cc-94d29faeb797'}
  width={791}
  height={629}
  bordered
/>

<Callout type={'info'}>
  If you do not see the Ollama provider in the model selection panel, please refer to [Integrating
  with Ollama](/en/self-hosting/examples/ollama) to learn how to enable the Ollama provider in
  LobeChat.
</Callout>

</Steps>

Now, you can start conversing with the local Gemma model using LobeChat.
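
If the model does not respond, it can help to verify from the terminal that Ollama is actually serving Gemma. Ollama exposes a local HTTP API, by default on port 11434; the request below is a quick sanity check rather than part of the LobeChat setup:

```bash
# Ask the local Ollama server for a short, non-streaming completion from Gemma
curl http://localhost:11434/api/generate -d '{
  "model": "gemma",
  "prompt": "Say hello in one sentence.",
  "stream": false
}'
```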