

Overview

The integration of Google Gemini with OmnifitAI brings cutting-edge generative AI capabilities to your chatbot experience. Gemini, Google’s state-of-the-art multimodal LLM (Large Language Model), enables natural, context-aware, and multi-turn conversations that go beyond standard automation. This guide outlines the step-by-step process to integrate Google Gemini with OmnifitAI.

Prerequisites

Before you begin, ensure the following:
  • You have admin-level access to your OmnifitAI account.
  • A valid Google Cloud Project with the Gemini API enabled.
  • Access to Gemini API credentials (API key or OAuth token).
  • Access to OmnifitAI’s LLM Integration settings.

Integration Steps

Step 1: Enable Gemini API on Google Cloud

  1. Visit Google Cloud Console.
  2. Select or create a new project.
  3. Go to APIs & Services > Library.
  4. Search for Gemini API (or Vertex AI API) and click Enable.

Step 2: Generate API Credentials

  1. Go to APIs & Services > Credentials.
  2. Click “Create Credentials” → choose API Key.
  3. Copy the API key and keep it secure.
🔐 Tip: For production-level integrations, use service accounts and OAuth for enhanced security.
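Building on the tip above, a common way to keep the key secure is to avoid hardcoding it at all. The sketch below (a suggestion, not an OmnifitAI requirement; the `GEMINI_API_KEY` variable name is an assumption) reads the key from an environment variable and masks it before it ever reaches a log line.

```python
import os
from typing import Optional

def load_gemini_key(env_var: str = "GEMINI_API_KEY") -> Optional[str]:
    """Read the Gemini API key from the environment instead of source code."""
    return os.environ.get(env_var)

def mask_key(key: str) -> str:
    """Show only the last four characters, safe for debug output."""
    return "*" * (len(key) - 4) + key[-4:] if len(key) > 4 else "****"

key = load_gemini_key()
if key:
    print(f"Loaded key: {mask_key(key)}")
else:
    print("GEMINI_API_KEY is not set; export it before connecting the bot.")
```

Storing the key in an environment variable (or a secret manager) means it never appears in version control, and masking keeps it out of logs if you print diagnostics during setup.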

Step 3: Add Gemini in OmnifitAI

  1. Log in to your OmnifitAI dashboard.
  2. Go to “Third-Party Apps Integrations”.
  3. Search for “Google Gemini”.
  4. Click Connect and enter your API key.
Congratulations! You have successfully integrated OmnifitAI with Google Gemini!
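Once connected, you may want to confirm that the key itself works before testing the bot. Below is a minimal, hypothetical smoke test in Python that calls the public Gemini REST endpoint directly with only the standard library. The endpoint path and model name reflect Google’s Generative Language API at the time of writing; check Google’s current documentation, as versions and model names change. The `GEMINI_API_KEY` environment variable name is an assumption, not an OmnifitAI requirement.

```python
import json
import os
import urllib.request

# Hypothetical smoke test against the public Generative Language REST API.
# Verify the endpoint version and model name against Google's current docs.
API_URL = ("https://generativelanguage.googleapis.com/v1beta/"
           "models/gemini-1.5-flash:generateContent")

def smoke_test(prompt: str = "Say hello in one word.") -> str:
    """Send one prompt to Gemini; skip gracefully if no key is configured."""
    key = os.environ.get("GEMINI_API_KEY")
    if not key:
        return "skipped: GEMINI_API_KEY is not set"
    body = json.dumps({"contents": [{"parts": [{"text": prompt}]}]}).encode()
    req = urllib.request.Request(
        f"{API_URL}?key={key}",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        data = json.load(resp)
    # Extract the first candidate's text from the response payload.
    return data["candidates"][0]["content"]["parts"][0]["text"]

print(smoke_test())
```

If the call returns a generated reply, the same key should work when entered in the OmnifitAI dashboard; an HTTP 400/403 error usually means the key is invalid or the API is not enabled on the project.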

FAQs

Where can I generate my Gemini API key?

You can generate your Gemini API key from the Google Cloud Console:
  • Go to APIs & Services > Credentials
  • Click “Create Credentials” > API Key
  • Ensure that the Gemini (Vertex AI) API is enabled in the selected project.
Can I connect one bot to multiple LLM models?

No, a bot can be integrated with only one LLM model at a time.
What happens if my API key becomes invalid or I hit quota limits?

If the key becomes invalid or quota limits are hit:
  • The bot will fall back to a default message.
  • An error notification will appear in the integration logs under Settings > LLM Integration.
  • You will receive a platform alert to take corrective action.
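The fallback behavior described above can be sketched in a few lines of Python. This is an illustrative pattern, not OmnifitAI’s actual implementation: `answer_with_fallback`, `DEFAULT_REPLY`, and the stub backend are all hypothetical names.

```python
DEFAULT_REPLY = "Sorry, I can't answer right now. Please try again later."

def answer_with_fallback(call_llm, prompt: str) -> str:
    """Wrap an LLM call so the bot degrades gracefully on failure."""
    try:
        return call_llm(prompt)
    except Exception as exc:  # e.g. invalid key, quota exceeded, network error
        # A real platform would also write this to its integration logs
        # and raise an alert, as described above.
        print(f"LLM call failed: {exc}")
        return DEFAULT_REPLY

# Usage with a stub backend that simulates a quota failure:
def broken_backend(prompt: str) -> str:
    raise RuntimeError("quota exceeded")

print(answer_with_fallback(broken_backend, "Hi"))
```

The key point is that the user always receives a reply: the exception is caught, recorded, and replaced with a default message rather than surfacing as a broken conversation.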