Google Gemini

Google Gemini provides multimodal AI capabilities with fast, efficient processing for conversational AI applications.

Sample configuration

The following example shows a starting llm parameter configuration that you can use when you start a conversational AI agent.


"llm": {
  "url": "https://generativelanguage.googleapis.com/v1beta/models/gemini-2.0-flash:streamGenerateContent?alt=sse&key=<api_key>",
  "system_messages": [
    {
      "parts": [
        {
          "text": "You are a helpful chatbot"
        }
      ],
      "role": "user"
    }
  ],
  "max_history": 32,
  "greeting_message": "Good to see you!",
  "failure_message": "Hold on a second.",
  "params": {
    "model": "gemini-2.0-flash"
  },
  "style": "gemini"
}

Key parameters

  • url: The Gemini API key is passed as the key query parameter in the URL. Get your API key from Google AI Studio. One way to assemble the URL and the rest of the configuration programmatically is sketched after this list.
  • system_messages: Each message uses a parts array of text objects rather than a simple content string, matching Gemini's message format.
  • model: Refer to Gemini models for available models.
  • style: Set to "gemini" to use Gemini's message format.
  • ignore_empty: Set to true so the agent ignores empty responses returned by the model.
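
If you build this configuration in application code rather than pasting the JSON, a minimal sketch along the following lines keeps the API key out of source control. It only reproduces the llm block shown above; the build_gemini_llm_config helper and the GEMINI_API_KEY environment variable are illustrative assumptions, not part of any SDK.

import json
import os


def build_gemini_llm_config(api_key: str, model: str = "gemini-2.0-flash") -> dict:
    """Hypothetical helper: assembles the "llm" block shown in the sample above."""
    base_url = (
        "https://generativelanguage.googleapis.com/v1beta/models/"
        f"{model}:streamGenerateContent"
    )
    return {
        # The API key is passed in the URL query parameter, as noted above.
        "url": f"{base_url}?alt=sse&key={api_key}",
        "system_messages": [
            {
                # Gemini-style messages use a "parts" array of text objects,
                # not a plain "content" string.
                "parts": [{"text": "You are a helpful chatbot"}],
                "role": "user",
            }
        ],
        "max_history": 32,
        "greeting_message": "Good to see you!",
        "failure_message": "Hold on a second.",
        "params": {"model": model},
        "style": "gemini",
        "ignore_empty": True,  # ignore empty model responses (see key parameters above)
    }


if __name__ == "__main__":
    # Assumes GEMINI_API_KEY is set in the environment, e.g. a key from Google AI Studio.
    llm_config = build_gemini_llm_config(os.environ["GEMINI_API_KEY"])
    print(json.dumps({"llm": llm_config}, indent=2))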

For advanced configuration options, model capabilities, and detailed parameter descriptions, see the Google Gemini API documentation.
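
Before pointing the agent at the endpoint, you may want to confirm that the URL and API key work on their own. The snippet below is a minimal sanity check, assuming the requests library is installed and a GEMINI_API_KEY environment variable is set; it calls the same streamGenerateContent endpoint with alt=sse used in the url field and prints the raw SSE data lines it receives.

import os
import requests

# Same endpoint format as the "url" field in the sample configuration.
url = (
    "https://generativelanguage.googleapis.com/v1beta/models/"
    "gemini-2.0-flash:streamGenerateContent"
    f"?alt=sse&key={os.environ['GEMINI_API_KEY']}"
)

# Gemini request bodies use the same parts/text structure as system_messages.
payload = {"contents": [{"role": "user", "parts": [{"text": "Say hello in one sentence."}]}]}

with requests.post(url, json=payload, stream=True, timeout=30) as resp:
    resp.raise_for_status()
    for line in resp.iter_lines(decode_unicode=True):
        # Server-sent events arrive as lines prefixed with "data: ".
        if line and line.startswith("data: "):
            print(line[len("data: "):])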