Generative AI with Flutter Series 1

Jesutoni Aderibigbe
6 min read · Jun 18, 2024


The recently concluded Google I/O opened my eyes to the amazing possibilities of artificial intelligence in building web and mobile applications.


I was asked at a speaking engagement if AI would take away our jobs. For a moment, I pondered but answered in a hurry. I stated that AI won’t take our jobs but rather enhance our work rates. This was because of the amazing AI tools I am privileged to use at work.

However, it was not the question that got me thinking, but rather the answer my friend Oba gave. Oba is an amazing product designer with many tricks up his sleeve. He stated that while AI enhances our productivity, it would also limit the work rate of those who have failed to "go with the flow" — those who have been unable to sharpen their skills with the various benefits that AI brings.

Back to Real-Life

Generative AI refers to artificial intelligence that can create new content such as text, images, music, code, or even video. It does this by learning patterns from existing data and then using that knowledge to generate original outputs. Examples include:

  • Language Models: Models like GPT-3 can generate human-like text for various applications, including writing, translation, and customer service.
  • Image Generation: Models like DALL-E can create images from textual descriptions.
  • Music Composition: AI-powered music composers that can generate melodies, harmonies, and even entire songs in different genres.
  • Code Generation: AI tools that assist developers by generating code snippets or suggesting solutions based on project requirements.

Let’s take a trip down memory lane on this concept of “Generative AI”.

While generative AI has gained significant attention recently with the rise of models like GPT-3 and DALL-E, the concept itself is much older. Early research on Markov chains and rule-based systems in the 1950s laid the groundwork for generative AI. From the 1960s through the 1980s, expert systems and rule-based approaches were developed to generate text and other content. In the 2010s, deep learning — especially with the advent of generative adversarial networks (GANs) and variational autoencoders (VAEs) — significantly improved the quality and diversity of generated content.

In the 2020s, large-scale language models like GPT-3 and image-generation models like DALL-E demonstrated impressive capabilities, sparking widespread interest and adoption of generative AI across various industries.

What’s all the fuss about GEMINI? Who and what is Gemini? What does she want with us? How can she help me in my Flutter journey?

You would need to calm down with all these questions. I promise to answer them all.

Firstly, Gemini is not a “she.” Gemini is Google DeepMind’s most recent AI model, their answer to OpenAI’s GPT-4. It’s designed to be a highly capable AI system with a wide range of potential applications.

What is the fuss about?

  • Capability: Google claims Gemini outperforms GPT-4 on many benchmarks, potentially making it the most powerful AI model available.
  • Multimodal: Gemini is multimodal, meaning it can process text, images, and potentially other forms of data. This opens up new possibilities for AI applications.
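To make the multimodal point concrete, here is a minimal sketch (not from Google's docs — the model name, file path, and prompt are my own placeholder choices, and availability may differ by SDK version) of sending an image plus text to Gemini with the google_generative_ai Dart SDK:

```dart
import 'dart:io';

import 'package:google_generative_ai/google_generative_ai.dart';

Future<void> describeImage() async {
  final model = GenerativeModel(
    model: 'gemini-pro-vision', // Assumed vision-capable model name
    apiKey: 'YOUR_API_KEY', // Replace with your actual key
  );

  // Read a local image and pair it with a text instruction in one prompt.
  final imageBytes = await File('photo.jpg').readAsBytes();
  final response = await model.generateContent([
    Content.multi([
      TextPart('Describe what is in this photo.'),
      DataPart('image/jpeg', imageBytes),
    ]),
  ]);

  print(response.text);
}
```

The same `generateContent` call handles text-only and mixed text-and-image prompts; only the parts inside `Content` change.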

How can Gemini help in your Flutter journey?

While Gemini isn’t specifically designed for Flutter development, its capabilities could be helpful in several ways:

  • Code Generation: Gemini could potentially generate Flutter code snippets or even entire components, speeding up development.
  • Debugging: Gemini could analyze your Flutter code and suggest potential fixes for errors or performance issues.
  • Documentation: Gemini could help you understand Flutter concepts and APIs by providing clear explanations and examples.


Google Generative AI, particularly through its Gemini models and the google_generative_ai Dart SDK, opens up a wide array of possibilities for building innovative Flutter applications:

Text Generation and Editing:

  • Creative Writing Tools: Assist users in generating stories, poems, code snippets, marketing copy, or any other form of text content.
  • Chatbots and Conversational AI: Build chatbots for customer support, virtual assistants, language learning, or entertainment purposes.
  • Language Translation and Summarization: Implement real-time translation features or generate concise summaries of long articles or documents.
  • Code Generation and Completion: Help developers write code faster by suggesting completions, generating boilerplate code, or offering code optimizations.

Multimodal Applications:

  • Image Captioning: Generate descriptive captions for images to improve accessibility and search engine optimization.
  • Text-to-Image Generation: Create images from text descriptions, opening up possibilities for creative expression and storytelling.
  • Visual Question Answering: Build applications that can understand questions about images and provide relevant answers.

Image Generation and Manipulation:

  • Image Editing and Enhancement: Create tools for automatic photo retouching, style transfer, background removal, or image upscaling.
  • Avatar and Character Creation: Generate unique avatars or characters for games, social media, or virtual environments.
  • Product Visualization: Showcase products in different styles, environments, or configurations to improve online shopping experiences.
  • Art and Design Generation: Generate creative artwork, design concepts, or marketing materials based on user input.
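Many of the text features listed above boil down to a single `generateContent` call. Here is a minimal, hedged sketch using the google_generative_ai SDK — the model name, prompt wording, and function name are placeholders of my choosing, not part of any official example:

```dart
import 'package:google_generative_ai/google_generative_ai.dart';

// Summarize an article with a one-shot text prompt.
Future<void> summarize(String article) async {
  final model = GenerativeModel(
    model: 'gemini-pro', // Text-only Gemini model
    apiKey: 'YOUR_API_KEY', // Replace with your actual key
  );

  final response = await model.generateContent([
    Content.text('Summarize the following article in three sentences:\n$article'),
  ]);

  print(response.text);
}
```

Translation, copywriting, and code suggestions follow the same shape — only the prompt changes.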

I will be building from beginner-friendly to advanced applications while integrating Gemini.


Today, we will be building a basic chatbot — a virtual assistant that interacts with the Gemini API.

  1. Project Setup
  • New Flutter Project: Open your terminal or command prompt and create a new Flutter project:
flutter create flutter_chatbot
cd flutter_chatbot
  • Add Dependencies: In your pubspec.yaml file, add these packages under the dependencies section:

dependencies:
  flutter:
    sdk: flutter
  google_generative_ai: ^0.4.3
  flutter_markdown: ^0.6.19

Then run flutter pub get.

2. Obtain Gemini API Key: Generate a free API key from Google AI Studio and keep it handy — we will wire it in securely at the end of this tutorial.

3. Create the Chat UI

import 'package:flutter/material.dart';
import 'package:flutter_markdown/flutter_markdown.dart';
import 'package:google_generative_ai/google_generative_ai.dart';

class ChatScreen extends StatefulWidget {
  const ChatScreen({super.key});

  @override
  State<ChatScreen> createState() => _ChatScreenState();
}

class _ChatScreenState extends State<ChatScreen> {
  final _messages = <ChatMessage>[]; // Store chat messages
  final _textController = TextEditingController();
  // ... (We'll add more logic later)
}

4. ChatMessage Widget (chat_message.dart): Design a widget to display individual messages (user or bot).

import 'package:flutter/material.dart';
import 'package:flutter_markdown/flutter_markdown.dart';

class ChatMessage extends StatelessWidget {
  final String text;
  final bool isUser;

  const ChatMessage({Key? key, required this.text, required this.isUser})
      : super(key: key);

  @override
  Widget build(BuildContext context) {
    return Container(
      padding: const EdgeInsets.symmetric(vertical: 10.0, horizontal: 15.0),
      child: Align(
        alignment: isUser ? Alignment.centerRight : Alignment.centerLeft,
        child: Container(
          decoration: BoxDecoration(
            borderRadius: BorderRadius.circular(20.0),
            // Pick any two colors you like; blue for the user, grey for the bot.
            color: isUser ? Colors.blue[100] : Colors.grey[300],
          ),
          padding: const EdgeInsets.all(15.0),
          child: MarkdownBody(data: text),
        ),
      ),
    );
  }
}

a. Properties:

  • text: The message text (string).
  • isUser: A boolean indicating whether the message is from the user (true) or the bot (false).

b. Builder:

  • Creates a Container with padding and alignment (right for the user, left for the bot).
  • Adds a rounded Container with different background colors for the user/bot.
  • The message text is displayed using a MarkdownBody widget, allowing you to use Markdown formatting in your responses for better readability and presentation.

5. Gemini Integration

Future<void> _sendMessage() async {
  final message = _textController.text;
  if (message.isEmpty) return;

  setState(() {
    _messages.add(ChatMessage(text: message, isUser: true));
  });
  _textController.clear();

  // Note: creating the model and session per message discards history.
  // In a larger app, create them once (e.g. in initState) to keep context.
  final model = GenerativeModel(
    model: 'gemini-pro', // Or the appropriate Gemini model
    apiKey: 'YOUR_API_KEY', // Replace with your actual key
  );

  final chatSession = model.startChat();
  final content = Content.text(message);
  final response = await chatSession.sendMessage(content);

  setState(() {
    // response.text is nullable; fall back to a placeholder if empty.
    _messages.add(ChatMessage(
        text: response.text ?? 'No response received.', isUser: false));
  });
}

The function _sendMessage:

  • Takes user input from _textController.
  • Adds the message to the _messages list.
  • Creates a GenerativeModel using the Gemini API key and model.
  • Sends the message to Gemini, and receives the response.
  • Adds the bot’s response to the _messages list.

6. Let’s finish up the UI

@override
Widget build(BuildContext context) {
  return Scaffold(
    appBar: AppBar(title: const Text('Flutter Chatbot')),
    body: Column(
      children: [
        Expanded(
          child: ListView.builder(
            itemCount: _messages.length,
            itemBuilder: (context, index) => _messages[index],
          ),
        ),
        Padding(
          padding: const EdgeInsets.all(8.0),
          child: Row(
            children: [
              Expanded(child: TextField(controller: _textController)),
              IconButton(
                icon: const Icon(Icons.send),
                onPressed: _sendMessage,
              ),
            ],
          ),
        ),
      ],
    ),
  );
}

7. Storing your API key:

The most fundamental rule: NEVER store your API key directly in your Flutter code (or any code for that matter). If your code is ever exposed publicly (e.g., on GitHub), your key is compromised.

There are various ways to store your key; I will be sharing the first in this series.

a. Create an environment variable on your development machine

export GEMINI_API_KEY=your_actual_api_key

b. In your Dart code, read it with String.fromEnvironment:

// ...
apiKey: const String.fromEnvironment('GEMINI_API_KEY'),
// ...

Then pass the variable through at build time with dart-define:

flutter run --dart-define=GEMINI_API_KEY=$GEMINI_API_KEY

Note that Platform.environment is not populated by --dart-define (and is unavailable on Flutter web), so String.fromEnvironment is the reliable way to read a compile-time define.

With these methods, you have built your first virtual assistant bot!

You can check out my repo here for more information

Catch you in the next series!