20 commits
eed6a90
Updated AI models lesson to reference Gemini 2.5 flash over Gemini 1.…
apradoada Dec 3, 2025
2a340b2
Updated text completion module to use Gemini 2.5 Flash references and…
apradoada Dec 3, 2025
321e0c8
Fixed URL Styling
apradoada Dec 3, 2025
1f97291
Updated references to Gemini 2.5 Flash and updated code to reflect ne…
apradoada Dec 3, 2025
cef6aa8
Updated missed reference to google-generativeai package to google-genai
apradoada Dec 3, 2025
ebc4e60
Updated images and steps for Getting Ready to Use AI APIs. Added call…
apradoada Dec 4, 2025
fdccf35
Updated images for creating an environment and using the API key in P…
apradoada Dec 4, 2025
cfc2082
Update working-with-ai-apis/ai-api-models.md
apradoada Dec 4, 2025
705ab0c
Update working-with-ai-apis/ai-apis-in-projects.md
apradoada Dec 4, 2025
727d0eb
Update working-with-ai-apis/ai-apis-in-projects.md
apradoada Dec 4, 2025
73ecf49
Apply suggestion from @anselrognlie
apradoada Dec 4, 2025
33f8403
Apply suggestion from @anselrognlie
apradoada Dec 4, 2025
68e7be4
Apply suggestion from @anselrognlie
apradoada Dec 4, 2025
b1ae83f
Apply suggestion from @anselrognlie
apradoada Dec 4, 2025
7d326d0
Apply suggestion from @anselrognlie
apradoada Dec 4, 2025
221b53c
Apply suggestion from @anselrognlie
apradoada Dec 4, 2025
daa1f95
Apply suggestion from @anselrognlie
apradoada Dec 4, 2025
1d5ac4e
Apply suggestion from @anselrognlie
apradoada Dec 4, 2025
48a8275
Apply suggestion from @anselrognlie
apradoada Dec 4, 2025
e00535b
Apply suggestion from @anselrognlie
apradoada Dec 4, 2025
14 changes: 7 additions & 7 deletions working-with-ai-apis/ai-api-models.md
Contributor
Not in this file, but in intro-to-ai-apis.md there's a link to Meta's Llama API (https://docs.llama-api.com/api-reference) which seems broken now. The closest I could find to something similar was: https://llama.developer.meta.com/docs/overview/

Contributor Author
👍

@@ -6,7 +6,7 @@ Now that we can make calls to an AI API in Postman, we can start digging a littl

Our goals for this lesson are to:
- Get exposure to more applications of AI models.
- Understand key features and limitations of the Google Gemini 1.5 Flash model at the free tier of service.
- Understand key features and limitations of the Google Gemini 2.5 Flash model at the free tier of service.

## What is an AI Model?

@@ -31,18 +31,18 @@ These models are commonly used as query parameters within our API requests to te

### !end-callout

## Google Gemini 1.5 Flash
## Google Gemini 2.5 Flash

While we work with the Google Gemini API, we will be using the Gemini Flash 1.5 model specifically. We are using this model because its free tier fits our needs to show off some text-based request and response use cases and allows for exploration past what we will cover in class for folks who are interested. We will cover facts we think are useful below, for more information, check out the [Google Gemini API Documentation](https://ai.google.dev/gemini-api/docs/models/gemini).
While we work with the Google Gemini API, we will be using the Gemini 2.5 Flash model specifically. We are using this model because its free tier fits our needs: it shows off some text-based request and response use cases and allows for exploration past what we will cover in class for folks who are interested. We will cover facts we think are useful below; for more information, check out the [Google Gemini API Documentation](https://ai.google.dev/gemini-api/docs/models/gemini).

**Gemini 1.5 Flash Model Facts**
**Gemini 2.5 Flash Model Facts**
| Category | Detail |
| -------- | ------ |
| Input Types | Text, Audio, Images, Video |
| Output Types | Text |
| Maximum Requests Per Minute | 15 Requests Per Minute |
| Maximum Tokens Per Minute | 1 million Tokens Per Minute |
| Maximum Requests Per Day | 1,500 Requests Per Day |
| Maximum Requests Per Minute | 10 Requests Per Minute |
| Maximum Tokens Per Minute | 250,000 Tokens Per Minute |
| Maximum Requests Per Day | 250 Requests Per Day |
| Supported Languages | 30+, see documentation for details |

When thinking about tokens and our processing limits, a token is equivalent to about 4 characters for Gemini models, and 100 tokens come out to roughly 60-80 English words. Token limits can creep up on us if we're doing a significant amount of work. A model needs to break the data we send it into tokens, and it puts tokens together to create its text response, all of which count toward our total processing limits. If requests suddenly start failing while we work, we may need to check our usage and wait a bit until our processing allowance resets.
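
Since character counts map only roughly onto tokens, we can sketch a back-of-the-envelope estimator (a rough heuristic, not the model's real tokenizer) to sanity-check a prompt against the limits above:

```python
def estimate_tokens(text):
    # Rough heuristic only: Gemini models average about 4 characters
    # per token, so this over- or under-counts for real prompts.
    return max(1, len(text) // 4)

prompt = "Generate ten greetings for a grumpy blacksmith NPC."
print(estimate_tokens(prompt))  # roughly a dozen tokens for this short prompt
```

Because both our prompt and the model's reply count toward the per-minute token budget, an estimate like this can flag oversized prompts before we burn through the free tier.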
108 changes: 60 additions & 48 deletions working-with-ai-apis/ai-apis-in-projects.md
@@ -2,11 +2,11 @@

## Goals

Now that we know more about requests and responses to the `/gemini-1.5-flash:generateContent` endpoint, let's learn how to use it within our programs. To do so, we will work with a partially finished API that creates video game NPCs (Non Playable Characters). Within that program, we will install the [`google-generativeai`](https://pypi.org/project/google-generativeai/) Python package and then leverage the `gemini-1.5-flash` model's `generateContent` function to generate dialogue for characters we create.
Now that we know more about requests and responses to the `/gemini-2.5-flash:generateContent` endpoint, let's learn how to use it within our programs. To do so, we will work with a partially finished API that creates video game NPCs (Non Playable Characters). Within that program, we will install the [`google-genai`](https://pypi.org/project/google-genai/) Python package and then leverage the `gemini-2.5-flash` model's `generateContent` function to generate dialogue for characters we create.

Our goals for this lesson are to:
- Integrate the `google-generativeai` library into a Python project.
- Use the `/gemini-1.5-flash:generateContent` text completion endpoint to generate content for our project.
- Integrate the `google-genai` library into a Python project.
- Use the `/gemini-2.5-flash:generateContent` text completion endpoint to generate content for our project.

## Set Up the NPC Generator

@@ -189,48 +189,41 @@ Create and connect your database using the steps below:
</details>
</br>

## Install the `google-generativeai` Library and Connect Your Secret Key
## Install the `google-genai` Library and Connect Your Secret Key

Before we write our request to the Gemini API within the project, we need to install the `google-generativeai` package.
Before we write our request to the Gemini API within the project, we need to install the `google-genai` package.

1. In your terminal, run `pip install -q -U google-generativeai`
2. Run `pip freeze > requirements.txt` to add the `google-generativeai` package to your requirements.
1. In your terminal, run `pip install -q -U google-genai`
2. Run `pip freeze > requirements.txt` to add the `google-genai` package and dependencies to your requirements.
3. In your `.env` file, add the variable `GEMINI_API_KEY` and assign it the value of your secret key:
`GEMINI_API_KEY=<Your Secret Key>`
4. In the `character_routes.py` file, import the `google-generativeai` package and configure the Gemini API environment using the code below:
4. In the `character_routes.py` file, import the `google-genai` package and configure the Gemini API environment using the code below:

```python
import google.generativeai as genai
import os
from google import genai

genai.configure(api_key=os.environ.get("GEMINI_API_KEY"))
client = genai.Client()
```

| <div style="min-width:300px;">Part of Code</div> | Description |
| ------------- | ----------- |
| `import google.generativeai as genai` | Import the `google-generativeai` package and give it an alias of `genai`. |
| `import os` | Import the `os` package so we can read our `GEMINI_API_KEY` from the `.env` file. |
| `genai.configure(api_key=...)` | This code configures the Gemini API environment with your secret key from the `.env` file. We will use the `genai` import to access the `gemini-1.5-flash` model and make requests to the `/gemini-1.5-flash:generateContent` endpoint. |
| `from google import genai` | Import the `genai` module from the `google-genai` package. |
| `client = genai.Client()` | This code creates a Gemini API client using your secret key. The client looks specifically for an environment variable named `GEMINI_API_KEY`, which we set in the `.env` file. We will use this `client` object to access the `gemini-2.5-flash` model and make requests to the `/gemini-2.5-flash:generateContent` endpoint. |

## Make the API call

To make an API call, we'll create a helper function called `generate_greetings` that will take a `Character` in as a parameter. The first thing we will do is use the aliased `google.generativeai` import to get a reference to the AI model we want to use. To do this, we'll call the constructor for the `GenerativeModel` class and pass it the name of the model as an string argument `"gemini-1.5-flash"`.
To make an API call, we'll create a helper function called `generate_greetings` that takes in a `Character` as a parameter. The first thing we will do is construct a prompt for our request body that uses the `Character`'s attributes to describe what we are looking for.

```python
def generate_greetings(character):
model = genai.GenerativeModel("gemini-1.5-flash")
```

Then we'll use the `Character`'s attributes to construct a prompt for our request body. This may look something like:

```python
input_message = f"I am writing a fantasy RPG video game. I have an npc named {character.name} who is {character.age} years old. They are a {character.occupation} who has a {character.personality} personality. Please generate a Python style list of 10 stock phrases they might use when the main character talks to them. Please return just the list without a variable name and square brackets."
input_message = f"I am writing a fantasy RPG video game. I have an npc named {character.name} who is {character.age} years old. They are a {character.occupation} who has a {character.personality} personality. Please generate a Python style list of 10 stock phrases they might use when the main character talks to them. Please return just the list without a variable name and square brackets."
```

Once the prompt has been constructed we can use our `model` object to send our Gemini API call. To access the desired `/gemini-1.5-flash:generateContent` endpoint we will call the `generate_content` method on the `model` variable, passing our `input_message` as a parameter. The code we will use to make the call to the text completion endpoint and then store the response is:
Once the prompt has been constructed, we can use our `client` object to send our Gemini API call. To reach the desired `/gemini-2.5-flash:generateContent` endpoint, we will call the `generate_content()` method on the `client` variable's `models` attribute, passing the model name and our `input_message` as arguments. The code we will use to make the call to the text completion endpoint and then store the response is:

```python
response = model.generate_content(input_message)
response = client.models.generate_content(
model="gemini-2.5-flash", contents=input_message
)
```

Remember that the text completions endpoint returns a nested object. Let's pause here and recall what that object looks like. Using the example request data we shared to create the character Misty, we received the following response:
Expand Down Expand Up @@ -260,28 +253,36 @@ Remember that the text completions endpoint returns a nested object. Let's pause
}
```

That is a lot of information but now it's up to us to parse through that response and determine what we want to use. Ultimately, we want to just use the `text` within `candidates` > `content` > `parts`. Happily for us, the `response` object returned by `generate_content` has a convenience attribute `.text` that we can use to directly access this `text` key within the JSON.
That is a lot of information, but now it's up to us to parse through that response and determine what we want to use. Ultimately, we want to use just the `text` within `candidates` > `content` > `parts`. Happily for us, the `response` object returned by `generate_content()` has a convenience attribute `.text` that we can use to directly access this `text` key within the JSON. Before we use this response in an endpoint, let's print out the text to see what we get:

We have a list of generated greetings inside `text`, but they are all a single string. Fortunately, each item is separated by a newline character, so we can use that to our advantage. If we access `response.text` to get to the string itself, we can split it by the newline character to get a list of responses:
```python
response_split = response.text.split("\n")
print(response.text)
```

Since the response we're given ends with a newline character, this split operation will leave us with an empty string at the end of our list. Since we don't want to save an empty string to our NPC sayings, we can slice off the last value before we return our list. That return statement may look something like:
```python
return response_split[:-1]
```
This gives our `generate_greetings` helper method the final code of:

When we string this all together, our `generate_greetings` helper method is as follows:
```python
def generate_greetings(character):
model = genai.GenerativeModel("gemini-1.5-flash")
input_message = f"I am writing a fantasy RPG video game. I have an npc named {character.name} who is {character.age} years old. They are a {character.occupation} who has a {character.personality} personality. Please generate a Python style list of 10 stock phrases they might use when the main character talks to them. Please return just the list without a variable name and square brackets."
response = model.generate_content(input_message)
response_split = response.text.split("\n") #Splits response into a list of stock phrases, ends up with an empty string at index -1
return response_split[:-1] #Returns the stock phrases list, just without the empty string at the end
response = client.models.generate_content(
model="gemini-2.5-flash", contents=input_message
)
print(response.text)
```
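
The printed `response.text` arrives as one newline-separated string, so before saving greetings we'll eventually need to break it apart. A stand-alone sketch of that post-processing (a hypothetical helper for experimentation; the endpoint code in this lesson does its cleanup inline instead) might look like:

```python
def parse_greetings(raw_text):
    # Split the model's reply on newlines, dropping blank lines and
    # stripping any surrounding quotes or commas the model added.
    greetings = []
    for line in raw_text.split("\n"):
        cleaned = line.strip().strip("\"',")
        if cleaned:
            greetings.append(cleaned)
    return greetings

sample = "\"Well met, traveler!\"\n\"Mind the forge.\"\n"
print(parse_greetings(sample))
```

Filtering out blank lines also handles the trailing newline most models append, so we never save an empty greeting.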

To test out what we get, we still need to attach the `generate_greetings(character)` helper to an endpoint. We can do this quickly by adding the following code to the `add_greetings(char_id)` function for our `/<char_id>/generate` endpoint:

```python
@bp.post("/<char_id>/generate")
def add_greetings(char_id):
character = validate_model(Character, char_id)
generate_greetings(character)

return character.to_dict()
```

This is not the final form our code for this endpoint will take, but if we make a POST request to this endpoint in Postman using ID 1, the terminal should display an AI-generated response to our prompt! Go ahead and make this request now. If your terminal prints out the response, congratulations! You have successfully integrated an AI API into your backend!

### !callout-info

## Knowing Your Response
@@ -300,25 +301,24 @@ Now it's your turn. There are two endpoints left to write. First up is the POST
```python
@bp.post("/<char_id>/generate")
def add_greetings(char_id):
character_obj = validate_model(Character, char_id)
greetings = generate_greetings(character_obj)
character = validate_model(Character, char_id)
greetings = generate_greetings(character)

if character_obj.greetings:
return {"message": f"Greetings already generated for {character_obj.name} "}, 201
if character.greetings: # Check to see if Greetings have already been added
return {"message": f"Greetings already generated for {character.name} "}, 201

new_greetings = []

for greeting in greetings:
new_greeting = Greeting(
greeting_text = greeting.strip("\""), #Removes quotes from each string
character = character_obj
)
new_greeting = Greeting(greeting_text=greeting.strip("\","), # Strip leading and trailing quotes and commas from each greeting
character = character
)
new_greetings.append(new_greeting)

db.session.add_all(new_greetings)
db.session.commit()

return {"message": f"Greetings successfully added to {character_obj.name}"}, 201
return {"message": f"Greetings successfully added to {character.name}"}, 201
Contributor
Tidied up a bit

Suggested change
character_obj = validate_model(Character, char_id)
greetings = generate_greetings(character_obj)
character = validate_model(Character, char_id)
greetings = generate_greetings(character)
if character_obj.greetings:
return {"message": f"Greetings already generated for {character_obj.name} "}, 201
if character.greetings: # Check to see if Greetings have already been added
return {"message": f"Greetings already generated for {character.name} "}, 201
new_greetings = []
for greeting in greetings:
new_greeting = Greeting(
greeting_text = greeting.strip("\""), #Removes quotes from each string
character = character_obj
)
new_greeting = Greeting(greeting_text=greeting.strip("\","), # Strip leading and trailing quotes and commas from each greeting
character = character
)
new_greetings.append(new_greeting)
db.session.add_all(new_greetings)
db.session.commit()
return {"message": f"Greetings successfully added to {character_obj.name}"}, 201
return {"message": f"Greetings successfully added to {character.name}"}, 201
character = validate_model(Character, char_id)
if character.greetings: # Check to see if Greetings have already been added
return {"message": f"Greetings already generated for {character.name}"}, 201
greetings = generate_greetings(character)
new_greetings = []
for greeting in greetings:
new_greeting = Greeting(greeting_text=greeting.strip("\"',"), # Strip leading and trailing quotes and commas from each greeting
character=character
)
new_greetings.append(new_greeting)
db.session.add_all(new_greetings)
db.session.commit()
return {"message": f"Greetings successfully added to {character.name}"}, 201

Contributor Author
👍

```
</details>
</br>
Expand All @@ -344,6 +344,18 @@ def get_greetings(char_id):
"greeting" : greeting.greeting_text
})

character = validate_model(Character, char_id)

if not character.greetings:
return {"message": f"No greetings found for {character.name}"}, 201

response = {"character_name": character.name,
"greetings": []}
for greeting in character.greetings:
response["greetings"].append({
"greeting" : greeting.greeting_text
})

return response
```
</details>
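
To see the shape `get_greetings` produces without running the full Flask app, we can mirror its response-building loop with plain strings standing in for the saved `Greeting` rows (a hypothetical stand-alone sketch, not part of the project code):

```python
def build_greetings_response(character_name, greeting_texts):
    # Mirrors get_greetings: the character's name plus one
    # {"greeting": ...} entry per saved greeting.
    response = {"character_name": character_name, "greetings": []}
    for text in greeting_texts:
        response["greetings"].append({"greeting": text})
    return response

print(build_greetings_response("Misty", ["Well met!", "Safe travels."]))
```

Sketching the dictionary shape this way makes it easy to compare against what Postman shows when we hit the real endpoint.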
@@ -373,7 +385,7 @@ Mark off each step that you have completed
##### !options

* Set up NPC Generator
* Install the `google-generativeai` package
* Install the `google-genai` package
* Complete generate_greetings
* Complete add_greetings
* Complete get_greetings