Commits (31)
9b069ff
adding support to google gemini models
marlonjsilva Jun 20, 2025
f034646
Merge branch 'master' into add-google-gemini
marlonjsilva Jun 20, 2025
8207e56
Merge branch 'master' into add-google-gemini
marlonjsilva Jun 21, 2025
70c94be
Updating with current changes
marlonjsilva Jul 2, 2025
a5063bd
Merge remote-tracking branch 'origin/add-google-gemini' into gemini-s…
lukaszkorecki Aug 3, 2025
573fc36
Start sketching out support for Gemini via all possible ways it can b…
lukaszkorecki Aug 3, 2025
bce11cf
Start plugging in configuration into dispatch to gemini completion
lukaszkorecki Aug 3, 2025
533c13c
Abstract different methods of creating URLs and auth for Gemini
lukaszkorecki Aug 3, 2025
3d6130e
Add a note about reading auth from default location
lukaszkorecki Aug 3, 2025
d1fbd5a
Update docs for Gemini auth
lukaszkorecki Aug 3, 2025
ad26ef7
Merge branch 'master' into gemini-support
lukaszkorecki Aug 4, 2025
8f9a619
Merge remote-tracking branch 'main/master' into gemini-support
lukaszkorecki Aug 7, 2025
21c5d05
There is no custom provider for Gemini
lukaszkorecki Aug 7, 2025
f0a546e
make API comms work
lukaszkorecki Aug 7, 2025
95714d5
start implementing refresh
lukaszkorecki Aug 8, 2025
e3dd2f7
First pass at making chat work
lukaszkorecki Aug 8, 2025
5c5c021
Update config & auth docs
lukaszkorecki Aug 8, 2025
58f6998
Add default model detection for gemini, log auth errors
lukaszkorecki Aug 8, 2025
48734d2
Handle stream end event
lukaszkorecki Aug 8, 2025
f1b04fe
Implement off-band token refresh
lukaszkorecki Aug 8, 2025
9da48ee
Change order of default model detection
lukaszkorecki Aug 9, 2025
50299eb
Add debug logs to trace which configs are loaded when
lukaszkorecki Aug 9, 2025
d529d0e
Add defensive code for VertexAI token fetch when something gets messe…
lukaszkorecki Aug 9, 2025
1a8a649
Merge remote-tracking branch 'origin/master' into gemini-support
lukaszkorecki Aug 9, 2025
72f3d02
Merge remote-tracking branch 'origin/master' into gemini-support
lukaszkorecki Aug 9, 2025
8bb6ae3
update version
lukaszkorecki Aug 9, 2025
f0a78d7
1st working impl of tool calling in Gemini
lukaszkorecki Aug 9, 2025
026d33c
Merge branch 'master' into gemini-support
lukaszkorecki Aug 10, 2025
0697218
Merge remote-tracking branch 'main/master' into gemini-support
lukaszkorecki Aug 11, 2025
e5a2211
Merge remote-tracking branch 'main/master' into gemini-support
lukaszkorecki Aug 11, 2025
c0304df
Merge remote-tracking branch 'origin/master' into gemini-support
lukaszkorecki Aug 12, 2025
1 change: 1 addition & 0 deletions .java-version
@@ -0,0 +1 @@
temurin-21
97 changes: 93 additions & 4 deletions docs/configuration.md
@@ -57,7 +57,7 @@ A `.eca/rules` folder from the workspace root containing `.mdc` files with the r

`.eca/rules/talk_funny.mdc`
```markdown
---
---
description: Use when responding anything
---

Expand All @@ -70,7 +70,7 @@ A `$XDG_CONFIG_HOME/eca/rules` or `~/.config/eca/rules` folder containing `.mdc`

`~/.config/eca/rules/talk_funny.mdc`
```markdown
---
---
description: Use when responding anything
---

@@ -103,6 +103,87 @@ For MCP servers configuration, use the `mcpServers` config, example:
}
```

## Google Gemini

ECA supports Google's Gemini models through two main APIs: the **Gemini Developer API** and the **Vertex AI API**. Each has distinct configuration and authentication methods. Configuration keys are set at the root of the config file.

### Gemini Developer API (API Key)

This is the simplest way to get started. It uses an API key from Google AI Studio.

- **Authentication:** API Key
- **Endpoint:** `https://generativelanguage.googleapis.com/v1beta/`
- **Configuration:**

```json
{
"geminiApiKey": "YOUR_GEMINI_API_KEY"
}
```

Alternatively, you can set the `GEMINI_API_KEY` environment variable.
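For example, you can export the variable in your shell before launching ECA (the key value below is a placeholder, not a real credential):

```shell
# Placeholder value - substitute a real key from Google AI Studio.
export GEMINI_API_KEY="YOUR_GEMINI_API_KEY"
echo "GEMINI_API_KEY set: ${GEMINI_API_KEY:+yes}"
```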

### Vertex AI (API Key)

For projects integrated with Google Cloud Platform (GCP), you can use an API key associated with your GCP project.

- **Authentication:** API Key (from GCP)
- **Endpoint:** `https://<LOCATION>-aiplatform.googleapis.com/v1/`
- **Configuration:**

```json
{
"googleApiKey": "YOUR_GCP_API_KEY",
"googleProjectId": "your-gcp-project-id",
"googleProjectLocation": "your-gcp-region"
}
```
You can also use the `GOOGLE_API_KEY`, `GOOGLE_PROJECT_ID`, and `GOOGLE_PROJECT_LOCATION` environment variables.
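Equivalently, the same three values can come from the environment; a minimal shell sketch (all values below are placeholders for your real GCP settings):

```shell
# Placeholder values - substitute your real GCP project settings.
export GOOGLE_API_KEY="YOUR_GCP_API_KEY"
export GOOGLE_PROJECT_ID="your-gcp-project-id"
export GOOGLE_PROJECT_LOCATION="us-central1"

# Sanity check that all three variables are set before starting ECA.
for v in GOOGLE_API_KEY GOOGLE_PROJECT_ID GOOGLE_PROJECT_LOCATION; do
  eval "val=\${$v:-}"
  [ -n "$val" ] || { echo "missing $v"; exit 1; }
done
echo "vertex-ai env ok"
```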

### Vertex AI (Application Default Credentials - ADC)

For a more secure and flexible setup, you can use Application Default Credentials (ADC). This method is used when no `googleApiKey` is provided.

There are two common ways to provide these credentials:

1. **User ADC (for local development):**
Authenticate with the `gcloud` CLI. ECA will automatically pick up the credentials.
```bash
gcloud auth application-default login
```

2. **Service Account ADC (for automated environments):**
Set the `GOOGLE_APPLICATION_CREDENTIALS` environment variable to the absolute path of your service account JSON file.
```bash
export GOOGLE_APPLICATION_CREDENTIALS="/path/to/your/service-account-file.json"
```
Alternatively, you can set this path in your configuration file:
```json
{
"googleApplicationCredentials": "/path/to/your/service-account-file.json"
}
```
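To see which credentials file ADC would resolve to, you can mirror its lookup order in the shell: the explicit environment variable first, then gcloud's well-known default path (the `adc_file` helper below is a hypothetical illustration, not part of ECA):

```shell
# Sketch of ADC file resolution: GOOGLE_APPLICATION_CREDENTIALS wins,
# otherwise fall back to gcloud's well-known default location.
adc_file() {
  echo "${GOOGLE_APPLICATION_CREDENTIALS:-$HOME/.config/gcloud/application_default_credentials.json}"
}

export GOOGLE_APPLICATION_CREDENTIALS="/tmp/example-sa.json"
adc_file
```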

In all ADC scenarios, you still need to provide the project ID and location:

```json
{
"googleProjectId": "your-gcp-project-id",
"googleProjectLocation": "your-gcp-region"
}
```
You can also set the `GOOGLE_PROJECT_ID` and `GOOGLE_PROJECT_LOCATION` environment variables.

**Summary of Configuration Options:**

| Key | Environment Variable | Description | Required For |
| -------------------------------- | ------------------------------------ | ---------------------------------------------------------------------------- | ------------------------------------------ |
| `geminiApiKey` | `GEMINI_API_KEY` | API key for the Gemini Developer API. | Gemini Developer API |
| `googleApiKey` | `GOOGLE_API_KEY` | API key for Vertex AI. If not provided, ADC will be used. | Vertex AI (API Key) |
| `googleProjectId` | `GOOGLE_PROJECT_ID` | Your Google Cloud project ID. | Vertex AI (API Key), Vertex AI (ADC) |
| `googleProjectLocation` | `GOOGLE_PROJECT_LOCATION` | The GCP region for your project (e.g., `us-central1`). | Vertex AI (API Key), Vertex AI (ADC) |
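Putting the table together, the selection order implied above can be sketched as a small shell function (illustrative only; `pick_auth` is a hypothetical helper, not ECA's actual code):

```shell
pick_auth() {
  if [ -n "${GEMINI_API_KEY:-}" ]; then
    # A Gemini Developer API key takes priority.
    echo "gemini-developer-api"
  elif [ -n "${GOOGLE_API_KEY:-}" ] && [ -n "${GOOGLE_PROJECT_ID:-}" ] && [ -n "${GOOGLE_PROJECT_LOCATION:-}" ]; then
    # A GCP API key plus project settings selects Vertex AI directly.
    echo "vertex-api-key"
  elif [ -n "${GOOGLE_PROJECT_ID:-}" ] && [ -n "${GOOGLE_PROJECT_LOCATION:-}" ]; then
    # Project settings alone fall back to Application Default Credentials.
    echo "vertex-adc"
  else
    echo "none"
  fi
}

export GEMINI_API_KEY="placeholder"
pick_auth
```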


## Custom LLM providers

It's possible to configure ECA to be aware of custom LLM providers if they follow an API schema similar to the currently supported ones (openai, anthropic). Example for a custom-hosted litellm server:
@@ -195,6 +276,10 @@ Example:
interface Config {
openaiApiKey?: string;
anthropicApiKey?: string;
geminiApiKey?: string;
googleApiKey?: string;
googleProjectId?: string;
googleProjectLocation?: string;
rules: [{path: string;}];
commands: [{path: string;}];
systemPromptTemplateFile?: string;
@@ -211,7 +296,7 @@ interface Config {
mcpServers: {[key: string]: {
command: string;
args?: string[];
disabled?: boolean;
disabled?: boolean;
}};
customProviders: {[key: string]: {
api: 'openai' | 'anthropic';
@@ -248,10 +333,14 @@ interface Config {
{
"openaiApiKey" : null,
"anthropicApiKey" : null,
"geminiApiKey": null,
"googleApiKey": null,
"googleProjectId": null,
"googleProjectLocation": null,
"rules" : [],
"commands" : [],
"nativeTools": {"filesystem": {"enabled": true},
"shell": {"enabled": true,
"shell": {"enabled": true,
"excludeCommands": []}},
"disabledTools": [],
"toolCall": {
5 changes: 5 additions & 0 deletions docs/models.md
@@ -4,11 +4,13 @@ The models capabilities and configurations are retrieved from [models.dev](https

## Built-in providers and capabilities


| model | tools (MCP) | reasoning / thinking | prompt caching | web_search |
|-----------|-------------|----------------------|----------------|------------|
| OpenAI | √ | √ | √ | √ |
| Anthropic | √ | √ | √ | √ |
| Ollama | √ | √ | X | X |
| Google | X | √ | X | X |

### OpenAI

@@ -30,6 +32,9 @@ The models capabilities and configurations are retrieved from [models.dev](https

- [any local ollama model](https://ollama.com/search)

### Gemini

- https://ai.google.dev/

### Custom models for built-in providers

Just configure the model in your eca `models` config, for more details check its [configuration](./configuration.md#adding-models).
42 changes: 35 additions & 7 deletions src/eca/config.clj
@@ -11,13 +11,37 @@
[clojure.core.memoize :as memoize]
[clojure.java.io :as io]
[clojure.string :as string]
[eca.logger :as logger]
[eca.shared :as shared]))

(set! *warn-on-reflection* true)

(def initial-config
{:openaiApiKey nil
{;; LLM providers authentication configuration
:openaiApiKey nil

:anthropicApiKey nil

;; Gemini can be authenticated using different methods - setting only one of them is enough
;; but will depend on the provider configuration, IAM and other bits out of the scope of this project.
;; api key for Gemini API
:geminiApiKey nil

;; Vertex AI:
:googleProjectId nil
:googleProjectLocation nil
;; auth with API key, or leave blank to use ADC (Application Default Credentials)
:googleApiKey nil

;; Ollama API
:ollama {:host "http://localhost"
:port 11434
:useTools true
:think true}

:customProviders {}

;; all other settings
:rules []
:commands []
:nativeTools {:filesystem {:enabled true}
@@ -35,13 +59,15 @@
"claude-sonnet-4-20250514" {:extraPayload {:thinking {:type "enabled" :budget_tokens 2048}}}
"claude-opus-4-1-20250805" {:extraPayload {:thinking {:type "enabled" :budget_tokens 2048}}}
"claude-opus-4-20250514" {:extraPayload {:thinking {:type "enabled" :budget_tokens 2048}}}
"claude-3-5-haiku-20241022" {:extraPayload {:thinking {:type "enabled" :budget_tokens 2048}}}}
:ollama {:host "http://localhost"
:port 11434
:useTools true
:think true}
"claude-3-5-haiku-20241022" {:extraPayload {:thinking {:type "enabled" :budget_tokens 2048}}}

"gemini-2.0-flash" {}
"gemini-2.0-flash-pro" {}
"gemini-2.0-pro" {}}


:chat {:welcomeMessage "Welcome to ECA!\n\nType '/' for commands\n\n"}
:customProviders {}

:index {:ignoreFiles [{:type :gitignore}]}})

(defn get-env [env] (System/getenv env))
@@ -68,6 +94,7 @@
(io/file (get-property "user.home") ".config"))
config-file (io/file xdg-config-home "eca" "config.json")]
(when (.exists config-file)
(logger/debug "[CONFIG]" (format "Loading global config from %s" config-file))
(safe-read-json-string (slurp config-file)))))

(def ^:private config-from-global-file (memoize/ttl config-from-global-file* :ttl/threshold ttl-cache-config-ms))
@@ -79,6 +106,7 @@
final-config
(let [config-file (io/file (shared/uri->filename uri) ".eca" "config.json")]
(when (.exists config-file)
(logger/debug "[CONFIG]" (format "Loading project config from %s" config-file))
(safe-read-json-string (slurp config-file))))))
{}
roots))
44 changes: 44 additions & 0 deletions src/eca/llm_api.clj
@@ -3,6 +3,7 @@
[clojure.string :as string]
[eca.config :as config]
[eca.llm-providers.anthropic :as llm-providers.anthropic]
[eca.llm-providers.google :as llm-providers.google]
[eca.llm-providers.ollama :as llm-providers.ollama]
[eca.llm-providers.openai :as llm-providers.openai]
[eca.logger :as logger]))
@@ -50,6 +51,28 @@
(or (config/get-env "OPENAI_API_URL")
llm-providers.openai/base-url))

;; Google Gemini auth
(defn ^:private gemini-api-key [config]
(or (:geminiApiKey config)
(config/get-env "GEMINI_API_KEY")))

(defn ^:private google-api-key [config]
(or (:googleApiKey config)
(config/get-env "GOOGLE_API_KEY")))

(defn ^:private google-project-id [config]
(or (:googleProjectId config)
(config/get-env "GOOGLE_PROJECT_ID")))

(defn ^:private google-project-location [config]
(or (:googleProjectLocation config)
(config/get-env "GOOGLE_PROJECT_LOCATION")))

(defn ^:private google-any-auth? [config]
(or (gemini-api-key config)
(and (google-project-id config) (google-project-location config) (google-api-key config))
(and (google-project-id config) (google-project-location config))))

(defn default-model
"Returns the default LLM model checking this waterfall:
- Any custom provider with defaultModel set
@@ -69,6 +92,8 @@
[:api-key-found "claude-sonnet-4-20250514"])
(when (openai-api-key config)
[:api-key-found "gpt-5"])
(when (google-any-auth? config)
[:google-auth-found "gemini-2.5-pro"])
(when-let [ollama-model (first (filter #(string/starts-with? % config/ollama-model-prefix) (keys (:models db))))]
[:ollama-running ollama-model])
[:default "claude-sonnet-4-20250514"])]
@@ -148,6 +173,25 @@
:api-key (anthropic-api-key config)}
callbacks)


(= "google" provider)
(llm-providers.google/completion!
{:model model
:instructions instructions
:user-messages user-messages
:max-output-tokens max-output-tokens
:reason? (and reason? (:reason? model-config))
:past-messages past-messages
:tools tools
:web-search web-search
;; NOTE: no :api-url here - the Google provider derives the URL itself,
;; since it depends on how we're authenticated
:gemini-api-key (gemini-api-key config)
:google-api-key (google-api-key config)
:google-project-id (google-project-id config)
:google-project-location (google-project-location config)}
callbacks)

(string/starts-with? model config/ollama-model-prefix)
(llm-providers.ollama/completion!
{:host (-> config :ollama :host)