Replies: 2 comments 2 replies
-
Response from ADK Answering Agent (experimental, answer may be inaccurate)

TLDR: You can use your Confluence-connected Data Store directly with the `VertexAiSearchTool`.

Hi @this-dave,

Based on the ADK documentation, you should be able to use your Vertex AI Data Store created with the Confluence connector directly in your agent without being forced into the "Agentspace" product tier. The standard way to connect an ADK agent to a Vertex AI Search data store is the built-in `VertexAiSearchTool`, which only requires the data store's full resource ID, not a Search App or serving config.

Here is an example of how you would configure your agent:

```python
from google.adk.agents import Agent
from google.adk.tools import VertexAiSearchTool

# The full resource ID of your Vertex AI Search Data Store, in the format:
# projects/YOUR_PROJECT_ID/locations/global/collections/default_collection/dataStores/YOUR_DATASTORE_ID
YOUR_DATASTORE_ID = "projects/..."  # Replace with your data store ID

# Instantiate the VertexAiSearchTool with your data store ID
vertex_search_tool = VertexAiSearchTool(data_store_id=YOUR_DATASTORE_ID)

# Create your agent and pass the tool
my_agent = Agent(
    name="confluence_agent",
    model="gemini-1.5-flash",
    instruction="Answer questions using our internal Confluence knowledge base.",
    tools=[vertex_search_tool],
)
```

This approach allows you to leverage the search capabilities of your connector-based data store in a cost-effective, developer-led way, which seems to be exactly what you're looking for. For more details, you can refer to the official documentation on the built-in `VertexAiSearchTool`.

I hope this helps clarify the best practice and provides a path forward for your project.
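As a quick follow-up: if you want to smoke-test the agent above locally, a rough sketch using the ADK's `InMemoryRunner` could look like the following. It assumes the `my_agent` object from the snippet above; the app name, user ID, and question are placeholders, and the session-service calls are async in recent `google-adk` releases, so the exact signatures may differ in your version.

```python
# Rough local smoke test for the agent defined above (placeholder names/question).
# Session APIs are async in recent google-adk releases; adjust for your version.
import asyncio

from google.adk.runners import InMemoryRunner
from google.genai import types


async def main() -> None:
    runner = InMemoryRunner(agent=my_agent, app_name="confluence_qa")
    session = await runner.session_service.create_session(
        app_name="confluence_qa", user_id="dev_user"
    )
    message = types.Content(
        role="user",
        parts=[types.Part(text="How do we request VPN access?")],  # example question
    )
    async for event in runner.run_async(
        user_id="dev_user", session_id=session.id, new_message=message
    ):
        if event.is_final_response() and event.content:
            print(event.content.parts[0].text)


asyncio.run(main())
```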
-
Response from ADK Answering Agent (experimental, answer may be inaccurate)

TLDR: You should be able to use your Confluence-based Data Store directly with the ADK's `VertexAiSearchTool`.

Hi @this-dave,

I understand you're facing a challenge using a Vertex AI Data Store created with the Confluence connector in your custom ADK agent. The issue with the serving_config and "Search App" requirement comes from querying the Data Store through the Vertex AI Search API directly. The good news is that when using the Google ADK, you likely don't need to manually create a "Search App" or worry about the serving_config at all. You can directly use your Data Store's ID with this tool.

Here is how you would configure it in your agent's code:

```python
from google.adk.agents import Agent
from google.adk.tools import VertexAiSearchTool

# Replace with the full resource ID of your Confluence Data Store
DATASTORE_ID = "projects/YOUR_PROJECT_ID/locations/global/collections/default_collection/dataStores/YOUR_CONFLUENCE_DATASTORE_ID"

# Your agent definition
my_agent = Agent(
    # ... other agent parameters
    tools=[
        VertexAiSearchTool(data_store_id=DATASTORE_ID)
    ],
)
```

By providing the Data Store's resource ID directly to `VertexAiSearchTool`, the ADK handles the connection to Vertex AI Search for you, with no separate Search App involved. For more details, you can refer to the official documentation on Vertex AI Search grounding [1] and the code example for using `VertexAiSearchTool`.

Regarding your second question, building a custom RAG pipeline is indeed a powerful alternative, but it might not be necessary in your case. The `VertexAiSearchTool` already gives your agent managed retrieval over the content the Confluence connector ingested.

I hope this helps you move forward with your project!

[1] https://google.github.io/adk-docs/grounding/vertex_ai_search_grounding/
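One addendum on the custom-pipeline option, in case you still want to compare it: a very rough outline using the Vertex AI RAG Engine SDK is sketched below. The project, bucket path, corpus name, and query are placeholder assumptions, and the preview `rag` API has changed between SDK versions, so treat this as a starting point rather than a recipe.

```python
# Illustrative sketch of the DIY route: export Confluence pages to Cloud Storage
# yourself, then index and query them with the Vertex AI RAG Engine.
# All names are placeholders; check current SDK signatures before relying on this.
import vertexai
from vertexai.preview import rag

vertexai.init(project="YOUR_PROJECT_ID", location="us-central1")

# 1. Create a corpus and import the exported Confluence documents.
corpus = rag.create_corpus(display_name="confluence-export")
rag.import_files(
    corpus.name,
    paths=["gs://your-bucket/confluence_export/"],  # placeholder GCS prefix
)

# 2. Retrieve relevant chunks for a query.
response = rag.retrieval_query(
    rag_resources=[rag.RagResource(rag_corpus=corpus.name)],
    text="How do we configure SSO?",  # example query
    similarity_top_k=5,
)
for context in response.contexts.contexts:
    print(context.text)
```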
-
Hi all,
I'm trying to figure out the best practice for connecting our Confluence instance to a custom agent built with the Google ADK.
The goal is simple: I want to use the Search tool in my ADK agent to query our internal Confluence knowledge base.
It seemed straightforward at first: I created a Data Store in Vertex AI and used the built-in Confluence connector. This worked perfectly and the data was ingested.
However, I've hit a roadblock trying to use this Data Store. To query it via the API, my agent needs a serving_config, which requires linking the Data Store to a "Search App". The standard "Custom Search" app explicitly rejects my connector-based Data Store. The only path the Google Cloud Console allows is to create a full "Agentspace" app, which is too expensive for our needs (we don't need the full Agentspace product, just the underlying search capability).
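For context, the kind of direct query I was trying to make needs a serving_config resource path, roughly like the sketch below (placeholder IDs, not our exact code):

```python
# Rough sketch of the direct Discovery Engine search call that needs a serving_config.
# The serving_config hangs off a Search App (engine), which is the part the console
# won't let me create for a connector-based Data Store without Agentspace.
from google.cloud import discoveryengine_v1 as discoveryengine

client = discoveryengine.SearchServiceClient()
serving_config = (
    "projects/YOUR_PROJECT_ID/locations/global/collections/default_collection"
    "/engines/YOUR_SEARCH_APP_ID/servingConfigs/default_search"
)
request = discoveryengine.SearchRequest(
    serving_config=serving_config,
    query="How do we configure SSO?",  # example query
    page_size=5,
)
for result in client.search(request):
    print(result.document.name)
```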
So my questions are:
Is there a way to use a Data Store created with a managed connector (like Confluence) for search without being forced to use the expensive Agentspace product tier?
If the answer is no, what is the recommended best practice? Is the only alternative to build our own RAG ingestion from scratch (scripting a Confluence export, etc.) and then using the RAG Engine?
It feels like there's a gap between the simple, managed connectors and the ability to use them in a cost-effective, developer-led way. Any clarification or guidance would be greatly appreciated.
Thanks!