Conversation

jeremykpark

Description

The current version of the Mem0 plugin, located in the packages/aiqtoolkit_mem0ai folder, was built against the Mem0 v1 API. Several of the commands it relies on have been deprecated in Mem0's v2 API, which prevents the plugin from working correctly. This PR proposes the changes to the mem0_editor file needed to support the Mem0 v2 API.
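For illustration, here is the flavor of the call-site change involved, as a minimal sketch assuming the mem0ai Python client's documented v1/v2 signatures (the API key and user id are placeholders):

```python
from mem0 import MemoryClient

client = MemoryClient(api_key="m0-...")  # placeholder API key

# v1 style (now deprecated): scope a search with bare keyword arguments
# results = client.search("What is my favorite color?", user_id="alice")

# v2 style: explicit version flag plus a structured filters document
results = client.search(
    "What is my favorite color?",
    version="v2",
    filters={"AND": [{"user_id": "alice"}]},
)
```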
Additionally, we have added a plugin function, in a separate file, that supports a local installation of Mem0 using Ollama and other local services for local development and testing. The cloud-version Mem0 plugin file remains unmodified, so developers can choose whichever option suits them and still switch to a cloud Mem0 version down the road.
More testing should be done with the cloud version (using memory.py), as I have not tested it extensively with these changes. In addition, mem0_editor.py could be trimmed to reduce verbose commenting, and it may not need all of the field validation it currently includes.
If you wish to test the local version, I have provided a config file here: https://github.com/jeremykpark/aiq-mem0ai-config-samples/tree/main/sample_configs
Closes #384

By submitting this PR I confirm:

  • I am familiar with the Contributing Guidelines.
  • We require that all contributors "sign-off" on their commits. This certifies that the contribution is your original work, or you have rights to submit it under the same license, or a compatible license.
    • Any contribution which contains commits that are not Signed-Off will not be accepted.
  • When the PR is ready for review, new or existing tests cover these changes.
  • When the PR is ready for review, the documentation is up to date with these changes.

copy-pr-bot bot commented Jun 19, 2025

This pull request requires additional validation before any workflows can run on NVIDIA's runners.

Pull request vetters can view their responsibilities here.

Contributors can view more details about this message here.

@yczhang-nv added the improvement (Improvement to existing functionality) and non-breaking (Non-breaking change) labels on Jul 9, 2025
@yczhang-nv
Contributor

/ok to test

copy-pr-bot bot commented Jul 9, 2025

/ok to test

@yczhang-nv, there was an error processing your request: E1

See the following link for more information: https://docs.gha-runners.nvidia.com/cpr/e/1/

@yczhang-nv
Contributor

/ok to test fe7c551

Contributor

@dnandakumar-nv left a comment

The changes to the mem0_editor look great! I have a few questions/recommendations about the memory_local implementation. I'm wondering if we can generalize further by using the Builder to work with any embedder and LLM supported by the toolkit.

```python
vec_store_collection_name: str = "DefaultAIQCollectionNew"
vec_store_url: str = "http://localhost:19530"  # Default Local Milvus URL, change if needed
vec_store_embedding_model_dims: int = 1024  # Updated to match the actual embedding dimensions
llm_provider: str = "ollama"
```

Would these work only for Ollama models? I'm wondering if we can abstract this further to use any LLM provider supported by the NeMo Agent Toolkit.

```python
llm_temperature: float = 0.0
llm_max_tokens: int = 2000
llm_base_url: str = "http://localhost:11434"  # Default Ollama URL, change if needed
embedder_provider: str = "ollama"
```

I'm also wondering whether we can use the embedder interface in the toolkit to support any embedding model the toolkit supports. If we can generalize beyond Ollama, it would be incredibly valuable.
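For reference, the hard-coded fields quoted above map roughly onto mem0's open-source Memory.from_config dictionary. A minimal sketch, assuming mem0's documented Ollama and Milvus providers (the model names are illustrative):

```python
from mem0 import Memory

# Assumes Ollama and Milvus running locally on their default ports;
# model names are illustrative, not prescriptive.
config = {
    "vector_store": {
        "provider": "milvus",
        "config": {
            "collection_name": "DefaultAIQCollectionNew",
            "url": "http://localhost:19530",
            "embedding_model_dims": 1024,
        },
    },
    "llm": {
        "provider": "ollama",
        "config": {
            "model": "llama3.1",
            "temperature": 0.0,
            "max_tokens": 2000,
            "ollama_base_url": "http://localhost:11434",
        },
    },
    "embedder": {
        "provider": "ollama",
        "config": {
            "model": "mxbai-embed-large",
            "ollama_base_url": "http://localhost:11434",
        },
    },
}

memory = Memory.from_config(config)
```

Generalizing would mean deriving the llm and embedder entries from the toolkit's Builder-registered clients rather than hard-coding the Ollama provider.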

@jeremykpark
Author

I think it should, yes. A local NIM implementation of the LLMs and embedder should be achievable too, and would be useful for the upcoming Spark release. I did attempt this on some older RTX hardware, but I ran into some challenges and stuck with Ollama; I suspect that on a Spark it might just work.

It would be good to have a version as well that is set up for models on build.nvidia.com, for those without the hardware who still want to experiment with their own models.
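Something along these lines might work for that, as a rough, untested sketch: it assumes mem0's "openai" provider honors a base-URL override pointed at build.nvidia.com's OpenAI-compatible endpoint, and the model name is illustrative:

```python
# Rough sketch, untested: point mem0's OpenAI-compatible provider at
# build.nvidia.com. Assumes the "openai_base_url" override is honored;
# the API key is a placeholder and the model name is illustrative.
llm_config = {
    "provider": "openai",
    "config": {
        "model": "meta/llama-3.1-70b-instruct",
        "api_key": "nvapi-...",
        "openai_base_url": "https://integrate.api.nvidia.com/v1",
    },
}
```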

@jeremykpark
Author

Let me know if you need anything from me to move this forward. I've been distracted with other projects, but it would be great to have some street cred contributing to NVIDIA AI agents :)

Successfully merging this pull request may close these issues.

[FEA]: Improve Mem0 plugin to work with v2 API + add mem0 Open Source option