RAG on the Edge

Evaluations of several RAG pipelines on multiple datasets and benchmarks

Table of Contents
  1. About The Project
  2. Getting Started
  3. Usage

About The Project

This project evaluates several RAG pipelines. Each pipeline is evaluated with multiple methods, including the rag-mini-wikipedia question-answering dataset and the TARGET benchmark.

(back to top)

Built With

(back to top)

Getting Started

Prerequisites

  • Python 3.1x
  • A Linux machine if you are running the gemma.cpp RAG pipeline

Installation

  1. Install Python packages
    pip install -r requirements.txt
  2. Install the TARGET benchmark from source
    cd target
    pip install -e .

If using the Azure OpenAI API

  1. Enter your credentials in .env (see the loading sketch below the variable list)
MODEL=''
OPENAI_API_BASE=''
OPENAI_API_KEY=''
API_VERSION=''
OPENAI_ORGANIZATION=''
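
For reference, below is a minimal sketch of how these credentials could be loaded and used from Python. It assumes the python-dotenv and openai (>= 1.0) packages are available, and it maps OPENAI_API_BASE to the Azure endpoint and MODEL to the deployment name; these are assumptions for illustration, not necessarily how this repository's scripts wire the variables up.

    # Sketch: load the .env credentials above and create an Azure OpenAI client.
    # Assumes python-dotenv and openai >= 1.0 are installed; variable mapping
    # is an assumption, not necessarily what this repository's scripts do.
    import os
    from dotenv import load_dotenv
    from openai import AzureOpenAI

    load_dotenv()  # reads MODEL, OPENAI_API_BASE, OPENAI_API_KEY, ... from .env

    client = AzureOpenAI(
        azure_endpoint=os.getenv("OPENAI_API_BASE"),
        api_key=os.getenv("OPENAI_API_KEY"),
        api_version=os.getenv("API_VERSION"),
        organization=os.getenv("OPENAI_ORGANIZATION"),
    )

    response = client.chat.completions.create(
        model=os.getenv("MODEL"),  # on Azure this is the deployment name
        messages=[{"role": "user", "content": "What is retrieval-augmented generation?"}],
    )
    print(response.choices[0].message.content)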

(back to top)

Usage

  • To use the gemma.cpp RAG pipeline, see the RAG_GemmaCPP README
  • To use the LlamaIndex RAG pipeline (a sketch of the general pattern follows this list)
    python main.py
  • To use the LLM only for QA tasks
    python llm_query.py
  • To run the TARGET benchmark
    python run_target_benchmark.py
    cd Eval
    python evaluation.py
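
For orientation, the snippet below is a minimal LlamaIndex-style RAG loop (index a folder of documents, then query it) using the llama_index.core API. It illustrates the general pattern only and is not the actual contents of main.py; the docs/ path and the question are placeholders.

    # Sketch of a minimal LlamaIndex RAG loop, not this repository's main.py.
    from llama_index.core import SimpleDirectoryReader, VectorStoreIndex

    # Build a vector index over a local folder of documents ("docs/" is a
    # placeholder path, not one used by this repository).
    documents = SimpleDirectoryReader("docs/").load_data()
    index = VectorStoreIndex.from_documents(documents)

    # Ask a question; by default LlamaIndex uses OpenAI models for embeddings
    # and generation, so an API key must be configured.
    query_engine = index.as_query_engine()
    response = query_engine.query("Who was Abraham Lincoln?")
    print(response)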

An example workflow is: run the LlamaIndex RAG pipeline -> evaluate the results -> visualize the results

(back to top)
