A portfolio-ready AI Research Assistant built with LangChain, Google Gemini API, and FAISS vector database.
Upload multiple PDFs and ask questions — the AI retrieves answers from your documents, with a collapsible “View Sources” feature under each response.
- Multi-PDF upload
- Chat-style interface (like ChatGPT)
- Collapsible and styled “View Sources” with PDF name and page
- RetrievalQA powered by Google Generative AI embeddings (see the sketch after this list)
- Multi-turn conversation with history
- FAISS vectorstore persistence for faster repeated queries
- Clean, professional UI with Gradio
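Under the hood, the retrieval pipeline looks roughly like this. A minimal sketch, assuming the `langchain`, `langchain-community`, and `langchain-google-genai` packages, with placeholder file and model names; the actual wiring lives in `utils.py` and may differ:

```python
import os

from langchain.chains import RetrievalQA
from langchain_community.document_loaders import PyPDFLoader
from langchain_community.vectorstores import FAISS
from langchain_google_genai import ChatGoogleGenerativeAI, GoogleGenerativeAIEmbeddings
from langchain_text_splitters import RecursiveCharacterTextSplitter

# Assumes GEMINI_API_KEY is already set (see the .env step in the setup below).
api_key = os.environ["GEMINI_API_KEY"]
embeddings = GoogleGenerativeAIEmbeddings(model="models/embedding-001", google_api_key=api_key)

# Load and chunk the uploaded PDFs; each chunk keeps its source file and page metadata.
docs = []
for path in ["paper1.pdf", "paper2.pdf"]:  # placeholder names
    docs.extend(PyPDFLoader(path).load())
chunks = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100).split_documents(docs)

# Build the FAISS index and persist it so repeated queries skip re-embedding.
vectorstore = FAISS.from_documents(chunks, embeddings)
vectorstore.save_local("vectorstore")

# RetrievalQA answers from the retrieved chunks and returns the source documents
# that power the "View Sources" panel.
qa = RetrievalQA.from_chain_type(
    llm=ChatGoogleGenerativeAI(model="gemini-1.5-flash", google_api_key=api_key),
    retriever=vectorstore.as_retriever(),
    return_source_documents=True,
)
result = qa.invoke({"query": "What problem does the paper address?"})
print(result["result"])
for doc in result["source_documents"]:
    print(doc.metadata.get("source"), doc.metadata.get("page"))
```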
- Clone the repository

```bash
git clone https://github.com/your-username/AI-Research-Assistant.git
cd AI-Research-Assistant
```

- Create and activate a virtual environment

```bash
python -m venv .venv
source .venv/bin/activate   # Linux / macOS
.venv\Scripts\activate      # Windows
```

- Install dependencies

```bash
pip install -r requirements.txt
```

- Set up Google Application Default Credentials (ADC)
Follow the Google Cloud ADC setup guide (for example, `gcloud auth application-default login`).
Make sure you have a `.env` file in the project root with your API key if required:
```
GEMINI_API_KEY=your_google_gemini_api_key_here
```
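If the key lives in `.env`, the app can load it at startup. A minimal sketch, assuming the `python-dotenv` package:

```python
import os

from dotenv import load_dotenv

load_dotenv()  # reads .env from the project root into the environment

api_key = os.getenv("GEMINI_API_KEY")
if not api_key:
    raise RuntimeError("GEMINI_API_KEY is not set; add it to .env or export it")
```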
- Run the app

```bash
python main.py
```

- Upload one or more PDFs.
- Ask questions in the chat.
- Click View Sources under an answer to see which PDF and page it came from (one way to build that block is sketched after this list).
- Clear chat with the Clear Chat button.
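The collapsible block can be built by appending an HTML `<details>` element to each answer; Gradio's Chatbot can render such markup inline (depending on version and sanitization settings). A sketch with a hypothetical helper name, not necessarily how `main.py` formats its sources:

```python
def format_sources(source_documents):
    """Render retrieved documents as a collapsible 'View Sources' block."""
    items = "".join(
        # PyPDFLoader page numbers are zero-based, hence the +1.
        f"<li>{doc.metadata.get('source', 'unknown')} "
        f"(page {doc.metadata.get('page', 0) + 1})</li>"
        for doc in source_documents
    )
    return f"<details><summary>View Sources</summary><ul>{items}</ul></details>"

# Appended under each answer before it is pushed into the chat history:
# reply = result["result"] + "\n\n" + format_sources(result["source_documents"])
```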
AI-Research-Assistant/
├── main.py # Gradio app + Chat interface
├── utils.py # Document loading, embeddings, vectorstore
├── requirements.txt # Dependencies
├── .env # API keys (ignored in git)
├── vectorstore/ # FAISS vector database
└── .venv/ # Python virtual environment
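For orientation, here is a stripped-down sketch of how `main.py` might wire the interface together; the component layout and the `answer` stub are assumptions, not the project's actual code:

```python
import gradio as gr

def answer(message, history):
    # Stub: the real app runs the RetrievalQA chain here and appends
    # the collapsible "View Sources" block to the reply.
    return f"(stub answer for: {message})"

with gr.Blocks(title="AI Research Assistant") as demo:
    # The real app would also hook pdfs.upload to build the FAISS index.
    pdfs = gr.File(label="Upload PDFs", file_count="multiple", file_types=[".pdf"])
    chatbot = gr.Chatbot(label="Chat")
    question = gr.Textbox(placeholder="Ask a question about your documents")
    clear = gr.Button("Clear Chat")

    def respond(message, history):
        return "", history + [(message, answer(message, history))]

    question.submit(respond, [question, chatbot], [question, chatbot])
    clear.click(lambda: [], None, chatbot)  # reset the conversation

if __name__ == "__main__":
    demo.launch()
```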
- Do not commit `.env` or `.venv`; both are included in `.gitignore`.
- The `vectorstore/` folder contains embeddings and is also ignored.
- Deserializing the FAISS index is safe for data you generated yourself, but never load an index from an untrusted source (see the sketch below).
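In LangChain this is an explicit opt-in when the persisted index is reloaded. A sketch, assuming `langchain-community`:

```python
from langchain_community.vectorstores import FAISS
from langchain_google_genai import GoogleGenerativeAIEmbeddings

embeddings = GoogleGenerativeAIEmbeddings(model="models/embedding-001")

# FAISS indexes are persisted with pickle, so LangChain requires this flag;
# enable it only for indexes you created yourself.
vectorstore = FAISS.load_local(
    "vectorstore",
    embeddings,
    allow_dangerous_deserialization=True,
)
```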
MIT License.