The final project is worth 100% of the grade and will be evaluated during the showcase. Each category includes clear expectations to ensure fairness and transparency.
Category | Weight | Criteria |
---|---|---|
Functionality | 50% | - The bot operates without errors and fulfills its intended purpose.<br>- Accurate integration of RAG: the bot retrieves relevant external data and effectively grounds its responses in it (see the sketch below).<br>- Smooth interactions (clear responses, low latency, logical outputs). |
Innovation & Creativity | 25% | - Unique or creative application of the bot for a specific subject or user group.<br>- Effective use of Gemini and embedding techniques to tailor the project.<br>- Demonstrates originality in features or design. |
Presentation Quality | 25% | - Clear and structured presentation explaining the bot’s goals, design choices, and technical implementation.<br>- Demonstrates a solid understanding of RAG principles and embedding models.<br>- Engages the audience with visuals (e.g., slides, live demo, or flowcharts) and answers questions effectively. |
Additional expectations:
- Collaboration: Each group member’s contribution should be visible. Teams will be asked to reflect on their collaboration during the presentation.
- Real-World Relevance: Projects that address a real-world need or practical problem will be valued.
- Documentation: Groups should submit basic documentation (e.g., a README file or usage guide) for their bot.
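For groups unsure where to start, here is a minimal sketch of the RAG pattern the Functionality criterion describes: embed a small external corpus, retrieve the passage most similar to the user’s question, and ground the Gemini response in it. It is an illustration, not a reference implementation; it assumes the `google-generativeai` Python SDK, a `GOOGLE_API_KEY` environment variable, and example model names (`text-embedding-004`, `gemini-1.5-flash`), and the toy corpus is a placeholder for your own data source.

```python
"""Minimal RAG sketch (illustration only; swap in your own data and models)."""
import os

import numpy as np
import google.generativeai as genai

# Assumes GOOGLE_API_KEY is set in the environment.
genai.configure(api_key=os.environ["GOOGLE_API_KEY"])

# Toy "external data source": in a real project this would be your
# course notes, documentation, FAQ, etc.
DOCUMENTS = [
    "Office hours are held Tuesdays from 14:00 to 16:00 in room B12.",
    "The final project showcase takes place during the last week of class.",
    "Groups of two to four students are allowed for the final project.",
]


def embed(text: str, task_type: str) -> np.ndarray:
    """Embed one string with a Gemini embedding model (example model name)."""
    result = genai.embed_content(
        model="models/text-embedding-004",
        content=text,
        task_type=task_type,
    )
    return np.array(result["embedding"])


# Embed the corpus once; queries are embedded per request.
doc_vectors = [embed(doc, "retrieval_document") for doc in DOCUMENTS]


def answer(question: str) -> str:
    """Retrieve the most similar passage and ground the response in it."""
    query_vec = embed(question, "retrieval_query")
    # Cosine similarity between the query and each document vector.
    sims = [
        float(np.dot(query_vec, d) / (np.linalg.norm(query_vec) * np.linalg.norm(d)))
        for d in doc_vectors
    ]
    best_doc = DOCUMENTS[int(np.argmax(sims))]
    model = genai.GenerativeModel("gemini-1.5-flash")  # example model name
    prompt = (
        "Answer the question using only the context below.\n"
        f"Context: {best_doc}\n"
        f"Question: {question}"
    )
    return model.generate_content(prompt).text


if __name__ == "__main__":
    print(answer("When is the project showcase?"))
```

A real project would typically replace the brute-force cosine loop with a vector store and add error handling, but this covers the retrieve-then-ground flow the rubric rewards.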
Sample evaluation:
Functionality (50%):
- Retrieval-based answers show precision: 20/20
- External data integration is seamless: 15/15
- Minimal errors during the demo: 12/15
Total: 47/50
Innovation & Creativity (25%):
- Subject-specific adaptation: 10/10
- Unique problem-solving feature: 7/10
- Impressive design aesthetics: 6/5
Total: 23/25
Presentation Quality (25%):
- Structured explanation of RAG principles: 10/10
- Clear and concise delivery: 7/10
- Engaging visuals and demo: 6/5
Total: 23/25
Final Score: 93/100