Building LLM Applications from Scratch

🚀 Build LLM-Powered Applications Like a Pro!

Welcome to the open-sourced version of my course on LLMs.

This course is one of the top-rated technical courses on building Large Language Model (LLM) applications from the ground up. So far, I’ve taught it to over 1,500 professionals at Maven, Stanford, UCLA, and the University of Minnesota, helping them gain a deep understanding of the Transformer architecture, Retrieval-Augmented Generation (RAG), and open-source LLM deployment.

Unlike most courses that focus on pre-built frameworks like LangChain, this course goes beyond by diving into the building blocks of retrieval systems, enabling you to design, build, and deploy your own custom LLM-powered solutions.

🌎 Also featured in Stanford's AI Leadership Series:
🔗 Stanford AI Leadership Series - Building and Scaling AI Solutions


📌 Learning Outcomes

  • Gain a comprehensive understanding of LLM architecture
  • Construct and deploy real-world applications using LLMs
  • Learn the fundamentals of search and retrieval for AI applications
  • Understand encoder and decoder models at a deep level
  • Train, fine-tune, and deploy LLMs for enterprise use cases
  • Implement RAG-based architectures with open-source models

📢 Who is This Course For?

This course is not for beginners. It requires:

  • Python programming skills
  • Basic machine learning knowledge

It is designed for:
🔹 Machine Learning Engineers
🔹 Data Scientists
🔹 AI Researchers
🔹 Software Engineers interested in LLMs


📌 What You’ll Learn

  • Collect and preprocess data for LLM applications
  • Train and fine-tune pre-trained LLMs for specific tasks
  • Evaluate model performance with appropriate metrics
  • Deploy LLM applications via APIs and Hugging Face
  • Address ethical concerns in AI development


📚 What’s Included?

  • 29 in-depth lessons covering LLM architectures and RAG techniques
  • 6 real-world projects to apply your learnings
  • Interactive live sessions and direct instructor access
  • Guided feedback & reflection
  • Private community of peers
  • Certificate upon completion


📢 Attribution & Credits

If you use my course material, content, or research in your work, please credit me and the respective contributors.

🔹 Proper citation format:

Farooq, H. (2024). Building LLM Applications from Scratch
Stanford Continuing Studies: The AI Leadership Series

📌 Tagging & mentions are always appreciated! 😊

📅 Course Syllabus

Week 1: Introduction to NLP

  • Understanding natural language processing fundamentals
  • Tokenization, embeddings, and vector representations
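To give a flavor of the Week 1 topics, here is a minimal, self-contained sketch of tokenization and embedding lookup. It is illustrative only (not the course's code): it uses naive whitespace tokenization and random vectors, whereas real systems use learned subword tokenizers and trained embeddings.

```python
# Toy illustration: tokenize a sentence, build a vocabulary,
# and look up one dense vector per token id.
import random

corpus = "the cat sat on the mat"
tokens = corpus.split()  # naive whitespace tokenization

# assign an integer id to each unique word, in order of first appearance
vocab = {word: i for i, word in enumerate(dict.fromkeys(tokens))}

random.seed(0)
dim = 4  # tiny embedding dimension for illustration
embeddings = {i: [random.random() for _ in range(dim)] for i in vocab.values()}

token_ids = [vocab[w] for w in tokens]       # repeated "the" reuses id 0
vectors = [embeddings[t] for t in token_ids]  # one vector per token
```

Note that both occurrences of "the" map to the same id, and hence the same vector, which is exactly what a real embedding table does.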

Week 2: Transformers & LLM System Design

  • The evolution of Transformer models
  • Understanding encoder-decoder architectures

Week 3: Semantic Search & Retrieval

  • Implementing vector search for LLM applications
  • Introduction to RAG-based architectures
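The retrieval step at the heart of Week 3 can be sketched in a few lines: embed documents and a query as vectors, then rank documents by cosine similarity. This is a hedged illustration with hand-made vectors, not the course's implementation; production systems use learned embeddings and approximate nearest-neighbor indexes.

```python
# Minimal vector search: rank documents by cosine similarity to a query.
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# toy document embeddings (in practice these come from an encoder model)
documents = {
    "doc1": [1.0, 0.0, 0.5],
    "doc2": [0.9, 0.1, 0.4],
    "doc3": [0.0, 1.0, 0.0],
}
query = [1.0, 0.0, 0.5]

# highest similarity first
ranked = sorted(documents, key=lambda d: cosine(query, documents[d]), reverse=True)
```

In a RAG pipeline, the top-ranked documents would then be passed to the LLM as context for generation.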

Week 4: Building a Search Engine from Scratch

  • Developing a custom RAG solution
  • Optimizing search and retrieval pipelines

Week 5: The Generation Part of LLMs

  • Fine-tuning models for text generation tasks
  • Optimizing inference for real-time applications

Week 6: Prompt-Tuning, Fine-Tuning & Local LLMs

  • Techniques for efficient inference & quantization
  • Deploying custom LLMs at scale
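The quantization idea from Week 6 can be sketched with symmetric int8 quantization: scale weights so the largest magnitude maps to 127, round to integers, and rescale on the way back. This is a simplified illustration (the function names are mine, not the course's); real quantization schemes add per-channel scales, zero points, and calibration.

```python
# Symmetric int8 quantization: compress floats to small integers plus a scale.
def quantize(weights, bits=8):
    qmax = 2 ** (bits - 1) - 1                 # 127 for int8
    scale = max(abs(w) for w in weights) / qmax
    q = [round(w / scale) for w in weights]    # integers in [-127, 127]
    return q, scale

def dequantize(q, scale):
    return [x * scale for x in q]

weights = [0.12, -0.5, 0.33, 0.9]
q, scale = quantize(weights)
restored = dequantize(q, scale)                # close to the originals
```

The rounding error is bounded by the scale factor, which is why quantized LLMs trade a small accuracy loss for a large reduction in memory and inference cost.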

🎉 Post-Course: Demo Day – Present your final project!


What Students Are Saying

"This course was amazing! I left feeling empowered and ready to build my own LLM-powered applications."
– Tiffany Teasley, Data Scientist

"Hamza’s approach to teaching is practical and engaging. The real-world projects made all the difference!"
– Victor Calderon, Senior ML Engineer

"One of the best courses for LLM applications! Highly recommended for anyone serious about the field."
– Abhinav, Security Researcher


🔥 Why Take This Course?

Unlike most AI courses that rely on pre-built frameworks, this course teaches you how to build LLM applications from scratch—without LangChain or LlamaIndex.

By the end, you’ll be able to:
✅ Build highly customizable LLM applications
✅ Optimize retrieval and search strategies
✅ Deploy cost-efficient and scalable AI solutions


About

Code and Slides
