This repository provides comprehensive resources for working with generative AI models using Amazon SageMaker and Amazon Bedrock. Whether you're looking to fine-tune foundation models, build retrieval-augmented generation (RAG) applications, create agents, or implement responsible AI practices, you'll find practical examples and workshops here. The workshops and examples cover how to:
- Build experimental RAG applications
- Implement RAG with SageMaker and OpenSearch
- Fine-tune embedding models
- Customize models with RAFT (Retrieval-Augmented Fine-Tuning)
- Apply guardrails to LLM outputs
- Use SageMaker JumpStart for fine-tuning
- Implement distributed training with FSDP and QLoRA
- Deploy models on SageMaker HyperPod with Kubernetes
- Run basic inference with Bedrock and SageMaker (a minimal sketch follows this list)
- Implement tool calling capabilities (also sketched after this list)
- Build agent patterns (autonomous, orchestrator-worker, etc.)
- Use agent frameworks (LangGraph, CrewAI, Strands, etc.)
- Add observability with Langfuse and MLflow
- Set up a Foundation Model Playground
- Customize foundation models
- Evaluate models with LightEval
- Implement responsible AI with Bedrock Guardrails (see the guardrail sketch after this list)
- Develop FMOps fine-tuning workflows with SageMaker Pipelines
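As a taste of the basic-inference material, here is a minimal sketch that calls a Bedrock model through the Converse API and a SageMaker real-time endpoint through the runtime client. The region, model ID, endpoint name, and payload format are assumptions for illustration; the workshops deploy and use their own models and endpoints.

```python
import json

import boto3

REGION = "us-east-1"  # assumption: use the region where you enabled Bedrock

# --- Amazon Bedrock: Converse API ---
bedrock_runtime = boto3.client("bedrock-runtime", region_name=REGION)
response = bedrock_runtime.converse(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",  # assumption: any Converse-capable model you have access to
    messages=[{"role": "user", "content": [{"text": "Summarize what Amazon SageMaker does in one sentence."}]}],
    inferenceConfig={"maxTokens": 256, "temperature": 0.2},
)
print(response["output"]["message"]["content"][0]["text"])

# --- Amazon SageMaker: real-time endpoint ---
sagemaker_runtime = boto3.client("sagemaker-runtime", region_name=REGION)
payload = {"inputs": "Summarize what Amazon Bedrock does in one sentence."}  # payload format depends on the deployed container
result = sagemaker_runtime.invoke_endpoint(
    EndpointName="my-llm-endpoint",  # hypothetical endpoint name; deploy one in the workshops first
    ContentType="application/json",
    Body=json.dumps(payload),
)
print(result["Body"].read().decode("utf-8"))
```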
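Tool calling with the Bedrock Converse API, in outline: you describe a tool with a JSON schema, and when the model decides to use it the response carries a `toolUse` block instead of plain text. The tool name and schema below are hypothetical; the agent workshops show the full request/response loop.

```python
import boto3

bedrock_runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

# Hypothetical tool definition: the model only ever sees this schema, not your code.
tool_config = {
    "tools": [
        {
            "toolSpec": {
                "name": "get_weather",
                "description": "Get the current weather for a city.",
                "inputSchema": {
                    "json": {
                        "type": "object",
                        "properties": {"city": {"type": "string"}},
                        "required": ["city"],
                    }
                },
            }
        }
    ]
}

response = bedrock_runtime.converse(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",  # assumption
    messages=[{"role": "user", "content": [{"text": "What's the weather in Seattle?"}]}],
    toolConfig=tool_config,
)

# If the model chose the tool, stopReason is "tool_use" and the content
# includes a toolUse block with the arguments to pass to your function.
if response["stopReason"] == "tool_use":
    for block in response["output"]["message"]["content"]:
        if "toolUse" in block:
            print(block["toolUse"]["name"], block["toolUse"]["input"])
```

In a full loop you would execute the tool locally and send the result back as a `toolResult` content block in a follow-up user message; the agent workshops and frameworks such as LangGraph, CrewAI, and Strands build on this pattern.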
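For the responsible-AI material, Bedrock Guardrails can screen text independently of any model call through the ApplyGuardrail API. The guardrail ID and version below are placeholders you would replace with a guardrail created in your own account.

```python
import boto3

bedrock_runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

response = bedrock_runtime.apply_guardrail(
    guardrailIdentifier="gr-1234567890ab",  # placeholder: your guardrail ID
    guardrailVersion="1",                   # placeholder: your guardrail version
    source="OUTPUT",                        # screen model output (use "INPUT" for prompts)
    content=[{"text": {"text": "Some model output to check before returning it to the user."}}],
)

# "GUARDRAIL_INTERVENED" means the guardrail blocked or masked content.
print(response["action"])
print(response["outputs"])
```

The same guardrail can also be attached to a Converse or InvokeModel call through its guardrailConfig parameter.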
To get started:
- Clone this repository
- Navigate to the workshop of your choice
- Follow the instructions in the workshop's README.md file
- Each workshop contains Jupyter notebooks that guide you through the process
Prerequisites:
- AWS account with appropriate permissions
- Basic understanding of machine learning concepts
- Familiarity with Python programming
- Access to Amazon SageMaker and Amazon Bedrock services
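A quick way to confirm the last prerequisite from a notebook or terminal: the minimal sketch below only verifies that your credentials resolve and that the Bedrock and SageMaker APIs are reachable in your chosen region (assumed here to be us-east-1).

```python
import boto3

REGION = "us-east-1"  # assumption: use the region you plan to work in

# Do the credentials resolve to an account?
print("Account:", boto3.client("sts").get_caller_identity()["Account"])

# Is Bedrock reachable? (Model access is still granted per model in the Bedrock console.)
bedrock = boto3.client("bedrock", region_name=REGION)
print("Bedrock models visible:", len(bedrock.list_foundation_models()["modelSummaries"]))

# Is the SageMaker API reachable?
sagemaker = boto3.client("sagemaker", region_name=REGION)
sagemaker.list_endpoints(MaxResults=1)
print("SageMaker API reachable")
```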
We welcome contributions! Please see CONTRIBUTING for details on how to submit pull requests, report issues, or suggest improvements.
See CONTRIBUTING for more information about reporting security issues.
This library is licensed under the MIT-0 License. See the LICENSE file for details.