A repository for experiments and research on Applied AI, housing a variety of Triton and CUDA kernels for training and inference.
Note: inference kernels do not support a backward pass.
1 - Triton - MoE (Mixtral) GEMM for accelerating inference. Uses a column-major access pattern over the output tile grid to improve data locality.
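The column-major idea can be sketched without a GPU: in a tiled GEMM, the launch order of program IDs over the output tile grid decides which weight tiles consecutive programs touch. Walking the grid column-major means consecutive programs share the same column tile of the expert weight matrix B, so it stays hot in cache. The helper names below are hypothetical illustrations, not the repo's actual kernel code, which is written in Triton.

```python
# Sketch (hypothetical names): mapping a flat program id `pid` to output-tile
# coordinates (pid_m, pid_n) under two launch orders.

def tile_coords_row_major(pid, num_pid_n):
    # Row-major: consecutive pids sweep across a row of output tiles,
    # so each program loads a *different* column tile of B.
    return pid // num_pid_n, pid % num_pid_n

def tile_coords_col_major(pid, num_pid_m):
    # Column-major: consecutive pids walk down a column of output tiles,
    # so they all reuse the *same* column tile of B (the expert weights),
    # which is the locality win for MoE inference GEMM.
    return pid % num_pid_m, pid // num_pid_m

if __name__ == "__main__":
    # With a 4x4 tile grid, the first four column-major programs all
    # share pid_n == 0, i.e. the same B tile.
    print([tile_coords_row_major(p, 4) for p in range(4)])
    print([tile_coords_col_major(p, 4) for p in range(4)])
```

In the actual Triton kernel the same remapping is applied to `tl.program_id(0)` before computing tile offsets; only the iteration order changes, not the math.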


- CUDA Mode - A reading group for learning CUDA programming - (Discord, lecture materials, lecture recordings)
- llama-recipes - Recipes for fine-tuning and inference for the Llama model series
- NeurIPS'23 LLM Efficiency Challenge - 1LLM + 1GPU + 1Day competition - (website, code, NeurIPS Workshop recordings)
- PyTorch 2: Faster Machine Learning Through Dynamic Python Bytecode Transformation and Graph Compilation paper
- Accelerating a Triton Fused Kernel for W4A16 Quantized Inference with SplitK Work Decomposition paper
- PyTorch FSDP: Experiences on Scaling Fully Sharded Data Parallel paper
- Sustainable AI: Environmental Implications, Challenges and Opportunities paper
The applied-ai repo is released under the BSD 3-Clause license.