AI Engineer Core Track: LLM Engineering, RAG, QLoRA, Agents

English | MP4 | AVC 1920×1080 | AAC 44.1 kHz 2ch | 432 lectures (58h 31m) | 52.74 GB

Become an LLM Engineer in 8 weeks: Build and deploy 8 LLM apps, mastering Generative AI, RAG, LoRA and AI Agents.

Mastering Generative AI and LLMs: An 8-Week Hands-On Journey

Accelerate your career in AI with practical, real-world projects led by industry veteran Ed Donner. Build advanced Generative AI products, experiment with over 20 groundbreaking models, and master state-of-the-art techniques like RAG, QLoRA, and Agents.

What you’ll learn

  • Build advanced Generative AI products using cutting-edge models and frameworks.
  • Experiment with over 20 groundbreaking AI models, including Frontier and Open-Source models.
  • Develop proficiency with platforms like HuggingFace, LangChain, and Gradio.
  • Implement state-of-the-art techniques such as RAG (Retrieval-Augmented Generation), QLoRA fine-tuning, and Agents.
  • Create real-world AI applications, including:
      • A multi-modal customer support assistant that interacts with text, sound, and images.
      • An AI knowledge worker that can answer any question about a company based on its shared drive.
      • An AI programmer that optimizes software, achieving performance improvements of over 60,000 times.
      • An ecommerce application that accurately predicts prices of unseen products.
  • Transition from inference to training, fine-tuning both Frontier and Open-Source models.
  • Deploy AI products to production with polished user interfaces and advanced capabilities.
  • Level up your AI and LLM engineering skills to be at the forefront of the industry.

About the Instructor

I’m Ed Donner, an entrepreneur and leader in AI and technology with over 20 years of experience. I’ve co-founded and sold my own AI startup, started a second one, and led teams in top-tier financial institutions and startups around the world. I’m passionate about bringing others into this exciting field and helping them become experts at the forefront of the industry.

Projects:

  • Project 1: AI-powered brochure generator that scrapes and navigates company websites intelligently.
  • Project 2: Multi-modal customer support agent for an airline with UI and function-calling.
  • Project 3: Tool that creates meeting minutes and action items from audio using both open- and closed-source models.
  • Project 4: AI that converts Python code to optimized C++, boosting performance by 60,000x!
  • Project 5: AI knowledge-worker using RAG to become an expert on all company-related matters.
  • Project 6: Capstone Part A – Predict product prices from short descriptions using Frontier models.
  • Project 7: Capstone Part B – Fine-tuned open-source model to compete with Frontier in price prediction.
  • Project 8: Capstone Part C – Autonomous agent system collaborating with models to spot deals and notify you of special bargains.

Why This Course?

  • Hands-On Learning: The best way to learn is by doing. You’ll engage in practical exercises, building real-world AI applications that deliver stunning results.
  • Cutting-Edge Techniques: Stay ahead of the curve by learning the latest frameworks and techniques, including RAG, QLoRA, and Agents.
  • Accessible Content: Designed for learners at all levels. Step-by-step instructions, practical exercises, cheat sheets, and plenty of resources are provided.
  • No Advanced Math Required: The course focuses on practical application. No calculus or linear algebra is needed to master LLM engineering.

Who this course is for:

  • Aspiring AI engineers and data scientists eager to break into the field of Generative AI and LLMs.
  • Professionals looking to upskill and stay competitive in the rapidly evolving AI landscape.
  • Developers interested in building advanced AI applications with practical, hands-on experience.
  • Individuals seeking a career transition or aiming to boost their productivity with LLM-powered tools and frameworks.

Table of Contents

NEW Week 1 – Build Your First LLM Product Exploring Top Models
1 Day 1 – Running Your First LLM Locally with Ollama and Open Source Models
2 Day 1 – Spanish Tutor Demo with Open-Source Models & Course Overview
3 Day 1 – Setting Up Your LLM Development Environment with Cursor and UV
4 Day 1 – Setting Up Your PC Development Environment with Git and Cursor
5 Day 1 – Mac Setup Installing Git, Cloning the Repo, and Cursor IDE
6 Day 1 – Installing UV and Setting Up Your Cursor Development Environment
7 Day 1 – Setting Up Your OpenAI API Key and Environment Variables
8 Day 1 – Installing Cursor Extensions and Setting Up Your Jupyter Notebook
9 Day 1 – Running Your First OpenAI API Call and System vs User Prompts
10 Day 1 – Building a Website Summarizer with OpenAI Chat Completions API
11 Day 1 – Hands-On Exercise Building Your First OpenAI API Call from Scratch
12 Day 2 – LLM Engineering Building Blocks Models, Tools & Techniques
13 Day 2 – Your 8-Week Journey From Chat Completions API to LLM Engineer
14 Day 2 – Frontier Models OpenAI GPT, Claude, Gemini & Grok Compared
15 Day 2 – Open-Source LLMs LLaMA, Mistral, DeepSeek, and Ollama
16 Day 2 – Chat Completions API HTTP Endpoints vs OpenAI Python Client
17 Day 2 – Using the OpenAI Python Client with Multiple LLM Providers
18 Day 2 – Running Ollama Locally with OpenAI-Compatible Endpoints
19 Day 3 – Base, Chat, and Reasoning Models Understanding LLM Types
20 Day 3 – Frontier Models GPT, Claude, Gemini & Their Strengths and Pitfalls
21 Day 3 – Testing ChatGPT-5 and Frontier LLMs Through the Web UI
22 Day 3 – Testing Claude, Gemini, Grok & DeepSeek with ChatGPT Deep Research
23 Day 3 – Agentic AI in Action Deep Research, Claude Code, and Agent Mode
24 Day 3 – Frontier Models Showdown Building an LLM Competition Game
25 Day 4 – Understanding Transformers The Architecture Behind GPT and LLMs
26 Day 4 – From LSTMs to Transformers Attention, Emergent Intelligence & Agentic AI
27 Day 4 – Parameters From Millions to Trillions in GPT, LLaMA & DeepSeek
28 Day 4 – What Are Tokens From Characters to GPT’s Tokenizer
29 Day 4 – Understanding Tokenization How GPT Breaks Down Text into Tokens
30 Day 4 – Tokenizing with tiktoken and Understanding the Illusion of Memory
31 Day 4 – Context Windows, API Costs, and Token Limits in LLMs
32 Day 5 – Building a Sales Brochure Generator with OpenAI Chat Completions API
33 Day 5 – Building JSON Prompts and Using OpenAI’s Chat Completions API
34 Day 5 – Chaining GPT Calls Building an AI Company Brochure Generator
35 Day 5 – Building a Brochure Generator with GPT-4 and Streaming Results
36 Day 5 – Business Applications, Challenges & Building Your AI Tutor
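The Week 1 lectures build up to calling the OpenAI Chat Completions API with system and user prompts. As a minimal sketch of the request body those lectures assemble (standard library only, no live API call; the model name is a placeholder for whichever model you use):

```python
import json

def build_chat_request(system_prompt: str, user_prompt: str,
                       model: str = "gpt-4o-mini") -> str:
    """Assemble the JSON body for a Chat Completions call.

    The 'system' message sets the assistant's behaviour; the 'user'
    message carries the actual task.
    """
    payload = {
        "model": model,
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_prompt},
        ],
    }
    return json.dumps(payload)

body = build_chat_request(
    "You are a helpful assistant that summarizes websites.",
    "Summarize the landing page text that follows...",
)
print(json.loads(body)["messages"][0]["role"])  # system
```

The website summarizer in Day 1 sends exactly this shape of payload; only the prompts and the transport (the OpenAI Python client) differ.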

NEW Week 2 – Build a Multi-Modal Chatbot LLMs, Gradio UI, and Agents
37 Day 1 – Connecting to Multiple Frontier Models with APIs (OpenAI, Claude, Gemini)
38 Day 1 – Testing GPT-5 Models with Reasoning Effort and Scaling Puzzles
39 Day 1 – Testing Claude, GPT-5, Gemini & DeepSeek on Brain Teasers
40 Day 1 – Local Models with Ollama, Native APIs, and OpenRouter Integration
41 Day 1 – LangChain vs LiteLLM Choosing the Right LLM Framework
42 Day 1 – LLM vs LLM Building Multi-Model Conversations with OpenAI & Claude
43 Day 2 – Building Data Science UIs with Gradio (No Front-End Skills Required)
44 Day 2 – Building Your First Gradio Interface with Callbacks and Sharing
45 Day 2 – Building Gradio Interfaces with Authentication and GPT Integration
46 Day 2 – Markdown Responses and Streaming with Gradio and OpenAI
47 Day 2 – Building Multi-Model Gradio UIs with GPT and Claude Streaming
48 Day 3 – Building Chat UIs with Gradio Your First Conversational AI Assistant
49 Day 3 – Building a Streaming Chatbot with Gradio and OpenAI API
50 Day 3 – System Prompts, Multi-Shot Prompting, and Your First Look at RAG
51 Day 4 – How LLM Tool Calling Really Works (No Magic, Just Prompts)
52 Day 4 – Common Use Cases for LLM Tools and Agentic AI Workflows
53 Day 4 – Building an Airline AI Assistant with Tool Calling in OpenAI and Gradio
54 Day 4 – Handling Multiple Tool Calls with OpenAI and Gradio
55 Day 4 – Building Tool Calling with SQLite Database Integration
56 Day 5 – Introduction to Agentic AI and Building Multi-Tool Workflows
57 Day 5 – How Gradio Works Building Web UIs from Python Code
58 Day 5 – Building Multi-Modal Apps with DALL-E 3, Text-to-Speech, and Gradio Blocks
59 Day 5 – Running Your Multimodal AI Assistant with Gradio and Tools
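Week 2's tool-calling lectures stress that there is no magic: the model emits a tool name plus JSON-encoded arguments, and your code runs the matching Python function and sends the result back. A stdlib-only sketch of that dispatch step, with a hypothetical `get_ticket_price` tool standing in for the airline assistant's real tools:

```python
import json

# Hypothetical tool the assistant can call; name, prices, and schema
# are illustrative, not the course's exact implementation.
def get_ticket_price(destination: str) -> str:
    prices = {"london": "$799", "paris": "$899", "tokyo": "$1400"}
    return prices.get(destination.lower(), "unknown")

TOOLS = {"get_ticket_price": get_ticket_price}

def handle_tool_call(tool_name: str, arguments_json: str) -> str:
    """Dispatch a model-issued tool call: parse the JSON arguments,
    run the matching Python function, and return the result as a
    string to send back to the model in a 'tool' role message."""
    args = json.loads(arguments_json)
    return TOOLS[tool_name](**args)

print(handle_tool_call("get_ticket_price", '{"destination": "Paris"}'))  # $899
```

Multiple tool calls in one model turn (Day 4) just loop this dispatch over each call in the response.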

NEW Week 3 – Open-Source Gen AI Automated Solutions with HuggingFace
60 Day 1 – Introduction to Hugging Face Platform Models, Datasets, and Spaces
61 Day 1 – HuggingFace Libraries Transformers, Datasets, and Hub Explained
62 Day 1 – Introduction to Google Colab and Cloud GPUs for AI Development
63 Day 1 – Getting Started with Google Colab Setup, Runtime, and Free GPU Access
64 Day 1 – Setting Up Google Colab with Hugging Face and Running Your First Model
65 Day 1 – Running Stable Diffusion and FLUX on Google Colab GPUs
66 Day 2 – Introduction to Hugging Face Pipelines for Quick AI Inference
67 Day 2 – HuggingFace Pipelines API for Sentiment Analysis on Colab T4 GPU
68 Day 2 – Named Entity Recognition, Q&A, and Hugging Face Pipeline Tasks
69 Day 2 – Hugging Face Pipelines Image, Audio & Diffusion Models in Colab
70 Day 3 – Tokenizers How LLMs Convert Text to Numbers
71 Day 3 – Tokenizers in Action Encoding and Decoding with Llama 3.1
72 Day 3 – How Chat Templates Work LLaMA Tokenizers and Special Tokens
73 Day 3 – Comparing Tokenizers Phi-4, DeepSeek, and QWENCoder in Action
74 Day 4 – Deep Dive into Transformers, Quantization, and Neural Networks
75 Day 4 – Working with Hugging Face Transformers Low-Level API and Quantization
76 Day 4 – Inside LLaMA PyTorch Model Architecture and Token Embeddings
77 Day 4 – Inside LLaMA Decoder Layers, Attention, and Why Non-Linearity Matters
78 Day 4 – Running Open Source LLMs Phi, Gemma, Qwen & DeepSeek with Hugging Face
79 Day 5 – Visualizing Token-by-Token Inference in GPT Models
80 Day 5 – Building Meeting Minutes from Audio with Whisper and Google Colab
81 Day 5 – Building Meeting Minutes with OpenAI Whisper and LLaMA 3.2
82 Day 5 – Week 3 Wrap-Up Build a Synthetic Data Generator with Open Source Models
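Week 3 spends a full day on tokenizers. A character-level toy makes the core contract concrete; real tokenizers like tiktoken or a Hugging Face AutoTokenizer use learned subword vocabularies instead of single characters, but encode/decode work the same way:

```python
class ToyTokenizer:
    """Character-level stand-in for a real subword tokenizer:
    every distinct character gets an integer id, encode maps text
    to ids, and decode maps ids back to text losslessly."""

    def __init__(self, corpus: str):
        self.vocab = {ch: i for i, ch in enumerate(sorted(set(corpus)))}
        self.inverse = {i: ch for ch, i in self.vocab.items()}

    def encode(self, text: str) -> list[int]:
        return [self.vocab[ch] for ch in text]

    def decode(self, ids: list[int]) -> str:
        return "".join(self.inverse[i] for i in ids)

tok = ToyTokenizer("hello world")
ids = tok.encode("hello")
assert tok.decode(ids) == "hello"  # round trip is lossless
```

Comparing tokenizers (Day 3's Phi, DeepSeek, and Qwen lectures) amounts to comparing how many ids `encode` produces for the same text under different vocabularies.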

NEW Week 4 – LLM Showdown Evaluating Models for Code Gen & Business Tasks
83 Day 1 – Choosing the Right LLM Model Selection Strategy and Basics
84 Day 1 – The Chinchilla Scaling Law Parameters, Training Data and Why It Matters
85 Day 1 – Understanding AI Model Benchmarks GPQA, MMLU-Pro, and HLE
86 Day 1 – Limitations of AI Benchmarks Data Contamination and Overfitting
87 Day 1 – Build a Connect Four Leaderboard (Reasoning Benchmark)
88 Day 2 – Navigating AI Leaderboards Artificial Analysis, HuggingFace & More
89 Day 2 – Artificial Analysis Deep Dive Model Intelligence vs Cost Comparison
90 Day 2 – Vellum, SEAL, and LiveBench Essential AI Model Leaderboards
91 Day 2 – LM Arena Blind Testing AI Models with Community Elo Ratings
92 Day 2 – Commercial Use Cases Automation, Augmentation & Agentic AI
93 Day 3 – Selecting LLMs for Code Generation Python to C++ with Cursor
94 Day 3 – Selecting Frontier Models GPT-5, Claude, Grok & Gemini for C++ Code Gen
95 Day 3 – Porting Python to C++ with GPT-5 230x Performance Speedup
96 Day 3 – AI Coding Showdown GPT-5 vs Claude vs Gemini vs Groq Performance
97 Day 4 – Open Source Models for Code Generation Qwen, DeepSeek & Ollama
98 Day 4 – Building a Gradio UI to Test Python-to-C++ Code Conversion Models
99 Day 4 – Qwen 3 Coder vs GPT OSS OpenRouter Model Performance Showdown
100 Day 5 – Model Evaluation Technical Metrics vs Business Outcomes
101 Day 5 – Python to Rust Code Translation Testing Gemini 2.5 Pro with Cursor
102 Day 5 – Porting Python to Rust Testing GPT, Claude, and Qwen Models
103 Day 5 – Open Source Model Wins Rust Code Generation Speed Challenge
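Week 4's Day 2 covers LM Arena's blind model battles and community Elo ratings. The update rule behind that kind of head-to-head ranking fits in a few lines; the K-factor of 32 below is a common convention, not necessarily what LM Arena itself uses:

```python
def elo_update(rating_a: float, rating_b: float, score_a: float,
               k: float = 32.0) -> tuple[float, float]:
    """One Elo update after a head-to-head battle.
    score_a is 1.0 if model A wins, 0.5 for a tie, 0.0 if it loses."""
    # Expected score for A given the current rating gap.
    expected_a = 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))
    new_a = rating_a + k * (score_a - expected_a)
    new_b = rating_b + k * ((1.0 - score_a) - (1.0 - expected_a))
    return new_a, new_b

a, b = elo_update(1200, 1200, 1.0)
print(round(a), round(b))  # 1216 1184
```

A win against an equally rated model moves both ratings by k/2; upsets against stronger opponents move them further, which is why an arena needs many battles before rankings stabilize.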

NEW Week 5 – Mastering RAG Build Advanced Solutions with Vector Embeddings
104 Day 1 – Introduction to RAG Retrieval Augmented Generation Fundamentals
105 Day 1 – Building a Simple RAG Knowledge Assistant with GPT-4.1 Nano
106 Day 1 – Building a Simple RAG System Dictionary Lookup and Context Retrieval
107 Day 1 – Vector Embeddings and Encoder LLMs The Foundation of RAG
108 Day 1 – How Vector Embeddings Represent Meaning From word2vec to Encoders
109 Day 1 – Understanding the Big Idea Behind RAG and Vector Data Stores
110 Day 2 – Vectors for RAG Introduction to LangChain and Vector Databases
111 Day 2 – Breaking Documents into Chunks with LangChain Text Splitters
112 Day 2 – Encoder Models vs Vector Databases OpenAI, BERT, Chroma & FAISS
113 Day 2 – Creating Vector Stores with Chroma and Visualizing Embeddings with t-SNE
114 Day 2 – 3D Vector Visualizations and Comparing Embedding Models
115 Day 3 – Building a Complete RAG Pipeline with LangChain and Chroma
116 Day 3 – Building a RAG Pipeline with LangChain LLM & Retriever Setup
117 Day 3 – Building RAG with LangChain Retriever and LLM Integration
118 Day 3 – Building Production RAG with Python Modules and Gradio UI
119 Day 3 – RAG with Conversation History Building a Gradio UI and Debugging Chunking
120 Day 4 – RAG Evaluations Measuring Performance and Iterating on Your Pipeline
121 Day 4 – Evaluating RAG Systems Retrieval Metrics, LLM as Judge, and Golden Data
122 Day 4 – Evaluating RAG Systems MRR, NDCG, and Test Data with Pydantic
123 Day 4 – LLM as a Judge Evaluating RAG Answers with Structured Outputs
124 Day 4 – Running RAG Evaluations with Gradio MRR, nDCG, and Test Results
125 Day 4 – Experimenting with Chunking Strategies and Embedding Models in RAG
126 Day 4 – Testing OpenAI Embeddings and Evaluating RAG Performance Gains
127 Day 5 – Advanced RAG Techniques Pre-processing, Re-ranking & Evals
128 Day 5 – Advanced RAG Techniques Chunking, Encoders, and Query Rewriting
129 Day 5 – Advanced RAG Techniques Query Expansion, Re-ranking & GraphRAG
130 Day 5 – Building Advanced RAG Without LangChain Semantic Chunking with LLMs
131 Day 5 – Creating Embeddings with Chroma, Visualizing with t-SNE, and Re-ranking
132 Day 5 – Building RAG Without LangChain Re-ranking and Query Rewriting
133 Day 5 – Building Production RAG with Query Expansion and Multiprocessing
134 Day 5 – Advanced RAG Evaluation From 0.73 to 0.91 MRR with GPT-4o
135 Day 5 – RAG Challenge Beat My Results & Build Your Knowledge Worker
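Week 5's evaluation lectures lean on retrieval metrics like MRR and nDCG. Minimal pure-Python versions (binary relevance labels, one ranked list per query) show what those numbers measure:

```python
import math

def mrr(ranked_relevance: list[list[int]]) -> float:
    """Mean Reciprocal Rank: for each query, 1/position of the first
    relevant chunk (1-indexed), averaged over all queries."""
    total = 0.0
    for rels in ranked_relevance:
        for pos, rel in enumerate(rels, start=1):
            if rel:
                total += 1.0 / pos
                break
    return total / len(ranked_relevance)

def ndcg(rels: list[int]) -> float:
    """Normalized Discounted Cumulative Gain for one query's ranking:
    actual DCG divided by the DCG of the ideal (sorted) ranking."""
    dcg = sum(r / math.log2(i + 2) for i, r in enumerate(rels))
    ideal = sum(r / math.log2(i + 2)
                for i, r in enumerate(sorted(rels, reverse=True)))
    return dcg / ideal if ideal else 0.0

print(mrr([[0, 1, 0], [1, 0, 0]]))  # (1/2 + 1) / 2 = 0.75
```

MRR rewards putting any relevant chunk near the top; nDCG also credits the ordering of everything retrieved, which is why re-ranking experiments (Day 5) tend to move nDCG even when MRR is already high.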

NEW Week 6 – Fine-Tuning Frontier Large Language Models with LoRA/QLoRA
136 Day 1 – Training, Datasets, and Generalization Your Capstone Begins
137 Day 1 – Finetuning LLMs & The Price is Right Capstone Project Intro
138 Day 1 – Curating Datasets Finding Data Sources and Building Training Sets
139 Day 1 – Curating Amazon Data with Hugging Face for Price Prediction
140 Day 1 – Exploring Amazon Dataset Distribution and Removing Duplicates
141 Day 1 – Weighted Sampling with NumPy and Uploading Datasets to Hugging Face
142 Day 2 – Five-Step Strategy for Selecting and Applying LLMs to Business Problems
143 Day 2 – The Five-Step AI Process & Productionizing with MLOps
144 Day 2 – Data Pre-processing with LLMs and Groq Batch Mode
145 Day 2 – Batch Processing with Groq API and JSONL Files for LLM Workflows
146 Day 2 – Batch Processing with Groq Running 22K LLM Requests for Under $1
147 Day 3 – Building Baseline Models with Traditional ML and XGBoost
148 Day 3 – Building Your First Baseline with Random Pricer and Scikit-learn
149 Day 3 – Baseline Models and Linear Regression with Scikit-Learn
150 Day 3 – Bag of Words and CountVectorizer for Linear Regression NLP
151 Day 3 – Random Forest and XGBoost Ensemble Models in Scikit-Learn
152 Day 4 – Training Your First Neural Network and Testing Frontier Models
153 Day 4 – Human Baseline Performance vs Machine Learning Models in PyTorch
154 Day 4 – Building Your First Neural Network with PyTorch
155 Day 4 – Testing GPT-4o-mini and Claude Opus Against Neural Networks
156 Day 4 – Testing Gemini 3, GPT-5.1, Claude 4.5 & Grok on Price Prediction
157 Day 5 – Fine-Tuning OpenAI Frontier Models with Supervised Fine-Tuning
158 Day 5 – Fine-Tuning GPT-4o Nano with OpenAI’s API for Custom Models
159 Day 5 – Fine-Tuning GPT-4o-mini-nano Running Jobs and Monitoring Training
160 Day 5 – Fine-Tuning Results When GPT-4o-mini Gets Worse, Not Better
161 Day 5 – When Fine-Tuning Frontier Models Fails & Building Deep Neural Networks
162 Day 5 – Deep Neural Network Redemption 289M Parameters vs Frontier Models
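Week 6's Day 3 builds Bag of Words baselines before any neural network. A toy stand-in for scikit-learn's CountVectorizer (names here are illustrative, not the library's API) shows what "fit a vocabulary, then transform documents into count vectors" actually produces:

```python
from collections import Counter

def fit_vocabulary(docs: list[str]) -> dict[str, int]:
    """Build a word -> column index map: the 'fit' step."""
    vocab = sorted({word for doc in docs for word in doc.lower().split()})
    return {word: i for i, word in enumerate(vocab)}

def transform(doc: str, vocab: dict[str, int]) -> list[int]:
    """Turn one document into a vector of word counts
    over the fitted vocabulary: the 'transform' step."""
    counts = Counter(doc.lower().split())
    return [counts.get(word, 0) for word in vocab]

docs = ["USB cable two meters", "HDMI cable"]
vocab = fit_vocabulary(docs)
print(transform("cable cable usb", vocab))  # [2, 0, 0, 0, 1]
```

Feeding these vectors to linear regression gives the simple baseline that the fine-tuned models of Weeks 6 and 7 have to beat.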

NEW Week 7 – Fine-Tune an Open-Source Model to Compete with Frontier Models
163 Day 1 – Introduction to QLoRA for Fine-Tuning Open-Source Models
164 Day 1 – LoRA Training LLaMA 3.2 with Low-Rank Adapters
165 Day 1 – LoRA Hyperparameters and QLoRA Quantization Explained
166 Day 1 – Setting Up Google Colab and Exploring LLaMA 3.2 Model Architecture
167 Day 1 – Loading Models with 8-bit and 4-bit Quantization Using QLoRA
168 Day 1 – LoRA Parameter Calculations and Model Size on Hugging Face
169 Day 2 – Preparing Your Dataset for Fine-Tuning with Token Limits
170 Day 2 – Fine-Tuning Data Prep Rounding Prices and Token Length Optimization
171 Day 2 – Preparing Hugging Face Datasets and Testing Base LLaMA 3.2 Model
172 Day 2 – Base Models vs Chat Models Understanding LLaMA Fine-Tuning
173 Day 3 – Fine-Tuning Hyperparameters QLoRA Settings and Training Config
174 Day 3 – Learning Rate, Optimizers, and Training Hyperparameters for LoRA
175 Day 3 – Setting Up Training Hyperparameters, qLoRA Config & Weights & Biases
176 Day 3 – Setting Up Weights & Biases and the HuggingFace SFT Trainer
177 Day 3 – Running Fine-Tuning with TRL and Monitoring Training in Weights & Biases
178 Day 4 – Monitoring Your Fine-Tuning Run with Weights & Biases
179 Day 4 – Full Dataset Training on Google Colab A100 with 800K Data Points
180 Day 4 – Monitoring Training Loss and Learning Rate in Weights & Biases
181 Day 4 – Analyzing Weights & Biases Results and Catching Overfitting
182 Day 4 – Managing Runs in Weights & Biases and Selecting Best Model Checkpoints
183 Day 5 – Results Day Running Inference on Fine-Tuned Models & Loss Calculation D
184 Day 5 – Cross-Entropy Loss How LLMs Calculate Probability Distributions
185 Day 5 – Testing Our Fine-Tuned LoRA Model Against GPT-4o Nano
186 Day 5 – Fine-Tuned LLaMA 3.2 Crushes GPT-5.1 and Frontier Models
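Week 7's Day 1 works through how few parameters LoRA actually trains. For one frozen weight matrix of shape d_out × d_in, a rank-r adapter pair A (r × d_in) and B (d_out × r) adds r·(d_in + d_out) trainable weights. The dimensions below are illustrative round numbers, not LLaMA 3.2's exact shapes:

```python
def lora_params(d_in: int, d_out: int, r: int) -> int:
    """Trainable parameters LoRA adds to one frozen weight matrix:
    adapter A is (r x d_in), adapter B is (d_out x r)."""
    return r * d_in + d_out * r

# Hypothetical example: a 4096x4096 projection with rank r=32.
full = 4096 * 4096
adapter = lora_params(4096, 4096, 32)
print(adapter, f"{adapter / full:.2%}")  # 262144 1.56%
```

This is why LoRA fits on a single Colab GPU: you train well under 2% of each adapted matrix, and QLoRA shrinks the frozen base weights further via 4-bit quantization.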

NEW Week 8 – Build an Autonomous Multi-Agent System
187 Day 1 – Intro to Agentic AI & Serverless Deployment on Modal
188 Day 1 – Designing Agent Architectures & Modal Platform Setup
189 Day 1 – Running Python Locally and in the Cloud with Modal Remote Execution
190 Day 1 – Setting Up Modal Secrets and Deploying LLaMA Models to the Cloud
191 Day 1 – Deploying Fine-Tuned Models to Modal Cloud with Persistent Storage
192 Day 1 – Building Your First Agent with Modal Serverless AI
193 Day 2 – Building Advanced RAG with ChromaDB and Vector Stores (No LangChain)
194 Day 2 – Visualizing Chroma Vectors with t-SNE and Building a RAG Pipeline
195 Day 2 – RAG with GPT-4o vs Fine-Tuned Models Building an Ensemble
196 Day 2 – Ensemble Model Success Combining RAG, Neural Networks & Modal
197 Day 2 – Building and Testing an Ensemble Agent with Multiple LLM Calls
198 Day 3 – Structured Outputs with Pydantic and Constrained Decoding
199 Day 3 – Building a Deal Scanner with Structured Outputs and Pydantic
200 Day 3 – Structured Outputs for Parsing & Building a Pushover Notification Agent
201 Day 4 – Building Agentic AI Planning Agents with Tool Orchestration
202 Day 4 – Building an Autonomous Planner Agent with Tool Calling and GPT-4
203 Day 4 – Building an Autonomous Multi-Agent System with Tool Calling and Agent Lo
204 Day 4 – Building a Multi-Model AI Platform 34 Calls Across GPT-5, Claude & Open
205 Day 5 – Finalizing Your Agentic Workflow and Becoming an AI Engineer
206 Day 5 – Building the Price-Is-Right Agent UI with Gradio and DealAgentFramework
207 Day 5 – Course Wrap-Up Your Journey to AI Engineer
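Week 8's deal scanner relies on structured outputs: the model replies with JSON matching a declared schema, and your code validates it before acting. The course uses Pydantic for this; a stdlib-only sketch with a dataclass and explicit checks (the `Deal` fields here are hypothetical) captures the same pattern:

```python
import json
from dataclasses import dataclass

@dataclass
class Deal:
    """Shape we ask the model to emit as JSON. Pydantic would add
    automatic type coercion and validation; a dataclass plus
    explicit checks is a standard-library stand-in."""
    product: str
    price: float
    url: str

def parse_deal(raw: str) -> Deal:
    """Parse and validate one model reply; raise on bad data
    rather than letting a malformed deal reach the notifier."""
    deal = Deal(**json.loads(raw))
    if not isinstance(deal.price, (int, float)) or deal.price < 0:
        raise ValueError("price must be a non-negative number")
    return deal

reply = '{"product": "4K Monitor", "price": 199.99, "url": "https://example.com/deal"}'
deal = parse_deal(reply)
print(deal.product, deal.price)  # 4K Monitor 199.99
```

Failing fast here is what lets the planner agent chain model calls safely: downstream agents only ever see deals that passed validation.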

APPENDIX – ARCHIVE – Original Week 1 – now replaced by new version
208 Day 1 – Cold Open Jumping Right into LLM Engineering
209 Day 1 – Setting Up Ollama for Local LLM Deployment on Windows and Mac
210 Day 1 – Unleashing the Power of Local LLMs Build Spanish Tutor Using Ollama
211 Day 1 – LLM Engineering Roadmap From Beginner to Master in 8 Weeks
212 Day 1 – Building LLM Applications Chatbots, RAG, and Agentic AI Projects
213 Day 1 – From Wall Street to AI Ed Donner’s Path to Becoming an LLM Engineer
214 Day 1 – Setting Up Your LLM Development Environment Tools and Best Practices
215 Day 1 – Mac Setup Guide Jupyter Lab and Conda for LLM Projects
216 Day 1 – Setting Up Anaconda for LLM Engineering Windows Installation Guide
217 Day 1 – Alternative Python Setup for LLM Projects Virtualenv vs. Anaconda Guide
218 Day 1 – Setting Up OpenAI API for LLM Development Keys, Pricing & Best Practices
219 Day 1 – Creating a .env File for Storing API Keys Safely
220 Day 1 – Instant Gratification Project Creating an AI-Powered Web Page Summarizer
221 Day 1 – Implementing Text Summarization Using OpenAI’s GPT-4 and Beautiful Soup
222 Day 1 – Wrapping Up Day 1 Key Takeaways and Next Steps in LLM Engineering
223 Day 2 – Mastering LLM Engineering Key Skills and Tools for AI Development
224 Day 2 – Understanding Frontier Models GPT, Claude, and Open Source LLMs
225 Day 2 – How to Use Ollama for Local LLM Inference Python Tutorial with Jupyter
226 Day 2 – Hands-On LLM Task Comparing OpenAI and Ollama for Text Summarization
227 Day 3 – Frontier AI Models Comparing GPT-4, Claude, Gemini, and LLAMA
228 Day 3 – Comparing Leading LLMs Strengths and Business Applications
229 Day 3 – Exploring GPT-4o vs O1 Preview Key Differences in Performance
230 Day 3 – Creativity and Coding Leveraging GPT-4o’s Canvas Feature
231 Day 3 – Claude 3.5’s Alignment and Artifact Creation A Deep Dive
232 Day 3 – AI Model Comparison Gemini vs Cohere for Whimsical and Analytical Tasks
233 Day 3 – Evaluating Meta AI and Perplexity Nuances of Model Outputs
234 Day 3 – LLM Leadership Challenge Evaluating AI Models Through Creative Prompts
235 Day 4 – Revealing the Leadership Winner A Fun LLM Challenge
236 Day 4 – Exploring the Journey of AI From Early Models to Transformers
237 Day 4 – Understanding LLM Parameters From GPT-1 to Trillion-Weight Models
238 Day 4 – GPT Tokenization Explained How Large Language Models Process Text Input
239 Day 4 – How Context Windows Impact AI Language Models Token Limits Explained
240 Day 4 – Navigating AI Model Costs API Pricing vs. Chat Interface Subscriptions
241 Day 4 – Comparing LLM Context Windows GPT-4 vs Claude vs Gemini 1.5 Flash
242 Day 4 – Wrapping Up Day 4 Key Takeaways and Practical Insights
243 Day 5 – Building AI-Powered Marketing Brochures with OpenAI API and Python
244 Day 5 – JupyterLab Tutorial Web Scraping for AI-Powered Company Brochures
245 Day 5 – Structured Outputs in LLMs Optimizing JSON Responses for AI Projects
246 Day 5 – Creating and Formatting Responses for Brochure Content
247 Day 5 – Final Adjustments Optimizing Markdown and Streaming in JupyterLab
248 Day 5 – Mastering Multi-Shot Prompting Enhancing LLM Reliability in AI Projects
249 Day 5 – Assignment Developing Your Customized LLM-Based Tutor
250 Day 5 – Wrapping Up Week 1 Achievements and Next Steps

APPENDIX – ARCHIVE – Original Week 2 – now replaced by new version
251 Day 1 – Mastering Multiple AI APIs OpenAI, Claude, and Gemini for LLM Engineers
252 Day 1 – Streaming AI Responses Implementing Real-Time LLM Output in Python
253 Day 1 – How to Create Adversarial AI Conversations Using OpenAI and Claude APIs
254 Day 1 – AI Tools Exploring Transformers & Frontier LLMs for Developers
255 Day 2 – Building AI UIs with Gradio Quick Prototyping for LLM Engineers
256 Day 2 – Gradio Tutorial Create Interactive AI Interfaces for OpenAI GPT Models
257 Day 2 – Implementing Streaming Responses with GPT and Claude in Gradio UI
258 Day 2 – Building a Multi-Model AI Chat Interface with Gradio GPT vs Claude
259 Day 2 – Building Advanced AI UIs From OpenAI API to Chat Interfaces with Gradio
260 Day 3 – Building AI Chatbots Mastering Gradio for Customer Support Assistants
261 Day 3 – Build a Conversational AI Chatbot with OpenAI & Gradio Step-by-Step
262 Day 3 – Enhancing Chatbots with Multi-Shot Prompting and Context Enrichment
263 Day 3 – Mastering AI Tools Empowering LLMs to Run Code on Your Machine
264 Day 4 – Using AI Tools with LLMs Enhancing Large Language Model Capabilities
265 Day 4 – Building an AI Airline Assistant Implementing Tools with OpenAI GPT-4
266 Day 4 – How to Equip LLMs with Custom Tools OpenAI Function Calling Tutorial
267 Day 4 – Mastering AI Tools Building Advanced LLM-Powered Assistants with APIs
268 Day 5 – Multimodal AI Assistants Integrating Image and Sound Generation
269 Day 5 – Multimodal AI Integrating DALL-E 3 Image Generation in JupyterLab
270 Day 5 – Build a Multimodal AI Agent Integrating Audio & Image Tools
271 Day 5 – How to Build a Multimodal AI Assistant Integrating Tools and Agents

APPENDIX – ARCHIVE – Original Week 3 – now replaced by new version
272 Day 1 – Hugging Face Tutorial Exploring Open-Source AI Models and Datasets
273 Day 1 – Exploring HuggingFace Hub Models, Datasets & Spaces for AI Developers
274 Day 1 – Intro to Google Colab Cloud Jupyter Notebooks for Machine Learning
275 Day 1 – Hugging Face Integration with Google Colab Secrets and API Keys Setup
276 Day 1 – Mastering Google Colab Run Open-Source AI Models with Hugging Face
277 Day 2 – Hugging Face Transformers Using Pipelines for AI Tasks in Python
278 Day 2 – Hugging Face Pipelines Simplifying AI Tasks with Transformers Library
279 Day 2 – Mastering HuggingFace Pipelines Efficient AI Inference for ML Tasks
280 Day 3 – Exploring Tokenizers in Open-Source AI Llama, Phi-2, Qwen, & Starcoder
281 Day 3 – Tokenization Techniques in AI Using AutoTokenizer with LLAMA 3.1 Model
282 Day 3 – Comparing Tokenizers Llama, PHI-3, and QWEN2 for Open-Source AI Models
283 Day 3 – Hugging Face Tokenizers Preparing for Advanced AI Text Generation
284 Day 4 – Hugging Face Model Class Running Inference on Open-Source AI Models
285 Day 4 – Hugging Face Transformers Loading & Quantizing LLMs with Bits & Bytes
286 Day 4 – Hugging Face Transformers Generating Jokes with Open-Source AI Models
287 Day 4 – Mastering Hugging Face Transformers Models, Pipelines, and Tokenizers
288 Day 5 – Combining Frontier & Open-Source Models for Audio-to-Text Summarization
289 Day 5 – Using Hugging Face & OpenAI for AI-Powered Meeting Minutes Generation
290 Day 5 – Build a Synthetic Test Data Generator Open-Source AI Model for Business

APPENDIX – ARCHIVE – Original Week 4 – now replaced by new version
291 Day 1 – How to Choose the Right LLM Comparing Open and Closed Source Models
292 Day 1 – Chinchilla Scaling Law Optimizing LLM Parameters and Training Data Size
293 Day 1 – Limitations of LLM Benchmarks Overfitting and Training Data Leakage
294 Day 1 – Evaluating Large Language Models 6 Next-Level Benchmarks Unveiled
295 Day 1 – HuggingFace OpenLLM Leaderboard Comparing Open-Source Language Models
296 Day 1 – Master LLM Leaderboards Comparing Open Source and Closed Source Models
297 Day 2 – Comparing LLMs Top 6 Leaderboards for Evaluating Language Models
298 Day 2 – Specialized LLM Leaderboards Finding the Best Model for Your Use Case
299 Day 2 – LLAMA vs GPT-4 Benchmarking Large Language Models for Code Generation
300 Day 2 – Human-Rated Language Models Understanding the LM Sys Chatbot Arena
301 Day 2 – Commercial Applications of Large Language Models From Law to Education
302 Day 2 – Comparing Frontier and Open-Source LLMs for Code Conversion Projects
303 Day 3 – Leveraging Frontier Models for High-Performance Code Generation in C++
304 Day 3 – Comparing Top LLMs for Code Generation GPT-4 vs Claude 3.5 Sonnet
305 Day 3 – Optimizing Python Code with Large Language Models GPT-4 vs Claude 3.5
306 Day 3 – Code Generation Pitfalls When Large Language Models Produce Errors
307 Day 3 – Blazing Fast Code Generation How Claude Outperforms Python by 13,000x
308 Day 3 – Building a Gradio UI for Code Generation with Large Language Models
309 Day 3 – Optimizing C++ Code Generation Comparing GPT and Claude Performance
310 Day 3 – Comparing GPT-4 and Claude for Code Generation Performance Benchmarks
311 Day 4 – Open Source LLMs for Code Generation Hugging Face Endpoints Explored
312 Day 4 – How to Use HuggingFace Inference Endpoints for Code Generation Models
313 Day 4 – Integrating Open-Source Models with Frontier LLMs for Code Generation
314 Day 4 – Comparing Code Generation GPT-4, Claude, and CodeQuen LLMs
315 Day 4 – Mastering Code Generation with LLMs Techniques and Model Selection
316 Day 5 – Evaluating LLM Performance Model-Centric vs Business-Centric Metrics
317 Day 5 – Mastering LLM Code Generation Advanced Challenges for Python Developers

APPENDIX – ARCHIVE – Original Week 5 – now replaced by new version
318 Day 1 – RAG Fundamentals Leveraging External Data to Improve LLM Responses
319 Day 1 – Building a DIY RAG System Implementing Retrieval-Augmented Generation
320 Day 1 – Understanding Vector Embeddings The Key to RAG and LLM Retrieval
321 Day 2 – Unveiling LangChain Simplify RAG Implementation for LLM Applications
322 Day 2 – LangChain Text Splitter Tutorial Optimizing Chunks for RAG Systems
323 Day 2 – Preparing for Vector Databases OpenAI Embeddings and Chroma in RAG
324 Day 3 – Mastering Vector Embeddings OpenAI and Chroma for LLM Engineering
325 Day 3 – Visualizing Embeddings Exploring Multi-Dimensional Space with t-SNE
326 Day 3 – Building RAG Pipelines From Vectors to Embeddings with LangChain
327 Day 4 – Implementing RAG Pipeline LLM, Retriever, and Memory in LangChain
328 Day 4 – Mastering Retrieval-Augmented Generation Hands-On LLM Integration
329 Day 4 – Master RAG Pipeline Building Efficient RAG Systems
330 Day 5 – Optimizing RAG Systems Troubleshooting and Fixing Common Problems
331 Day 5 – Switching Vector Stores FAISS vs Chroma in LangChain RAG Pipelines
332 Day 5 – Demystifying LangChain Behind-the-Scenes of RAG Pipeline Construction
333 Day 5 – Debugging RAG Optimizing Context Retrieval in LangChain
334 Day 5 – Build Your Personal AI Knowledge Worker RAG for Productivity Boost

APPENDIX – ARCHIVE – Original Week 6 – now replaced by new version
335 Day 1 – Fine-Tuning Large Language Models From Inference to Training
336 Day 1 – Finding and Crafting Datasets for LLM Fine-Tuning Sources & Techniques
337 Day 1 – Data Curation Techniques for Fine-Tuning LLMs on Product Descriptions
338 Day 1 – Optimizing Training Data Scrubbing Techniques for LLM Fine-Tuning
339 Day 1 – Evaluating LLM Performance Model-Centric vs Business-Centric Metrics
340 Day 2 – LLM Deployment Pipeline From Business Problem to Production Solution
341 Day 2 – Prompting, RAG, and Fine-Tuning When to Use Each Approach
342 Day 2 – Productionizing LLMs Best Practices for Deploying AI Models at Scale
343 Day 2 – Optimizing Large Datasets for Model Training Data Curation Strategies
344 Day 2 – How to Create a Balanced Dataset for LLM Training Curation Techniques
345 Day 2 – Finalizing Dataset Curation Analyzing Price-Description Correlations
346 Day 2 – How to Create and Upload a High-Quality Dataset on HuggingFace
347 Day 3 – Feature Engineering and Bag of Words Building ML Baselines for NLP
348 Day 3 – Baseline Models in ML Implementing Simple Prediction Functions
349 Day 3 – Feature Engineering Techniques for Amazon Product Price Prediction Models
350 Day 3 – Optimizing LLM Performance Advanced Feature Engineering Strategies
351 Day 3 – Linear Regression for LLM Fine-Tuning Baseline Model Comparison
352 Day 3 – Bag of Words NLP Implementing Count Vectorizer for Text Analysis in ML
353 Day 3 – Support Vector Regression vs Random Forest Machine Learning Face-Off
354 Day 3 – Comparing Traditional ML Models From Random to Random Forest
355 Day 4 – Evaluating Frontier Models Comparing Performance to Baseline Frameworks
356 Day 4 – Human vs AI Evaluating Price Prediction Performance in Frontier Models
357 Day 4 – GPT-4o Mini Frontier AI Model Evaluation for Price Estimation Tasks
358 Day 4 – Comparing GPT-4 and Claude Model Performance in Price Prediction Tasks
359 Day 4 – Frontier AI Capabilities LLMs Outperforming Traditional ML Models
360 Day 5 – Fine-Tuning LLMs with OpenAI Preparing Data, Training, and Evaluation
361 Day 5 – How to Prepare JSONL Files for Fine-Tuning Large Language Models (LLMs)
362 Day 5 – Step-by-Step Guide Launching GPT Fine-Tuning Jobs with OpenAI API
363 Day 5 – Fine-Tuning LLMs Track Training Loss & Progress with Weights & Biases
364 Day 5 – Evaluating Fine-Tuned LLMs Metrics Analyzing Training & Validation Loss
365 Day 5 – LLM Fine-Tuning Challenges When Model Performance Doesn’t Improve
366 Day 5 – Fine-Tuning Frontier LLMs Challenges & Best Practices for Optimization

APPENDIX – ARCHIVE – Original Week 7 – now replaced by new version
367 Day 1 – Mastering Parameter-Efficient Fine-Tuning LoRA, QLoRA & Hyperparameters
368 Day 1 – Introduction to LoRA Adaptors Low-Rank Adaptation Explained
369 Day 1 – QLoRA Quantization for Efficient Fine-Tuning of Large Language Models
370 Day 1 – Optimizing LLMs R, Alpha, and Target Modules in QLoRA Fine-Tuning
371 Day 1 – Parameter-Efficient Fine-Tuning PEFT for LLMs with Hugging Face
372 Day 1 – How to Quantize LLMs Reducing Model Size with 8-bit Precision
373 Day 1 – Double Quantization & NF4 Advanced Techniques for 4-Bit LLM Optimization
374 Day 1 – Exploring PEFT Models The Role of LoRA Adapters in LLM Fine-Tuning
375 Day 1 – Model Size Summary Comparing Quantized and Fine-Tuned Models
376 Day 2 – How to Choose the Best Base Model for Fine-Tuning Large Language Models
377 Day 2 – Selecting the Best Base Model Analyzing HuggingFace’s LLM Leaderboard
378 Day 2 – Exploring Tokenizers Comparing Llama, Qwen, and Other LLM Models
379 Day 2 – Optimizing LLM Performance Loading and Tokenizing Llama 3.1 Base Model
380 Day 2 – Quantization Impact on LLMs Analyzing Performance Metrics and Errors
381 Day 2 – Comparing LLMs GPT-4 vs Llama 3.1 in Parameter-Efficient Tuning
382 Day 3 – QLoRA Hyperparameters Mastering Fine-Tuning for Large Language Models
383 Day 3 – Understanding Epochs and Batch Sizes in Model Training
384 Day 3 – Learning Rate, Gradient Accumulation, and Optimizers Explained
385 Day 3 – Setting Up the Training Process for Fine-Tuning
386 Day 3 – Configuring SFTTrainer for 4-Bit Quantized LoRA Fine-Tuning of LLMs
387 Day 3 – Fine-Tuning LLMs Launching the Training Process with QLoRA
388 Day 3 – Monitoring and Managing Training with Weights & Biases
389 Day 4 – Keeping Training Costs Low Efficient Fine-Tuning Strategies
390 Day 4 – Efficient Fine-Tuning Using Smaller Datasets for QLoRA Training
391 Day 4 – Visualizing LLM Fine-Tuning Progress with Weights and Biases Charts
392 Day 4 – Advanced Weights & Biases Tools and Model Saving on Hugging Face
393 Day 4 – End-to-End LLM Fine-Tuning From Problem Definition to Trained Model
394 Day 5 – The Four Steps in LLM Training From Forward Pass to Optimization
395 Day 5 – QLoRA Training Process Forward Pass, Backward Pass and Loss Calculation
396 Day 5 – Understanding Softmax and Cross-Entropy Loss in Model Training
397 Day 5 – Monitoring Fine-Tuning Weights & Biases for LLM Training Analysis
398 Day 5 – Revisiting the Podium Comparing Model Performance Metrics
399 Day 5 – Evaluation of our Proprietary, Fine-Tuned LLM against Business Metrics
400 Day 5 – Visualization of Results Did We Beat GPT-4?
401 Day 5 – Hyperparameter Tuning for LLMs Improving Model Accuracy with PEFT

APPENDIX – ARCHIVE – Original Week 8 – now replaced by new version
402 Day 1 – From Fine-Tuning to Multi-Agent Systems Next-Level LLM Engineering
403 Day 1 – Building a Multi-Agent AI Architecture for Automated Deal Finding Systems
404 Day 1 – Unveiling Modal Deploying Serverless Models to the Cloud
405 Day 1 – Llama on the Cloud Running Large Models Efficiently
406 Day 1 – Building a Serverless AI Pricing API Step-by-Step Guide with Modal
407 Day 1 – Multiple Production Models Ahead Preparing for Advanced RAG Solutions
408 Day 2 – Implementing Agentic Workflows Frontier Models and Vector Stores in RAG
409 Day 2 – Building a Massive Chroma Vector Datastore for Advanced RAG Pipelines
410 Day 2 – Visualizing Vector Spaces Advanced RAG Techniques for Data Exploration
411 Day 2 – 3D Visualization Techniques for RAG Exploring Vector Embeddings
412 Day 2 – Finding Similar Products Building a RAG Pipeline without LangChain
413 Day 2 – RAG Pipeline Implementation Enhancing LLMs with Retrieval Techniques
414 Day 2 – Random Forest Regression Using Transformers & ML for Price Prediction
415 Day 2 – Building an Ensemble Model Combining LLM, RAG, and Random Forest
416 Day 2 – Wrap-Up Finalizing Multi-Agent Systems and RAG Integration
417 Day 3 – Enhancing AI Agents with Structured Outputs Pydantic & BaseModel Guide
418 Day 3 – Scraping RSS Feeds Building an AI-Powered Deal Selection System
419 Day 3 – Structured Outputs in AI Implementing GPT-4 for Detailed Deal Selection
420 Day 3 – Optimizing AI Workflows Refining Prompts for Accurate Price Recognition
421 Day 3 – Mastering Autonomous Agents Designing Multi-Agent AI Workflows
422 Day 4 – The 5 Hallmarks of Agentic AI Autonomy, Planning, and Memory
423 Day 4 – Building an Agentic AI System Integrating Pushover for Notifications
424 Day 4 – Implementing Agentic AI Creating a Planning Agent for Automated Workflows
425 Day 4 – Building an Agent Framework Connecting LLMs and Python Code
426 Day 4 – Completing Agentic Workflows Scaling for Business Applications
427 Day 5 – Autonomous AI Agents Building Intelligent Systems Without Human Input
428 Day 5 – AI Agents with Gradio Advanced UI Techniques for Autonomous Systems
429 Day 5 – Finalizing the Gradio UI for Our Agentic AI Solution
430 Day 5 – Enhancing AI Agent UI Gradio Integration for Real-Time Log Visualization
431 Day 5 – Analyzing Results Monitoring Agent Framework Performance
432 Day 5 – AI Project Retrospective 8-Week Journey to Becoming an LLM Engineer