Learn Generative AI from the ground up. Understand LLMs, prompt engineering, RAG, AI agents, and how to build AI-powered applications.
Generative AI refers to artificial intelligence systems that can create new content — text, images, code, music, and more — based on patterns learned from training data. Tools like ChatGPT, Claude, and GitHub Copilot have made GenAI part of everyday development workflows.
This series covers GenAI from foundational concepts to hands-on building:
New to Generative AI? Start with our What is Generative AI? guide for a beginner-friendly introduction, then work through the series in order.
You’ve probably used ChatGPT, asked Copilot to write some code, or generated an image with DALL-E. But what’s actually …
In What is Generative AI?, we covered the big picture of GenAI and where Large Language Models fit in. Now let’s go a …
When you work with LLM APIs, three concepts come up constantly: tokens, context windows, and model parameters like …
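As a rough intuition for how tokens and context windows interact, here is a toy sketch. It uses a naive whitespace "tokenizer" purely for illustration; real models use subword (BPE) tokenizers, so actual token counts differ:

```python
# Toy illustration of tokens and context-window budgeting.
# Real LLMs tokenize into subwords, not whitespace-separated words;
# this only sketches the bookkeeping involved.

def toy_tokenize(text: str) -> list[str]:
    """Naive whitespace 'tokenizer' -- a stand-in for a real BPE tokenizer."""
    return text.split()

def fits_in_context(prompt: str, max_context: int, reserved_for_output: int) -> bool:
    """Check whether the prompt leaves room in the window for the model's reply."""
    prompt_tokens = len(toy_tokenize(prompt))
    return prompt_tokens + reserved_for_output <= max_context

prompt = "Summarize the following document in three bullet points"
print(len(toy_tokenize(prompt)))                                       # 8 "tokens" here
print(fits_in_context(prompt, max_context=16, reserved_for_output=4))  # True
```

The key takeaway is that input tokens and the space reserved for output share one budget: a longer prompt leaves fewer tokens for the response.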
So far in this series, we’ve focused on LLMs that generate text. But there’s another fundamental capability that powers …
You’ve built a prototype with an LLM and it works pretty well, but the model doesn’t know about your company’s products, …
Prompt engineering is the practice of crafting inputs to get better outputs from LLMs. It’s not magic — it’s about …
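One concrete example of the craft is few-shot prompting: showing the model a couple of input/output pairs before the real input so it infers the pattern. A minimal sketch (the review texts and labels are made up for illustration):

```python
def build_few_shot_prompt(examples: list[tuple[str, str]], query: str) -> str:
    """Assemble a few-shot sentiment-classification prompt from (text, label) pairs."""
    lines = ["Classify the sentiment of each review as positive or negative.", ""]
    for text, label in examples:
        lines.append(f"Review: {text}")
        lines.append(f"Sentiment: {label}")
        lines.append("")
    # The unanswered final example is what the model completes.
    lines.append(f"Review: {query}")
    lines.append("Sentiment:")
    return "\n".join(lines)

examples = [
    ("Great battery life, would buy again.", "positive"),
    ("Stopped working after two days.", "negative"),
]
print(build_few_shot_prompt(examples, "The screen is gorgeous."))
```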
In Prompt Engineering Fundamentals, we covered the basics of writing effective prompts. Now let’s look at three specific …
When you build an application on top of an LLM, you don’t want to repeat the same instructions in every user message. …
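Chat-style LLM APIs solve this with role-tagged messages: the fixed instructions go in a single system message, and only the conversation varies. A sketch of that structure, using the widely adopted OpenAI-style message format (other providers differ slightly):

```python
def build_messages(system_prompt: str, history: list[dict], user_input: str) -> list[dict]:
    """Prepend the fixed system prompt once, then the running conversation."""
    return (
        [{"role": "system", "content": system_prompt}]
        + history
        + [{"role": "user", "content": user_input}]
    )

messages = build_messages(
    system_prompt="You are a support assistant for Acme Corp. Answer concisely.",
    history=[],  # prior user/assistant turns would go here
    user_input="How do I reset my password?",
)
print(messages[0]["role"])  # system
```

The system prompt is written once per application, not once per message, which is exactly the repetition problem it exists to remove.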
When you’re building applications with LLMs, you usually need the output in a specific format — JSON, CSV, a particular …
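Because models can drift from the format you asked for, applications typically parse and validate the raw text before trusting it. A minimal sketch using only the standard library (the required fields here are a hypothetical ticket-triage schema):

```python
import json

REQUIRED_FIELDS = {"title", "priority"}  # hypothetical schema for illustration

def parse_model_json(raw: str) -> dict:
    """Parse an LLM response that should be a JSON object; fail loudly if not."""
    data = json.loads(raw)  # raises json.JSONDecodeError on malformed output
    missing = REQUIRED_FIELDS - data.keys()
    if missing:
        raise ValueError(f"model response missing fields: {sorted(missing)}")
    return data

# Simulated model output -- in practice this string comes back from the API.
raw = '{"title": "Login page 500s", "priority": "high"}'
ticket = parse_model_json(raw)
print(ticket["priority"])  # high
```

Failing loudly at the parse step is what lets you retry the call or fall back, instead of passing malformed data downstream.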
Time to write some code. In this tutorial, we’ll go from zero to making LLM API calls in JavaScript — setting up a …
This tutorial covers calling LLM APIs using Python — the same concepts from Calling LLM APIs with JavaScript, but with …
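To make the shape of such a call concrete, here is a sketch of the request body an OpenAI-compatible chat completions endpoint expects. The model name and parameter values are illustrative, and the HTTP call itself is left commented out because it needs an API key:

```python
# Sketch of an OpenAI-style chat completions request body.
payload = {
    "model": "gpt-4o-mini",  # illustrative model name; use your provider's
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Explain RAG in one sentence."},
    ],
    "temperature": 0.7,   # higher values produce more varied output
    "max_tokens": 150,    # cap on the length of the reply
}

# The actual call (requires an API key and network access):
# import requests
# resp = requests.post(
#     "https://api.openai.com/v1/chat/completions",
#     headers={"Authorization": f"Bearer {API_KEY}"},
#     json=payload,
# )
print(payload["model"])
```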
When you make a standard LLM API call, you wait for the entire response to be generated before you see anything. For …
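Streaming APIs instead deliver the response as a sequence of small chunks that your code displays as they arrive. A sketch of the consumer side, with a generator standing in for the network stream:

```python
from typing import Iterator

def fake_stream() -> Iterator[str]:
    """Stand-in for an API's streamed chunks (real chunks arrive over HTTP)."""
    for chunk in ["Retrieval", "-augmented ", "generation ", "grounds answers."]:
        yield chunk

def consume_stream(chunks: Iterator[str]) -> str:
    """Print each chunk as it arrives and return the assembled response."""
    parts = []
    for chunk in chunks:
        print(chunk, end="", flush=True)  # show partial output immediately
        parts.append(chunk)
    print()
    return "".join(parts)

full = consume_stream(fake_stream())
```

The accumulate-and-display loop is the same whether chunks come from a local generator, as here, or from a real server-sent event stream.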
LLM API calls fail. Servers go down, rate limits get hit, tokens exceed context windows, and networks time out. If your …
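A common defensive pattern is retrying failed calls with exponential backoff. A minimal sketch, with a deliberately flaky function standing in for the API call (production code should also distinguish retryable errors like rate limits from permanent ones):

```python
import time

def with_retries(call, max_attempts: int = 3, base_delay: float = 0.01):
    """Retry `call` with exponential backoff; re-raise after the last attempt."""
    for attempt in range(max_attempts):
        try:
            return call()
        except ConnectionError:
            if attempt == max_attempts - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))  # delay doubles each retry

# Flaky stand-in for an API call: fails twice, then succeeds.
attempts = {"n": 0}
def flaky_api_call():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise ConnectionError("simulated timeout")
    return "ok"

print(with_retries(flaky_api_call))  # ok, after two simulated failures
```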
There are now dozens of LLM providers and hundreds of models to choose from. This tutorial cuts through the noise and …
LLMs are trained on public data up to a cutoff date. They don’t know about your company’s documentation, your product’s …
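The core RAG idea, retrieve relevant text and then put it into the prompt, can be sketched without any embedding model at all, using word overlap as a crude stand-in for semantic search (the documents here are made up):

```python
def score(doc: str, query: str) -> int:
    """Crude relevance: shared lowercase words (real RAG uses embeddings)."""
    return len(set(doc.lower().split()) & set(query.lower().split()))

def retrieve(docs: list[str], query: str, k: int = 1) -> list[str]:
    """Return the k docs with the highest word overlap with the query."""
    return sorted(docs, key=lambda d: score(d, query), reverse=True)[:k]

docs = [
    "Our refund policy allows returns within 30 days",
    "The API rate limit is 60 requests per minute",
]
query = "how does your refund policy work"
context = retrieve(docs, query)[0]
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
print(context)
```

Swapping the overlap score for embedding similarity and the list for a vector store gives you the real architecture; the retrieve-then-prompt flow stays the same.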
In Introduction to RAG, we built a minimal RAG system with an in-memory store and simple documents. Now let’s build …
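Vector stores rank documents by embedding similarity, most commonly cosine similarity. A sketch with tiny hand-made vectors standing in for real embeddings (actual embeddings have hundreds or thousands of dimensions and come from an embedding model):

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """cos(theta) between two vectors: dot product over the product of norms."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy 3-d "embeddings"; a real embedding model produces these from text.
store = {
    "refund policy doc": [0.9, 0.1, 0.0],
    "api rate limits doc": [0.1, 0.9, 0.2],
}
query_vec = [0.8, 0.2, 0.1]  # pretend embedding of "refund question"

best = max(store, key=lambda name: cosine_similarity(store[name], query_vec))
print(best)  # refund policy doc
```

Identical vectors score 1.0 and orthogonal ones score 0.0, so the nearest document is simply the one with the highest score against the query vector.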