Generative AI Tutorials

Learn Generative AI from the ground up. Understand LLMs, prompt engineering, RAG, AI agents, and how to build AI-powered applications.

What is Generative AI?

Generative AI refers to artificial intelligence systems that can create new content — text, images, code, music, and more — based on patterns learned from training data. Tools like ChatGPT, Claude, and GitHub Copilot have made GenAI part of everyday development workflows.

What You’ll Learn

This series covers GenAI from foundational concepts to hands-on building:

  • Foundations — How LLMs work, tokens, embeddings, and the key concepts behind modern AI
  • Prompt Engineering — Techniques for getting better results from AI models
  • Building with LLM APIs — Calling AI models from your own code using JavaScript and Python
  • RAG (Retrieval-Augmented Generation) — Grounding AI responses in your own data
  • AI Agents & Tools — Building autonomous systems that can take actions
  • AI-Powered Dev Tools — Getting the most out of tools like Kiro, Claude Code, and Codex
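To give a feel for the "Building with LLM APIs" part of the series, here is a minimal sketch of the request shape most chat-style LLM APIs share: a model name plus a list of role-tagged messages. The endpoint URL, model name, and field names below are illustrative placeholders, not any specific provider's API; real providers (OpenAI, Anthropic, and others) differ in URL, auth headers, and exact fields, and the dedicated tutorials cover those details.

```python
import json

# Placeholder endpoint, shown only to illustrate the common request shape.
API_URL = "https://api.example.com/v1/chat/completions"

def build_chat_request(system_prompt: str, user_message: str,
                       model: str = "example-model") -> str:
    """Build the JSON body most chat-style LLM APIs expect:
    a model name plus a list of role-tagged messages."""
    payload = {
        "model": model,
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_message},
        ],
        # Caps the length of the generated response, in tokens.
        "max_tokens": 256,
    }
    return json.dumps(payload)

body = build_chat_request("You are a concise assistant.",
                          "Explain RAG in one sentence.")
print(json.loads(body)["messages"][1]["content"])
```

The system message sets persistent behavior for the application, while each user message carries the actual request; that split is the basis of the "System Prompts & Role Design" tutorial later in the series.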

Getting Started

New to Generative AI? Start with our What is Generative AI? guide for a beginner-friendly introduction, then work through the series in order.

What is Generative AI?

You’ve probably used ChatGPT, asked Copilot to write some code, or generated an image with DALL-E. But what’s actually …

March 28, 2026
#genai #ai
How LLMs Work

In What is Generative AI?, we covered the big picture of GenAI and where Large Language Models fit in. Now let’s go a …

March 28, 2026
#genai #ai
Tokens, Context Windows & Model Parameters

When you work with LLM APIs, three concepts come up constantly: tokens, context windows, and model parameters like …

March 28, 2026
#genai #ai
Embeddings & Vector Search

So far in this series, we’ve focused on LLMs that generate text. But there’s another fundamental capability that powers …

March 28, 2026
#genai #ai
Fine-Tuning vs RAG vs Prompt Engineering

You’ve built a prototype with an LLM and it works pretty well, but the model doesn’t know about your company’s products, …

March 28, 2026
#genai #ai
Prompt Engineering Fundamentals

Prompt engineering is the practice of crafting inputs to get better outputs from LLMs. It’s not magic — it’s about …

March 28, 2026
#genai #ai
Zero-Shot, Few-Shot & Chain-of-Thought Prompting

In Prompt Engineering Fundamentals, we covered the basics of writing effective prompts. Now let’s look at three specific …

March 28, 2026
#genai #ai
System Prompts & Role Design

When you build an application on top of an LLM, you don’t want to repeat the same instructions in every user message. …

March 28, 2026
#genai #ai
Structured Output & JSON Mode

When you’re building applications with LLMs, you usually need the output in a specific format — JSON, CSV, a particular …

March 28, 2026
#genai #ai
Calling LLM APIs with JavaScript

Time to write some code. In this tutorial, we’ll go from zero to making LLM API calls in JavaScript — setting up a …

March 28, 2026
#genai #ai
Calling LLM APIs with Python

This tutorial covers calling LLM APIs using Python — the same concepts from Calling LLM APIs with JavaScript, but with …

March 28, 2026
#genai #ai
Streaming Responses

When you make a standard LLM API call, you wait for the entire response to be generated before you see anything. For …

March 28, 2026
#genai #ai
Error Handling & Rate Limits

LLM API calls fail. Servers go down, rate limits get hit, tokens exceed context windows, and networks time out. If your …

March 28, 2026
#genai #ai
Comparing LLM Providers

There are now dozens of LLM providers and hundreds of models to choose from. This tutorial cuts through the noise and …

March 28, 2026
#genai #ai
Introduction to RAG

LLMs are trained on public data up to a cutoff date. They don’t know about your company’s documentation, your product’s …

March 28, 2026
#genai #ai
Building a RAG Pipeline

In Introduction to RAG, we built a minimal RAG system with an in-memory store and simple documents. Now let’s build …

March 28, 2026
#genai #ai