Prompt Engineering Fundamentals
Prompt engineering is the practice of crafting inputs to get better outputs from LLMs. It’s not magic — it’s about understanding how these models process text and structuring your requests accordingly. A well-written prompt can be the difference between a vague, unhelpful response and exactly what you need.
This tutorial covers the foundational techniques. If you’re new to GenAI, start with What is Generative AI? and How LLMs Work first — understanding how models predict tokens will make these techniques more intuitive.
Why Prompts Matter
LLMs are general-purpose text generators. The same model that writes poetry can also debug code, summarize legal documents, or generate SQL queries. What determines the output isn’t the model — it’s the prompt.
Consider these two prompts asking the same question:
Vague prompt:
Tell me about sorting.
Specific prompt:
Explain the quicksort algorithm to a beginner programmer.
Include the time complexity, a brief description of how partitioning works,
and a simple JavaScript implementation.
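To make the contrast concrete, here is roughly the kind of implementation the second prompt should elicit. This is a sketch of a plausible response, not the model's actual output:

```javascript
// Quicksort: average O(n log n) time, worst case O(n^2).
// Partitioning picks a pivot and splits the remaining elements into
// those <= pivot and those > pivot, then recursively sorts each side.
function quicksort(arr) {
  if (arr.length <= 1) return arr; // base case: already sorted
  const pivot = arr[arr.length - 1];
  const rest = arr.slice(0, -1);
  const smaller = rest.filter(x => x <= pivot);
  const larger = rest.filter(x => x > pivot);
  return [...quicksort(smaller), pivot, ...quicksort(larger)];
}
```

Because the prompt named the audience, complexity, and language, a good response will include exactly these pieces rather than a generic overview.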
The first prompt might give you a general overview of sorting in everyday life. The second gives you exactly what you need for a programming tutorial. The model is the same — the prompt made the difference.
The Anatomy of a Good Prompt
Effective prompts tend to share a few characteristics. Not every prompt needs all of these, but knowing them gives you a toolkit to reach for when results aren’t what you want.
1. Be Specific About What You Want
The more specific your instructions, the better the output. Vague prompts produce vague results.
❌ Write some code for a web server.
✅ Write a minimal Express.js server in Node.js that serves a JSON response
at GET /api/health with the body { "status": "ok" }.
2. Provide Context
LLMs don’t know your situation unless you tell them. Include relevant background information.
❌ Why is my query slow?
✅ I have a PostgreSQL table "orders" with 10 million rows.
This query takes 30 seconds:
SELECT * FROM orders WHERE customer_id = 12345 ORDER BY created_at DESC;
The customer_id column is not indexed. Why is it slow and how can I fix it?
3. Specify the Format
If you want a specific output format, say so explicitly.
✅ List the top 5 JavaScript array methods. For each one, provide:
- The method name
- A one-sentence description
- A short code example
4. Define the Audience or Tone
Tell the model who the output is for.
✅ Explain Docker containers to a junior developer who has never used them.
Use simple language and a real-world analogy.
5. Set Constraints
Constraints help focus the output and prevent the model from going off track.
✅ Summarize this article in exactly 3 bullet points, each no longer than
one sentence.
Common Prompt Patterns
Here are patterns that work well across many use cases.
The Instruction Pattern
The simplest and most common pattern. Give a clear instruction, optionally with context.
Convert this CSV data to a JSON array:
name,age,city
Alice,30,New York
Bob,25,London
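The transformation this prompt requests is mechanical, which makes it easy to verify the model's answer. For comparison, here is a hedged sketch of the same conversion in JavaScript; it is a naive parser that assumes no quoted or escaped fields:

```javascript
// Naive CSV-to-JSON conversion (assumes no quoted or escaped fields).
function csvToJson(csv) {
  const [headerLine, ...rows] = csv.trim().split('\n');
  const headers = headerLine.split(',');
  return rows.map(row => {
    const values = row.split(',');
    // Pair each header with the value in the same column.
    return Object.fromEntries(headers.map((h, i) => [h, values[i]]));
  });
}

// csvToJson('name,age,city\nAlice,30,New York\nBob,25,London')
// yields objects like { name: 'Alice', age: '30', city: 'New York' }
```

Note that every value comes out as a string; if you want `age` as a number, say so in the prompt.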
The Role Pattern
Assign the model a role or persona to shape its responses. This is especially useful when you want domain-specific expertise or a particular communication style.
You are a senior database administrator with 15 years of experience.
Review the following SQL schema and suggest improvements for performance
and data integrity.
We cover this in much more depth in System Prompts & Role Design.
The Template Pattern
Provide a template and ask the model to fill it in. This gives you precise control over the output structure.
Generate a changelog entry using this template:
## [VERSION] - DATE
### Added
- FEATURE_DESCRIPTION
### Fixed
- BUG_DESCRIPTION
The version is 2.3.0, released today. We added dark mode support
and fixed a crash when uploading files larger than 10MB.
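Given that template and those details, the expected completion looks something like the following. This is one plausible rendering; the date placeholder stands in for whatever "today" resolves to:

```markdown
## [2.3.0] - YYYY-MM-DD
### Added
- Dark mode support
### Fixed
- Crash when uploading files larger than 10MB
```

Because the structure was dictated up front, the model has no room to improvise headings or ordering.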
The Step-by-Step Pattern
Ask the model to break down its work into steps. This often produces more accurate results for complex tasks because it forces the model to “show its work.”
Determine whether this JavaScript function has any bugs.
Think through it step by step:
1. What does the function intend to do?
2. Trace through the logic with a sample input.
3. Identify any issues.
function findMax(arr) {
  let max = 0;
  for (let i = 0; i < arr.length; i++) {
    if (arr[i] > max) max = arr[i];
  }
  return max;
}
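If you work through the exercise above, the function does have a bug: initializing max to 0 means an array of all-negative numbers incorrectly returns 0 (and an empty array silently returns 0 as well). One possible fix, sketched here:

```javascript
// Corrected version: start from the first element instead of 0,
// so all-negative arrays work, and make the empty-array case explicit.
function findMax(arr) {
  if (arr.length === 0) return undefined; // explicit empty-array behavior
  let max = arr[0];
  for (let i = 1; i < arr.length; i++) {
    if (arr[i] > max) max = arr[i];
  }
  return max;
}
```

A step-by-step prompt tends to surface exactly this kind of edge case, because tracing a sample input like [-5, -2, -9] exposes the bad initial value.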
This is closely related to chain-of-thought prompting, which we cover in Zero-Shot, Few-Shot & Chain-of-Thought.
Iterating on Prompts
Prompt engineering is iterative. Your first prompt rarely produces the perfect result. Here’s a practical workflow:
- Start simple. Write a straightforward prompt and see what you get.
- Identify what’s wrong. Is the output too long? Too vague? Wrong format? Missing information?
- Add constraints or context to address the specific issue.
- Repeat until the output meets your needs.
For example, if a code generation prompt produces working but unreadable code, add: “Include clear variable names and brief comments explaining each step.”
If a summary is too long, add: “Keep the summary under 100 words.”
Common Mistakes
Being Too Vague
❌ Help me with my code.
The model doesn’t know what language you’re using, what the code does, what’s wrong, or what kind of help you want. Always provide the code, describe the problem, and state what you’re looking for.
Overloading a Single Prompt
Asking the model to do too many things at once often produces mediocre results for all of them. If you need a function written, tested, documented, and optimized — break that into separate prompts or clearly numbered steps.
Not Specifying Output Format
If you need JSON, say “respond with valid JSON.” If you need a numbered list, say so. LLMs will match whatever format seems most natural for the prompt, which may not be what you want.
Assuming the Model Knows Your Context
The model only knows what’s in the current conversation. It doesn’t know your project structure, your database schema, or your team’s coding conventions unless you include that information.
A Practical Example
Let’s put these principles together. Say you want to generate a utility function for your Node.js project:
Write a JavaScript function called `retryAsync` that:
- Takes an async function and a max retry count (default 3)
- Retries the function if it throws an error
- Waits 1 second between retries (doubling each time: 1s, 2s, 4s)
- Throws the last error if all retries fail
- Returns the result on success
Use async/await. No external dependencies.
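A response meeting this spec might look like the sketch below. The optional baseDelayMs parameter is an addition of mine for testability; the prompt itself fixes the base wait at 1 second:

```javascript
// Retry an async function with exponential backoff.
// baseDelayMs defaults to 1000 (waits of 1s, 2s, 4s, ...); it is exposed
// as a parameter here only so the behavior is easy to test.
async function retryAsync(fn, maxRetries = 3, baseDelayMs = 1000) {
  let lastError;
  // Initial attempt plus up to maxRetries retries.
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    try {
      return await fn(); // success: return the result immediately
    } catch (err) {
      lastError = err;
      if (attempt < maxRetries) {
        // Double the wait on each retry: base, 2x base, 4x base, ...
        const delay = baseDelayMs * 2 ** attempt;
        await new Promise(resolve => setTimeout(resolve, delay));
      }
    }
  }
  throw lastError; // all retries failed
}
```

Notice how each bullet in the prompt maps to a line or two of code; when a spec is this precise, checking the output against it is straightforward.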
This prompt is specific about the function name, parameters, behavior, error handling, and constraints. The model has everything it needs to produce exactly what you want.
What’s Next?
These fundamentals will get you far, but there’s more to prompt engineering. In Zero-Shot, Few-Shot & Chain-of-Thought, we’ll cover three powerful techniques for improving accuracy: prompting without examples, prompting with examples, and getting the model to reason through problems step by step.