Notes on effectively using AI
I cover the use of LLMs in several of the books I have written. This blog post collects general-purpose advice for using AI effectively, organized around four techniques:
- Use one-shot or few-shot prompting: show the AI what you want with examples.
- Chain of thought reasoning: LLMs are auto-regressive models, so the tokens they generate "become part of the prompt." Without a request for chain of thought reasoning, a model that makes a quick guess at an answer will have that guess drive the rest of the token generation process.
- Assigning a role to the AI assistant: for example "You are a historian who uses web search to verify your analysis of history and current events."
- Reverse prompting: explicitly tell the AI to ask you questions (important because modern system prompts often coach a model not to do this).
One-shot or few-shot prompting
As an example, you might want to process input text and save people's names and email addresses in some structured output format, like JSON. If you don't start the prompt with context for the format you want, results will vary. Here is an example prompt; the text to be processed is placed after Text:.
You are an information extraction system.
Extract all people’s **full names** and **email addresses** from the following text.
If no names or emails are present, return an empty list.
Return the result strictly in this JSON format:

{
  "contacts": [
    {
      "name": "<full name as written in text>",
      "email": "<email address>"
    }
  ]
}

Text:
Hi, I’m Alice Johnson, please email me at alice.j@example.com.
Also, you can reach Bob Smith via bob.smith42@gmail.com.
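A prompt like this can be assembled and its reply validated in a few lines of code. The sketch below uses hypothetical helper names (PROMPT_TEMPLATE, build_prompt, parse_contacts) and assumes the model returns raw JSON as instructed:

```python
import json

# Template for the extraction prompt above; {text} is filled in per call.
PROMPT_TEMPLATE = """You are an information extraction system.
Extract all people's full names and email addresses from the following text.
If no names or emails are present, return an empty list.
Return the result strictly in this JSON format:
{{"contacts": [{{"name": "<full name>", "email": "<email address>"}}]}}

Text:
{text}"""

def build_prompt(text: str) -> str:
    """Fill the template with the text to be processed."""
    return PROMPT_TEMPLATE.format(text=text)

def parse_contacts(reply: str) -> list[dict]:
    """Parse the model's JSON reply, keeping only complete entries."""
    data = json.loads(reply)
    return [c for c in data.get("contacts", []) if "name" in c and "email" in c]
```

In practice you would pass `build_prompt(...)` to whatever LLM API you use and run the reply through `parse_contacts`, which also guards against partially filled entries.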
Chain of thought reasoning
Treat the AI as you would a person: start a prompt with something like this:
Before answering, give a concise reasoning summary: list the key assumptions and the 3–7 high-level steps you’ll take (no inner monologue). Then provide the final answer, followed by a quick self-check (one possible pitfall or edge case). Use Markdown, keep it succinct, and prefer bullet points.
Question: ‹your question›
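If you use this preamble often, it is worth wrapping in a small helper so every question gets the same treatment. A minimal sketch (COT_PREAMBLE and cot_prompt are names invented here):

```python
# Reusable chain-of-thought preamble from the template above.
COT_PREAMBLE = (
    "Before answering, give a concise reasoning summary: list the key "
    "assumptions and the 3-7 high-level steps you'll take (no inner "
    "monologue). Then provide the final answer, followed by a quick "
    "self-check (one possible pitfall or edge case). Use Markdown, keep "
    "it succinct, and prefer bullet points.\n\n"
)

def cot_prompt(question: str) -> str:
    """Prepend the chain-of-thought preamble to any question."""
    return COT_PREAMBLE + "Question: " + question
```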
Assigning a role to the AI assistant
All-purpose role prompt
You are a ‹role›. Always adopt this perspective when reasoning and answering. If relevant, use external verification (e.g. web search, references, or sources).
Before answering: give a reasoning summary (assumptions + 3–7 steps).
Then: provide the final answer, followed by a self-check (biases, gaps, or counterexamples).
Question: ‹your question›
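The all-purpose template parametrizes naturally over the role. A minimal sketch (role_prompt is a hypothetical helper, not part of any API):

```python
def role_prompt(role: str, question: str) -> str:
    """Fill the all-purpose role template for a given role and question."""
    return (
        f"You are a {role}. Always adopt this perspective when reasoning "
        "and answering. If relevant, use external verification (e.g. web "
        "search, references, or sources).\n"
        "Before answering: give a reasoning summary (assumptions + 3-7 steps).\n"
        "Then: provide the final answer, followed by a self-check (biases, "
        "gaps, or counterexamples).\n"
        f"Question: {question}"
    )
```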
Ultra-compact variant
Act as a ‹role›.
Reasoning summary → answer → self-check.
Q: ‹your question›
Example (historian role)
You are a historian who cross-checks analysis using web sources.
Before answering: assumptions + reasoning summary.
Then: provide the historical analysis, with citations when possible.
Self-check: note one uncertainty, bias, or missing source.
Q: How did the Black Death reshape European economies?
Coding / technical role
You are a senior software engineer specializing in Python and AI.
Step 1: reasoning summary.
Step 2: final code, with inline explanation.
Step 3: minimal test + note one edge case.
Task: Build a script that reads a CSV and outputs JSON.
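For reference, a minimal sketch of the kind of script such a prompt might produce (csv_to_json is an illustrative name; it assumes the CSV has a header row):

```python
import csv
import json
import sys

def csv_to_json(csv_path: str) -> str:
    """Read a CSV file with a header row and return its rows as a JSON array."""
    with open(csv_path, newline="", encoding="utf-8") as f:
        rows = list(csv.DictReader(f))
    return json.dumps(rows, indent=2)

if __name__ == "__main__" and len(sys.argv) > 1:
    print(csv_to_json(sys.argv[1]))
```

One edge case worth noting, as the prompt requests: a file with only a header row (or an empty file) produces an empty JSON array rather than an error.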
Reverse prompting
Before answering, first ask me clarifying questions if anything is underspecified, ambiguous, or could change the answer. Keep it concise (max 3 questions). If the question is already clear, briefly state: “No clarifications needed.”
Then: give a reasoning summary (assumptions + 3–7 steps).
Finally: provide the answer with a self-check (one pitfall/edge case).
Question: ‹your question›
Ultra-compact variant
First: ask up to 3 clarifying questions if needed; otherwise say “No clarifications needed.”
Then: reasoning summary → answer → 1-line self-check.
Q: ‹your question›
Coding / technical variant
Step 1: ask clarifying questions if the task is underspecified.
Step 2: reasoning summary.
Step 3: provide final code, plus a usage example and one edge case.
Task: ‹your task›
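Reverse prompting implies a multi-turn exchange: if the model asks questions first, you answer them before expecting the final response. A sketch of that loop, with a stand-in ask_model callable in place of a real LLM API call (all names here are hypothetical):

```python
def run_with_clarifications(question, ask_model, answer_clarifications):
    """Drive one reverse-prompting exchange.

    ask_model(messages) -> model reply string (stand-in for a real API call).
    answer_clarifications(reply) -> the user's answers to the model's questions.
    """
    messages = [{"role": "user", "content":
                 "Before answering, ask up to 3 clarifying questions if "
                 "needed; otherwise say 'No clarifications needed.' and "
                 "answer.\n\nQuestion: " + question}]
    reply = ask_model(messages)
    if "No clarifications needed" not in reply:
        # The model asked questions: supply answers, then get the final reply.
        messages.append({"role": "assistant", "content": reply})
        messages.append({"role": "user", "content": answer_clarifications(reply)})
        reply = ask_model(messages)
    return reply
```

Detecting clarification requests by string matching is crude; a production version might ask the model to tag its reply, but the two-turn shape stays the same.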