Affiliate Disclosure: This post contains links to tools we recommend. If you click and make a purchase, we may receive a small commission at no extra cost to you.
How to Build Your First AI Agent with n8n
Most people think building an AI agent requires a computer science degree and months of work. It does not. With n8n and an LLM API key, you can have a working agent — one that thinks, decides, and takes action — running in under an hour.
This guide walks you through building a real content research agent from scratch. By the end, you will have a workflow that takes a topic, generates research angles with AI, and saves the results to a spreadsheet. No heavy coding. No overengineering.
What you will build
A content research agent that follows this pattern:
Trigger → Think → Act → Store
Specifically:
1. You input a topic and target audience
2. An LLM generates article angles, FAQs, and content ideas
3. The results get saved to Google Sheets automatically
This is a real agentic workflow. It does not just generate text — it takes an input, reasons about it, and produces structured output in an external system. That is the core of what makes something an AI agent rather than a chatbot.
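The Trigger → Think → Act → Store pattern can be sketched as plain functions, one per node. This is a conceptual sketch, not n8n code; think, act, and store are hypothetical stand-ins for the LLM call, the output shaping, and the Sheets write (the real LLM call is async, but synchronous stubs keep the shape obvious).

```javascript
// Conceptual sketch of Trigger -> Think -> Act -> Store as plain functions.
// In the real workflow each step is an n8n node.
function runAgent(input, { think, act, store }) {
  const thought = think(input);  // Think: the LLM reasons about the input
  const result = act(thought);   // Act: shape reasoning into structured output
  store(result);                 // Store: persist it in an external system
  return result;
}

// Hypothetical stand-ins for the tutorial's nodes:
const stored = [];
const row = runAgent(
  { topic: "agentic AI for small business" },
  {
    think: (input) => `article angles for ${input.topic}`,
    act: (thought) => ({ output: thought, created_at: "2024-01-01" }),
    store: (result) => stored.push(result),
  }
);
```

Everything in this tutorial is a concrete version of those three callbacks.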
What you need before starting
Three things, all free or nearly free to set up:
1. An n8n account. Use n8n cloud for the fastest setup — you get a 14-day free trial with no credit card. If you prefer self-hosting, our n8n review covers the deployment options.
2. An OpenAI API key. Go to platform.openai.com, create an account, and generate an API key. This costs a few cents per workflow run. You can also use Anthropic's Claude API — the workflow structure is the same, only the API endpoint changes.
3. A Google Sheet. Create a blank spreadsheet with three column headers: topic, output, created_at. This is where your agent stores its results.
That is the entire stack. No Docker, no databases, no frameworks.
Step 1: Set up n8n
If you are using n8n cloud, sign up and you will land in the workflow editor immediately. It is a visual canvas where you drag nodes (actions) and connect them with wires (data flow).
If you are self-hosting, the quickest path is Docker:
docker run -it --rm --name n8n -p 5678:5678 n8nio/n8n
Open http://localhost:5678 in your browser and you are in the editor. Add -v n8n_data:/home/node/.n8n to the Docker command if you want your workflows to survive container restarts.
For this tutorial, cloud is easier. Self-hosting becomes valuable once you are running agents in production and want unlimited executions at zero marginal cost — see our full n8n review for the cost breakdown.
Step 2: Build the workflow
Your workflow has six nodes connected in sequence. Here is what each one does and why it exists.
Node 1: Manual Trigger
This is how you start the workflow. Click "Add first step" and select Manual Trigger. Later you can replace this with a webhook (so external services can trigger your agent), a schedule (so it runs automatically), or a form (so you can input topics from a web page).
For now, manual is fine. You click a button, the agent runs.
Node 2: Set Topic
Add a Set node after the trigger. This is where you define your input — the topic and audience your agent will research.
Create two string fields:
- topic → set to something like "agentic AI for small business"
- audience → set to "operators and solopreneurs"
This node is important because it separates your input from the trigger. When you later upgrade to a webhook or form trigger, you only change this node — everything downstream stays the same.
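For reference, here is a minimal sketch of the item shape the Set node passes downstream. n8n moves data between nodes as an array of { json } items, and the field names here match the two fields you just created.

```javascript
// Sketch of the item the Set node emits. Every n8n node passes data as an
// array of items, each with a json property holding your fields.
const setNodeOutput = [
  {
    json: {
      topic: "agentic AI for small business",
      audience: "operators and solopreneurs",
    },
  },
];

// Downstream nodes read these fields with expressions like {{$json.topic}}.
const firstItem = setNodeOutput[0].json;
```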
Node 3: Build Prompt
Add a Code node. This takes your topic and audience and constructs a detailed prompt for the LLM.
Here is the JavaScript:
const topic = $json.topic;
const audience = $json.audience;
return [{
  json: {
    prompt: `You are a content strategist. For the topic "${topic}" and audience "${audience}", generate 10 article angles, 10 FAQs, and 5 monetization-friendly article ideas. Return clean markdown.`
  }
}];
Why a Code node instead of passing the topic directly to the LLM? Because prompt engineering matters. A well-structured prompt with specific instructions produces dramatically better output than a bare topic. This node is where you control the quality of your agent's thinking.
Node 4: OpenAI Request
Add an HTTP Request node. Configure it:
- Method: POST
- URL: https://api.openai.com/v1/chat/completions
- Headers: Authorization: Bearer YOUR_API_KEY and Content-Type: application/json
- Body (JSON):
{"model":"gpt-4o-mini","messages":[{"role":"user","content":"{{$json.prompt}}"}],"temperature":0.7}
One caveat: if your prompt contains double quotes or newlines (ours does, around the topic), the inline {{$json.prompt}} expression produces invalid JSON. The safe fix is to set the body to an expression and let JSON.stringify handle the escaping, for example {{ JSON.stringify({ model: "gpt-4o-mini", messages: [{ role: "user", content: $json.prompt }], temperature: 0.7 }) }}.
This is where the thinking happens. The LLM receives your prompt and generates the research output. gpt-4o-mini is cheap and fast for this kind of structured generation. You can swap in gpt-4o for higher quality, or Claude for a different reasoning style.
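If you prefer to see the request body as code, here is a sketch of the same payload built in JavaScript. buildChatBody is a hypothetical helper, not part of n8n; the point it illustrates is that JSON.stringify escapes quotes and newlines in the prompt for you.

```javascript
// Sketch: building the chat completions payload in code. JSON.stringify
// handles escaping, which naive string templating into a JSON body does not.
function buildChatBody(prompt, model = "gpt-4o-mini") {
  return JSON.stringify({
    model,
    messages: [{ role: "user", content: prompt }],
    temperature: 0.7,
  });
}

// The prompt can safely contain quotes and newlines:
const body = buildChatBody('Research "agentic AI"\nfor operators');
```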
Node 5: Parse Output
Add another Code node to extract the useful content from the API response:
const response = $json.choices?.[0]?.message?.content || 'No content returned';
return [{
  json: {
    // Read the topic from the Set Topic node, not from this node's input
    // (the input here is the API response, which has no topic field)
    topic: $('Set Topic').first().json.topic || 'unknown',
    output: response,
    created_at: new Date().toISOString()
  }
}];
This cleans up the API response into three fields that map directly to your spreadsheet columns.
Node 6: Save to Google Sheets
Add a Google Sheets node. Connect your Google account, select your spreadsheet, and map the three fields (topic, output, created_at) to your columns.
Now your agent's output persists outside of n8n. You can review it, share it, or feed it into another workflow.
Step 3: Run it
Click Execute Workflow in the top right. Watch each node light up in sequence as data flows through. In about 10-15 seconds, your Google Sheet will have a row of AI-generated content research.
That is your first AI agent. It followed the pattern: Trigger → Think → Act → Store.
Step 4: Make it smarter (optional upgrades)
Once the basic flow works, here are three upgrades worth considering:
Add a web search step
Insert an HTTP Request node between the Set Topic and Build Prompt nodes that searches a web API (like SerpAPI or Brave Search) for recent articles on your topic. Pass the search results into your prompt so the LLM reasons about real, current information — not just its training data.
This turns your agent from a brainstorming tool into a research tool.
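Here is a sketch of what that extra Code node might look like. The results array and its title and snippet fields are assumptions; adapt them to whatever your search API actually returns.

```javascript
// Sketch of a Code node that folds search results into the research prompt.
// Field names (title, snippet) are assumptions about the search API's output.
function buildPromptWithContext(topic, audience, results) {
  const context = results
    .slice(0, 5) // keep the prompt short: top five results only
    .map((r, i) => `${i + 1}. ${r.title}: ${r.snippet}`)
    .join("\n");
  return (
    `You are a content strategist. For the topic "${topic}" and audience ` +
    `"${audience}", generate 10 article angles, 10 FAQs, and 5 ` +
    `monetization-friendly article ideas.\n\n` +
    `Recent coverage to ground your reasoning:\n${context}`
  );
}
```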
Replace manual trigger with a webhook
Swap the Manual Trigger for a Webhook node. Now your agent has a URL that any external service can call. Connect it to a Slack command, a form on your website, or another workflow. Your agent becomes a service, not a button you click.
Add email delivery
Add a Gmail or Send Email node at the end. Instead of (or in addition to) saving to Sheets, email the research directly to yourself or your team. Now the agent proactively delivers results to you.
Common mistakes to avoid
Overcomplicating your first agent. Do not add vector databases, multi-agent orchestration, or complex branching on day one. Get the basic Trigger → Think → Act → Store pattern working first. You can always add complexity later.
Using the wrong model for the task. gpt-4o-mini is fine for structured content generation. You do not need the most expensive model for every workflow. Match the model to the task complexity.
Skipping error handling. Add an If node after the OpenAI Request that checks whether choices exists in the response. If the API fails (rate limit, bad key, timeout), your workflow should handle it gracefully — not silently produce empty rows in your spreadsheet.
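Instead of an If node, you can also fail loudly inside the Parse Output Code node. A sketch, assuming the OpenAI response shape from Step 2:

```javascript
// Sketch: throw on a malformed response so n8n marks the execution as failed
// instead of silently writing an empty row to the spreadsheet.
function extractContent(response) {
  const content = response?.choices?.[0]?.message?.content;
  if (!content) {
    throw new Error(
      "OpenAI response missing content: " +
        JSON.stringify(response?.error ?? response)
    );
  }
  return content;
}
```

A thrown error stops the workflow and surfaces in n8n's execution log, which is usually what you want while debugging.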
Forgetting to test with real inputs. Run your agent with 3-5 different topics before considering it done. Edge cases (very broad topics, topics with special characters, empty inputs) will surface problems early.
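A tiny pre-flight check helps with those edge cases. This is a hypothetical helper you could drop into a Code node after Set Topic to catch empty or oversized inputs before they reach the LLM; the thresholds are arbitrary examples.

```javascript
// Sketch of input validation for the agent. Thresholds are arbitrary examples.
function validateInput({ topic, audience }) {
  const problems = [];
  if (!topic || !topic.trim()) problems.push("topic is empty");
  else if (topic.length > 200) problems.push("topic is suspiciously long");
  if (!audience || !audience.trim()) problems.push("audience is empty");
  return problems; // an empty array means the input looks sane
}
```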
What makes this an "agent" and not just automation?
Fair question. The line between automation and agents is genuinely blurry — we cover the full spectrum in our guide on AI agents vs chatbots.
The short version: this workflow is a simple agent because the LLM is making decisions about what to output based on reasoning, not just following a fixed template. A pure automation would do the same thing every time regardless of input. Your agent adapts its output to the topic and audience you give it.
As you add web search, conditional branching, and tool selection, it becomes more agentic. Full multi-agent systems like CrewAI take this further by having multiple agents collaborate — but that is a later step.
Why n8n is the right tool for this
We tested this workflow pattern across several platforms. n8n wins for beginners building AI agents because of three things:
The visual editor makes data flow visible. You can see exactly what data enters and leaves each node. When something breaks, you know where.
Execution-based billing is honest. This entire workflow is one execution. On Zapier, the same workflow would consume 5-6 tasks per run. At scale, n8n is dramatically cheaper — see our n8n vs Zapier comparison for the full cost breakdown.
Self-hosting removes the ceiling. Once you outgrow the cloud trial, you can self-host n8n for free with unlimited executions. No other automation platform offers this.
For more detail on n8n's strengths and limitations, read our complete n8n review. For a broader view of the tool landscape, see best AI agent tools.
What to build next
Once your first agent is running, here are the highest-value next steps:
Copy more templates. We have 3 ready-made n8n agent workflows you can import directly — a Reddit insight agent, a monitoring alert agent, and a variation of the content research agent. Each follows the same Trigger → Think → Act → Store pattern.
Explore the ecosystem. See our best no-code AI tools guide for alternatives to n8n, and our best AI agent tools for the broader landscape including developer frameworks.
Understand the bigger picture. Our guide on what is OpenClaw explains the viral open-source agent platform that started the agentic AI movement — and how tools like n8n fit into that ecosystem.