Why n8n Over Everything Else

The automation platform landscape is crowded. Zapier, Make (formerly Integromat), Power Automate, Tray.io — they all solve the same fundamental problem: connecting systems that don’t natively talk to each other. So why do we reach for n8n?

Three reasons: self-hosting, pricing model, and code-level flexibility.

Zapier charges per task execution. At scale, this adds up fast. A workflow that processes 50 form submissions per day across five steps means 250 tasks — roughly 7,500 per month. On Zapier’s Professional plan, you’re looking at $100+/month for a single workflow. n8n runs on your own infrastructure with no per-execution fees. A €5/month VPS can handle thousands of workflow executions daily.
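The arithmetic behind that estimate, spelled out:

```javascript
// Back-of-envelope task math for a per-task pricing model
const submissionsPerDay = 50;
const stepsPerWorkflow = 5;

const tasksPerDay = submissionsPerDay * stepsPerWorkflow; // 250
const tasksPerMonth = tasksPerDay * 30;                   // 7500

console.log(tasksPerMonth); // 7500
```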

But cost isn’t the whole story. The real advantage is control. n8n lets you write custom JavaScript or Python within any workflow node, access the full request/response cycle, and self-host on your own infrastructure where your data never touches a third party’s servers.

Core Concepts Worth Understanding

Before diving into practical examples, let’s establish how n8n thinks about automation.

Workflows, Nodes, and Triggers

A workflow is a directed graph of nodes. Each node performs an operation — fetching data, transforming it, sending it somewhere. Workflows begin with a trigger node that defines when execution starts: a webhook, a cron schedule, a new email, a database change.

Execution Model

n8n processes data as arrays of items. Each node receives items from the previous node, processes them, and passes items to the next node. This item-based model is powerful because it naturally handles batch operations — process 50 records the same way you process one.

// Inside a Code node, you have access to all input items
const items = $input.all();

return items.map(item => {
  return {
    json: {
      ...item.json,
      processedAt: new Date().toISOString(),
      fullName: `${item.json.firstName} ${item.json.lastName}`
    }
  };
});

Five Workflows We Actually Run in Production

Theory is nice. Let’s look at real workflows we’ve built for ourselves and our clients.

1. Client Inquiry Pipeline

Trigger: Webhook receives form submission from website contact form.

Flow:

  1. Validate and sanitize input data
  2. Check for spam using a simple heuristic scoring node
  3. Create a record in our project management tool (Notion or Linear)
  4. Send a formatted notification to a Slack channel
  5. Send an acknowledgment email to the client via SMTP
  6. If the inquiry mentions specific services, tag it accordingly

This replaces a chain of disconnected integrations and gives us a single place to monitor and debug the entire pipeline.
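The spam-scoring step (step 2) can be as simple as a weighted signal check inside a Code node. A minimal sketch — the signals, weights, and threshold here are illustrative, not our production values:

```javascript
// Naive spam score: each matched signal adds its weight; at or above
// the threshold, the submission is routed to a spam branch.
const SIGNALS = [
  { test: (s) => /https?:\/\//i.test(s.message), weight: 2 },  // links in message body
  { test: (s) => /crypto|casino|seo service/i.test(s.message), weight: 3 },
  { test: (s) => s.message.length < 20, weight: 1 },           // suspiciously short
  { test: (s) => !s.email.includes('@'), weight: 5 },          // malformed email
];

function spamScore(submission) {
  return SIGNALS.reduce((sum, sig) => sum + (sig.test(submission) ? sig.weight : 0), 0);
}

const submission = { email: 'jane@example.com', message: 'Hi, we need a new website for our bakery.' };
const isSpam = spamScore(submission) >= 4; // illustrative threshold
```

A scoring approach beats a binary keyword filter because no single weak signal (a short message, say) can reject a legitimate inquiry on its own.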

2. Invoice Processing

Trigger: New email arrives with PDF attachment matching specific criteria.

Flow:

  1. Extract PDF attachment from email
  2. Send to an OCR/parsing service to extract invoice data
  3. Validate extracted amounts and dates
  4. Create a draft entry in our accounting system
  5. Notify the finance channel with a summary for review

The key detail here is error handling. OCR isn’t perfect, so we built a confidence threshold — if the parsed data falls below 85% confidence, the workflow routes to a manual review queue instead of auto-creating the entry.
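In the Code node between the OCR step and the accounting entry, the routing decision looks roughly like this. The 85% threshold comes from the description above; the field names on the OCR response are assumptions:

```javascript
// Route OCR results: high-confidence, well-formed invoices go to
// auto-create; everything else falls back to manual review.
const CONFIDENCE_THRESHOLD = 0.85;

function routeInvoice(parsed) {
  const ok =
    parsed.confidence >= CONFIDENCE_THRESHOLD &&
    Number.isFinite(parsed.total) &&
    !Number.isNaN(Date.parse(parsed.invoiceDate));
  return ok ? 'auto-create' : 'manual-review';
}

routeInvoice({ confidence: 0.92, total: 1240.5, invoiceDate: '2024-03-01' }); // 'auto-create'
routeInvoice({ confidence: 0.71, total: 1240.5, invoiceDate: '2024-03-01' }); // 'manual-review'
```

Note that the validation in step 3 folds into the same decision: a confident parse with an unparseable date still goes to review.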

3. Content Publishing Pipeline

Trigger: Webhook from headless CMS on content status change to “Published.”

Flow:

  1. Fetch the full content entry from the CMS API
  2. Trigger a site rebuild via the hosting platform’s deploy hook
  3. Wait for deployment to complete (polling the deploy status API)
  4. Purge relevant CDN cache paths
  5. Post a social media update with the article title and link
  6. Log the publication event to our internal dashboard
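Step 3 — waiting for the deployment — is a poll loop with a timeout. A sketch of the pattern, with the status check injected as a function because the actual endpoint and response shape depend on your hosting platform:

```javascript
// Poll an async status check until it reports 'ready', with a fixed
// interval and an overall timeout so the workflow can't hang forever.
async function waitForDeploy(checkStatus, { intervalMs = 5000, timeoutMs = 300000 } = {}) {
  const deadline = Date.now() + timeoutMs;
  while (Date.now() < deadline) {
    const status = await checkStatus(); // e.g. GET the host's deploy-status API
    if (status === 'ready') return true;
    if (status === 'error') throw new Error('Deploy failed');
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  throw new Error('Deploy timed out');
}
```

In n8n you would typically express this as a Wait node inside a loop rather than hand-rolled JavaScript, but the timeout and the explicit failure state matter either way.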

4. Uptime and Performance Monitoring

Trigger: Cron schedule, every 5 minutes.

Flow:

  1. HTTP Request nodes hit each monitored endpoint
  2. Function node evaluates response time and status code
  3. If response time exceeds threshold or status is non-200, trigger alert
  4. Alert goes to Slack with contextual information (which site, what error, response time)
  5. If the issue persists for 3 consecutive checks, escalate to email and SMS

We run this instead of paid monitoring services for simpler projects. For critical infrastructure, we still use dedicated monitoring, but for client sites and internal tools, this workflow covers 90% of our needs.
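The escalation rule in step 5 needs a little state between runs; in n8n that state can live in workflow static data. A simplified sketch of the counting logic, with the alert delivery itself left out:

```javascript
// Track consecutive failures per endpoint: alert on every failure,
// escalate once the failure streak reaches the limit.
const ESCALATE_AFTER = 3;

function evaluateCheck(state, url, failed) {
  const streak = failed ? (state[url] || 0) + 1 : 0; // a success resets the streak
  state[url] = streak;
  return {
    alert: failed,                       // Slack on every failed check
    escalate: streak === ESCALATE_AFTER, // email + SMS exactly once, at 3 in a row
  };
}

const state = {}; // in n8n, something like $getWorkflowStaticData('global')
evaluateCheck(state, 'https://example.com', true); // { alert: true, escalate: false }
evaluateCheck(state, 'https://example.com', true); // { alert: true, escalate: false }
evaluateCheck(state, 'https://example.com', true); // { alert: true, escalate: true }
```

Comparing with `===` rather than `>=` keeps the escalation from re-firing on every check while an outage is ongoing.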

5. Weekly Client Report Generator

Trigger: Cron schedule, every Monday at 8:00 AM.

Flow:

  1. Query analytics API for the previous week’s metrics
  2. Query uptime monitoring data
  3. Aggregate and format data into a structured report
  4. Generate a PDF using a templating service
  5. Email the report to the client’s designated contact

This saves roughly 2 hours of manual work per client per week.
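The aggregation step (step 3) is mostly reshaping two API responses into one report object. A sketch with made-up metric names — your analytics and uptime payloads will differ:

```javascript
// Merge a week of analytics data and uptime checks into one report.
function buildReport(client, analyticsDays, uptimeChecks) {
  const pageviews = analyticsDays.reduce((sum, day) => sum + day.pageviews, 0);
  const upChecks = uptimeChecks.filter((c) => c.ok).length;
  return {
    client,
    pageviews,
    uptimePercent: Math.round((upChecks / uptimeChecks.length) * 10000) / 100,
  };
}

buildReport(
  'Acme Co',
  [{ pageviews: 120 }, { pageviews: 95 }],
  [{ ok: true }, { ok: true }, { ok: false }, { ok: true }]
);
// → { client: 'Acme Co', pageviews: 215, uptimePercent: 75 }
```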

Patterns for Reliable Workflows

Building workflows that work once is easy. Building workflows that work reliably over months requires discipline.

Error Handling

Every workflow should have an error path. n8n supports this via the Error Trigger node: a workflow starting with that node runs whenever another workflow that designates it as its error workflow fails. We use this to send all workflow errors to a dedicated Slack channel with the workflow name, the node that failed, the error message, and the execution ID for debugging.

// Error trigger workflow — format the error for Slack
const error = $input.first().json;

return [{
  json: {
    text: `Workflow failed: *${error.workflow.name}*\n` +
          `Node: ${error.execution.lastNodeExecuted}\n` +
          `Error: ${error.execution.error.message}\n` +
          `Execution: ${error.execution.id}`
  }
}];

Idempotency

Workflows can retry. Network requests can timeout and succeed on the server side. Design every workflow so that running it twice with the same input produces the same result. Use unique identifiers to check for existing records before creating duplicates.
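Concretely, that means a lookup keyed on a stable identifier before any create call. A sketch — the `findByExternalId`/`create` functions stand in for whatever API nodes you actually use:

```javascript
// Idempotent create: derive a stable key from the input, look it up,
// and only create when no record exists. Retries become no-ops.
async function upsertInquiry(api, submission) {
  const key = submission.formId; // stable ID from the form provider, not a timestamp
  const existing = await api.findByExternalId(key);
  if (existing) return existing; // second run with the same input: same result
  return api.create({ externalId: key, ...submission });
}
```

The key must come from the event itself (a form submission ID, an email Message-ID), never from something generated at execution time, or a retry will look like new input.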

Rate Limiting

Third-party APIs have rate limits. If your workflow processes a batch of 200 items and each item requires an API call, you’ll likely hit a rate limit. n8n’s SplitInBatches node lets you process items in groups with a configurable delay between batches.
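SplitInBatches covers this inside n8n itself; the equivalent logic, if you ever need it in a Code node, is just chunking with a pause. Batch size and delay below are illustrative — tune them to the API's documented limits:

```javascript
// Process items in fixed-size batches with a delay between batches,
// to stay under a third-party API's rate limit.
async function processInBatches(items, handler, { batchSize = 20, delayMs = 1000 } = {}) {
  const results = [];
  for (let i = 0; i < items.length; i += batchSize) {
    const batch = items.slice(i, i + batchSize);
    results.push(...await Promise.all(batch.map(handler)));
    if (i + batchSize < items.length) {
      await new Promise((resolve) => setTimeout(resolve, delayMs)); // pause before next batch
    }
  }
  return results;
}
```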

Credential Management

n8n encrypts credentials at rest, but self-hosting means you’re responsible for the encryption key. Store it as an environment variable, never in the n8n configuration file. Use a dedicated service account for each integration rather than personal credentials.

Infrastructure Recommendations

For production n8n deployments, here’s what we recommend:

  • Hosting: A dedicated VPS with at least 2GB RAM, or a container on your existing infrastructure. Docker Compose is the simplest deployment method.
  • Database: PostgreSQL instead of the default SQLite. SQLite works for development but doesn’t handle concurrent executions well.
  • Reverse proxy: Nginx or Caddy in front of n8n with TLS termination.
  • Backups: Regular database backups and workflow export (n8n supports JSON export of all workflows).
  • Updates: Pin your n8n version and test updates in a staging environment before applying to production.
# docker-compose.yml — production n8n setup
version: '3.8'
services:
  n8n:
    image: n8nio/n8n:1.30.1
    restart: always
    ports:
      - "5678:5678"
    environment:
      - N8N_ENCRYPTION_KEY=${N8N_ENCRYPTION_KEY}
      - DB_TYPE=postgresdb
      - DB_POSTGRESDB_HOST=postgres
      - DB_POSTGRESDB_DATABASE=n8n
      - DB_POSTGRESDB_USER=n8n
      - DB_POSTGRESDB_PASSWORD=${DB_PASSWORD}
      - WEBHOOK_URL=https://auto.yourdomain.com/
    volumes:
      - n8n_data:/home/node/.n8n

  postgres:
    image: postgres:16
    restart: always
    environment:
      - POSTGRES_DB=n8n
      - POSTGRES_USER=n8n
      - POSTGRES_PASSWORD=${DB_PASSWORD}
    volumes:
      - postgres_data:/var/lib/postgresql/data

volumes:
  n8n_data:
  postgres_data:

When n8n Isn’t the Answer

n8n is excellent for event-driven workflows that connect systems and transform data. It’s not a general-purpose application platform. If your “workflow” is really a complex application with branching business logic, state management, and a user interface, you should build an application, not a workflow.

Similarly, if you need sub-second latency for real-time data processing, n8n’s execution overhead (typically 50-200ms per node) makes it unsuitable. Use purpose-built stream processing tools instead.

Getting Started

If you’re new to n8n, start with one workflow that solves a real problem you have today. The form-submission-to-notification pipeline is a great first project — it’s simple enough to complete in an afternoon but immediately useful.

Self-host from day one. The Docker setup takes 15 minutes, and you’ll immediately benefit from the lack of execution limits. Build the habit of adding error handling to every workflow, even simple ones. Your future self will thank you when something breaks at 2 AM and the error notification tells you exactly what happened.