Build automations systematically

Choose the right tool for the job. Handle edge cases before they cause problems. Test thoroughly on sample data before going live. Monitor for failures after launch.

Introduction

Most automation failures happen because people rush the build. They create a workflow in 30 minutes, turn it on for all records, and discover it's breaking things. The systematic approach takes longer upfront but saves time fixing problems later.

This chapter shows you how to choose the right automation platform for your needs, handle edge cases that will break simple workflows, test with sample data before going live, and set up monitoring to catch failures quickly.

Choose the right automation tool

Different tools for different needs. Don't try to use HubSpot workflows for everything when Zapier or Make might be better.

Native platform workflows (HubSpot, Salesforce, etc.): Use for automations within the platform. Fastest performance, deepest integration, no API limits to worry about. But limited to that platform's capabilities.

Best for: lead assignment, data updates within CRM, email sequences, task creation, notifications within the platform. Limitations: can't easily connect to external tools, limited custom logic, constrained by platform features.

Integration platforms (Zapier, Make, n8n): Use for connecting different tools. Wide range of integrations, moderate complexity, usage-based pricing. Great for multi-tool workflows.

Best for: form submission in Webflow creates contact in HubSpot, deal closed in HubSpot posts to Slack, new customer in CRM creates project in Notion. Limitations: slower than native (API calls take time), usage costs add up, debugging harder.

Custom code (Python scripts, serverless functions): Use for complex logic that platforms can't handle or when you need maximum control. Unlimited flexibility, but requires development skills and ongoing maintenance.

Best for: complex data transformations, ML-based routing, custom integrations with obscure tools, high-volume batch processing. Limitations: requires technical skills, breaks if APIs change, no visual interface.

The rule: use the simplest tool that can do the job. Don't write Python when a HubSpot workflow works. Don't use Zapier when a native workflow handles it. Each additional tool adds complexity and maintenance burden.

Handle edge cases before they break things

Simple workflows work for 80% of cases. Edge cases are the 20% that break your logic.

Missing data edge cases: What if a required field is empty? Your workflow checks company size to assign leads. But what if the company size field is blank? The workflow breaks or makes the wrong assignment.

Solution: Add conditions checking for empty fields first. If company size is empty, assign to a default queue or create a task for manual review. Don't assume data exists.
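A minimal sketch of this fallback in Python. The field name `company_size` and the queue names are illustrative, not a real CRM schema, and the size is assumed to already be numeric:

```python
def assign_lead(lead: dict) -> str:
    """Route a lead by company size, with an explicit fallback for missing data.
    Field and queue names are illustrative, not a real CRM schema."""
    size = lead.get("company_size")
    if size is None or size == "":
        # Don't assume the field exists: send to a default queue for manual review
        return "default_queue"
    return "senior_rep" if size >= 500 else "standard_rep"
```

The point is the order of checks: the empty-field guard runs before any comparison, so blank records never hit the size logic.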

Unexpected value edge cases: What if someone enters '1000+' instead of '1000' in the company size field? Your condition checks if > 500, but it can't compare text to a number.

Solution: Standardise data entry (use dropdown menus not free text where possible). Add validation in workflow to catch unexpected formats. Have fallback for invalid data.

Timing edge cases: What if someone fills out form, immediately fills out another form, and both trigger the same workflow? Race condition might assign to two different reps or send two welcome emails.

Solution: Add duplicate checks. Before sending welcome email, check if welcome email already sent today. Before assigning, check if already assigned. Use property updates as locks (set 'processing' flag, check it before starting, clear it when done).
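The lock pattern can be sketched as follows. This is a single-process illustration using a dict; in a real CRM the 'processing' flag and 'welcome_email_sent' marker would be contact properties, and you'd rely on the platform's update semantics to make the check-and-set safe under true concurrency:

```python
def run_once(contact: dict, action) -> bool:
    """Run `action` on a contact at most once, using a 'processing' flag as a lock.
    A dict stands in for a CRM record here; property names are illustrative."""
    if contact.get("processing") or contact.get("welcome_email_sent"):
        return False  # already in flight or already done: skip the duplicate trigger
    contact["processing"] = True       # set the lock before starting
    try:
        action(contact)
        contact["welcome_email_sent"] = True
    finally:
        contact["processing"] = False  # clear the lock when done
    return True
```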

Volume edge cases: What if 200 people fill out form simultaneously (conference attendees)? Your workflow handles 5 per hour fine but breaks at high volume.

Solution: Test at expected peak volume. Add rate limiting if needed. Monitor queue depths. Have an alert if the workflow falls more than 1 hour behind.
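The falling-behind alert reduces to one check: has the oldest queued record waited longer than the threshold? A minimal sketch, assuming you track enqueue timestamps:

```python
from collections import deque
from datetime import datetime, timedelta

def check_backlog(queue_times: deque, now: datetime,
                  max_lag: timedelta = timedelta(hours=1)) -> bool:
    """Return True if the oldest queued record has waited longer than max_lag,
    i.e. the workflow has fallen behind and someone should be alerted."""
    return bool(queue_times) and (now - queue_times[0]) > max_lag
```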

For each automation, ask: what could go wrong? Missing data, invalid formats, timing conflicts, high volume, system failures. Handle the top 5 edge cases explicitly.

Test with sample data before going live

Never turn on automation for all records immediately. Test thoroughly with sample data first.

Create test records: Build 5-10 test contacts or deals representing different scenarios. Happy path (everything works perfectly), missing data (critical fields empty), edge cases (unusual values), negative cases (should not trigger workflow).

Example test records for lead assignment: (1) Perfect fit: company size 500, industry finance, all fields complete. Should assign to senior rep. (2) Missing company size: industry finance but company size empty. Should assign to default queue. (3) Too small: company size 20, industry finance. Should go to nurture, not sales. (4) Wrong industry: company size 500, industry hospitality. Should assign to standard rep. (5) Already assigned: company size 500, industry finance, but already has owner. Should not reassign.
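The five scenarios above can be encoded as a test table and run against the assignment logic. The `assign` function below is a toy stand-in mirroring the rules described, not HubSpot's; the size-50 nurture threshold and route names are assumptions for illustration:

```python
def assign(lead: dict) -> str:
    """Toy assignment logic mirroring the scenarios above (illustrative only)."""
    if lead.get("owner"):
        return "keep_owner"           # already assigned: don't reassign
    size = lead.get("company_size")
    if not size:
        return "default_queue"        # missing data: manual review
    if size < 50:
        return "nurture"              # too small for sales
    if lead.get("industry") == "finance" and size >= 500:
        return "senior_rep"
    return "standard_rep"

# One test record per scenario: (description, lead, expected route)
TEST_RECORDS = [
    ("perfect fit",    {"company_size": 500, "industry": "finance"},               "senior_rep"),
    ("missing size",   {"industry": "finance"},                                    "default_queue"),
    ("too small",      {"company_size": 20, "industry": "finance"},                "nurture"),
    ("wrong industry", {"company_size": 500, "industry": "hospitality"},           "standard_rep"),
    ("has owner",      {"company_size": 500, "industry": "finance", "owner": "a"}, "keep_owner"),
]

for name, lead, expected in TEST_RECORDS:
    assert assign(lead) == expected, name
```

Keeping the records in a table like this makes re-running the suite after every logic change trivial.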

Run tests in isolated environment: Use test workflows or test records clearly marked. Don't test on real data. Turn off external actions during testing (don't actually send emails, don't actually create Slack notifications). Check 'do not send' for test emails.

Verify each step: After workflow runs on test record, check: Did right properties update? Did record take right path (if branches exist)? Did actions fire in right order? Did timing work (wait steps respected)? Did it handle edge cases correctly?

Test failure modes: What happens if external service is down? If API call fails? If email bounces? Workflow should handle failures gracefully (log error, create alert, don't just stop silently).
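Graceful failure handling can be sketched as a retry wrapper that logs every attempt and surfaces the final failure instead of swallowing it. This is a generic pattern, not any platform's built-in behaviour:

```python
import logging
import time

def call_with_retry(fn, retries: int = 3, delay: float = 0.0):
    """Call an external service, retrying on failure and logging each attempt.
    On final failure, re-raise so monitoring catches it; never fail silently."""
    for attempt in range(1, retries + 1):
        try:
            return fn()
        except Exception as exc:
            logging.warning("attempt %d/%d failed: %s", attempt, retries, exc)
            if attempt == retries:
                logging.error("giving up after %d attempts; alerting ops", retries)
                raise  # surface the failure so an alert fires
            time.sleep(delay)
```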

Only after all test scenarios pass, enable for real records. Start with small batch (10% of traffic) for first week. Monitor closely. If no issues, expand to 100%.
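One simple way to implement the 10% rollout, assuming you control which records enroll: hash each record's ID into a bucket, so the same record always gets the same answer and expanding to 100% just means raising the percentage. This technique is a suggestion, not something any specific platform provides:

```python
import hashlib

def in_rollout(record_id: str, percent: int) -> bool:
    """Deterministically include `percent`% of records in a gradual rollout.
    Hashing the ID keeps each record's answer stable across runs."""
    bucket = int(hashlib.sha256(record_id.encode()).hexdigest(), 16) % 100
    return bucket < percent
```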

Monitor automations and set up alerts

Automations break. APIs change, platforms update, edge cases appear. You need monitoring to catch problems fast.

Error logging: Every workflow should log errors somewhere you'll see them. Native platform error logs (check daily). Email alerts when workflow fails. Slack notifications for critical automations. Don't rely on users reporting problems.

Example: Lead assignment workflow fails because API rate limit exceeded. Error log shows 'API limit reached, 50 leads unassigned'. Alert sent to #ops channel. Fixed within 1 hour rather than discovered days later when reps complain.

Success metrics: Track workflow performance over time. How many records processed daily? How many succeed versus fail? How long does workflow take to complete? Trends reveal problems before they become critical.

Example: Lead assignment workflow normally processes 200 leads/day with 98% success rate. This week: 180 processed, 85% success. Something's wrong (maybe new form added that's missing required fields). Investigate before it gets worse.
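The comparison against baseline can be automated. A sketch, with illustrative baseline numbers and a 5% tolerance; derive real baselines from your own workflow history:

```python
def health_status(processed: int, succeeded: int,
                  baseline_processed: int = 200, baseline_rate: float = 0.98,
                  tolerance: float = 0.05) -> list[str]:
    """Compare today's numbers to a historical baseline; return a list of alerts.
    Baseline values here are illustrative, not universal thresholds."""
    rate = succeeded / processed if processed else 0.0
    alerts = []
    if processed < baseline_processed * (1 - tolerance):
        alerts.append("throughput drop")
    if rate < baseline_rate - tolerance:
        alerts.append("success rate drop")
    return alerts
```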

Health checks: For critical automations, run a daily health check. Send a test record through the workflow and verify it works end-to-end. If the test fails, alert immediately.

Example: Every morning at 8am, create test lead, run through assignment workflow, verify it assigns correctly and sends notification. If any step fails, Slack alert to #ops. Catches problems before real leads affected.
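The canary check can be sketched as a small driver around three stand-in callables: one that creates the marked test lead, one that runs it through the workflow, and one that posts the alert. All three are assumptions standing in for your actual stack:

```python
def daily_health_check(create_test_lead, run_workflow, send_alert) -> bool:
    """End-to-end canary: push a test lead through the workflow and alert on
    any failed step. The three callables are stand-ins for your real stack."""
    lead = create_test_lead()
    try:
        result = run_workflow(lead)
        assert result.get("owner"), "lead was not assigned"
        assert result.get("notified"), "notification was not sent"
        return True
    except Exception as exc:
        send_alert(f"health check failed: {exc}")
        return False
```

Schedule this at a quiet hour and make sure the test lead is clearly marked so downstream reports exclude it.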

Maintenance schedule: Review all automations quarterly. Check if logic still accurate (has ICP changed? are assignment rules still correct?). Update for platform changes. Clean up deprecated workflows. Test edge cases again.

Build a dashboard showing: automation count, records processed per automation, success rate, average execution time, last maintenance date. Review monthly.

Conclusion

Choose the right automation tool: native platform workflows for within-platform tasks, integration platforms for connecting tools, custom code for complex logic. Use the simplest tool that works.

Handle edge cases explicitly: missing data, unexpected values, timing conflicts, volume spikes. Ask what could go wrong and handle top 5 failure modes. Add validation and fallbacks.

Test with sample data before going live. Create test records for happy path, edge cases, and failure modes. Verify each step works. Test in isolated environment. Start with 10% of traffic when going live.

Monitor for failures and set up alerts. Log errors, track success metrics, run daily health checks for critical workflows. Review all automations quarterly. Build dashboard showing automation health.

Next chapter: optimise automations over time.

Related tools

Zapier (from 20 per month): No-code automation connecting 5,000+ apps to move data and trigger actions; excellent for quick wins when you need integrations that just work.

n8n (from 24 per month): Open-source automation with self-hosting; ideal when you need complete control, want to own infrastructure, or have technical teams building workflows.

Make (from 4.13 per month): Visual automation platform with advanced logic and error handling; more powerful than Zapier when you need control over complex, branching workflows.

Pipedream (from 45 per month): Code-friendly automation running Node.js workflows; excellent when you need custom logic, API integrations, or automations that Zapier can't handle.

