[THE CHALLENGE]
What made AllShips interesting as a design problem wasn't any single piece — it was the fact that everything had to come together at once. The brand had to speak to business owners who are stretched thin, skeptical of hype, and tired of being sold tools they don't understand. The website had to make a complex service offering feel approachable and human. The technical infrastructure had to be production-grade from day one, because this isn't a demo — it's the system I actually run my business on. And the whole thing had to come together on a compressed timeline, mostly solo, with AI as my development partner.

The pieces of a system either reinforce or undermine each other. A website that promises simplicity but requires a ten-minute onboarding call is a broken promise. An automation that works perfectly but can't be observed or understood by the person who owns it is a liability. A client portal that looks polished but can't surface the data that actually matters is just theater. Everything had to be honest — the marketing, the engineering, and the experience had to tell the same story.

There was also a strategic dimension. Hyper-niche automation products are easier to sell than generalist offerings. But I chose to build AllShips as a generalist consultancy on purpose. By positioning it to serve any industry, I created an avenue to get close to different business types, learn their specific problems firsthand, and discover which verticals are worth going deeper on. The underlying automation infrastructure — lead pipelines, prospect tracking, content engines, voice agents — is intentionally industry-agnostic. These patterns transfer. AllShips is both a business and a research vehicle for finding what comes next.
[WEBSITE DESIGN]
I designed and built the AllShips website in Framer, which let me move at the speed of a solo operator while keeping the design fidelity I'd hold myself to on any client project. It gave me the component architecture, responsive breakpoints, and CMS integration I needed without the overhead of a custom frontend build for what is fundamentally a marketing site. The design language came from a constraint I set early: AllShips needed to feel like a premium technology brand without feeling like an enterprise consultancy. My audience — small business owners, operators, founders — responds to confidence and clarity, not corporate jargon or dense feature matrices. I landed on a dark-mode foundation with a signature green accent that cuts through with energy and warmth. The typography pairs TT Hoves Pro for display with Inter for body — clean and legible at every scale. I gave every section generous whitespace and deliberate hierarchy, letting the content breathe instead of competing for attention.
One decision I'm particularly glad I made early: I structured the site as a single-page narrative and wrote every section to do double duty — compelling marketing copy on the page, and source material for the voice agent's knowledge base. That constraint actually made the writing better. If a section couldn't be understood when spoken aloud by a voice agent, it was too complex for the page. Designing for two audiences at once forced a clarity I might not have reached otherwise.

[MOBILE WALKTHROUGH]
[VOICE AGENT]
The voice agent — Aria — might be the part of this project I'm most proud of, because it sits at the intersection of design engineering, user experience, and useful AI. Aria is a fully custom voice assistant embedded in the Framer site as a React component, backed by a Cloudflare Workers API that orchestrates Claude AI, OpenAI text-to-speech, Whisper speech-to-text, and Cal.com booking in real time. I built her from scratch because the off-the-shelf voice platforms I evaluated all had the same problem: perceptible latency between the user finishing their sentence and the agent responding. That gap — even half a second too long — breaks the conversational rhythm and makes the experience feel robotic instead of natural. The solution I landed on was sentence-level TTS chunking. As Claude streams its response via Server-Sent Events, a custom SentenceSplitter buffers the incoming text and yields complete sentences the moment they end. Each sentence gets dispatched to the TTS API immediately while Claude keeps generating the next one. The result is that Aria starts speaking within one to two seconds of the user finishing their question — fast enough to feel like a conversation. Getting this right was one of those moments where the technical architecture and the experience design are the same decision. The latency isn't a performance metric — it's the difference between an interface that feels alive and one that feels like a loading screen.
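The production splitter lives in the TypeScript Worker, but the core idea translates to a few lines in any language. Here is a Python sketch of the same mechanism — the class name matches the write-up, while the sentence-boundary heuristic and method names are my own simplification:

```python
import re


class SentenceSplitter:
    """Buffers streamed LLM text and yields complete sentences as they close."""

    # A sentence ends at ., !, or ? followed by whitespace (simplified heuristic).
    _BOUNDARY = re.compile(r"(?<=[.!?])\s+")

    def __init__(self):
        self._buffer = ""

    def push(self, chunk: str):
        """Feed one streamed chunk; yield any sentences it completes."""
        self._buffer += chunk
        parts = self._BOUNDARY.split(self._buffer)
        # The last part may be an unfinished sentence: keep it buffered.
        self._buffer = parts.pop()
        yield from (p for p in parts if p)

    def flush(self) -> str:
        """Return whatever text remains when the stream ends."""
        rest, self._buffer = self._buffer.strip(), ""
        return rest
```

Each sentence yielded by `push` can be dispatched to the TTS API immediately, while the model keeps streaming — which is exactly where the one-to-two-second time-to-first-audio comes from.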
The frontend is 1,300 lines of React/TypeScript managing three systems: audio playback with buffer queuing, microphone input that routes to the browser's native Web Speech API on desktop or Whisper on mobile, and a canvas-based dot waveform visualization that responds to audio frequency data in real time. The visualization matters more than it might seem — when Aria is listening, the dots pulse with your voice. When she's thinking, concentric rings sweep outward. When she's speaking, the dots dance with her output. Those cues replace the uncomfortable silence that makes most voice interfaces feel broken.
On the backend, Claude is configured with tools that let Aria check availability and book discovery calls directly during the conversation. The system prompt positions her as a knowledgeable colleague — warm, helpful, never salesy. Getting that tone right took iteration. The first versions were either too eager to push for a booking or too passive to convert. I ended up treating the system prompt more like copywriting than engineering, which felt like the right instinct.
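A booking tool exposed to Claude follows the Anthropic Messages API tool format. This is an illustrative guess at what the definitions might look like — the names, fields, and descriptions are assumptions, not the production schema:

```python
# Hypothetical tool definitions in the Anthropic Messages API tool format.
# The real Cal.com integration fields are assumptions.
BOOKING_TOOLS = [
    {
        "name": "check_availability",
        "description": "List open discovery-call slots in a date range.",
        "input_schema": {
            "type": "object",
            "properties": {
                "start_date": {"type": "string", "description": "ISO date"},
                "end_date": {"type": "string", "description": "ISO date"},
            },
            "required": ["start_date", "end_date"],
        },
    },
    {
        "name": "book_call",
        "description": "Book a discovery call at a chosen slot.",
        "input_schema": {
            "type": "object",
            "properties": {
                "slot_id": {"type": "string"},
                "name": {"type": "string"},
                "email": {"type": "string"},
            },
            "required": ["slot_id", "name", "email"],
        },
    },
]
```

The descriptions matter as much as the schemas — they are part of the same copywriting exercise as the system prompt, because they steer when the model reaches for a tool.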
[VOICE COMPONENT WALKTHROUGH]



[CONTACT OPPORTUNITIES]
The contact and booking systems are where the website stops being a brochure and starts being infrastructure. I designed them as the first touchpoints in a fully automated prospect pipeline — every form submission and booking triggers a coordinated sequence of routing, deduplication, enrichment, and follow-up. The contact form handles two fundamentally different submission types through a single endpoint. The subject field determines the path: prospect inquiries flow into the sales pipeline, while job applications route to a separate recruitment workflow. That routing happens cleanly at the validation layer — no conditional spaghetti, just type-safe model definitions that make the right thing happen automatically. Every submission gets deduplicated against a master email database, written to the appropriate pipeline, and surfaced via Discord within seconds. If someone includes a phone number, the system schedules an outbound voice call automatically, respecting their timezone.
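The shape of that routing layer can be sketched with plain dataclasses — the model names, subject keywords, and field names here are illustrative, not the production definitions:

```python
from dataclasses import dataclass
from typing import Optional, Union


# Illustrative models — the real server's schema names are assumptions.
@dataclass
class ProspectInquiry:
    email: str
    subject: str
    message: str
    phone: Optional[str] = None


@dataclass
class JobApplication:
    email: str
    subject: str
    message: str
    portfolio_url: Optional[str] = None


Submission = Union[ProspectInquiry, JobApplication]

# Subjects that route to the recruitment workflow (illustrative values).
JOB_SUBJECTS = {"careers", "job application"}


def parse_submission(form: dict) -> Submission:
    """Route a raw form payload by subject: jobs vs. the sales pipeline."""
    common = dict(
        email=form["email"],
        subject=form["subject"],
        message=form.get("message", ""),
    )
    if form["subject"].strip().lower() in JOB_SUBJECTS:
        return JobApplication(**common, portfolio_url=form.get("portfolio_url"))
    return ProspectInquiry(**common, phone=form.get("phone"))
```

Once the payload is typed, everything downstream — deduplication, pipeline writes, Discord notifications — can dispatch on the model class instead of re-inspecting raw form fields.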






[COMMUNICATION AUTOMATIONS]
This is where the individual automations start working as a coordinated system, and where I learned the most about what "production-grade" actually means when you're the one running it. When a prospect submits the contact form with a phone number, the call scheduler resolves their timezone from the number itself, checks whether it's within calling hours in their local time, and either initiates the call or queues it for the next available window. I store all scheduled times in UTC — a lesson I learned the hard way when EST timestamps compared against UTC strings produced wrong results in every edge case. Those are the moments where you develop respect for the complexity hiding inside "simple" scheduling logic. The outbound calls use Vapi's API with the prospect's name, subject, and message injected as template variables. Aria conducts the call, answers questions, and gauges interest. After each call, structured processing routes by outcome: interested prospects get a booking link, callback requests go to Follow Up, and no-answer cases trigger timezone-aware retry scheduling. Every call generates structured notes — duration, outcome, key points, transcript — that get written back to the prospect record.
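The core of the scheduling logic — check local calling hours, return a time that is always expressed in UTC — fits in one function. This is a sketch under my own assumptions (a 9-to-5 calling window, timezone already resolved from the phone number, which a numbering-plan lookup would handle upstream):

```python
from datetime import datetime, time, timedelta, timezone
from zoneinfo import ZoneInfo

# Local calling hours — the actual window is an assumption.
CALL_WINDOW = (time(9, 0), time(17, 0))


def next_call_time(now_utc: datetime, prospect_tz: str) -> datetime:
    """Return when to place the call, always expressed in UTC.

    If it's within calling hours in the prospect's local time, call now;
    otherwise queue for the next local 9:00 window.
    """
    local = now_utc.astimezone(ZoneInfo(prospect_tz))
    start, end = CALL_WINDOW
    if start <= local.time() < end:
        return now_utc
    # Next 9:00 local: later today if before the window, else tomorrow.
    day = local.date() if local.time() < start else local.date() + timedelta(days=1)
    next_local = datetime.combine(day, start, tzinfo=ZoneInfo(prospect_tz))
    return next_local.astimezone(timezone.utc)
```

Keeping every stored and compared value in UTC — and converting to local time only at the decision point — is exactly the discipline the EST-vs-UTC bug taught.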
I spent time thinking about the ethical details here. The opt-out system maintains a persistent list so we never call someone who's asked us not to. The retry manager respects calling hours and caps attempts rather than endlessly queuing. These aren't features you'd list on a marketing page, but they're the difference between automation that respects people and automation that annoys them. What ties it all together is Discord as the observability layer. Rather than building a monitoring dashboard for the automation server, I wired every significant event — form submissions, database writes, voice calls, pipeline transitions, errors — to fire structured Discord notifications. Over 50 event types, all formatted consistently. It's not Grafana, but for a system at this scale, it gives me exactly the visibility I need while I'm building the more sophisticated platform alongside it. Sometimes the pragmatic choice is the right one.
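The "formatted consistently" part is the whole trick: one builder function that every event type goes through. A minimal sketch, assuming Discord's standard webhook embed shape (the color values and field layout are illustrative, not the production palette):

```python
from datetime import datetime, timezone

# Colors per severity — illustrative values, not the production palette.
LEVEL_COLORS = {"info": 0x57F287, "warn": 0xFEE75C, "error": 0xED4245}


def build_event_embed(event_type: str, fields: dict, level: str = "info") -> dict:
    """Format one automation event as a Discord webhook payload.

    Every event type shares this shape, which is what keeps 50+ event
    kinds scannable in a single channel.
    """
    return {
        "embeds": [
            {
                "title": event_type,
                "color": LEVEL_COLORS[level],
                "fields": [
                    {"name": k, "value": str(v), "inline": True}
                    for k, v in fields.items()
                ],
                "timestamp": datetime.now(timezone.utc).isoformat(),
            }
        ]
    }
```

The resulting dict gets POSTed to a channel's webhook URL; because the payload is built in one place, adding a new event type is a one-line call, not a new formatting decision.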
[KEY LEARNINGS]
Building AllShips with AI-assisted development taught me things I couldn't have picked up from reading about it. Not about AI as a concept — about what it actually feels like to use it as an engineering partner on something real. The workflow with Claude Code was conversational. I'd describe a spec — often verbally through voice mode — and we'd work through the architecture together before writing any code. Then Claude would generate implementation, I'd review it, we'd iterate on edge cases, and move to testing. That cycle compressed what would normally be weeks of solo development into days. The codebase grew to 420 passing tests across 10 automation packages with consistent patterns and production-grade error handling — on a timeline that would have been aggressive even for a small team.

The debugging stories are where the learning happened. The voice agent was saying literal "name" and "subject" instead of the prospect's actual data — turns out Vapi uses LiquidJS double-brace syntax, not Python f-strings. The call scheduler was firing calls immediately instead of waiting — scheduled times were stored in EST but compared against UTC via string comparison. Gmail API tokens failed with "Permission denied" on the production server — rsync from macOS preserved the source user ID, but the service runs as a different user. Each bug sounds obvious when you describe it in one sentence. Each one took detective work to find. And each one got resolved in the same session — symptom to root cause to fix to deployment in minutes, not hours.

The deeper takeaway is a clearer understanding of the boundary. AI is excellent at implementation velocity — turning a well-described spec into working, tested code. It catches edge cases you'd miss when you're moving fast. But it doesn't replace the architectural decisions, the design sensibility, the judgment about what's worth building in the first place. The AI compresses the distance between deciding and doing. The deciding is still mine — and it's the same thing I bring to client engagements. Not just the speed, but the thinking behind it.
[NO TIME TO READ?]
Listen to the podcast
[INTRO TO ALLSHIPS PLATFORM]
Once the website was live, the voice agent was operational, and the automation server was handling real traffic, I found myself switching between database views and Discord channels to piece together what was happening across the business. The data was all there — just fragmented across tools that weren't designed to give you a unified picture. So I built one.

The AllShips Platform is a Next.js 15 application that serves two purposes: an internal dashboard where I can see the state of every prospect, lead, subscriber, booking, and client at a glance — and a white-labeled client portal where active clients access their project information. The architecture reflects a core principle: the platform reads from the same databases the automation server writes to, with no direct coupling between the two systems. The database is the single source of truth. Either system can evolve independently without breaking the other. The service layer abstracts all data access behind a clean API boundary, so the underlying database can be swapped or scaled without touching application logic. That separation was a deliberate architectural decision — it keeps the platform's complexity where it belongs and makes the system portable across infrastructure changes.

I carried the AllShips visual identity into the dashboard — the same signature green, the same dark-first aesthetic, the same typography — because I wanted it to feel like a natural extension of the brand, not a generic admin panel bolted on. I started with shadcn/ui as a component foundation and extended it into 38 custom component directories: Kanban boards with drag-and-drop, radial gauge charts, data tables, status editors, timeline views. Internal tools deserve the same design care as customer-facing products. If I'm going to live in this dashboard every day, it should be something I enjoy using.
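The service-layer boundary is easiest to see in miniature. The platform itself is TypeScript, but the idea reads the same in a Python sketch — an interface the UI depends on, with storage swappable behind it (all names here are illustrative):

```python
from dataclasses import dataclass
from typing import Optional, Protocol


@dataclass
class Prospect:
    id: str
    email: str
    status: str


class ProspectService(Protocol):
    """The boundary the UI talks to; storage lives behind it."""

    def list(self, status: Optional[str] = None) -> list: ...
    def update_status(self, prospect_id: str, status: str) -> Prospect: ...


class InMemoryProspectService:
    """Test double; production would back this with the shared database."""

    def __init__(self, rows):
        self._rows = {p.id: p for p in rows}

    def list(self, status=None):
        return [p for p in self._rows.values()
                if status is None or p.status == status]

    def update_status(self, prospect_id, status):
        p = self._rows[prospect_id]
        p.status = status
        return p
```

Because every page and chart goes through the protocol rather than raw queries, swapping the database, adding caching, or pointing at a read replica is a change to one implementation class, not to application logic.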
[PLATFORM FEATURES OVERVIEW]
The main dashboard aggregates everything into a single operational view: eight core metrics, a daily briefing, an attention feed surfacing priority items, a drag-and-drop pipeline Kanban, a bookings calendar, an activity timeline, and charts covering funnel progression, lead distribution, source attribution, and cost tracking. Every visualization pulls from live data with caching and on-demand revalidation. The platform organizes the business into seven territories — Leads, Prospects, Clients, Newsletter, Bookings, Jobs, and Email Directory — each with consistent UX: list views, filters, detail pages with inline editing, and cross-references between related records. The Kanban boards support drag-and-drop status updates with optimistic UI and automatic rollback on failure — the user should never wonder whether their action actually saved. Authentication is lightweight and intentional: admin credentials validated against environment variables, client credentials against the database. JWT sessions in HTTP-only cookies, enforced by middleware. Rate limiting on login. No external auth service — just the right amount of security for the threat model.
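The optimistic-update-with-rollback pattern, reduced to its core: apply the change locally first, persist in the background, restore the previous value if persistence fails. A Python sketch of what the React mutation hooks do (names are illustrative):

```python
def optimistic_update(local_state: dict, record_id: str,
                      new_status: str, persist) -> dict:
    """Apply the status change immediately; roll back if persistence fails."""
    previous = local_state[record_id]
    local_state[record_id] = new_status  # the user sees the move instantly
    try:
        persist(record_id, new_status)   # e.g. an API call to the server
    except Exception:
        local_state[record_id] = previous  # rollback: card snaps back
        raise
    return local_state
```

The card moves the moment it's dropped; only on a failed save does it snap back — which is the "never wonder whether it saved" guarantee made concrete.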


[LEAD ENRICHMENT AND REACHOUT]
The lead generation suite is where automation starts doing work that would be impossible to do manually at any reasonable scale. Discovery takes a location, industry, and batch name, generates targeted search queries, runs them against Brave Search, and filters the results through a 40-domain blocklist and URL-path pattern matching to strip out aggregator pages. Early runs kept returning Indeed listings and Yelp directories instead of actual businesses — fixing that filtering was a quality inflection point. The platform displays leads as a Kanban with color-coded fit scores, and each detail page shows the full enrichment profile: CMS detection, tech stack, social links, AI-generated pain point analysis, and a fit score visualized as a radial gauge. Enrichment is where it gets interesting. The system analyzes each lead's website, extracts contact information with heuristic scoring, and sends the collected data to Claude for an automation readiness assessment — pain points, industry classification, service tier recommendation, and a fit score from 1 to 10. Watching the first enrichment batch come back with insightful analysis — the AI identifying that an auto repair shop's outdated WordPress site and lack of online booking represented specific automation opportunities — was a moment where the whole system clicked. Outreach generates personalized emails referencing each business's specific pain points, with dry-run mode for review before sending. The platform tracks status and response indicators, connecting automated discovery to human-driven sales conversations.
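The aggregator filter combines a domain blocklist with URL-path pattern matching. A minimal sketch — the domains and path fragments shown are a small illustrative subset, not the production lists:

```python
from urllib.parse import urlparse

# Illustrative subset of the ~40-domain blocklist.
BLOCKED_DOMAINS = {"yelp.com", "indeed.com", "facebook.com", "yellowpages.com"}
# Path fragments that mark listing/aggregator pages, not a business's own site.
BLOCKED_PATH_PARTS = ("/directory", "/listings", "/jobs", "/biz/")


def is_real_business_site(url: str) -> bool:
    """True if the URL looks like an actual business website."""
    parsed = urlparse(url)
    host = parsed.netloc.lower().removeprefix("www.")
    if any(host == d or host.endswith("." + d) for d in BLOCKED_DOMAINS):
        return False
    return not any(part in parsed.path.lower() for part in BLOCKED_PATH_PARTS)
```

Matching on the registered domain (including subdomains) catches the big aggregators outright, while the path check catches directory pages hosted on otherwise legitimate domains.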






[NEWSLETTER AUTOMATIONS]
The newsletter pipeline is the most complex automation in the system, and the one that best demonstrates what I mean by "human-in-the-loop" design. It starts with a background poller watching for topics marked as "Draft" in the database. When it finds one, it kicks off a multi-agent research pipeline: three parallel agents each approach the topic from a different angle — industry context, technical depth, and market landscape. Each generates targeted search queries, fetches and processes web content, and produces a structured research artifact. A synthesis agent distills the three into a consolidated summary with cross-cutting themes. The article writer uses Claude to produce a 1,000 to 2,000 word piece in the AllShips editorial voice. A cover image is generated via Gemini. Every cited URL is validated before the pipeline pauses at Review status and waits for my approval. If I take too long, Discord reminds me. A seven-day timeout prevents zombie topics. On approval, the article publishes to the Framer CMS and a Mailchimp campaign goes out to subscribers. The whole flow runs autonomously with a single human checkpoint: is this good enough to publish? That gate matters. I could have automated it away, but the editorial decision — does this represent the voice and quality I want associated with AllShips — isn't something I'm ready to delegate. Maybe eventually. Not yet.




[THE RESULT]
What I ended up with is a business that runs on its own infrastructure. Prospects are contacted, qualified, and tracked automatically. Leads are discovered, enriched, and scored while I focus on other things. Articles are researched, drafted, and published with a single approval. Clients log into a portal that reflects their project status in real time. Every event fires a notification I can check from my phone. It works — not as a demo, but as the system I actually use every day to run the business. AI accelerates the building, but it doesn't replace the thinking. The architectural decisions, the brand sensibility, the design judgment about what's worth building and how it should feel to use — that's the work I bring to client engagements, and it's the part that matters most. I built AllShips because I believe small teams deserve access to the same quality of automation that large organizations take for granted. Then I proved it by being one of those small teams. The infrastructure, the design, the experience — it all reinforces the same idea: thoughtful automation, built with care, lets you do more than you thought you could. That's what I offer clients. And this is the evidence that it's real.