

The AI-Accelerated Workflow: Shipping Faster with Claude Code and Linear

From PlebLab's AI Startup Rodeo

At PlebLab's AI Startup Rodeo, Reid McCrabb — co-founder and CEO of Linkt — shared how his seven-person team has radically transformed their software development process using AI coding agents. The results speak for themselves: one engineer shipped 235 merged PRs across seven simultaneous projects in just 90 days, and the company's revenue is on track to double month over month.

The Old Way Is Broken

The traditional software development loop — ticket, plan, implement, review, PR — typically takes two to three days per cycle, with one project per developer. Historically, shipping faster meant hiring more devs. That equation is changing. Small teams equipped with AI coding agents can now outproduce much larger teams.

The Stack That Makes It Work

Linkt's shipping velocity comes down to four key tools working together:

Claude Code (Opus 4.6) is the foundation. Every developer on the team has a Max subscription — it's mandated. The two Claude Code features driving the most value are:

  • Skills — reusable patterns that let Claude Code plan, implement, and ship without long prompts every time. Skills can be shared across the org for consistency.
  • Worktrees — using git worktrees to run four or five Claude Code instances in parallel, each in its own checkout working on a different task. The tradeoff is higher token consumption, but the increase in output is massive.
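The worktree pattern above can be sketched with plain git — a minimal, hypothetical example (the repo path and branch names are invented):

```shell
# Create one git worktree per task so each Claude Code
# instance gets its own isolated checkout of the repo.
cd ~/code/myapp                        # hypothetical repo
git worktree add ../myapp-auth    -b feat/auth
git worktree add ../myapp-billing -b feat/billing

# Launch a separate Claude Code session in each worktree,
# e.g. in separate terminal tabs or tmux panes:
#   cd ../myapp-auth    && claude
#   cd ../myapp-billing && claude

# When a task ships, clean up its worktree:
git worktree remove ../myapp-auth
```

Each instance commits to its own branch, so parallel agents never trip over each other's uncommitted changes.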

Linear serves as the source of truth — not just for engineers, but for Claude Code itself. Connecting Linear via MCP gives the AI agent a “brain” to reference during project work. A critical insight: ticket quality becomes a forcing function. Vague tickets confuse Claude Code. Clear requirements and context are non-negotiable.
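Wiring Linear into Claude Code over MCP is roughly a one-liner — a sketch assuming Linear's hosted MCP endpoint and the current Claude Code CLI flags (verify both against the official docs before relying on them):

```shell
# Register Linear's hosted MCP server with Claude Code so the
# agent can read and update tickets during project work.
# (Endpoint URL and flag names assumed from Linear/Anthropic docs.)
claude mcp add --transport sse linear https://mcp.linear.app/sse
```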

Granola handles meeting transcription and note-taking. The real power is the pipeline it enables: client meetings are recorded in Granola, requirements get piped into Linear tickets, and Claude Code implements them. In one case study, an enterprise client's long sales cycle produced months of calls and requirements. By the time the contract was signed, every requirement was already ticketed and ready for the coding agents to execute.

Edward (an Open Claw agent) is the team's AI product manager, living in Slack. Edward monitors for new tickets, asks clarifying questions, generates fully scoped Linear issues with specs, and scans for unscoped issues to auto-generate plans. This freed the CTO from the bottleneck of manually scoping and assigning every ticket. Edward even reacts with an emoji to acknowledge bug reports and tells the team when he'll have something scoped out.

The Business Model Problem

One honest admission: the traditional hourly billing model breaks down when your team is this productive. If you have deep agent harnesses and ship at 10x speed, billing by the hour hurts you. Linkt is actively experimenting with agent-hour billing and retainers, and considers this an open problem for anyone doing AI-powered service work.

Lessons Learned the Hard Way

Watch your costs with autonomous agents. It's easy to get excited, hook up Open Claw to everything, and burn through $50-100 in the first day. Linkt switched their agent's backbone from Claude Sonnet to Kimi K2 — an open-source model that delivered roughly a 90% cost reduction. Some tasks fail that previously worked with Sonnet, but for workhorse operations, the tradeoff is worth it.

Garbage in, garbage out. Context management and ticket quality determine your output quality. What you feed the agents is what you get back.

People are still reading the code — for now. Every PR still gets human review, especially for client work. But the team anticipates that going away this year as tooling and trust in AI-generated code improves.

Key Takeaways

  1. Start with skills, not agents. Build repeatable patterns in Claude Code before trying to orchestrate autonomous agents.
  2. Linear + MCP is a huge unlock. Give your coding agent a structured source of truth to work from.
  3. Record everything. Whether it's Granola or another tool, capture every meeting. That context becomes your ticket pipeline.
  4. Match models to tasks. Use frontier models for complex reasoning, cheaper models like Kimi K2 or Gemini Flash for workhorse tasks.
  5. Rethink your billing model. If AI makes you 10x more productive, hourly billing leaves money on the table.
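Takeaway 4 can be as simple as a routing table in whatever harness dispatches your agent calls — a hypothetical sketch, where the model identifiers and task categories are purely illustrative:

```python
# Hypothetical model router: send cheap "workhorse" tasks to an
# open-weights model, reserve the frontier model for hard reasoning.
WORKHORSE_TASKS = {"triage", "summarize", "lint-fix", "ticket-scoping"}

def pick_model(task_type: str) -> str:
    """Return a model identifier for the given task category."""
    if task_type in WORKHORSE_TASKS:
        return "kimi-k2"       # cheap open-weights workhorse
    return "claude-opus"       # frontier model for complex reasoning

print(pick_model("triage"))         # routes to the cheap model
print(pick_model("refactor-core"))  # routes to the frontier model
```

The point is not the two-line function but the discipline: cost control comes from classifying tasks up front rather than sending everything to the most expensive model.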

The future Reid is rooting for: local open-source model clusters running company operations autonomously. The workhorse models are getting good enough to run locally, even if AGI-level reasoning still requires cloud providers. For a seven-person team already doubling revenue month over month, that future doesn't seem far off.