How I Use Claude Code to Ship Features 10× Faster
I stopped writing boilerplate by hand six months ago. Claude Code turned my terminal into a pair programmer that never gets tired, never loses context, and runs slash commands I designed myself. Here is exactly how I use it.

Article focus: 10× faster feature shipping with an AI-assisted workflow
Key takeaways
- Claude Code is a CLI that runs in your terminal and has full access to your codebase - it reads, edits, and runs commands on your behalf.
- The biggest productivity gain is not autocomplete - it is offloading entire task sequences: read relevant files, implement the change, run tests, fix failures, commit.
- In real work it is a loop: the first draft often fails lint, types, or UI expectations. The speed comes from fast iteration plus tight constraints, not perfect first tries.
- Custom slash commands (skills) let you encode your own team conventions and workflows into reusable one-line prompts.
- Hooks automate pre/post actions on every tool call - auto-lint on save, auto-format on write, auto-push after commit.
- Claude Code works best when you treat it as a junior developer who needs clear context, not a magic oracle.
What Is Claude Code?
Claude Code is Anthropic's official CLI tool that runs Claude directly in your terminal. It reads your codebase, edits files, runs shell commands, and executes multi-step tasks - all from a single prompt or conversation.
Most AI coding tools bolt onto your editor as an autocomplete plugin. Claude Code is different: it lives in your terminal, has access to your entire repo, and can run commands on your machine. You prompt it once and it will read ten files, write a new component, run the test suite, fix the failures it caused, and commit - without you touching anything.
The underlying model is Claude Sonnet or Opus (your choice), but the CLI layer is what makes it useful for actual software development. It has a permission system, hooks for automation, slash commands you design yourself, and MCP (Model Context Protocol) server integrations that extend what it can reach.
I use it every day to build this portfolio. The blog infrastructure, structured data generation, SEO tooling, and most component work happened through Claude Code conversations. This post is honest about what it does well and where it needs steering.
The Claude Code Task Lifecycle
When you give Claude Code a task, this is the sequence it runs - all automatically, in one session.
1. You send a context-rich prompt with file hints and constraints
2. Claude scans the relevant files and understands the existing patterns
3. It implements the change, matching your codebase style
4. It executes the test suite and reads the output
5. It iterates until the tests pass
6. You run git diff and approve
Setup in Under 5 Minutes
Install via npm globally, authenticate with your Anthropic API key or claude.ai account, then run `claude` in any project directory.
Claude Code works on Mac, Windows, and Linux. The CLI integrates with VS Code and JetBrains IDEs - you can open the terminal pane and run it without leaving your editor. On Windows, it runs best in WSL2 or Git Bash, though native PowerShell works too.
The first thing to configure is a CLAUDE.md file in your repo root. This is where you tell Claude Code about your project: tech stack, conventions, what commands to run, what not to touch. It reads this file automatically on every session. Invest 15 minutes here and save hours of re-explaining context.
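Here is a minimal sketch of what mine covers - the stack, commands, and conventions below are illustrative, so swap in your own:
markdown
<!-- CLAUDE.md (repo root) - illustrative example -->
# Project context for Claude Code

## Stack
Next.js (App Router), TypeScript, Tailwind CSS

## Commands
- Lint: npm run lint
- Typecheck: npx tsc --noEmit
- Tests: npm test

## Conventions
- One component per file under components/
- Match existing Tailwind utility patterns; no inline styles
- No new dependencies without asking first

## Do not touch
node_modules/, .next/, anything auto-generated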
bash
# Install globally
npm install -g @anthropic-ai/claude-code
# Launch in your project
cd my-project
claude
# Or with a one-shot prompt
claude "add dark mode toggle to the header component"
# Check version
claude --version
My Core Workflow: Building a Feature End-to-End
I give Claude Code a feature description with context, let it explore the codebase, ask clarifying questions, then implement. I review the diff before anything is committed.
The workflow that changed how I build: instead of writing code and asking Claude to review it, I describe what I want and let Claude read the existing patterns first. It finds the right files, matches the code style, and writes something that actually fits the codebase rather than a generic snippet.
Key habit: always give context. "Add a share button" is weak. "Add a share button to BlogReaderFeatures.tsx - look at the existing ShareToX component for the pattern, use the same animation style as the copy-link button, and add Hacker News as a new platform" gets you usable code on the first try.
After Claude proposes changes, I check the diff with git diff before accepting. Claude Code never auto-commits unless you ask it to - you are always in control of what lands.
bash
# Feature build with context
claude "Add a reading progress bar to the blog post page.
Look at how BlogReaderFeatures.tsx is structured.
The bar should stick to the top, use the accent color from tailwind config,
and disappear on mobile. No new dependencies."
# Claude will:
# 1. Read BlogReaderFeatures.tsx and related files
# 2. Check tailwind.config for accent color
# 3. Write the component
# 4. Update imports where needed
# 5. Show you what changed
# Review changes before committing
git diff
Prompt Quality Matters
The same goal phrased at three quality levels produces very different results. A vague prompt like “add a button” names no file, no reference component, no position, and no constraints - Claude has to guess everything. The specific version from the previous section gets usable code on the first try.
Reality Check: It Ships Because of the Correction Loop
Claude Code is fast because it can run tight “change → check → fix” loops. First passes often break lint, types, or edge cases. I plan for that and treat iteration as the default.
If you only watch demos, it looks like: prompt → perfect code. Real work is: prompt → almost-right code → lint errors → type errors → UI feels off → fix → repeat. The workflow is not brittle because Claude is perfect - it is robust because the loop is fast.
My rule: Claude must run a checker after touching code. Lint, typecheck, and any relevant tests. When it fails, I do not “hand fix” immediately - I feed the exact error back and constrain the next change.
Three common failure patterns I see:
- ESLint/Prettier fails after a write. Fix: paste error output, ask for smallest diff that satisfies linting. Never silence rules.
- TypeScript fails after a refactor. Fix: keep public function signatures stable, update callers in one pass, run tsc --noEmit again.
- UI is technically correct but wrong vibe. Fix: point to an existing component as reference and name the constraint (spacing, hover, animation, mobile). Then verify in browser.
bash
# Tight correction loop prompt
claude "You changed BlogPost.tsx and now lint fails.
Here is the exact output:
$(npm run lint 2>&1 | tail -80)
Fix with the smallest diff.
Do not disable rules.
Re-run lint + typecheck after and confirm both pass."
Debugging: Describe the Symptom, Not the Suspected Fix
Give Claude Code the error message, the file where it occurs, and what you expected to happen. Let it read the code and form its own diagnosis - it often finds a different root cause than you suspected.
My worst habit before Claude Code: I would form a theory about a bug, then ask the AI to implement my theory as a fix. Half the time my theory was wrong and the fix made things worse. Now I describe what broke and let Claude read the relevant code cold.
Claude Code can run your test suite, read the stack trace, trace execution through files, and look at git history to find when a regression was introduced. For intermittent bugs, asking it to add targeted logging and then reproduce the issue is often faster than a debugger session.
bash
# BAD: Telling Claude your theory
claude "the useEffect dependency array in BlogPost.tsx is wrong, add 'slug' to it"
# GOOD: Describing the symptom
claude "blog post page shows stale content after navigating between posts.
The title updates but the body stays from the previous post.
Check BlogPost.tsx and any data fetching hooks it uses.
Find the root cause."
# For test failures
claude "these tests are failing after my last commit:
$(npm test 2>&1 | tail -40)
Find why and fix without changing test assertions."
Refactoring Legacy Code Without Breaking Things
Claude Code is excellent at refactoring when you give it the invariants: what must not change, what the output contract is, and what tests exist. It rewrites the internals while preserving behavior.
Refactoring is where Claude Code beats most AI tools. Because it reads multiple files in one context window, it can rename a function across 20 call sites, update all imports, and adjust types - consistently, in one pass. Manual find-and-replace across a large codebase takes an afternoon. Claude Code does it in minutes.
Critical rule for refactoring: tell it the constraints. "Refactor this but keep the public API identical" or "the return type must stay the same, only change the internals" produces safe refactors. Open-ended "clean this up" prompts can cause Claude to change interfaces that break callers it did not check.
bash
# Safe refactor with constraints
claude "Refactor the structured data functions in lib/seo.ts.
Rules:
- Public function signatures must not change
- Return types must stay identical
- All existing callers in app/ must still work
- Goal: extract the FAQ schema builder into its own function
Run the type checker after and fix any errors."
# Large rename
claude "Rename BlogPost type to Post everywhere in the codebase.
Update all imports, type annotations, and JSDoc references.
Do not change any runtime behavior."
Custom Slash Commands: Encoding Your Workflow
Claude Code slash commands (called skills) are reusable prompts you define once and invoke with a single /command. They encode your team conventions so you never explain them twice.
Skills live in your project's .claude/ directory or in your global ~/.claude/ config. A skill is a markdown file with frontmatter that defines the command name and the full prompt template. When you type /blog-post it expands into a detailed prompt that knows your post.ts schema, your writing style, your SEO requirements.
I have skills for: creating new blog posts (reads the schema, asks for topic, writes the full post.ts), running SEO audits, generating structured data, reviewing PRs against my conventions, and cleaning up component files. Each one embodies weeks of learned patterns from real sessions.
markdown
<!-- .claude/commands/new-blog-post.md -->
---
name: new-blog-post
description: Create a new blog post following the portfolio schema
---
Create a new blog post for the portfolio.
1. Read one existing post.ts file to understand the exact schema
2. Ask me: what is the topic, target audience, and key takeaway?
3. Generate a complete post.ts with:
- 6-8 sections with directAnswer, paragraphs, and codeBlock
- 20+ seoKeywords targeting real search queries
- 8-10 structuredDataFaq entries
- authorNote from personal experience
- Realistic readTime estimate
4. Create the directory and file at the correct path
5. Add the slug to BLOG_PUBLISHING_SCHEDULE.md
Hooks: Automating Before and After Every Action
Claude Code hooks run shell commands automatically when specific tool events fire - before a file write, after a bash command, when a session ends. They automate the repetitive parts of your workflow.
Hooks live in your settings.json - .claude/settings.json in your project root, or ~/.claude/settings.json for global config. The most useful ones: run ESLint after every file write (catches errors before Claude moves to the next file), run the type checker after a refactor session ends, auto-format with Prettier on save, send a desktop notification when a long task finishes.
Hooks run in your shell with full access to your environment. They can read the event payload (which file was written, what command ran) and decide whether to block the action or let it proceed. A pre-write hook that rejects writes to node_modules is a simple guardrail that prevents Claude from accidentally modifying vendor code.
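Here is a sketch of that node_modules guardrail, using the same config shape as the example below. The blocking convention is an assumption - verify against the current hooks documentation that a failing exit from a PreToolUse command actually rejects the pending write.
json
// .claude/settings.json - PreToolUse guardrail (sketch)
// Assumption: a failing exit code from the hook command blocks the write.
{
  "hooks": {
    "PreToolUse": [
      {
        "matcher": "Write",
        "hooks": [
          {
            "type": "command",
            "command": "case \"{{file}}\" in *node_modules/*) echo 'blocked: vendor code' >&2; exit 2 ;; esac"
          }
        ]
      }
    ]
  }
}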
json
// .claude/settings.json
{
"hooks": {
"PostToolUse": [
{
"matcher": "Write",
"hooks": [
{
"type": "command",
"command": "npx eslint --fix {{file}} 2>/dev/null || true"
}
]
}
],
"Stop": [
{
"hooks": [
{
"type": "command",
"command": "npx tsc --noEmit 2>&1 | head -20"
}
]
}
]
}
}
MCP Servers: Extending What Claude Code Can Reach
MCP (Model Context Protocol) servers connect Claude Code to external tools - Vercel deployments, Linear issues, Notion docs, GitHub PRs - without leaving the terminal.
MCP is an open protocol that lets Claude Code communicate with any service that implements a server. Anthropic ships official MCP servers for common tools. Third-party servers exist for databases, browsers, APIs, and internal tools. You register them in settings and Claude Code gains new tools it can call.
Practical example from this portfolio: I have the Vercel MCP server configured. After Claude Code finishes building a feature, I can say "deploy and check the build logs" in the same conversation. It calls the Vercel API, waits for the deployment, reads the logs, and surfaces any errors - all without leaving Claude Code.
If you want to build your own MCP server, read this step-by-step guide: How to Build an MCP Server from Scratch (Step-by-Step for Beginners)
json
// .claude/settings.json - MCP server config
{
"mcpServers": {
"vercel": {
"command": "npx",
"args": ["-y", "@anthropic-ai/mcp-server-vercel"],
"env": {
"VERCEL_TOKEN": "your-token"
}
},
"github": {
"command": "npx",
"args": ["-y", "@anthropic-ai/mcp-server-github"],
"env": {
"GITHUB_TOKEN": "your-token"
}
}
}
}
// Now you can say:
// "create a PR for the current branch with a summary of the changes"
// "check the Vercel deployment status and show build errors"
// “find all open Linear issues tagged 'blog' and show them”
Real Example: How I Built This Blog
Every blog post on this portfolio - schema design, structured data, SEO metadata, table of contents, reading progress - was built primarily through Claude Code conversations with clear context and constraints.
The post.ts schema behind the page you are reading right now was designed in a Claude Code session. I described the SEO goals (FAQ schema, article schema, keyword targeting), the rendering requirements (sections with code blocks, author notes, featured stats), and asked Claude to design a TypeScript type that fit all of it. It proposed the shape, I refined it, and we iterated three times in one session.
The structured data generation - the JSON-LD that powers FAQ rich results in Google - took one Claude Code session to write and another to audit. I gave it the Google documentation URL, the schema type requirements, and the post.ts type, and it wrote the generator. Then I asked it to check its own output against the spec and fix what was wrong.
This workflow only works because I give Claude Code the full context: the types, the existing code, the output requirements, the constraints. The sessions where I am vague ("make the SEO better") produce shallow work. The sessions where I am specific ("generate Article + FAQ JSON-LD from post.ts, validate against schema.org spec, output as a Next.js Script component") produce production-ready code.
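Spelled out as a one-shot invocation, that specific prompt looks like this (file locations are illustrative):
bash
claude "Generate Article + FAQ JSON-LD from the post.ts type.
Validate the output against the schema.org Article and FAQPage specs.
Output it as a Next.js Script component I can drop into the blog post page.
Run the type checker after and fix any errors."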
When NOT to Use Claude Code
Skip Claude Code for: security-sensitive logic you need to audit line-by-line, complex algorithmic problems where you need to think through the approach first, and any operation that requires interactive input or a visual feedback loop.
Authentication flows, payment processing, cryptographic implementations - write these yourself or have a senior engineer review every line. Claude Code can generate plausible-looking security code that has subtle vulnerabilities. The cost of a bug in auth is too high to shortcut.
UI polish - pixel-level spacing, animation timing, visual hierarchy - requires a browser and your own eyes. Claude Code cannot see the rendered output. You can describe what is wrong ("the button is too close to the edge on mobile") and it will make a reasonable change, but you need to verify in the browser. Never ship UI work from Claude Code without visual testing.
Architecture decisions also belong to you. Claude Code is good at executing a design, bad at questioning whether the design is right. If you ask it to implement a pattern, it will implement it - it will not usually push back and say "are you sure this is the right abstraction?" That judgment is yours.
Test Your Understanding
1. You have a bug where a button shows wrong text after navigating. Best prompt?
2. You want Claude Code to auto-lint every file it writes. Where does that go?
3. You are refactoring a utility function used in 15 files. What constraint must you include?
4. When should you NOT trust Claude Code to write the code?
5. What is the purpose of CLAUDE.md in your project?
Continue reading

How to Build an MCP Server from Scratch (Step-by-Step for Beginners)
A practical beginner guide to building a local MCP server (stdio) step-by-step. You will run a working server, connect it to Claude Desktop, and call your first tool.
JavaScript Closures Explained: Why Your Functions Remember Everything
Learn JavaScript closures with interactive demos. Covers lexical scope, the var vs let loop bug, stale React hooks, memory leak patterns, and closure interview questions.