My Claude Code Setup: 7 MCP Servers, Custom Hooks, and an AI That Tweets For Me

How I turned Claude Code into a full operating system -- with 7 MCP servers, security hooks, and custom skills that let AI operate my entire dev stack and social media.

Tags: Claude Code, MCP, Automation, AI Tools, Developer Experience, Playwright, Hooks

I have been writing code since 2005. In that time I have used every flavor of IDE, terminal multiplexer, and automation tool you can imagine. Nothing has come close to what I am running now: Claude Code wired up with 7 MCP servers, a custom hook system that enforces security and formatting on every action, and a set of skills that let AI operate my X account through the browser.

This is not a tutorial. This is a walkthrough of the actual setup I use every day to build, ship, and even post on social media -- mostly hands-free.

The MCP Server Stack

MCP (Model Context Protocol) servers give Claude Code direct access to external tools and services. Instead of copy-pasting output from one tool into Claude, the AI can query databases, search documentation, manage GitHub repos, and interact with vector stores natively.

Here is what my ~/.claude.json looks like (simplified for readability):

```json
{
  "mcpServers": {
    "github": { "type": "http", "url": "https://api.githubcopilot.com/mcp/" },
    "playwright": { "type": "stdio", "command": "npx", "args": ["@playwright/mcp@latest"] },
    "postgres": { "type": "stdio", "command": "npx", "args": ["@bytebase/dbhub", "--dsn", "postgresql://localhost:5432/mydb"] },
    "redis": { "type": "stdio", "command": "npx", "args": ["@modelcontextprotocol/server-redis", "redis://localhost:6379"] },
    "pinecone": { "type": "stdio", "command": "npx", "args": ["@pinecone-database/mcp"] },
    "context7": { "type": "stdio", "command": "npx", "args": ["@upstash/context7-mcp"] },
    "sequential-thinking": { "type": "stdio", "command": "npx", "args": ["@modelcontextprotocol/server-sequential-thinking"] },
    "twitter": { "type": "stdio", "command": "node", "args": ["~/.claude/twitter-mcp/index.mjs"] }
  }
}
```

GitHub -- The Command Center

The GitHub MCP connects via Copilot's HTTP endpoint. Claude can create PRs, review code, search repositories, manage issues, and push files -- all without me opening a browser or running gh commands manually.

Playwright -- Browser Automation (and Much More)

This one started as a testing tool. Now it is the backbone of my social media automation. Playwright gives Claude direct control over a headless or headed Chromium browser. It can navigate pages, click buttons, fill forms, take screenshots, and read DOM snapshots.

Postgres and Redis -- Direct Database Access

My project's database runs on Postgres locally, and I use Redis for caching. With these MCP servers, Claude can query my database directly, inspect table schemas, check cache state, and debug data issues in real time. No more copying SQL output back and forth.

Pinecone -- Vector Search for RAG

I work with embeddings and RAG pipelines. Having Pinecone connected means Claude can search my vector indexes, upsert records, and check index stats without me writing any wrapper code.

Context7 -- Always Up-to-Date Docs

Context7 fetches current documentation for libraries and frameworks on demand. Claude's training data has a cutoff, so when I am working with a recently updated API or a new version of Next.js, Context7 pulls the actual latest docs.

Sequential Thinking -- Structured Reasoning

For complex architectural decisions or multi-step debugging, this server lets Claude break down its reasoning into explicit sequential steps.

The Hook System

MCP servers give Claude capabilities. Hooks give me control. They are scripts that run before or after Claude uses any tool, and they enforce rules that I never want broken.

Pre-Tool Hooks: The Guardrails

protect-files.sh -- Blocks Claude from editing .env files, API keys, credentials, or any file I have marked as protected. AI is powerful, but I do not want it anywhere near my secrets.

block-dangerous-commands.sh -- Prevents destructive operations like rm -rf /, git push --force, or DROP DATABASE.

block-commit-main.sh -- Claude cannot commit directly to main. Every change goes through a branch and a PR.

rtk-rewrite.sh -- Rewrites CLI commands to go through RTK (Rust Token Killer), a proxy that compresses tool output by 60-90%. Over a long session, this saves thousands of tokens.

scan-secrets.sh -- Scans any file content Claude is about to write for patterns that look like API keys, tokens, or passwords.
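The post does not show the hook scripts themselves, so here is a minimal sketch of what a guardrail like block-dangerous-commands.sh could look like, written in Node for illustration rather than shell. Claude Code passes the pending tool call to pre-tool hooks as JSON on stdin, and an exit code of 2 blocks the call while feeding stderr back to Claude. The pattern list below is hypothetical, not the actual script:

```javascript
// Sketch of a pre-tool guardrail hook (illustrative patterns only).
const DANGEROUS = [
  /\brm\s+-rf\s+\//,          // rm -rf / and friends
  /\bgit\s+push\s+--force\b/, // force-pushing over shared history
  /\bdrop\s+database\b/i,     // destructive SQL
];

function isDangerous(command) {
  return DANGEROUS.some((pattern) => pattern.test(command));
}

// Claude Code sends the pending tool call as JSON on stdin; exiting with
// code 2 blocks the call and surfaces stderr back to the model.
if (!process.stdin.isTTY) {
  let raw = "";
  process.stdin.on("data", (chunk) => (raw += chunk));
  process.stdin.on("end", () => {
    const payload = JSON.parse(raw || "{}");
    const command = (payload.tool_input && payload.tool_input.command) || "";
    if (isDangerous(command)) {
      console.error(`Blocked dangerous command: ${command}`);
      process.exit(2); // blocking exit code
    }
  });
}
```

The same shape works for protect-files.sh or scan-secrets.sh; only the predicate changes.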

Post-Tool Hooks: Cleanup and Tracking

auto-format.sh -- After Claude writes or edits any file, this hook runs Prettier for JS/TS and Ruff for Python.

audit-log.sh -- Every tool use gets logged with a timestamp, the tool name, and a summary. When I want to review what Claude did during a long autonomous session, the audit log gives me a complete trail.
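For reference, hooks like these are registered in Claude Code's settings file by pairing a tool-name matcher with one or more commands. The matchers and script paths below are illustrative, not my claim about the author's actual layout:

```json
{
  "hooks": {
    "PreToolUse": [
      {
        "matcher": "Bash",
        "hooks": [
          { "type": "command", "command": "~/.claude/hooks/block-dangerous-commands.sh" }
        ]
      }
    ],
    "PostToolUse": [
      {
        "matcher": "Write|Edit",
        "hooks": [
          { "type": "command", "command": "~/.claude/hooks/auto-format.sh" }
        ]
      }
    ]
  }
}
```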

The Twitter Playwright Hack

This is the part of my setup I am most proud of, born entirely out of frustration.

I was using the Twitter API v2 free tier to post tweets and search trends. Then the credits ran out. Twitter's paid API tiers are hundreds of dollars a month for basic posting and searching.

So I built something better.

I wrote a custom MCP server at ~/.claude/twitter-mcp/index.mjs. It exposes tools like post_tweet, search_tweets, like_tweet, and reply_to_tweet. But instead of calling the Twitter API, every single one drives a real browser session through Playwright.

The flow:

  1. Playwright launches Chromium with my X session cookies loaded
  2. Claude calls a tool like search_tweets with a query
  3. The MCP server navigates to x.com/search, types the query, waits for results, scrapes the DOM
  4. Results come back to Claude as structured data
  5. Claude reasons about what to engage with, drafts a reply, and calls reply_to_tweet
  6. The server navigates to the tweet, clicks reply, types the text, and posts

No API. No metered quotas. No credits. Just a browser doing what a browser does.
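The flow above can be sketched with the official MCP TypeScript SDK plus Playwright. This is a hypothetical reconstruction, not the author's actual index.mjs: the selectors, search URL format, and session file name are all my assumptions, and X's DOM can break them at any time.

```javascript
// Sketch of a browser-driven twitter MCP server (hypothetical reconstruction).

// Pure helper: build the search URL ("f=live" selecting the Latest tab is
// an assumption about x.com's URL scheme).
function searchUrl(query) {
  return `https://x.com/search?q=${encodeURIComponent(query)}&f=live`;
}

async function main() {
  const { McpServer } = await import("@modelcontextprotocol/sdk/server/mcp.js");
  const { StdioServerTransport } = await import("@modelcontextprotocol/sdk/server/stdio.js");
  const { z } = await import("zod");
  const { chromium } = await import("playwright");

  // Reuse a logged-in X session saved earlier via context.storageState().
  const browser = await chromium.launch({ headless: true });
  const context = await browser.newContext({ storageState: "x-session.json" });

  const server = new McpServer({ name: "twitter", version: "0.1.0" });

  server.tool("search_tweets", { query: z.string() }, async ({ query }) => {
    const page = await context.newPage();
    await page.goto(searchUrl(query));
    // "tweetText" is X's current data-testid for tweet bodies -- an
    // assumption that breaks whenever the DOM changes.
    await page.waitForSelector('[data-testid="tweetText"]');
    const texts = await page.locator('[data-testid="tweetText"]').allTextContents();
    await page.close();
    return { content: [{ type: "text", text: JSON.stringify(texts.slice(0, 20)) }] };
  });

  await server.connect(new StdioServerTransport());
}

// Only start the server when run as the entry point (e.g. node index.mjs).
if (process.argv[1] && process.argv[1].endsWith("index.mjs")) {
  main().catch((err) => console.error(err));
}
```

post_tweet, like_tweet, and reply_to_tweet follow the same pattern: navigate, interact, scrape, return structured text.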

I then built custom Claude Code skills on top of this:

  • /tweet -- compose and post a tweet
  • /tweet-reply -- reply to a specific tweet
  • /tweet-search -- search for tweets by keyword or trend
  • /tweet-follow -- follow an account
  • /tweet-like -- like a tweet
  • /tweet-engage -- full engagement blitz that searches conversations, likes good takes, follows interesting people, and posts thoughtful replies

The /tweet-engage skill is what I run most often. I give it a topic like "AI developer tools" and Claude spends a few minutes browsing X, finding relevant threads, and engaging authentically.
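Mechanically, slash-invoked prompts like these can be defined as markdown files under ~/.claude/commands/, with $ARGUMENTS standing in for whatever follows the command. The path and wording below are my guesses at a plausible /tweet definition, not the author's actual file:

```markdown
---
description: Compose and post a tweet via the twitter MCP server
---
Draft a tweet about: $ARGUMENTS

Keep it under 280 characters and match the voice of my recent posts.
Show me the draft for approval, then post it with the post_tweet tool
from the twitter MCP server.
```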

What This All Adds Up To

The combined effect of 7 MCP servers, a hook system, and custom skills is that Claude Code stops being a chat assistant and starts being an operating system for my entire workflow:

  • I describe a feature and Claude builds it, queries the database to verify it works, formats the code, opens a PR, and notifies me when done
  • I say "engage with the AI tools community on X" and Claude searches trending conversations, crafts replies, and posts them
  • I ask Claude to debug a slow query and it pulls the table schema from Postgres, checks the Redis cache state, looks up the ORM docs via Context7, and suggests an indexed query -- all in one flow
  • Every action is logged, every file is formatted, and every secret is protected by hooks that run whether I am watching or not

I have been building software for over 20 years. The last six months with this setup have been the most productive stretch of my career. Not because the AI writes better code than me -- it often does not -- but because the automation layer around it eliminates every context switch, every copy-paste, every "let me check that real quick" interruption that used to fragment my days.

If you are using Claude Code with default settings, you are leaving 80% of its potential on the table. The MCP ecosystem is young, but it is already transformative. Wire it up to your actual tools, put guardrails around it with hooks, and build custom skills for the workflows that are unique to you.

That is where the real leverage is.