One place to discover MCPs developers actually use
A community-curated hub to discover and contribute MCPs, AI rules, and AI development tools. Try our free Claude Prompt Generator to create production-ready system prompts in seconds.
Why developers come here
Find MCPs without digging through GitHub or Reddit
See platform picks alongside community submissions
Share MCPs and help others find what works
New MCPs are added weekly — community submissions are free and credited.
Curated by AI Stack · Platform pick
Firebase MCP is a Model Context Protocol server that allows AI tools to securely interact with Firebase projects. It enables LLM clients to inspect and manage Firestore / Realtime Database data, Authentication users, Cloud Functions, Hosting configs, and project metadata through natural language requests, helping developers debug, explore, and operate Firebase-backed applications faster.
Submitted by @deepsyyt · Community
Sentry MCP is a Model Context Protocol server that enables AI tools to securely access and interact with Sentry projects. It allows LLM clients to query issues, errors, performance traces, releases, and alerts using natural language, helping developers debug, investigate incidents, and understand application health directly from AI assistants.
Submitted by Merwyn Mohankumar · Community
Notion MCP is a Model Context Protocol server (available as hosted and open-source implementations) that enables AI tools to securely access and interact with your Notion workspace. Using MCP, LLM clients can search, read, create, update, and manage Notion pages, databases, and content through natural language requests.
Curated by AI Stack · Platform pick
Atlassian MCP lets Claude interact with Atlassian products like Jira and Confluence to read, search, and update work items using natural language. Instead of manually navigating issues, tickets, and docs, you can ask Claude to fetch context, summarize work, create or update issues, and help with planning and analysis directly from Atlassian data.
Submitted by Yuan Teoh · Community
MCP Toolbox for Databases is an open-source MCP server that connects AI tools to databases, letting LLM clients query and manage data through natural language.
Latest and most popular rules for AI development tools
Practical rules developers actually use with AI coding tools
## Goal

Ensure backend behavior is validated through real execution: routing, configs, and async flows, not assumptions.

## Rule Behavior

### 1️⃣ Validate Endpoints Using Actual Requests
- Test every API with sample payloads
- Confirm status codes and response shape
- Verify error payload structure matches the spec
- Prefer execution confidence over mental models

### 2️⃣ Prove Async Execution Order
- Log inside async tasks to detect silent failures
- Surface race conditions using delayed mock calls
- Never allow Promise errors to be swallowed

### 3️⃣ Runtime-Verified Configuration
- Validate that env variables resolve correctly
- Confirm DB credentials, ports, and tokens through live connects
- Test fallback logic by simulating missing configs

### 4️⃣ Architecture Transparency
- Keep controllers and services independent for isolated execution
- Allow no hidden side effects or runtime surprises
- Run business logic without a full server boot

### 5️⃣ Data-Driven Confidence
- Validate input and output schemas live
- Log edge cases and boundary responses
- Test on real data before refactoring decisions

### 6️⃣ Safe Refactoring With Proof
- Compare before and after logs for critical paths
- Ensure no routing or behavior drift
- Require runtime validation before merge

### 7️⃣ Replit Agent Partnership
- Trace the real request lifecycle
- Convert invisible bugs into observable truth
- Ask the AI to explain failures from logs

## Examples

- "Run GET /users with an invalid token and confirm the response format."
- "Inject a delay and check async sequence correctness."
- "Validate DB env vars and test a live connect."

## Tool Prompts

- "Execute this endpoint with payload {...} and display logs."
- "Trace async calls and show execution order."
- "Simulate a missing API_KEY and show fallback behavior."
- "Compare logs before and after the refactor and list differences."

## Quick Implementation Wins

- Add centralized request logging
- Enable error stack traces in logs
- Automate smoke tests for core routes
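A minimal smoke-test sketch of the behavior above, assuming a hypothetical local server on port 3000 and a `/users` route (both placeholders; adjust for your app). It runs on Node 18+, where `fetch` is global:

```js
// smoke-test.js (run with: node smoke-test.js)
// Hypothetical endpoint and port; adjust to your app.
const BASE_URL = process.env.BASE_URL ?? "http://localhost:3000";

async function checkEndpoint(path, options = {}) {
  const res = await fetch(`${BASE_URL}${path}`, options);
  const body = await res.json().catch(() => null); // tolerate non-JSON error bodies
  console.log(`${options.method ?? "GET"} ${path} -> ${res.status}`, body);
  return { status: res.status, body };
}

async function main() {
  // Real request instead of a mental model (1️⃣)
  const ok = await checkEndpoint("/users");
  if (ok.status !== 200) throw new Error(`expected 200, got ${ok.status}`);

  // Error payload shape should match the spec, not assumptions (1️⃣)
  const bad = await checkEndpoint("/users", {
    headers: { Authorization: "Bearer invalid-token" },
  });
  if (bad.status !== 401) throw new Error(`expected 401, got ${bad.status}`);
}

main().catch((err) => {
  console.error("Smoke test failed:", err.message);
  process.exit(1);
});
```

Running a script like this against core routes turns the last Quick Implementation Win into a single command.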
## Goal

Validate Node.js backend behavior by inspecting real execution: async flow, event loop timing, routing accuracy, and transparent module structure.

## Rule Behavior

### 1️⃣ Validate Routes With Real Requests
- Test endpoints using example payloads
- Confirm response codes and JSON schema correctness
- Verify error flows show explicit messages

### 2️⃣ Debug Async Flows Through Execution
- Log inside async functions to confirm the order of operations
- Detect race conditions using delayed mock responses
- Ensure Promise errors are surfaced, not swallowed

### 3️⃣ Configuration Proven at Runtime
- Confirm environment variables load correctly
- Test DB connectivity using actual credentials and ports
- Validate fallback branches by simulating missing configs

### 4️⃣ Node Architecture Transparency
- Keep controllers, services, and helpers modular and traceable
- Enable isolated execution of pure functions without the server
- Avoid hidden behavior and side effects in modules

### 5️⃣ Event Loop Visibility
- Ask for execution traces to detect microtask and macrotask ordering
- Monitor long-running operations blocking the event loop
- Use logs to reveal async delays affecting user experience

### 6️⃣ Safe Refactoring With Behavioral Proof
- Compare execution logs before and after a change
- Confirm there is no routing or logic drift
- Refactor only after runtime stability is verified

## Examples

- "Run POST /login and show full response logs."
- "Simulate a slow DB call and detect ordering issues."
- "Show the timing of callbacks vs. Promise resolution."

## Tool Prompts

- "Trace async events and show execution order."
- "Test missing env variables and confirm fallback behavior."
- "Run a service function independently without a server boot."

## Quick Implementation Wins

- Add structured logging for async functions
- Detect hidden await or blocking code using traces
- Use isolated unit routes for debugging problematic flows
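To make point 5️⃣ concrete, here is a self-contained ordering demo using only built-in Node APIs; the letter labels are illustrative:

```js
// event-order.js (run with: node event-order.js)
// Expected output order: A, B, C, D, E

console.log("A: synchronous code runs first");

setTimeout(() => console.log("E: macrotask (setTimeout 0) runs after all microtasks"), 0);

Promise.resolve().then(() => console.log("B: microtask (Promise.then)"));

queueMicrotask(() => console.log("C: microtask (queueMicrotask, queued after B)"));

// A rejection without .catch would be swallowed until Node's process-level
// unhandledRejection warning fires; surfacing it keeps the failure observable (2️⃣).
async function flaky() {
  throw new Error("boom");
}
flaky().catch((err) => console.log("D: surfaced Promise error:", err.message));
```

Running it prints A through E in order, showing that every queued microtask drains before the zero-delay timer fires.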
## Goal

Modernize React code by converting class components to functional components using hooks, improving readability, testability, and runtime behavior while preserving existing UI contracts.

## Rule Behavior

### 1️⃣ Identify Conversion Candidates
- Target components that use state, lifecycle methods, or complex render logic.
- Prefer converting small and isolated components first.
- Delay conversion of components that depend on unusual class patterns until tests exist.

### 2️⃣ Preserve Public Props and Behavior
- Keep prop names and defaults identical to avoid breaking calling components.
- Ensure events, callback behavior, and external API expectations remain unchanged.

### 3️⃣ Map Lifecycles to Hooks

Translate lifecycle methods into hook equivalents:
- componentDidMount becomes a useEffect that runs once.
- componentDidUpdate becomes a useEffect that listens to specific dependencies.
- componentWillUnmount becomes the cleanup function inside useEffect.
- setState logic becomes useState or useReducer depending on complexity.

### Before

```jsx
class Counter extends React.Component {
  constructor() {
    super();
    this.state = { count: 0 };
    this.tick = this.tick.bind(this);
  }
  componentDidMount() {
    this.timer = setInterval(this.tick, 1000);
  }
  componentWillUnmount() {
    clearInterval(this.timer);
  }
  tick() {
    this.setState({ count: this.state.count + 1 });
  }
  render() {
    return <div>{this.state.count}</div>;
  }
}
```

### After

```jsx
function Counter() {
  const [count, setCount] = useState(0);
  useEffect(() => {
    const timer = setInterval(() => {
      setCount(c => c + 1);
    }, 1000);
    return () => clearInterval(timer);
  }, []);
  return <div>{count}</div>;
}
```

### Common Mistake: Stale State in Interval

Wrong:

```jsx
useEffect(() => {
  const timer = setInterval(() => {
    setCount(count + 1);
  }, 1000);
  return () => clearInterval(timer);
}, []);
```

Right:

```jsx
useEffect(() => {
  const timer = setInterval(() => {
    setCount(c => c + 1);
  }, 1000);
  return () => clearInterval(timer);
}, []);
```

### 4️⃣ Replace Derived State Carefully
- Convert computed values into memoized values rather than storing them directly.
- Use memoized callbacks when passing functions to children.

### 5️⃣ Handle Complex State with Reducers
- Use useReducer for multi-field or interdependent state structures (see the sketch after this rule).
- Keep the reducer pure and testable.

### 6️⃣ Validate Through Execution and Tests
- Run UI flows to confirm behavior, accessibility, and interactions remain identical.
- Add interaction and snapshot tests to ensure no regressions occur.

## Examples

- Convert a class component to a functional one using React hooks while maintaining identical behavior.
- Replace setState logic with a reducer and verify the refactor through tests.
- Move lifecycle-dependent logic into effects and confirm correct cleanup behavior.

## Tool Prompts

- Convert this class component into a functional component while keeping behavior unchanged.
- Show a step-by-step transformation explaining each lifecycle mapping.
- Run the updated component and provide logs demonstrating correct interactions.

## Quick Implementation Wins

- Start with leaf-level components that have few dependencies.
- Add tests for event handling and rendering before conversion.
- Use automated codemods for basic class-to-function transformations, then refine manually.
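As referenced in point 5️⃣, a minimal useReducer sketch for multi-field state; the form fields and action names are hypothetical, not from any specific codebase:

```jsx
import { useReducer } from "react";

// Pure reducer: unit-testable without rendering anything.
function formReducer(state, action) {
  switch (action.type) {
    case "change":
      return { ...state, [action.field]: action.value };
    case "submit":
      return { ...state, submitting: true, error: null };
    case "failure":
      return { ...state, submitting: false, error: action.error };
    default:
      return state;
  }
}

function SignupForm() {
  const [state, dispatch] = useReducer(formReducer, {
    email: "",
    password: "",
    submitting: false,
    error: null,
  });

  return (
    <form
      onSubmit={(e) => {
        e.preventDefault();
        dispatch({ type: "submit" });
      }}
    >
      <input
        value={state.email}
        onChange={(e) =>
          dispatch({ type: "change", field: "email", value: e.target.value })
        }
      />
      {state.error && <p>{state.error}</p>}
    </form>
  );
}
```

Because the reducer is a pure function, interdependent transitions such as submit-then-failure can be asserted in plain unit tests before any UI validation.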
## Goal

Introduce patterns like CQRS, Clean Architecture, and Event-Driven designs only when behavior requires them, and prove their value through real execution and tracing in Cursor or Replit Agent.

## Rule Behavior

### 1️⃣ Start From Behavior, Not Pattern Names
- Identify the pain first: duplication, tight coupling, fragile changes, or unclear flows
- Choose a pattern only when it clearly simplifies execution and testing
- Use runtime evidence to confirm that the complexity is justified

### 2️⃣ Apply CQRS When Reads and Writes Diverge
- Separate command paths that change state from queries that only read
- Use dedicated command handlers for mutations
- Use query handlers for read models optimized for the UI
- Validate by running high-load read and write flows independently

### 3️⃣ Use Clean Architecture for Testable Core Logic
- Keep the core domain and use cases independent of frameworks
- Place business rules in inner layers that run without HTTP or a database
- Keep controllers, ORM models, and UI adapters at the edges
- Execute use cases directly in tests and Agent runs without infrastructure

### 4️⃣ Adopt Event-Driven Flows for Decoupling
- Emit domain events when important state changes occur
- Handle follow-up work in subscribers instead of inline logic
- Ensure handlers are idempotent and safe to replay
- Use logs and traces to confirm event ordering and delivery

### 5️⃣ Validate Patterns Through Execution
- Run end-to-end flows before and after introducing a pattern
- Confirm behavior is easier to observe, modify, and test
- Use the Agent to trace calls across layers, handlers, or events

### 6️⃣ Avoid Over-Engineering
- Do not apply CQRS or Clean Architecture to trivial features
- Introduce patterns at boundaries that show sustained complexity
- Remove unused abstractions that do not help runtime behavior

## Examples

- "Refactor this combined read and write handler into CQRS style and run both paths."
- "Move business rules into a use case layer and execute it without HTTP."
- "Introduce a domain event for order created and show all subscribers that react to it."

## Tool Prompts

- "Analyze this module and suggest where a pattern such as CQRS or an Event-Driven flow would reduce branching."
- "Refactor into Clean Architecture layers and run tests to confirm behavior is unchanged."
- "Trace event publishing and consumption for this workflow and list all handlers."
- "Compare logs before and after applying this pattern and highlight clarity improvements."

## Quick Implementation Wins

- Extract use case functions for core behaviors and run them directly in tests
- Separate read and write routes where logic has become tangled
- Introduce simple domain events for high-value state changes and log handler execution
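An illustrative in-process sketch of points 2️⃣ and 4️⃣, separating a command from a query and emitting a domain event; the `OrderCreated` event, the order shape, and the in-memory store are invented for the example:

```js
// order-events.mjs (run with: node order-events.mjs)
import { EventEmitter } from "node:events";

const events = new EventEmitter();
const orders = new Map(); // stand-in for a real write store

// Command handler: mutates state, then emits a domain event (2️⃣, 4️⃣).
function createOrder({ id, items }) {
  if (orders.has(id)) return; // idempotent: safe to replay
  orders.set(id, { id, items, status: "created" });
  events.emit("OrderCreated", { orderId: id });
}

// Query handler: read-only, shaped for the UI (2️⃣).
function getOrderSummary(id) {
  const order = orders.get(id);
  return order ? { id: order.id, itemCount: order.items.length } : null;
}

// Subscriber: follow-up work lives here, not inline in the command (4️⃣).
events.on("OrderCreated", ({ orderId }) => {
  console.log(`[event] OrderCreated handled for ${orderId}`); // trace delivery
});

createOrder({ id: "o-1", items: ["book"] });
console.log(getOrderSummary("o-1")); // { id: 'o-1', itemCount: 1 }
```

Even at this scale, the split makes the validation step concrete: the command path and the query path can be executed and logged independently.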
**Summary**

Debugging becomes significantly easier when issues are reduced to the smallest possible reproduction. Small, isolated snippets allow Replit Agent to detect incorrect logic, state transitions, async flaws, and edge-case behavior without noise from the full codebase.

**Objectives**

• Accelerate debugging by isolating failure conditions
• Remove unrelated code that hides the real issue
• Give the agent a precise, minimal context for accurate diagnosis
• Provide a consistent structure for reproducing UI, state, and async bugs

**Principles**

1. A minimal snippet should contain only the logic or state producing the bug.
2. Remove UI styling, irrelevant imports, and unrelated branches.
3. Rewrite complex flows into simple steps the agent can execute directly.
4. Prefer explicit, deterministic examples over large abstractions.

**Implementation Pattern**

Below is the recommended step-by-step workflow for creating high-quality reproduction snippets:

**Step 1: Identify the Failing Behavior**

Determine whether the issue involves:
• Broken rendering logic
• Incorrect state transitions
• Async timing or Promise resolution
• API handling or an unexpected data shape
• Side effects triggering errors

**Step 2: Extract Only the Relevant Code**

Start removing everything that does not influence the bug. Examples of removals:
• Unrelated components
• Styling and layout code
• External libraries that don't contribute to the issue
• Extra configuration or routing files

**Step 3: Inline Mock Data and Functions**

Instead of reproducing full API flows, use inline mocks such as:

```js
const mockData = { id: 1, value: null };
```

or

```js
function fakeFetch() {
  return Promise.resolve({ ok: false });
}
```

This ensures deterministic behavior.

**Step 4: Reproduce the Issue in the Simplest Possible Form**

A good reproduction snippet should:
• Fail consistently
• Contain ≤ 20–30 lines of code
• Be runnable without additional setup
• Demonstrate the bug clearly

**Step 5: Validate the Snippet With Replit Agent**

Ask the agent to:
• Run the snippet and show the error
• Trace execution order
• State which assumption is breaking
• Propose the minimal fix

**Anti-Pattern (Before)**

Providing the full repository or a large component tree: App.jsx with 15 nested components, API handlers, auth logic, styling imports, and unrelated pages.

Problems:
• Hard to isolate the failure
• The agent must guess too much context
• Debugging becomes slow and noisy

**Recommended Pattern (After)**

A minimal reproduction snippet:

```jsx
function Example() {
  const [value, setValue] = useState(null);
  useEffect(() => {
    async function load() {
      const res = await fakeFetch();
      setValue(res.ok ? "success" : "error");
    }
    load();
  }, []);
  return <div>{value}</div>;
}
```

Benefits:
• The agent sees the failure path instantly
• Debugging becomes deterministic
• Solutions are more accurate and focused

**Best Practices**

• Keep reproduction snippets short and complete.
• Always inline data and simplify async behavior.
• Remove noise; focus only on the failing logic.
• Validate the snippet yourself before sending it to the agent.

**Agent Prompts**

"Here is a minimal snippet; run it and explain why it fails."
"Identify the exact line or assumption causing the incorrect behavior."
"Rewrite this snippet to fix the bug while preserving behavior."
"Trace the async execution order inside this reproduction example."

**Notes**

• The smaller the snippet, the faster the agent finds root causes.
• Snippets should demonstrate one failure at a time.
• Always confirm the snippet reproduces the bug before submitting it.
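For completeness, here is the recommended pattern assembled into one runnable file, with the Step 3 mock inlined so the agent needs no setup; the `ok: false` result is the deterministic path under investigation:

```jsx
import { useEffect, useState } from "react";

// Deterministic inline mock (Step 3): no network, always the same result.
function fakeFetch() {
  return Promise.resolve({ ok: false });
}

// Minimal reproduction (Step 4): one state value, one effect, no styling.
function Example() {
  const [value, setValue] = useState(null);

  useEffect(() => {
    async function load() {
      const res = await fakeFetch();
      setValue(res.ok ? "success" : "error");
    }
    load();
  }, []);

  return <div>{value}</div>;
}

export default Example;
```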
Popular MCPs being explored by the community this week.
Based on recent community views and activity
Sentry MCP is a Model Context Protocol server that enables AI tools to securely access and interact with Sentry projects. It allows LLM clients to query issues, errors, performance traces, releases, and alerts using natural language, helping developers debug, investigate incidents, and understand application health directly from AI assistants.
Firebase MCP is a Model Context Protocol server that allows AI tools to securely interact with Firebase projects. It enables LLM clients to inspect and manage Firestore / Realtime Database data, Authentication users, Cloud Functions, Hosting configs, and project metadata through natural language requests, helping developers debug, explore, and operate Firebase-backed applications faster.
Atlassian MCP lets Claude interact with Atlassian products like Jira and Confluence to read, search, and update work items using natural language. Instead of manually navigating issues, tickets, and docs, you can ask Claude to fetch context, summarize work, create or update issues, and help with planning and analysis directly from Atlassian data.
An MCP server that enables AI assistants to perform real-time web searches using the Brave Search API. It exposes search capabilities over Model Context Protocol so tools like Claude Desktop and Cursor can retrieve up-to-date web results in a privacy-focused way.
The official HTTP Fetch Model Context Protocol (MCP) server enables Claude to perform secure outbound HTTP requests. It supports REST API calls, JSON responses, header configuration, and controlled network access for API integration, data retrieval, and service orchestration workflows, and is designed specifically for production-safe API interactions with Claude.