Mstro

Frequently Asked Questions

Quick answers about Mstro — what it is, how it differs from Cursor and Claude Code, where your code runs, pricing, and the Security Bouncer.

  • What is Mstro?

    Mstro is a browser-based IDE and AI agent orchestration platform that runs Claude Code on your own machines. You install the open-source `mstro-app` CLI, point your browser at mstro.app, and run multiple long-lived AI sessions in parallel — each on its own git worktree — from any device.

  • How is Mstro different from Cursor or Claude Code?

    Cursor is a local AI-native editor; Claude Code is a terminal CLI. Mstro is the orchestration layer above both: a browser UI that drives many Claude Code sessions in parallel across git worktrees, with a Security Bouncer that auto-approves safe tool calls so long-running tasks finish without you clicking Allow. See the comparison pages at /compare for details.

  • Where does Mstro run my code?

    On your machines. The mstro-app CLI runs locally on each computer you connect (laptop, dev VM, Raspberry Pi, etc.). The Mstro platform server is only a WebSocket relay — your code, files, and conversations never live on our infrastructure.

  • Is my code safe?

    Yes. Code never leaves your hardware. The Security Bouncer is a two-layer tool-approval system (pattern matching plus Haiku AI analysis) that auto-approves safe Claude Code tool calls and blocks risky ones. Sessions are HTTPS-only. Device tokens are stored as SHA-256 hashes. Conversation history is saved locally in `.mstro/` on your machine.
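
    Storing tokens as SHA-256 hashes follows a standard hash-at-rest pattern. A minimal sketch of the idea (generic, not Mstro's actual code — the token value here is made up):

    ```shell
    # Illustrative only — a generic hash-at-rest pattern, not Mstro's implementation.
    # Only the SHA-256 digest is stored; the raw token stays on your device.
    TOKEN='device-token-example'
    STORED_HASH=$(printf '%s' "$TOKEN" | sha256sum | cut -d' ' -f1)

    # Verification: re-hash the presented token and compare digests.
    PRESENTED='device-token-example'
    if [ "$(printf '%s' "$PRESENTED" | sha256sum | cut -d' ' -f1)" = "$STORED_HASH" ]; then
      echo "token accepted"
    else
      echo "token rejected"
    fi
    ```

    Because hashing is one-way, a leaked hash cannot be replayed as a token, which is the point of storing the digest rather than the token itself.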

  • Does Mstro need my Anthropic API key?

    Yes. Mstro is bring-your-own-key (BYOK). You run Claude Code with your own Anthropic API key, so you pay Anthropic directly for token usage — there is no Mstro-side markup. Add the key once via the CLI; it stays on your machine.

  • Can I use Mstro on mobile?

    Yes. mstro.app is a fully responsive browser app and an installable PWA. You can supervise long-running AI sessions, review diffs, push commits, and open shell terminals from a phone or tablet. The CLI runs on your dev machine; the browser UI works anywhere.

  • Does Mstro work offline?

    The browser UI requires an internet connection to reach the WebSocket relay. Once a session is running, the AI work happens on your machine, but the Anthropic API calls it makes still require internet access. There is no fully offline mode.

  • How much does Mstro cost?

    Free for the first 1,000 users. After that, paid plans will be announced — current users will get advance notice. You always bring your own Anthropic API key, so AI usage is billed directly by Anthropic at standard rates with no Mstro markup.

  • What is the Security Bouncer?

    A two-layer tool-approval system that runs as an MCP server next to Claude Code. Layer 1 is pattern matching against a configurable allow/deny list. Layer 2 is a Claude Haiku call that classifies novel tool calls in milliseconds. Together they let long-running AI tasks proceed without the constant "Allow this tool?" prompts that block real autonomy.
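
    Layer 1 amounts to ordinary pattern matching. A hypothetical sketch of that screening step (the pattern lists and the `escalate` outcome are illustrative, not Mstro's actual rule set):

    ```shell
    # Hypothetical sketch of layer-1 screening — not Mstro's actual rules.
    # A proposed tool call is matched against deny patterns first, then allow
    # patterns; anything unmatched escalates to layer 2 (a fast model call).
    screen_tool_call() {
      cmd=$1
      if echo "$cmd" | grep -qE 'rm -rf|sudo |--force|curl[^|]*\|'; then
        echo "blocked"
      elif echo "$cmd" | grep -qE '^(ls|cat|grep|git (status|diff|log))'; then
        echo "approved"
      else
        echo "escalate"   # layer 2 decides; only novel calls pay this cost
      fi
    }

    screen_tool_call 'git status'           # approved
    screen_tool_call 'rm -rf node_modules'  # blocked
    screen_tool_call 'npm run build'        # escalate
    ```

    The two-layer split is the design point: cheap regex handles the common cases instantly, and the model call is reserved for calls no pattern recognizes.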

  • Can multiple developers share an orchestra?

    Yes. You can share a read-only link to any orchestra (a connected machine + project) so collaborators, stakeholders, or pair-programming partners can watch AI work happen in real time. Invite links to join an orchestra are also supported.

  • Does Mstro support Windows?

    The CLI runs on macOS, Linux, and Windows via WSL (Windows Subsystem for Linux). Native Windows is on the roadmap. The browser UI works on every modern browser — Chrome, Safari, Firefox, Edge — across desktop and mobile.

  • What models does Mstro support?

    Anything Claude Code supports. Today that is Claude Opus, Sonnet, and Haiku via the Anthropic API. Mstro picks up your Claude Code model configuration and lets you switch models per session in the settings UI.

  • Is Mstro open source?

    The CLI (`mstro-app` on npm) is open source. The browser app and platform server are source-available; their source is in the same monorepo and you can run them locally. Self-hosting docs are on the roadmap.

  • Can I self-host Mstro?

    The CLI runs entirely locally and is self-hosted by design. The browser UI and relay can be run from the source in the public repo, but the official deployment at mstro.app is the supported path today. Enterprise self-host packaging is on the roadmap — reach out at bravo@mstro.app if you need it.

  • What is the PM Board?

    A kanban view that turns one prompt into a list of tasks, then assigns tasks to AI agents that work in parallel on separate git worktrees. You watch progress in real time, review each agent's changes, and merge what you like. It is how a week of work ships in hours.

  • How do git worktrees fit in?

    Each parallel AI agent works in its own git worktree: a separate checkout of the same repository, so changes never collide. When you accept an agent's work, its worktree branch merges back into your main branch. This is how Mstro safely runs many agents on the same codebase at the same time.
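
    The mechanics Mstro automates can be reproduced with plain git. A sketch using stock `git worktree` commands (directory and branch names here are illustrative, not anything Mstro generates):

    ```shell
    # Plain-git sketch of the worktree setup Mstro automates — names are illustrative.
    git init -q demo && cd demo
    git -c user.name=demo -c user.email=demo@example.com \
      commit -q --allow-empty -m "init"

    # One worktree per agent: a separate directory and branch off the same repo.
    git worktree add -q -b agent-a ../agent-a
    git worktree add -q -b agent-b ../agent-b

    # Each agent edits files in its own directory; the checkouts never collide.
    git worktree list

    # Accepting an agent's work is an ordinary merge back into the main branch,
    # after which the finished worktree can be removed.
    git merge -q agent-a
    git worktree remove ../agent-b
    ```

    Worktrees share one object store and history, so they are far cheaper than full clones while still giving each agent an isolated working directory.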

  • How do I get started?

    Run `npx mstro-app` in any project directory, sign up at mstro.app, and your project becomes browser-accessible. There is no Docker, no config file, no VM. Full guide at /docs/getting-started.

Still have a question?

Email bravo@mstro.app or jump straight in.

Get started · Compare to Cursor, Claude Code →
