Map the human surface.
Arachne scans authorized pages, links, forms, docs, metadata, structured data, and policy pages to build a normalized AgentSiteModel.
The web was built for human eyeballs — which makes it hostile to AI agents. Arachne compiles any authorized website into a deterministic, agent-operable Shadow API: a live MCP server, a capability-token wallet, and a tamper-evident audit ledger. Even for sites that have no API at all.
"Describing a dashboard you can't see is exactly what Arachne is going to change."
Arachne evaluates how an AI agent sees your site, where it gets confused, what actions it can infer, and which workflows need structured resources, action contracts, approval boundaries, or policy gates.
Arachne scans authorized pages, links, forms, docs, metadata, structured data, and policy pages to build a normalized AgentSiteModel.
Generate llms.txt, agent-manifest.json, MCP resources, OpenAPI-style action surfaces, and a risk-classified action registry.
Identify writes, forms, checkout paths, auth surfaces, and prompt-injection risks before agents treat untrusted page content as instructions.
The output is not a scraper report. It is a repeatable interface layer that tells agents what exists, what they can read, what they can draft, and what needs approval before execution.
A verified domain enters the bounded crawler. External, private, and disallowed routes stay outside scope.
Server-rendered sites get the fast HTML crawl. JS SPAs route to a headless-Chromium pipeline that renders each page and observes the real fetch/XHR traffic — so the resulting Shadow API gets the actual API endpoints, methods, and request shapes, not empty action="" forms.
Read actions, draft actions, submit actions, purchases, auth paths, and inferred workflows are labeled by confidence and risk.
Arachne emits machine-readable files agents can consume instead of blindly clicking through a human UI.
Optional adversarial testing checks whether an agent can bypass approval, misuse forms, follow prompt injections, or perform risky actions.
Arachne gives you concrete files, not vague AI-readiness advice. You receive the machine interface, the human report, and the risk model needed to safely expose your site to agents.
A machine-readable inventory of pages, resources, tools, policies, approval requirements, and action schemas.
A curated agent-facing site map that tells LLMs what matters and warns them to treat page content as untrusted data.
Tool/resource definitions that let AI systems query your site through structured interfaces instead of brittle browsing.
A plain-English assessment of discoverability, action risk, form handling, auth surfaces, prompt-injection exposure, and remediation steps.
Agents never hold raw credentials or cards. State-changing actions mint single-use, merchant- and amount-locked capability tokens that require human approval to activate.
Every capability issued, every approval, every execution is hash-chained into a tamper-evident ledger. Re-walk it from genesis to prove nothing was altered.
Your Shadow API ships as a single URL — mcp.<your-host>/mcp/<shadow_id>/mcp — that any MCP client (Claude Desktop, IDE extensions, agent frameworks) can connect to. Streamable HTTP transport, real protocol, zero install on the customer's side.
One download with manifest.json, mcp-config.json, and a personalized README.md — 1-minute install steps, policy walkthrough, domain-verification instructions. Attach to a single onboarding email.
{
"protocol": "artemis-agent-interface",
"site": "https://customer.com",
"verified_domain": "customer.com",
"tools": [
{
"name": "submit_inquiry",
"method": "POST",
"target": "https://customer.com/api/v1/inquiry",
"risk": "medium",
"requires_human_approval": true,
"input_schema": {
"name": "string",
"email": "string",
"message": "string"
}
}
],
"policy": {
"writes_default": "draft_only",
"untrusted_page_content_is_data": true,
"requires_site_owner_verification": true
},
"hosted": {
"transport": "streamable-http",
"endpoint": "https://mcp.solsticestudio.ai/mcp/shadow_xxx/mcp"
}
}
Not a consulting engagement — a self-serve product. Score any site free in seconds, buy the full report when you want the fix list, and compile a live Shadow API when you're ready to let agents in.
Point us at any URL. Get a 0–100 agent-readiness score, a letter grade, and a count of what's broken — in seconds, no signup.
Every signal scored with the specific detail, a prioritized fix list, and your projected score after Arachne compiles a Shadow API.
Compile your verified domain into a live MCP endpoint. Setup covers the compile, domain-ownership verification, and initial tuning. Monthly covers hosting, re-sync as your site changes, and the audit ledger.
mcp.<your-host> endpoint — paste once in Claude DesktopEvery Shadow API customer starts with a paid readiness report. We use it to scope your setup.
Higher volume, audit-ledger exports, SLA, dedicated gateway hosting — enterprise is a conversation.
Real questions from real buyers. If you don't see yours here, email [email protected].
Your front door is already open. Arachne replaces it with a glass door and a guest log.
Agents are scraping your site right now, anonymously and unauditably — your "no Shadow API" stance isn't safety, it's blindness. Arachne doesn't make agents possible; it makes them visible and accountable.
Every action is gated by a policy you define. Writes default to draft_only and require explicit per-call approval via the capability-token wallet. Page content returned to agents is flagged untrusted_page_content_is_data: true, so the MCP server tells agents verbatim to treat your content as data, never as instructions — the canonical defense against prompt injection. Every call lands in a hash-chained DeltaStore ledger you can re-walk for compliance or forensics.
You're trading invisibility for accountability. The risk delta is negative.
Deterministic, 7 weighted signals, max 100. Each signal is binary — you get the full weight or zero. The weights sum to 100, so the score IS the percentage.
| Signal | Weight | What we check |
|---|---|---|
| MCP server | 35 | /.well-known/mcp/server, /.well-known/mcp.json, or /mcp returns 200 |
| OpenAPI spec | 25 | /openapi.json, /swagger.json, or 6 other common locations contain a valid spec |
| llms.txt | 15 | /llms.txt exists and has content |
| Sitemap | 10 | /sitemap.xml or /sitemap_index.xml exists |
| robots.txt | 5 | /robots.txt exists |
| JSON-LD | 5 | Homepage has application/ld+json structured data |
| Structured forms | 5 | Forms (if any) use semantic markup — labels, named inputs |
Grade scale: A (90+) · B (75-89) · C (60-74) · D (40-59) · F (<40)
What it doesn't measure: content quality, API design quality, response time, security posture, or whether your endpoints actually work. It measures discoverability — can an agent find what it needs to call you without scraping HTML? A site can score 100 and still ship a buggy API. We measure the front door, not the rooms inside.
llms.txt?Those are descriptions. Arachne ships infrastructure.
An OpenAPI spec is a static document — useful, but agents still need a server to call. llms.txt tells crawlers what to read, not how to act. Arachne compiles either of them (or, for sites that have neither, the rendered DOM and observed network traffic) into a live MCP endpoint — with a capability-token wallet for state-changing actions and a hash-chained audit ledger for every call.
Best of all: if you already have OpenAPI, our compile is higher-quality and you score better. We're additive to those standards, not a replacement.
Your manifest is yours. You can email us to remove tools you don't want exposed, force-recompile after a site change, or upgrade tools from draft_only to wallet-gated write access once you've verified domain ownership.
Recompiles are part of your monthly. The endpoint URL stays the same so your customers don't need to reconfigure.
Less than scrapers do today. The gateway rate-limits per agent, and unlike anonymous scrapers, every call is identified and logged — you can see which MCP client is calling, how often, and what they touched. If a specific caller is abusive, you revoke their token; you can't revoke a scraper.
No. Arachne is fully egress-side — we only see what a normal browser sees on your public site. Domain ownership is proven via a DNS TXT record (or a file at /.well-known/arachne-challenge). We never touch your code, your database, or your auth system.
Anything that speaks the MCP Streamable HTTP protocol. Today that includes Claude Desktop (the most common client), Claude Code, Cursor, Continue, and custom agents built with LangChain MCP adapters or the OpenAI Agents SDK. The URL is universal.
Your hosted endpoint goes dark. Your manifest.json is yours forever — open format, no vendor lock-in. You can self-host it from the stdio config we ship in the bundle, or hand it to another MCP runtime. No data hostage situations.
Yes. The $49 readiness report includes a live preview of the Shadow API we'd compile for your domain — real manifest, real tools, real policy. You see what you'd be buying before committing to a hosted build.
A 0-100 agent-readiness score, in seconds, no signup. If you like what you see, the full report is $49 — billed instantly via Stripe — and lands you on a live preview of the Shadow API we'd build for your domain.