Web-to-Agent Interface Compiler

AGENT-READY WEBSITES

The web was built for human eyeballs — which makes it hostile to AI agents. Arachne compiles any authorized website into a deterministic, agent-operable Shadow API: a live MCP server, a capability-token wallet, and a tamper-evident audit ledger. Even for sites that have no API at all.

"Describing a dashboard you can't see is exactly what Arachne is going to change."

Agent-Ready Audit

Your website is already an interface. Agents just cannot safely use it yet.

Arachne evaluates how an AI agent sees your site, where it gets confused, what actions it can infer, and which workflows need structured resources, action contracts, approval boundaries, or policy gates.

Discovery

Map the human surface.

Arachne scans authorized pages, links, forms, docs, metadata, structured data, and policy pages to build a normalized AgentSiteModel.

sitemapformsschema
Compilation

Emit the agent interface.

Generate llms.txt, agent-manifest.json, MCP resources, OpenAPI-style action surfaces, and a risk-classified action registry.

llms.txtMCPOpenAPI
Governance

Stop blind automation.

Identify writes, forms, checkout paths, auth surfaces, and prompt-injection risks before agents treat untrusted page content as instructions.

risk mapapprovalaudit
Runtime Path

From human website to governed agent surface.

The output is not a scraper report. It is a repeatable interface layer that tells agents what exists, what they can read, what they can draft, and what needs approval before execution.

01 / Scout

Authorized Website

A verified domain enters the bounded crawler. External, private, and disallowed routes stay outside scope.

Domain Scoped
02 / Extract

Static crawl OR headless-Chromium SPA mode

Server-rendered sites get the fast HTML crawl. JS SPAs route to a headless-Chromium pipeline that renders each page and observes the real fetch/XHR traffic — so the resulting Shadow API gets the actual API endpoints, methods, and request shapes, not empty action="" forms.

Static · JS
03 / Classify

Risk and Confidence Mapping

Read actions, draft actions, submit actions, purchases, auth paths, and inferred workflows are labeled by confidence and risk.

Policy Layer
04 / Compile

Agent Manifest + MCP/OpenAPI

Arachne emits machine-readable files agents can consume instead of blindly clicking through a human UI.

Interface Ready
05 / Validate

Nemesis Abuse-Case Testing

Optional adversarial testing checks whether an agent can bypass approval, misuse forms, follow prompt injections, or perform risky actions.

Attack Tested
Deliverables

Everything your site needs to become agent-ready.

Arachne gives you concrete files, not vague AI-readiness advice. You receive the machine interface, the human report, and the risk model needed to safely expose your site to agents.

Artifact

agent-manifest.json

A machine-readable inventory of pages, resources, tools, policies, approval requirements, and action schemas.

Artifact

llms.txt

A curated agent-facing site map that tells LLMs what matters and warns them to treat page content as untrusted data.

Artifact

MCP / OpenAPI Surface

Tool/resource definitions that let AI systems query your site through structured interfaces instead of brittle browsing.

Report

Agent Readiness Findings

A plain-English assessment of discoverability, action risk, form handling, auth surfaces, prompt-injection exposure, and remediation steps.

Runtime

Shadow Wallet

Agents never hold raw credentials or cards. State-changing actions mint single-use, merchant- and amount-locked capability tokens that require human approval to activate.

Runtime

DeltaStore Audit Ledger

Every capability issued, every approval, every execution is hash-chained into a tamper-evident ledger. Re-walk it from genesis to prove nothing was altered.

Hosted

Live MCP Endpoint

Your Shadow API ships as a single URL — mcp.<your-host>/mcp/<shadow_id>/mcp — that any MCP client (Claude Desktop, IDE extensions, agent frameworks) can connect to. Streamable HTTP transport, real protocol, zero install on the customer's side.

Delivery

Customer Bundle (.zip)

One download with manifest.json, mcp-config.json, and a personalized README.md — 1-minute install steps, policy walkthrough, domain-verification instructions. Attach to a single onboarding email.

agent-manifest.previewv0.1
{
  "protocol": "artemis-agent-interface",
  "site": "https://customer.com",
  "verified_domain": "customer.com",
  "tools": [
    {
      "name": "submit_inquiry",
      "method": "POST",
      "target": "https://customer.com/api/v1/inquiry",
      "risk": "medium",
      "requires_human_approval": true,
      "input_schema": {
        "name": "string",
        "email": "string",
        "message": "string"
      }
    }
  ],
  "policy": {
    "writes_default": "draft_only",
    "untrusted_page_content_is_data": true,
    "requires_site_owner_verification": true
  },
  "hosted": {
    "transport": "streamable-http",
    "endpoint": "https://mcp.solsticestudio.ai/mcp/shadow_xxx/mcp"
  }
}
Pricing

Start free. Compile when you're ready. Scale when it works.

Not a consulting engagement — a self-serve product. Score any site free in seconds, buy the full report when you want the fix list, and compile a live Shadow API when you're ready to let agents in.

Free

Agent-Readiness Score

$0

Point us at any URL. Get a 0–100 agent-readiness score, a letter grade, and a count of what's broken — in seconds, no signup.

  • Instant score + grade
  • Issue count across 7 readiness signals
  • Shareable — send it to your team
Get Your Free Score
One-Time

Full Readiness Report

$49 / domain

Every signal scored with the specific detail, a prioritized fix list, and your projected score after Arachne compiles a Shadow API.

  • All 7 signals, fully detailed
  • Prioritized remediation list
  • Projected post-Arachne score
  • Pay by card — instant, no procurement
Run a Scan →
Hosted

Shadow API

$500 setup · $99 / mo

Compile your verified domain into a live MCP endpoint. Setup covers the compile, domain-ownership verification, and initial tuning. Monthly covers hosting, re-sync as your site changes, and the audit ledger.

  • Live mcp.<your-host> endpoint — paste once in Claude Desktop
  • Static + JS-SPA compile (headless Chromium)
  • Shadow Wallet capability tokens · DeltaStore audit ledger
  • Auto re-compile when your site changes
  • Customer-ready bundle (manifest + config + README)
Start with the $49 report →

Every Shadow API customer starts with a paid readiness report. We use it to scope your setup.

Higher volume, audit-ledger exports, SLA, dedicated gateway hosting — enterprise is a conversation.

FAQ

The honest answers.

Real questions from real buyers. If you don't see yours here, email [email protected].

Doesn't publishing a Shadow API make my site more vulnerable to AI agent attacks?

Your front door is already open. Arachne replaces it with a glass door and a guest log.

Agents are scraping your site right now, anonymously and unauditably — your "no Shadow API" stance isn't safety, it's blindness. Arachne doesn't make agents possible; it makes them visible and accountable.

Every action is gated by a policy you define. Writes default to draft_only and require explicit per-call approval via the capability-token wallet. Page content returned to agents is flagged untrusted_page_content_is_data: true, so the MCP server tells agents verbatim to treat your content as data, never as instructions — the canonical defense against prompt injection. Every call lands in a hash-chained DeltaStore ledger you can re-walk for compliance or forensics.

You're trading invisibility for accountability. The risk delta is negative.

How is the agent-readiness score calculated?

Deterministic, 7 weighted signals, max 100. Each signal is binary — you get the full weight or zero. The weights sum to 100, so the score IS the percentage.

SignalWeightWhat we check
MCP server35/.well-known/mcp/server, /.well-known/mcp.json, or /mcp returns 200
OpenAPI spec25/openapi.json, /swagger.json, or 6 other common locations contain a valid spec
llms.txt15/llms.txt exists and has content
Sitemap10/sitemap.xml or /sitemap_index.xml exists
robots.txt5/robots.txt exists
JSON-LD5Homepage has application/ld+json structured data
Structured forms5Forms (if any) use semantic markup — labels, named inputs

Grade scale: A (90+) · B (75-89) · C (60-74) · D (40-59) · F (<40)

What it doesn't measure: content quality, API design quality, response time, security posture, or whether your endpoints actually work. It measures discoverability — can an agent find what it needs to call you without scraping HTML? A site can score 100 and still ship a buggy API. We measure the front door, not the rooms inside.

How is this different from publishing an OpenAPI spec or an llms.txt?

Those are descriptions. Arachne ships infrastructure.

An OpenAPI spec is a static document — useful, but agents still need a server to call. llms.txt tells crawlers what to read, not how to act. Arachne compiles either of them (or, for sites that have neither, the rendered DOM and observed network traffic) into a live MCP endpoint — with a capability-token wallet for state-changing actions and a hash-chained audit ledger for every call.

Best of all: if you already have OpenAPI, our compile is higher-quality and you score better. We're additive to those standards, not a replacement.

What if I want to add, remove, or change a tool in my Shadow API?

Your manifest is yours. You can email us to remove tools you don't want exposed, force-recompile after a site change, or upgrade tools from draft_only to wallet-gated write access once you've verified domain ownership.

Recompiles are part of your monthly. The endpoint URL stays the same so your customers don't need to reconfigure.

Will agents hammer my server with calls?

Less than scrapers do today. The gateway rate-limits per agent, and unlike anonymous scrapers, every call is identified and logged — you can see which MCP client is calling, how often, and what they touched. If a specific caller is abusive, you revoke their token; you can't revoke a scraper.

Do you need access to my codebase or backend?

No. Arachne is fully egress-side — we only see what a normal browser sees on your public site. Domain ownership is proven via a DNS TXT record (or a file at /.well-known/arachne-challenge). We never touch your code, your database, or your auth system.

What MCP clients does the hosted endpoint work with?

Anything that speaks the MCP Streamable HTTP protocol. Today that includes Claude Desktop (the most common client), Claude Code, Cursor, Continue, and custom agents built with LangChain MCP adapters or the OpenAI Agents SDK. The URL is universal.

What happens if I cancel?

Your hosted endpoint goes dark. Your manifest.json is yours forever — open format, no vendor lock-in. You can self-host it from the stdio config we ship in the bundle, or hand it to another MCP runtime. No data hostage situations.

Can I see what my Shadow API would look like before committing?

Yes. The $49 readiness report includes a live preview of the Shadow API we'd compile for your domain — real manifest, real tools, real policy. You see what you'd be buying before committing to a hosted build.

Find out before the agents do.

Score your site free. Compile when you're ready.

A 0-100 agent-readiness score, in seconds, no signup. If you like what you see, the full report is $49 — billed instantly via Stripe — and lands you on a live preview of the Shadow API we'd build for your domain.

Get Your Free Score →