Powered by JAK Shield · The Secure Control Plane for AI Agents

The Secure Control Plane for AI Agents.

Run AI agents safely across code, browser, files, email, GitHub, and business tools — with permissions, approvals, sandboxing, risk scoring, defensive security review, and tamper-evident audit trails.

Permissions · Approvals · Sandboxing · Risk Scoring · Defensive Review · Tamper-evident Audit
JAK Cockpit · Workflow #847
running
>
Research Agent
CMO Agent
Verifier

Live preview of the cockpit · same surface every workflow runs through

Why chat isn’t enough

AI chat gives answers. JAK gets work done.

A chatbot is a generator. JAK is an operator. Three things change the moment you stop chatting and start running workflows.

Chatbots don’t manage workflows

You ask a chatbot for help. It writes a great paragraph. Then you copy, paste, switch tabs, follow up, retry. The work doesn’t finish itself.

How JAK fixes it

JAK turns one command into a multi-step plan, hands each step to the right agent, and pushes the result through to the finish.

Agents are dangerous without approval gates

An autonomous agent that can send email, post publicly, run code, or move money is a liability the moment it misreads context.

How JAK fixes it

JAK pauses every external action behind an inline approval card that names the tool, the payload, and the file. No surprises, no replays.

Real work needs visibility, traceability, and control

You can’t hand business work to an opaque black box. You need to see who did what, when, and prove it later if a customer asks.

How JAK fixes it

Every agent step lands in the cockpit timeline. Every workflow leaves a tamper-evident audit trail. Every output is replayable.

How It Works

Seven steps from intent to delivered work.

Every JAK workflow runs the same pipeline. You see every step. You gate every risky one. You can replay every run.

  1. Command

     You type a task in plain English. No syntax, no flags, no special prompt format.

     commander.parses(intent)
  2. Plan

     JAK breaks the task into ordered steps you can review before anything runs.

     planner.decompose() → 4 steps
  3. Route

     Each step goes to the right specialist agent — research, content, code, ops, design.

     router.assign(task → CMO / CTO / Research)
  4. Execute

     Specialists run with your connected tools — Gmail, Slack, GitHub, Notion, the browser.

     worker.run() · live in cockpit
  5. Approve

     Anything risky pauses for you. Inline card shows tool, payload, files, expected result.

     approval.gate(payload-bound)
  6. Verify

     JAK checks the work — citations, tone, safety, hallucination, payload integrity.

     verifier.check() · 4-layer
  7. Deliver

     Final output, signed audit trail, replayable run. Ready to ship — or reuse next time.

     output.deliver() · audit.signed
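The seven stages above can be sketched as an ordered pipeline. This is an illustrative sketch only — the stage names come from the text, but `STAGES` and `nextStage` are hypothetical, not JAK's actual runtime API:

```typescript
// The seven-stage pipeline, in order. Using `as const` keeps the
// stage names as a literal-typed, immutable tuple.
const STAGES = [
  "command",
  "plan",
  "route",
  "execute",
  "approve",
  "verify",
  "deliver",
] as const;

type Stage = (typeof STAGES)[number];

// Each stage hands off to exactly one successor; "deliver" is terminal.
function nextStage(stage: Stage): Stage | null {
  const i = STAGES.indexOf(stage);
  return i < STAGES.length - 1 ? STAGES[i + 1] : null;
}
```

Note that "approve" sits strictly before "verify" and "deliver": a risky step cannot reach the output stage without passing its gate first.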

The Cockpit

Every workflow, one operating surface.

Your command on the left. The agent graph in the middle. The approval card and the result on the right. The audit on the bottom. One place to run, gate, and prove the work.

JAK Cockpit · /workspace
awaiting approval

Your command

Research my top 3 competitors and draft a CMO-voice LinkedIn post

sent 12s ago · workflow #847

Recent runs

  • Website review · 5 fixes proposed
  • SOC 2 readiness summary · pending review
  • Cold-email batch · 14 drafts approved

Agent graph · live

3/5 done · 1 running
Commander
Planner
Research
CEO
CMO
Verifier
done · running · queued
Approval required
tool: linkedin_publish · payload: 248-word draft · replay-safe · payload-bound ✓
Draft preview · 248 words

“Three things every founder should know about positioning against {Competitor A}, {B}, {C} — with one line you can copy into your next pitch…”

Audit timeline · run #847
HMAC-SHA256 · every step replayable
  1. 00:00 · Workflow #847 started
  2. 00:02 · Plan created · 4 steps
  3. 00:05 · Research Agent · 3 competitors
  4. 00:09 · CEO Agent · 5-point angle
  5. 00:12 · CMO Agent · drafting…
  6. Approval gate · awaiting

What JAK actually ships

Finished work, not chat output.

Every workflow ends in something concrete — a brief, a draft, a diff, an audit pack. Approval-gated where it matters, signed where it’s required, reversible where it’s risky.

Competitor + market research brief

Multi-agent research across the web and your own documents. Every claim cites a source — uncited statements get flagged before delivery.

Competitor A · Competitor B · Competitor C
  • Pricing gap on the entry tier [evidence: doc_3]
  • Positioning angle missed by all 3 [evidence: web_7]
  • 2 LinkedIn posts to copy + adapt [evidence: web_4]
Citation density ≥ 0.7 · pgvector RAG

LinkedIn + outreach drafts

Researches your company and audience, then drafts a LinkedIn post + cold-email + follow-up sequence in your brand voice. JAK drafts; you approve. Nothing publishes or sends without your sign-off.

“Three things every founder should know about positioning against {Competitor A}, {B}, {C}…”

#Founders #GoToMarket #Positioning

□ Replace {placeholders} · □ Add link · □ Approve before publishing

Manual handoff required · Brand-voice grounded

Website review + 5 fixes mapped to source

Crawls your site, screenshots key pages, reviews design + copy + SEO, and proposes concrete fixes that point at the exact source files in your repo.

  • [1] apps/web/.../page.tsx · hero CTA contrast
  • [2] components/Pricing.tsx · mobile tap target
  • [3] app/layout.tsx · meta description < 50 chars
  • + 2 more · all sandbox-only until you approve
Source-file pointers · Sandbox-only edits

Audit-ready evidence pack

Every workflow step lands in a tamper-evident audit log. When an enterprise asks, JAK exports an HMAC-SHA256-signed evidence bundle that verifies byte-for-byte — across SOC 2, HIPAA, and ISO 27001 controls.

Bundle verified
runs: 847, 846, 845 · controls: 63 SOC 2 · all evidenced · signature: hmac:7a4c…f0d2 ✓
HMAC-SHA256 signed · Replay-safe approval

Trust Layer

Built for controlled autonomy.

Six guarantees, every one wired into the runtime. Not policies. Not promises. Code paths reviewers can grep.

Human approval gates

Every external action — send, post, deploy, charge — pauses for an inline approval card. Replays with a different payload are rejected.

approval-node.ts · payload-bound

Source-grounded outputs

Research-class agents must cite. The verifier flags any claim under the citation-density threshold before delivery.

verifier.agent.ts · density ≥ 0.7
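The citation-density check can be sketched as a ratio test: count the claims that carry an evidence pointer and flag the draft if the ratio falls under the threshold. A minimal sketch — the names `Claim`, `checkCitationDensity`, and the claim shape are illustrative assumptions, not the actual `verifier.agent.ts` API:

```typescript
const DENSITY_THRESHOLD = 0.7;

interface Claim {
  text: string;
  evidence?: string; // e.g. "doc_3", "web_7"
}

// Fraction of claims that carry an evidence pointer.
function citationDensity(claims: Claim[]): number {
  if (claims.length === 0) return 1; // nothing to cite, nothing to flag
  const cited = claims.filter((c) => c.evidence !== undefined).length;
  return cited / claims.length;
}

// Flag the draft before delivery when too many claims are uncited.
function checkCitationDensity(claims: Claim[]): { pass: boolean; density: number } {
  const density = citationDensity(claims);
  return { pass: density >= DENSITY_THRESHOLD, density };
}
```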

Tool maturity labels

Every tool carries an honest CI-enforced label: real, heuristic, llm_passthrough, config_dependent, or experimental. No tool ships unlabeled.

check:truth · 122 / 0 unclassified
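A truth-check of this kind reduces to a type union plus an exhaustive scan. The sketch below is illustrative, assuming a registry shape (`ToolEntry`, `findUnclassified`) that is not JAK's actual code — only the five label names come from the text:

```typescript
// The five maturity labels named above, as a closed union: anything
// outside this set fails to type-check at registration time.
type MaturityLabel =
  | "real"
  | "heuristic"
  | "llm_passthrough"
  | "config_dependent"
  | "experimental";

interface ToolEntry {
  name: string;
  maturity?: MaturityLabel;
}

// check:truth-style gate: return the names of any tools that ship
// without a label, so CI can fail when the list is non-empty.
function findUnclassified(tools: ToolEntry[]): string[] {
  return tools.filter((t) => t.maturity === undefined).map((t) => t.name);
}
```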

Tamper-evident audit trail

Every workflow run, every approval decision, every external action emits an audit log row. Final evidence packs are HMAC-SHA256 signed.

audit-log plugin · bundle.service.ts

Self-hostable open-source core

JAK is MIT-licensed. Run it on your laptop, your VPS, or your cluster. Hosted ops are a convenience, not a lock-in.

github.com/inbharatai/jak-swarm · MIT

OpenAI-first runtime

OpenAI Responses API as the primary path with structured output. Anthropic, Gemini, DeepSeek, Ollama, and OpenRouter remain wired as optional fallback providers.

openai-runtime.ts · Responses API

JAK Shield

AI agents are powerful. JAK Shield makes them safe.

Before an agent touches your code, browser, files, email, GitHub, or business tools, JAK Shield checks permissions, scores risk, blocks unsafe actions, asks for approval where needed, and records every step in a tamper-evident evidence bundle.

Agent Firewall

Detects prompt-injection attacks and offensive-cyber requests (malware, exploits, credential theft, unauthorized scanning, phishing) BEFORE the LLM sees them. Defensive security work — audit my repo, harden auth, find CVEs — passes through.

packages/security/src/guardrails/offensive-cyber-detector.ts

Risk-Based Approvals

Every tool call is classified across 6 risk tiers — READ_ONLY through CRITICAL_MANUAL_ONLY. Risky calls pause the workflow. Approval is bound to the exact payload via a SHA-256 hash; replays with modified payloads are rejected with HTTP 409.

packages/tools/src/registry/approval-policy.ts
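Payload binding comes down to hashing the approved payload once and refusing any later call whose hash differs. A minimal sketch of that mechanic, assuming illustrative names (`grantApproval`, `checkReplay`) rather than the real `approval-policy.ts` API; note a production version would hash a canonical serialization, since `JSON.stringify` is sensitive to key order:

```typescript
import { createHash } from "node:crypto";

// Hash the exact payload the human approved.
function hashPayload(payload: unknown): string {
  return createHash("sha256").update(JSON.stringify(payload)).digest("hex");
}

interface Approval {
  toolName: string;
  payloadHash: string;
}

// Approval is bound to the payload at grant time.
function grantApproval(toolName: string, payload: unknown): Approval {
  return { toolName, payloadHash: hashPayload(payload) };
}

// Replay check: 200 when the payload matches byte-for-byte,
// 409 Conflict when anything in it has changed.
function checkReplay(approval: Approval, payload: unknown): number {
  return approval.payloadHash === hashPayload(payload) ? 200 : 409;
}
```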

Secure Tool Permissions

Per-tenant tool registry + industry-pack restrictions + Standing Orders (allowed-tools whitelist + blocked-actions list + budget cap + expiry). REVIEWER+ role required to install or run anything destructive.

packages/tools/src/registry/tenant-tool-registry.ts
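A Standing Order check is a short chain of deny conditions: expiry first, then the whitelist, then the block list, then the budget cap. The sketch below is an assumption about the shape, not the actual registry code — the four fields mirror the description above:

```typescript
interface StandingOrder {
  allowedTools: string[];   // allowed-tools whitelist
  blockedActions: string[]; // blocked-actions list
  budgetCapUsd: number;     // budget cap
  expiresAt: number;        // expiry, epoch ms
}

// A call is permitted only if every guard passes.
function isCallPermitted(
  order: StandingOrder,
  tool: string,
  action: string,
  spentUsd: number,
  now: number,
): boolean {
  if (now >= order.expiresAt) return false;            // order expired
  if (!order.allowedTools.includes(tool)) return false; // not whitelisted
  if (order.blockedActions.includes(action)) return false; // explicitly blocked
  if (spentUsd >= order.budgetCapUsd) return false;    // budget exhausted
  return true;
}
```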

Sandboxed Execution

Browser sessions in per-tenant data dirs (500 MB quota), URL allowlist with cloud-metadata + RFC1918 + IPv6 link-local blocked, DNS-rebinding defense on every navigation, downloads disabled. Installer runs in a sandboxed subprocess with literal argv (never shell:true), 60s timeout, stripped env.

packages/tools/src/browser-operator/playwright-browser-operator.ts
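The allowlist's deny side can be sketched as a host check against the cloud-metadata endpoint, RFC1918 private ranges, and IPv6 link-local addresses. This is a simplified illustration with assumed names (`isBlockedHost`); the real operator described above also re-resolves DNS on every navigation to defeat rebinding, which a literal-host check alone cannot do:

```typescript
// IPv4 ranges that browser navigation must never reach.
const PRIVATE_V4 = [
  /^10\./,                      // 10.0.0.0/8
  /^192\.168\./,                // 192.168.0.0/16
  /^172\.(1[6-9]|2\d|3[01])\./, // 172.16.0.0/12
  /^169\.254\./,                // link-local, incl. 169.254.169.254 metadata
  /^127\./,                     // loopback
];

function isBlockedHost(host: string): boolean {
  if (host === "metadata.google.internal") return true;      // cloud metadata by name
  if (host.toLowerCase().startsWith("fe80:")) return true;    // IPv6 link-local
  return PRIVATE_V4.some((re) => re.test(host));
}
```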

Defensive Vulnerability Triage

JAK Shield supports defensive security work — repo audits, dependency scans, secret-leak detection, patch recommendations. Offensive work (writing exploits, generating malware, phishing kits) is blocked at the boundary.

docs/jak-shield-manifest.md

Audit Evidence Layer

Every workflow lifecycle event lands in AuditLog. AgentTrace fields are PII-redacted at write time. workflows.{goal,error,finalOutput,planJson,stateJson} are AES-256-GCM encrypted at rest. Final evidence bundles are HMAC-SHA256 signed and verify byte-for-byte.

apps/api/src/services/bundle.service.ts
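HMAC signing and byte-for-byte verification reduce to recomputing the MAC over the exact bundle bytes: change any byte and the digest changes. A minimal sketch with assumed names (`signBundle`, `verifyBundle`), not the actual `bundle.service.ts` code:

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// Sign the bundle bytes with a keyed HMAC-SHA256.
function signBundle(bundleBytes: Buffer, key: string): string {
  return createHmac("sha256", key).update(bundleBytes).digest("hex");
}

// Verification recomputes the MAC over the exact bytes and compares in
// constant time: the bundle verifies byte-for-byte or not at all.
function verifyBundle(bundleBytes: Buffer, key: string, signature: string): boolean {
  const expected = Buffer.from(signBundle(bundleBytes, key), "hex");
  const given = Buffer.from(signature, "hex");
  return expected.length === given.length && timingSafeEqual(expected, given);
}
```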

Safety boundary

JAK Shield is built for defensive security, safe automation, permissioned workflows, and audit-ready agent execution. It does not support offensive hacking, malware generation, credential theft, phishing, unauthorized scanning, or exploit generation. Defensive work is allowed. Offensive work is refused.

When you need audit-grade

Enterprise-grade auditability when you need it.

You don’t need to think about SOC 2 on day one. Every workflow JAK runs is already tamper-evident, signed, and replayable — so when an enterprise customer asks, the evidence is already there.

63 SOC 2 Type 2 · 37 HIPAA Security Rule · 82 ISO/IEC 27001:2022

182 controls seeded across three frameworks — 108 are operationally backed (evidence pulled from system activity) and 74 require reviewer attestation. LLM-driven control testing, reviewer-gated workpaper PDFs, HMAC-signed final evidence packs.

Open Audit Workspace
  • Reviewer-gated workpaper PDFs — download blocked until approved
  • Final-pack signing refuses if any workpaper is unapproved (FinalPackGateError)
  • External Auditor Portal — invite-token-only, engagement-scoped, fully audited
  • HMAC-SHA256 evidence bundles verify byte-for-byte

Pricing

Transparent pricing. Open-source core.

Run JAK free on your own infrastructure, forever. Upgrade when you want hosted OpenAI ops, higher limits, and an SLA.

Free

$0 · forever

Run JAK on your own machine, forever.

  • 200 credits / month
  • 30 credits / day
  • 5 core agents
  • 1 vibe coding project
  • Bring-your-own OpenAI key
  • Community support
Most Popular

Pro

$29/mo

Hosted runtime, OpenAI managed, approvals built in.

  • 3,000 credits / month
  • 200 credits / day
  • All 38 specialist agents
  • 5 vibe coding projects
  • Managed OpenAI runtime (GPT-4o tier)
  • 500 premium credits
  • Email support

Team

$99/mo

Higher limits and priority model access for teams.

  • 15,000 credits / month
  • 600 credits / day
  • All agents + custom skills
  • Unlimited projects
  • 3,000 premium credits
  • Managed OpenAI runtime
  • Priority support

Enterprise

$249/mo

SSO, audit exports, and dedicated deployment.

  • 50,000 credits / month
  • 2,000 credits / day
  • 15,000 premium credits
  • SSO + RBAC + audit logs
  • Custom integrations
  • Managed OpenAI runtime
  • Dedicated support
38 Specialist Agents · 122 Classified Tools · 20+ Connectors · MIT Open Source

Stop chatting with AI.
Start operating with it.

Run workflows with visibility, approval, and audit from day one. Open-source core. Self-hostable. OpenAI-first runtime.

Free to start · No credit card · MIT licensed · Self-host or cloud