Run AI agents safely across code, browser, files, email, GitHub, and business tools — with permissions, approvals, sandboxing, risk scoring, defensive security review, and tamper-evident audit trails.
4 · Approval · Show draft to you before anything publishes
Research Agent
CMO Agent
Verifier
Approval required · before risky action
Tool: linkedin_publish · Account: Your LinkedIn (verified) · Will publish: “Three things every founder should know about positioning…”
Replays with a different payload are rejected
Output ready · 248 words · LinkedIn-formatted
Three things every founder should know about positioning against {Competitor A}, {Competitor B}, and {Competitor C} — with one line you can copy into your next pitch.
JAK checks the work — citations, tone, safety, hallucination, payload integrity.
verifier.check() · 4-layer
7 · Deliver
Final output, signed audit trail, replayable run. Ready to ship — or reuse next time.
output.deliver() · audit.signed
The Cockpit
Every workflow, one operating surface.
Your command on the left. The agent graph in the middle. The approval card and the result on the right. The audit on the bottom. One place to run, gate, and prove the work.
JAK Cockpit · /workspace
awaiting approval
Your command
Research my top 3 competitors and draft a CMO-voice LinkedIn post
“Three things every founder should know about positioning against {Competitor A}, {B}, {C} — with one line you can copy into your next pitch…”
Audit timeline · run #847
HMAC-SHA256 · every step replayable
00:00 · Workflow #847 started
00:02 · Plan created · 4 steps
00:05 · Research Agent · 3 competitors
00:09 · CEO Agent · 5-point angle
00:12 · CMO Agent · drafting…
— · Approval gate · awaiting
What JAK actually ships
Finished work, not chat output.
Every workflow ends in something concrete — a brief, a draft, a diff, an audit pack. Approval-gated where it matters, signed where it’s required, reversible where it’s risky.
Competitor + market research brief
Multi-agent research across the web and your own documents. Every claim cites a source — uncited statements get flagged before delivery.
Competitor A · Competitor B · Competitor C
·Pricing gap on the entry tier [evidence: doc_3]
·Positioning angle missed by all 3 [evidence: web_7]
·2 LinkedIn posts to copy + adapt [evidence: web_4]
Citation density ≥ 0.7 · pgvector RAG
LinkedIn + outreach drafts
Researches your company and audience, then drafts a LinkedIn post + cold-email + follow-up sequence in your brand voice. JAK drafts; you approve. Nothing publishes or sends without your sign-off.
“Three things every founder should know about positioning against {Competitor A}, {B}, {C}…”
#Founders#GoToMarket#Positioning
□ Replace {placeholders} · □ Add link · □ Approve before publishing
Manual handoff required · Brand-voice grounded
Website review + 5 fixes mapped to source
Crawls your site, screenshots key pages, reviews design + copy + SEO, and proposes concrete fixes that point at the exact source files in your repo.
[1] apps/web/.../page.tsx · hero CTA contrast
[2] components/Pricing.tsx · mobile tap target
[3] app/layout.tsx · meta description < 50 chars
+ 2 more · all sandbox-only until you approve
Source-file pointers · Sandbox-only edits
Audit-ready evidence pack
Every workflow step lands in a tamper-evident audit log. When an enterprise asks, JAK exports an HMAC-SHA256-signed evidence bundle that verifies byte-for-byte — across SOC 2, HIPAA, and ISO 27001 controls.
Bundle verified
runs: 847, 846, 845 · controls: 63 SOC 2, all evidenced · signature: hmac:7a4c…f0d2 ✓
HMAC-SHA256 signed · Replay-safe approval
Trust Layer
Built for controlled autonomy.
Six guarantees, every one wired into the runtime. Not policies. Not promises. Code paths reviewers can grep.
Human approval gates
Every external action — send, post, deploy, charge — pauses for an inline approval card. Replays with a different payload are rejected.
approval-node.ts · payload-bound
Source-grounded outputs
Research-class agents must cite. The verifier flags any claim under the citation-density threshold before delivery.
verifier.agent.ts · density ≥ 0.7
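As a rough sketch, a density gate like the one described could look like this (illustrative only: the sentence-level claim splitting and the `[evidence: …]` tag format are assumptions; the 0.7 threshold is the page's own number):

```typescript
// Minimal sketch of a citation-density gate. A "claim" here is approximated as a
// sentence; it counts as cited if it carries an [evidence: ...] tag. This is an
// assumed heuristic, not JAK's actual verifier logic.
const CITATION_DENSITY_THRESHOLD = 0.7; // from the verifier spec: density >= 0.7

function citationDensity(output: string): number {
  const claims = output
    .split(/[.!?]\s+|\n/)
    .filter((s) => s.trim().length > 0);
  if (claims.length === 0) return 1;
  const cited = claims.filter((s) => /\[evidence:\s*\w+\]/.test(s)).length;
  return cited / claims.length;
}

// Flags the output for review when too many claims are uncited.
function flagUncited(output: string): { pass: boolean; density: number } {
  const density = citationDensity(output);
  return { pass: density >= CITATION_DENSITY_THRESHOLD, density };
}
```

A real verifier would split claims with something sturdier than sentence punctuation, but the gate shape is the same: compute a ratio, compare against the threshold, block delivery on failure.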
Tool maturity labels
Every tool carries an honest CI-enforced label: real, heuristic, llm_passthrough, config_dependent, or experimental. No tool ships unlabeled.
check:truth · 122 / 0 unclassified
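The label set above lends itself to a simple type plus a CI gate. A sketch, where the label names come from the page but the registry shape and function names are assumptions:

```typescript
// Illustrative sketch of a maturity-label gate, not JAK's actual check:truth code.
type MaturityLabel =
  | "real"
  | "heuristic"
  | "llm_passthrough"
  | "config_dependent"
  | "experimental";

interface ToolEntry {
  name: string;
  maturity?: MaturityLabel; // the CI gate fails when this is missing
}

// check:truth-style gate: returns the names of tools that would ship unlabeled.
function unclassified(registry: ToolEntry[]): string[] {
  return registry.filter((t) => t.maturity === undefined).map((t) => t.name);
}

const registry: ToolEntry[] = [
  { name: "linkedin_publish", maturity: "real" },
  { name: "keyword_density", maturity: "heuristic" },
  { name: "mystery_tool" }, // no label: the gate should catch this one
];
```

In CI, a non-empty result fails the build, which is what makes the labels honest rather than aspirational.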
Tamper-evident audit trail
Every workflow run, every approval decision, every external action emits an audit log row. Final evidence packs are HMAC-SHA256 signed.
audit-log plugin · bundle.service.ts
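A minimal sketch of how HMAC-signed audit rows become tamper-evident (HMAC-SHA256 is the page's stated scheme; the row fields, function names, and key handling here are assumptions):

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// Illustrative sketch: each row's signature covers the serialized row, so any
// mutation after the fact breaks verification.
interface AuditRow {
  step: string;
  timestamp: string;
  payload: string;
}

function signRow(row: AuditRow, key: string): string {
  return createHmac("sha256", key).update(JSON.stringify(row)).digest("hex");
}

function verifyRow(row: AuditRow, signature: string, key: string): boolean {
  const expected = Buffer.from(signRow(row, key), "hex");
  const actual = Buffer.from(signature, "hex");
  // timingSafeEqual avoids leaking how many bytes matched; lengths must agree first.
  return expected.length === actual.length && timingSafeEqual(expected, actual);
}
```

An evidence bundle then just signs the concatenation (or a hash chain) of its rows, which is what lets a recipient verify it byte-for-byte.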
Self-hostable open-source core
JAK is MIT-licensed. Run it on your laptop, your VPS, or your cluster. Hosted ops are a convenience, not a lock-in.
github.com/inbharatai/jak-swarm · MIT
OpenAI-first runtime
OpenAI Responses API as the primary path with structured output. Anthropic, Gemini, DeepSeek, Ollama, and OpenRouter remain wired as optional fallback providers.
openai-runtime.ts · Responses API
JAK Shield
AI agents are powerful. JAK Shield makes them safe.
Before an agent touches your code, browser, files, email, GitHub, or business tools, JAK Shield checks permissions, scores risk, blocks unsafe actions, asks for approval where needed, and records every step in a tamper-evident evidence bundle.
Agent Firewall
Detects prompt-injection attacks and offensive-cyber requests (malware, exploits, credential theft, unauthorized scanning, phishing) BEFORE the LLM sees them. Defensive security work — audit my repo, harden auth, find CVEs — passes through.
Every tool call is classified across 6 risk tiers — READ_ONLY through CRITICAL_MANUAL_ONLY. Risky calls pause the workflow. Approval is bound to the exact payload via a SHA-256 hash; replays with modified payloads are rejected with HTTP 409.
packages/tools/src/registry/approval-policy.ts
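The payload binding described above can be sketched roughly as follows (the SHA-256 binding and the HTTP 409 rejection are from the page; the function and field names are assumptions, not JAK's actual API):

```typescript
import { createHash } from "node:crypto";

// Illustrative sketch of payload-bound approval: the approval stores the SHA-256
// of the exact payload the human saw, so anything else is treated as a replay.
const sha256 = (s: string): string => createHash("sha256").update(s).digest("hex");

interface Approval {
  toolCall: string;
  payloadHash: string; // hash of the approved payload, not the payload itself
}

function grantApproval(toolCall: string, approvedPayload: string): Approval {
  return { toolCall, payloadHash: sha256(approvedPayload) };
}

// A mismatched payload is rejected with HTTP 409 Conflict, as the page describes.
function execute(approval: Approval, payload: string): { status: number } {
  return sha256(payload) === approval.payloadHash ? { status: 200 } : { status: 409 };
}
```

Binding to the hash rather than to the tool call alone is the key design choice: an attacker (or a confused agent) cannot reuse a granted approval to send a different message.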
Secure Tool Permissions
Per-tenant tool registry + industry-pack restrictions + Standing Orders (allowed-tools whitelist + blocked-actions list + budget cap + expiry). REVIEWER+ role required to install or run anything destructive.
JAK Shield supports defensive security work — repo audits, dependency scans, secret-leak detection, patch recommendations. Offensive work (writing exploits, generating malware, phishing kits) is blocked at the boundary.
docs/jak-shield-manifest.md
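A Standing Order check could be sketched like this (the four constraints come from the description above; the field names, units, and function shape are assumptions):

```typescript
// Illustrative sketch of a Standing Order: allowed-tools whitelist,
// blocked-actions list, budget cap, and expiry, evaluated in that order.
interface StandingOrder {
  allowedTools: string[];
  blockedActions: string[];
  budgetCapUsd: number;
  expiresAt: Date;
}

function permits(
  order: StandingOrder,
  tool: string,
  action: string,
  spentUsd: number,
  now: Date
): boolean {
  if (now > order.expiresAt) return false;              // order expired
  if (!order.allowedTools.includes(tool)) return false; // not on the whitelist
  if (order.blockedActions.includes(action)) return false; // explicitly blocked
  if (spentUsd >= order.budgetCapUsd) return false;     // budget exhausted
  return true;
}
```

Every check is deny-by-default: a tool not on the whitelist is blocked even if nothing else objects.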
Audit Evidence Layer
Every workflow lifecycle event lands in AuditLog. AgentTrace fields are PII-redacted at write time. workflows.{goal,error,finalOutput,planJson,stateJson} are AES-256-GCM encrypted at rest. Final evidence bundles are HMAC-SHA256 signed and verify byte-for-byte.
apps/api/src/services/bundle.service.ts
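Field-level AES-256-GCM encryption at rest can be sketched as follows (key management is omitted and the function names are assumptions; the cipher is the page's stated one):

```typescript
import { createCipheriv, createDecipheriv, randomBytes } from "node:crypto";

// Illustrative sketch: GCM gives confidentiality plus an auth tag, so a
// tampered ciphertext fails to decrypt instead of yielding garbage.
function encryptField(plaintext: string, key: Buffer): string {
  const iv = randomBytes(12); // 96-bit nonce, the standard size for GCM
  const cipher = createCipheriv("aes-256-gcm", key, iv);
  const ct = Buffer.concat([cipher.update(plaintext, "utf8"), cipher.final()]);
  // Store iv + auth tag + ciphertext together; none of them are secret.
  return [iv, cipher.getAuthTag(), ct].map((b) => b.toString("base64")).join(".");
}

function decryptField(blob: string, key: Buffer): string {
  const [iv, tag, ct] = blob.split(".").map((p) => Buffer.from(p, "base64"));
  const decipher = createDecipheriv("aes-256-gcm", key, iv);
  decipher.setAuthTag(tag); // throws on decrypt if ciphertext or tag was altered
  return Buffer.concat([decipher.update(ct), decipher.final()]).toString("utf8");
}
```

Applied per field (goal, error, finalOutput, planJson, stateJson), this keeps the database readable for everything else while the sensitive columns stay opaque without the key.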
Safety boundary
JAK Shield is built for defensive security, safe automation, permissioned workflows, and audit-ready agent execution. It does not support offensive hacking, malware generation, credential theft, phishing, unauthorized scanning, or exploit generation.
When you need audit-grade
Enterprise-grade auditability when you need it.
You don’t need to think about SOC 2 on day one. Every workflow JAK runs is already tamper-evident, signed, and replayable — so when an enterprise customer asks, the evidence is already there.
63 SOC 2 Type 2 · 37 HIPAA Security Rule · 82 ISO/IEC 27001:2022
182 controls seeded across three frameworks — 108 are operationally backed (evidence pulled from system activity) and 74 require reviewer attestation. LLM-driven control testing, reviewer-gated workpaper PDFs, HMAC-signed final evidence packs.