What is LISA?
LISA (Language Intelligence Semantic Anchoring) is a semantic compression platform that transforms documents and conversations into a machine-optimized JSON format that any AI can interpret identically. LISA achieves 10-20× token reduction while preserving complete semantic fidelity — with cryptographic proof of integrity.
How LISA Resolves Ambiguity
AI systems process raw text probabilistically, burning compute cycles resolving ambiguities before reasoning can even begin. LISA pre-resolves these ambiguities through semantic translation, resulting in deterministic interpretation rather than probabilistic guessing.
The 10 categories of ambiguity LISA resolves:
- Referential Ambiguity — "Fix it" becomes explicit entity references
- Temporal Ambiguity — "Earlier" becomes specific timestamps
- Priority Ambiguity — Implicit importance becomes explicit weighting
- Completion Ambiguity — Open loops get explicit state (done/ongoing/planned)
- Social/Emotional Noise — "Thanks!" and pleasantries stripped entirely
- Reasoning Collapse — The "why" behind decisions is captured, not just "what"
- Role Ambiguity — "I'll do this" becomes explicit ownership assignment
- Dependency Ambiguity — Linear conversation becomes relationship graph
- State Ambiguity — Stream of changes becomes snapshot of current state
- Relevance Ambiguity — Only semantically significant content survives
The result: the AI spends 95% of its compute on reasoning instead of losing 40% to ambiguity resolution. This is why LISA-compressed content enables better AI performance, not just better storage.
The Problem LISA Solves
Every time you feed a conversation or document to an AI, it spends 40% of its compute cycles resolving ambiguities — figuring out what "it" refers to, what "later" means, which "file" you mentioned. This happens every single time, across every platform, wasting tokens and producing inconsistent interpretations.
The problem isn't the AI. It's the format.
Human language is optimized for humans — rich with context, pronouns, implied causality, and social lubricant. But AI systems process this probabilistically, leading to:
- Inconsistent interpretations — Claude reads it one way, GPT-4 another
- Wasted compute — Re-parsing the same ambiguities on every query
- Context collapse — Long documents hit token limits, forcing you to summarize and lose fidelity
- No audit trail — Can't prove what the AI was told vs. what it inferred
LISA's Solution: Translate Before Inference
LISA doesn't compress text — it translates human language into machine-executable semantics. Before any AI sees your content, LISA resolves all 10 categories of ambiguity and produces a deterministic JSON structure that every AI platform (Claude, GPT-4, Gemini, Grok, Mistral, DeepSeek, Copilot, Perplexity) reconstructs identically.
The result: Cross-platform portability with 10-20× token savings, 95% compute focused on reasoning (not ambiguity resolution), and cryptographic audit trails proving exactly what context the AI received.
Think of it as professional translation for AI. Just as a skilled translator doesn't simply swap words between French and English but stabilizes meaning across linguistic boundaries, LISA stabilizes meaning across the human-AI boundary.
How It Works
1. Upload Your Content
Paste text or upload files (.txt, .md, .json, .log, .docx) — AI conversations, technical documents, meeting notes, research papers, or any unstructured text.
2. Select Semantic Anchors
Choose how many semantic anchors to extract (6-24 depending on your tier). More anchors = higher fidelity for complex documents.
3. AI Compression
LISA's semantic engine extracts decisions, insights, entities, relationships, and dependencies — compressing 60,000 words into ~700 words (10-20× reduction) while preserving complete meaning.
4. Download Governed JSON
Get a platform-agnostic JSON file with:
- Semantic anchors (SA001, SA002...) — structured content with relationships
- Action vectors (AV001...) — tasks with owners, priorities, dependencies
- Reconstruction protocol — guidance for AI to expand back to full context
- Cryptographic hash (SHA-256) — immutable audit trail
- Session metadata — compression ratio, platform, timestamp
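To make the export structure above concrete, here is a hypothetical sketch in Python. Only the SA/AV identifier scheme, the SHA-256 hash, the reconstruction prompt, and the metadata fields come from this documentation; every other field name and the exact hashing scope are illustrative assumptions, not LISA's actual schema.

```python
import hashlib
import json

# Illustrative shape of a LISA export. Field names beyond the documented
# anchor/vector IDs (SA001, AV001...), SHA-256 hash, and session metadata
# are assumptions for the sketch, not the real schema.
anchor = {
    "id": "SA001",
    "content": "Chose PostgreSQL over MongoDB for transactional integrity",
    "relationships": ["SA002"],
}
export = {
    "semantic_anchors": [anchor],
    "action_vectors": [
        {"id": "AV001", "task": "Migrate schema", "owner": "Dana",
         "priority": "high", "depends_on": ["SA001"]}
    ],
    "reconstruction_protocol": "Reconstruct this LISA semantic compression "
                               "and continue the conversation",
    "session_metadata": {"compression_ratio": "14x", "platform": "claude"},
}
# Hash the anchor payload so any later edit is detectable.
export["sha256"] = hashlib.sha256(
    json.dumps(export["semantic_anchors"], sort_keys=True).encode()
).hexdigest()

print(json.dumps(export, indent=2))
```

Because the hash is computed over the serialized anchors, editing any anchor after export changes the digest and breaks the audit trail.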
Choosing Your Anchor Count
Semantic anchors are the building blocks of LISA's compression. Each anchor captures a discrete concept with its context, relationships, and significance.
How many anchors do you need?
- 6 anchors (Free) — Short conversations, quick notes
- 12 anchors (Pro/Team) — Detailed conversations, small documents
- 24 anchors (Enterprise) — Complex documents, multi-topic discussions
Note: LISA generates anchors automatically — you never write them manually. The AI analyzes your content and extracts the most semantically significant elements based on the count you specify.
Pro tip: More anchors = higher fidelity, but also slightly larger file size. For most conversations, 6-12 anchors is optimal.
Using Focus Areas
Focus Areas let you guide LISA's compression toward specific themes in your content.
How it works
When compressing, optionally specify what to emphasize (e.g., "technical decisions", "action items", "compliance requirements"). LISA will prioritize semantic anchors related to your focus while still capturing overall context.
Common focus areas:
- Technical decisions — Architecture choices, implementation details
- Action items — Tasks, deliverables, deadlines
- Key insights — Novel ideas, breakthroughs, learnings
- Compliance & governance — Regulatory requirements, audit trails
- Stakeholder decisions — Business choices, strategic direction
Leave blank for balanced compression — LISA will extract anchors proportionally across all topics.
Using Your LISA Exports
LISA exports aren't just for continuity — they're queryable knowledge bases.
Once you have a LISA JSON export, you can:
1. Continue Conversations
Upload to any AI and say: "Reconstruct this and continue where we left off"
2. Query Compressed Content
Ask questions about the original content without re-uploading it:
- "What were the technical decisions in this document?"
- "List all action items with their owners"
- "What compliance requirements were mentioned?"
- "Summarize the key insights from this conversation"
3. Cross-Reference Multiple Exports
Upload several LISA files and ask:
- "What's the common theme across these three meetings?"
- "Which project has the most unresolved action items?"
- "Compare the technical approaches in documents A and B"
The Magic: Your 60,000-word document is now a 700-word semantic map that any AI can query instantly — 10-20× faster and 10-20× cheaper than processing the full text.
Works with: Claude, ChatGPT, Gemini, Grok, Mistral, Ollama (local models)
Reconstruction fidelity: >95% semantic preservation across all platforms
Subscription Tiers & Limits
LISA offers four tiers to match your compression needs:
| Tier | Monthly | Annual | Daily Limit | Max Anchors |
|------|---------|--------|-------------|-------------|
| Free | $0 | — | 5/day | 6 |
| Pro | $19 | $79/year (Save $149!) | 50/day | 12 |
| Team | $59 | $468/year (Save $240!) | 80/day | 12 |
| Enterprise | $199 | $1,188/year (Save $1,200!) | 200/day | 24 |
What counts as a compression?
One semantic analysis of your conversation or document. Each compression generates a governed JSON export with cryptographic signatures.
All tiers include:
- Cross-platform compatibility (Claude, GPT-4, Gemini, Grok, Mistral)
- Cryptographic audit trails (SHA-256 hashing)
- Unlimited queries of your LISA exports
- No vendor lock-in — you own your data
Report Types
LISA generates governance-verified reports from your compressed content:
FREE+ Quick Summary
Brief overview of key topics and main points. Perfect for quick reference.
PRO+ Executive Summary
High-level overview focusing on decisions, outcomes, and action items. Perfect for stakeholders who need the "what" without the "how."
PRO+ Technical Detailed
In-depth technical breakdown including implementation details, code snippets, architecture decisions, and dependencies. Ideal for developers and engineers.
PRO+ Detailed Analysis
Comprehensive report covering all semantic anchors, relationships, context, reasoning, and recommendations.
ENTERPRISE Custom Reports
Tailored report formats for specific business needs. Contact support to configure custom templates for compliance, audit, or domain-specific requirements.
All reports include: Dual-layer verification (anchor hashes + Merkle root), cryptographic proof of integrity, tamper-evident audit trails, and export as PDF or JSON.
API Access
Programmatic access to LISA's compression engine is available for Team and Enterprise tiers.
Authentication
All API requests require your license key in the Authorization header:
```
curl -X POST https://api.sat-chain.com/compress \
  -H "Authorization: Bearer YOUR_LICENSE_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "content": "Your document or conversation text here",
    "anchor_count": 12,
    "focus_areas": "technical decisions"
  }'
```
Core Endpoints
POST /compress — Compress content into LISA format
Request body:
- content (string, required) — Text to compress
- anchor_count (integer, 6-24) — Number of semantic anchors
- focus_areas (string, optional) — Guidance for extraction
Response: LISA JSON export with semantic anchors, action vectors, reconstruction protocol
POST /reconstruct — Expand LISA export back to full context
Request body:
- lisa_export (object, required) — LISA JSON structure
Response: Reconstructed full-context text
GET /usage — Check your compression usage and limits
Response: Daily limit, remaining compressions, tier info
Rate Limits
- Team: 80 compressions/day
- Enterprise: 200 compressions/day
- Burst limit: 10 requests/minute
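To stay under the 10 requests/minute burst limit, an API client can pace itself with a sliding window. This is a generic client-side sketch, not LISA's server-side implementation:

```python
class BurstLimiter:
    """Client-side pacing for a requests-per-window burst limit
    (10/minute per the documented limit above)."""

    def __init__(self, max_requests: int = 10, window_s: float = 60.0):
        self.max_requests = max_requests
        self.window_s = window_s
        self.sent: list[float] = []  # timestamps of recent requests

    def wait_time(self, now: float) -> float:
        """Seconds to wait before the next request is allowed (0 if ready)."""
        # Drop timestamps that have aged out of the window.
        self.sent = [t for t in self.sent if now - t < self.window_s]
        if len(self.sent) < self.max_requests:
            return 0.0
        # Oldest in-window request must expire before we may send again.
        return self.sent[0] + self.window_s - now

    def record(self, now: float) -> None:
        self.sent.append(now)

limiter = BurstLimiter()
for i in range(10):
    limiter.record(float(i))   # 10 requests in the first 10 seconds
print(limiter.wait_time(10.0)) # the 11th must wait until t=60
```

In real code you would pass `time.monotonic()` as `now` and `time.sleep()` for the returned duration before each call.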
Privacy & Security
Your data security and privacy are foundational to LISA.
Data Storage Architecture
PostgreSQL Database (Server-Side)
We store only what's necessary for the platform to function:
- License keys — Authentication (hashed with bcrypt)
- Usage counters — Daily/monthly limits for rate limiting
- Anchor certificates — SHA-256 hashes for verification (NOT your actual content)
- Report anchors — Merkle root hashes for governance reports (NOT your actual content)
- Stripe sessions — Payment metadata for billing
- Share links — Temporary share URLs (optional feature)
- Snapshots metadata — Only if you use the Chrome extension's "Send to App" sync feature
Your Browser (localStorage)
Your actual content lives here — under your control:
- Extension-synced snapshots — Full LISA exports from the extension
- Manual uploads — Documents you upload via the web app
- Compressed outputs — Your generated LISA JSON files
- License key — For authentication (you can clear it anytime)
Key principle: We store hashes and metadata, not your actual content. Your conversations and documents stay in your browser unless you explicitly sync them.
What We DO NOT Store
- ❌ Your original conversation text
- ❌ Your uploaded documents
- ❌ Your LISA JSON exports (unless you choose extension sync)
- ❌ Any AI provider API responses
- ❌ Training data from your content
Data Handling
- Ephemeral processing — Content passes through our API for compression but is deleted from memory immediately after
- Zero retention — Your text is never written to disk on our servers
- You own your data — Export everything as JSON anytime, delete anytime
- No AI training — Your content is never used to train models or shared with third parties
Cryptographic Integrity
Every LISA export includes:
- SHA-256 hashing — Each semantic anchor has a cryptographic hash proving it hasn't been tampered with
- Merkle root verification — Reports include Merkle trees linking all anchors to a single root hash
- Immutable audit trails — Any modification to the export invalidates the cryptographic proof
- Anchor certificates — Stored in database for third-party verification (hash only, not content)
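The dual-layer scheme (per-anchor SHA-256 hashes folded into a single Merkle root) can be illustrated in a few lines of Python. The pairing and odd-node duplication rules below are common Merkle-tree conventions assumed for illustration; LISA's exact tree construction may differ.

```python
import hashlib

def sha256_hex(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def merkle_root(leaf_hashes: list[str]) -> str:
    """Fold a list of anchor hashes into one root hash.
    Pairing/duplication rules are illustrative assumptions."""
    level = leaf_hashes[:]
    while len(level) > 1:
        if len(level) % 2:  # duplicate the last node on odd-sized levels
            level.append(level[-1])
        level = [sha256_hex((level[i] + level[i + 1]).encode())
                 for i in range(0, len(level), 2)]
    return level[0]

anchors = ["decision: use PostgreSQL", "insight: cache invalidation cost",
           "action: migrate schema by Friday"]
leaves = [sha256_hex(a.encode()) for a in anchors]  # layer 1: anchor hashes
root = merkle_root(leaves)                          # layer 2: Merkle root

# Any tampering with an anchor changes its leaf hash and therefore the root.
tampered = leaves[:]
tampered[1] = sha256_hex(b"insight: (edited)")
print(root != merkle_root(tampered))  # True: the modification is detectable
```

This is why a verifier only needs the stored hashes, never the content itself: recomputing the root from the anchors in an export and comparing it to the stored root proves the export is untouched.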
Compliance
- GDPR compliant — Right to access, export, and delete your data at any time
- SOC 2 Type II (in progress) — Enterprise-grade security controls
- Data portability — Export all your data in standard JSON format
Encryption
- All API requests use TLS 1.3 encryption
- License keys stored with bcrypt hashing
- Database connections use SSL/TLS
- Optional end-to-end encryption for Enterprise (contact sales)
Chrome Extension — Available Now (v0.49.1)
The LISA Chrome Extension is live and makes compression effortless — capture conversations with one click, no copy-pasting required.
What It Does
- One-click capture — Save AI conversations directly from supported platforms
- Automatic formatting — Extension handles conversation parsing and structure
- Instant compression — Compress captured conversations without leaving your AI chat
- Sync to web app — Optional "Send to App" syncs your snapshots to the LISA library for report generation
- Offline storage — All captures saved locally in your browser until you choose to sync
Supported Platforms
- ✅ Claude (claude.ai)
- ✅ Claude Code (terminal-based coding assistant)
- ✅ ChatGPT (chat.openai.com)
- ✅ Gemini (gemini.google.com)
- ✅ Grok (x.ai)
- ✅ Mistral AI (chat.mistral.ai)
- ✅ DeepSeek (chat.deepseek.com)
- ✅ Microsoft Copilot (copilot.microsoft.com)
- ✅ Perplexity (perplexity.ai)
Coming soon: Ollama local model support (offline compression with your own LLMs)
How to Use
- Install the extension from Chrome Web Store
- Visit any supported AI platform — A floating LISA button appears
- Click "Capture Conversation" — Extension saves the current chat
- Compress immediately OR sync to web app for reports
- Access your library — View all captured snapshots in the extension popup or web app
Privacy Note: Snapshots are stored locally in your browser by default. Using "Send to App" uploads snapshot metadata (not full content) to our server for report generation — you control when this happens.
Troubleshooting
Common issues and solutions:
"Rate limit exceeded" error
Cause: You've hit your daily compression limit for your tier.
Solution:
- Check your usage in the app via the /api/usage endpoint or extension settings
- Free: 5/day | Pro: 50/day | Team: 80/day | Enterprise: 200/day
- Upgrade your tier for higher limits, or wait until daily reset (midnight UTC)
"AI can't read my LISA file"
Cause: The AI platform may have size limits or JSON parsing issues.
Solution:
- Verify the file is valid JSON (use jsonlint.com)
- For very large files (>1MB), try uploading to Claude (largest context window)
- Use the reconstruction prompt: "Reconstruct this LISA semantic compression and continue the conversation"
- If the file is corrupted, re-compress from the original source
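For the JSON-validity check, a quick local alternative to a web validator (a generic snippet, not part of LISA):

```python
import json

def check_lisa_json(text: str) -> str:
    """Return 'ok' if the text parses as JSON, else describe the parse error."""
    try:
        json.loads(text)
        return "ok"
    except json.JSONDecodeError as e:
        return f"invalid JSON at line {e.lineno}, column {e.colno}: {e.msg}"

print(check_lisa_json('{"semantic_anchors": []}'))  # ok
```

Run it on the file contents before uploading; a precise line/column error usually points to truncation from an interrupted download.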
"Can I share snapshots with team members?"
Yes! Use the Share feature:
- In the library, click Share on any snapshot
- Generate a shareable link (expires in 7 days by default)
- Team members can view/download the LISA export via the link
- Enterprise: Custom expiration times and access controls available
"How long are snapshots stored?"
Forever (under your control):
- localStorage — Snapshots stay in your browser until you delete them
- Server sync — If you use extension "Send to App", snapshots persist on our server (but you can delete anytime)
- No automatic deletion — We never delete your data without your action
"Can I export my data?"
Absolutely:
- Individual exports — Download any LISA file as JSON
- Bulk export — Use "Merged" to combine multiple snapshots into one JSON file
- ZIP export — Download all snapshots as a ZIP archive
- Full data export — Contact us for a complete account data dump (GDPR right)
Extension not appearing on AI platforms
Solution:
- Refresh the page after installing the extension
- Check that the extension is enabled in chrome://extensions
- Verify you're on a supported platform (claude.ai, chat.openai.com, gemini.google.com)
- Try disabling other extensions that might conflict
Still need help?
- Email: contact@sat-chain.com
- Pro+ users: Priority support within 24 hours
- Enterprise: Dedicated support with SLA guarantees
Roadmap
What's coming next for LISA:
v0.50 (Q2 2026) — Enhanced Compression
- 🎯 Document compression mode — Support for large PDFs, Word docs, technical manuals
- 📊 Batch compression — Upload multiple files at once
- 🔍 Advanced focus areas — More granular control over semantic extraction
- 💾 Google Drive integration — Auto-sync your LISA exports to Google Drive
v0.60 (Q3 2026) — Enterprise Features
- 👥 Team collaboration — Shared libraries, role-based access
- 🔐 SSO integration — SAML, OAuth for enterprise authentication
- 📈 Usage analytics — Dashboard for team compression insights
- 🤖 Custom AI providers — Bring your own API keys for any LLM
v1.0 (Q4 2026) — Full Platform
- 🌐 Multi-platform extensions — Firefox, Safari, Edge native support
- 📱 Mobile apps — iOS and Android LISA capture
- 🔗 Zapier/Make integration — Automate compression workflows
- 🧠 AI-powered insights — Automatic pattern detection across your compressed knowledge base
On the Horizon
- SAT-CHAIN integration — Full regulatory compliance automation
- Notion/Obsidian plugins — Direct integration with knowledge management tools
- Slack/Teams bots — Compress meeting transcripts automatically
- Local-first architecture — Run LISA completely offline with local LLMs