Mr. Latte


DeepThought

Live · productivity · https://deepthought.me

Open in the browser, structure thinking in seconds, publish the result as a link, and let other people fork it into their own draft. Zero install, free, with browser-side AI assistance.

Visit DeepThought

Positioning

DeepThought is a free mind map studio that lives entirely in the browser — quick to structure thinking, polished enough to publish, and open enough that someone else can take a published map and fork it into their own draft. It deliberately sits between “heavyweight team whiteboards” and “thin sketchpads inside note apps.”

Market and Problem

Mind mapping tools have split into two camps. (1) Collaboration boards (Miro / FigJam / Whimsical) — feature-heavy, gated by pricing as teams scale. (2) Mind-map widgets inside note apps — light, but the output is too plain to share. The gap in between — “structure alone, fast, with a publishable result that others can build on” — has been underserved.

DeepThought intentionally drops collaboration features and instead invests in publishing and forking as the social loop. Think alone, share lightly.

Core Audience and Personas

  • Students, knowledge workers, founders — Anyone who needs structured thinking right before a meeting, a draft, or a planning sprint
  • Bloggers, newsletter writers, course creators — People who want to publish their thinking trail visually alongside the prose
  • Refugees from heavier tools — Users who left Miro for being too heavy or Notion for being too thin

Value Proposition and Differentiation

  • Time-to-first-node ≈ 0 — No install, signup, payment, or tutorial gate. Open the URL → start typing. Friction removed at the system level
  • Publishable mind maps — Each map gets a URL. Others can browse, and if they like it, fork into their own draft to continue. The map becomes a shareable thinking artifact, not just a personal note
  • Browser-side AI assistance — @runanywhere/web runs the model locally. Your data does not leave the browser, the LLM API bill is zero, and assistance keeps working offline. The only viable path to making AI assistance free by default
  • Free + opt-in account — Anonymous immediate use, lightweight authentication only at the moment of publishing

Core User Flow

  • Enter immediately: Hit deepthought.me, blank canvas appears (no login or tutorial)
  • Add nodes: Central topic → branch outward via keyboard / click
  • AI assist (optional): “Expand this node” runs through the in-browser LLM
  • Publish (optional): Map gets a URL → share. Others browse, or fork it into their own draft
  • Continue from someone else’s: Fork a published map and keep building

Draft → publish → fork loop

flowchart LR
    Open["Browser open<br/>blank canvas instantly"]
    Edit["Edit nodes<br/>(keyboard / click)"]
    AI["AI assist<br/>browser-side LLM<br/>(optional)"]
    Publish["Publish<br/>URL assigned"]
    Browse["Other users<br/>browse via URL"]
    Fork["Fork<br/>copy into own draft"]
    Open --> Edit
    Edit -.-> AI
    AI -.-> Edit
    Edit --> Publish
    Publish --> Browse
    Browse --> Fork
    Fork --> Edit

Publish → browse → fork is a closed loop — taking someone else’s structure into your own thinking happens entirely inside the system.
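One way to make that loop concrete: a fork is just a copy with lineage recorded. A minimal sketch of the map record shape this implies — the field names (slug, forkedFrom) are illustrative, not the real DeepThought schema:

```typescript
// Sketch of a map record that makes the publish → fork loop cheap.
// Field names (slug, forkedFrom) are assumptions, not the real schema.
interface MapNode {
  id: string;
  text: string;
  parentId: string | null;
}

interface MindMap {
  id: string;
  title: string;
  nodes: MapNode[];
  slug?: string;       // present only once published
  forkedFrom?: string; // slug of the published map this draft came from
}

// Forking is a pure copy: new id, no slug (it's a draft again), lineage kept.
function forkMap(published: MindMap, newId: string): MindMap {
  return {
    id: newId,
    title: published.title,
    nodes: published.nodes.map((n) => ({ ...n })), // copy flat node objects
    forkedFrom: published.slug,
  };
}
```

Because the fork carries only a `forkedFrom` pointer rather than a live link, edits to the fork never touch the published original.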

Business Model Hypothesis

Monetization will be validated in stages.

  1. Pro workspace (hypothesis) — Unlimited published maps, private publishing, custom domain, team folders. Free remains genuinely useful; pricing covers control surfaces only
  2. Embeddable iframe licensing — A tool to embed maps in blogs, newsletters, and course pages
  3. Education / enterprise white-label — Schools and companies running DeepThought under their own domain

System Architecture (planning decisions become system structure)

Two planning decisions drove the architecture.

1. “AI assistance must run at near-zero cost to stay free.” → A typical LLM API integration scales cost linearly with users. DeepThought uses @runanywhere/web + WASM/WebGPU to run the model in the user’s browser. The user’s device pays for the compute; the operator pays only for CDN delivery of the model files. That is the only path that lets AI assistance stay on by default in a free tool.
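Under that constraint, the client has to pick the fastest runtime it can get. A sketch of the backend choice — WebGPU when the API exists and an adapter is granted, WASM otherwise. The function names are illustrative, not the @runanywhere/web API; `navigator.gpu` is the standard WebGPU entry point:

```typescript
type InferenceBackend = "webgpu" | "wasm";

// Pure decision: WebGPU only when the API exists and an adapter was granted.
function chooseBackend(hasGpuApi: boolean, adapterGranted: boolean): InferenceBackend {
  return hasGpuApi && adapterGranted ? "webgpu" : "wasm";
}

// Browser entry point (sketch). navigator.gpu is absent on older browsers,
// and requestAdapter() can still resolve to null (e.g. blocklisted GPUs).
async function pickBackend(): Promise<InferenceBackend> {
  const gpu = (globalThis as any).navigator?.gpu;
  const adapter = gpu ? await gpu.requestAdapter() : null;
  return chooseBackend(Boolean(gpu), adapter !== null);
}
```

Keeping the decision in a pure function means the fallback logic is testable without a browser.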

2. “Publishing should be lightweight.” → Publishing skips a dedicated publish server. Maps are written to a Firestore published collection with a URL slug; browsing is static fetch + client render; forking copies the published map into the user’s drafts. No realtime collaboration server, no complex permission graph.
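With no publish server, the whole operation reduces to deriving a URL-safe slug and writing one document (in Firebase's modular API, a single `setDoc(doc(db, "published", slug), map)`). A sketch of the slug derivation — the suffix scheme and length cap are assumptions, not the real DeepThought rules:

```typescript
// Derive a URL slug for a published map: url-safe base from the title plus a
// short random suffix to avoid collisions. Suffix scheme and the 48-char cap
// are illustrative assumptions.
function slugify(title: string, suffix: string): string {
  const base = title
    .toLowerCase()
    .trim()
    .replace(/[^a-z0-9]+/g, "-") // collapse non-alphanumeric runs to "-"
    .replace(/^-+|-+$/g, "")      // trim leading/trailing dashes
    .slice(0, 48);                // keep URLs short
  return base ? `${base}-${suffix}` : suffix;
}

// Publishing then becomes (sketch, Firebase v9 modular API):
// await setDoc(doc(db, "published", slugify(map.title, randomSuffix())), map);
```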

flowchart TD
    Web["Vite + React 19<br/>web app"]
    LLM["Browser LLM<br/>@runanywhere/web<br/>(WASM/WebGPU)"]
    Auth["Firebase Auth<br/>(only at publish)"]
    FS[("Firestore<br/>drafts · published<br/>users")]
    CDN["CloudFront → S3<br/>deepthought.me/web"]
    Web -.-> LLM
    Web --> Auth
    Web --> FS
    CDN --> Web

The LLM edge is dotted — entirely client-side, no server hop.

Technology Choices and Trade-offs

  • Frontend: Vite + React 19 + TypeScript. Canvas rendered with custom SVG (zoom, pan, node edit are simple enough to own). Avoids external mind-map library so the data shape stays under our control for publish/fork
  • AI layer: @runanywhere/web + @runanywhere/web-llamacpp. In-browser LLM inference. First visit pays a few hundred MB model download; subsequent visits are offline + free
  • Backend / auth / storage: Firebase Auth + Firestore. Workload is light enough not to justify a self-hosted backend (same reasoning as The Weple’s Firebase choice)
  • Hosting: Build artifacts on the workspace standard — S3 (deepthought.me/web/) + CloudFront. SPA + react-helmet for OG metadata of published maps
  • What was dropped: Realtime collaboration (Miro’s territory — we own publish/fork), elaborate node styling (focus stays on structure), native mobile (browser-first; PWA validated before deciding)
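Owning the canvas means owning the zoom math. A sketch of zoom-toward-cursor on an SVG viewBox — the kind of logic a custom canvas carries instead of a library; the interface shape is illustrative:

```typescript
// Scale the SVG viewBox around the pointer so the world point under the
// cursor stays fixed on screen. factor < 1 zooms in, factor > 1 zooms out.
interface ViewBox { x: number; y: number; w: number; h: number }

function zoomAt(vb: ViewBox, px: number, py: number, factor: number): ViewBox {
  // (px, py) is the pointer position in current viewBox (world) coordinates.
  return {
    x: px - (px - vb.x) * factor, // keep the pointer's world point stationary
    y: py - (py - vb.y) * factor,
    w: vb.w * factor,
    h: vb.h * factor,
  };
}
```

Panning is the even simpler case: translate `x`/`y` by the pointer delta scaled by `w / clientWidth`.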

Operational Automation

  • Model caching — LLM model files cached once via CDN + Service Worker, instantly available on return visits
  • Published OG auto-composition — Each published map URL auto-generates an OG image with a thumbnail of the map so social shares show what the map actually looks like
  • Build guardrails — tsc + theme-lint enforced; new components stay design-consistent by construction
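The model-caching bullet above amounts to a cache-first Service Worker strategy over the Cache Storage API. A sketch — the `/models/` path prefix and the cache name are assumptions about where model files live:

```typescript
// Cache-first strategy for the multi-hundred-MB model files: serve from
// Cache Storage when present, otherwise fetch once and store the copy.
// "/models/" prefix and cache name are illustrative assumptions.
const MODEL_CACHE = "llm-models-v1";

function isModelRequest(url: string): boolean {
  return new URL(url).pathname.startsWith("/models/");
}

// Inside the service worker (sketch):
// self.addEventListener("fetch", (event: FetchEvent) => {
//   if (!isModelRequest(event.request.url)) return;
//   event.respondWith(
//     caches.open(MODEL_CACHE).then(async (cache) => {
//       const hit = await cache.match(event.request);
//       if (hit) return hit;                          // instant on return visits
//       const res = await fetch(event.request);
//       await cache.put(event.request, res.clone());  // pay the download once
//       return res;
//     })
//   );
// });
```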

Current State and Operational Signals

  • Status: Live. Instant open, node edit, publish, fork all operational
  • Started: 2026-02 (first release on Vite + React + Firebase + browser LLM)
  • Infrastructure: Firebase (Auth + Firestore) + S3 (deepthought.me/web/) + CloudFront
  • Verification signals: Anonymous-to-publish conversion rate, fork rate of published maps, and AI-assist usage frequency are under active monitoring

Retrospective and Next Hypotheses

  • What worked: Reducing entry friction to near zero — first node typed in under five seconds on average. The browser-LLM bet wins on cost, privacy, and offline at the same time
  • What I would redo: Exposing too many node styling options early — users spent time picking colors / shapes instead of building structure. Auto-styled nodes with fine-tuning available only at publish would have flowed better
  • Next hypotheses: (1) Pro workspace (unlimited / private / custom domain), (2) Embeddable iframe license, (3) Discovery / search across published maps (today is direct URL share only), (4) Browser-LLM model upgrade (re-evaluate the size / quality trade-off)

Comparable Engagements

The capabilities developed solo on DeepThought transfer cleanly to other domains.

  • Browser-LLM-based free or low-cost AI tools — Any free-tool category where API cost would otherwise block monetization. Move compute to the user’s device by design
  • Content tools where publish / fork / remix is the loop — Mind maps, notes, diagrams, code, design — domains where “build on someone else’s result” is natural
  • Zero-friction tool onboarding — Pushing all signup / payment / tutorial gates behind first-use without breaking the business model
  • Privacy-first AI products — Healthcare, legal, education domains where “data never leaves the browser” is the differentiator
  • Firebase-based lightweight publishing + social backends — Bundling auth, static publishing, and permission boundaries on a single platform

I prefer engagements where one person carries the work end to end. Reach me via /work-with-me or /contact.

Looking for a product partner? Founders, teams, businesses — from problem framing to launch.