Deepnote

Software Development

Data workspace where agents and humans work together.

About us

Deepnote is a data workspace where agents and humans work together. It's designed to simplify data exploration, accelerate analysis, and quickly deliver actionable insights for you and your team. Unlike outdated tools such as Jupyter, Deepnote is built with the next decade in mind. Deepnote gives anyone working with data superpowers. It unifies your data workflow through an integrated semantic layer, preparing your data for advanced AI applications. You can also leverage our AI data copilot to chat with your data, create charts, write code, or turn your AI notebooks into fully-fledged data dashboards or apps. Combine data, SQL or Python code, and visualizations side by side on a flexible canvas, enhanced with cutting-edge AI reasoning models.

🤖 Analyze with AI
• Generate code and visualizations by describing your goal.
• Auto-write, run, and debug code with AI.
• Move faster with context-aware AI suggestions.

🔗 Unify
• Connect to 60+ data sources like BigQuery, Snowflake, and PostgreSQL.
• Combine Python and SQL in one notebook.
• Build reusable ETL, analytics, and metric modules.
• Create a semantic layer with shared definitions and trusted metrics.

⚖️ Scale
• Instantly boost compute power, with more included than Colab.
• Schedule jobs and get notified with fresh results.
• Organize work in projects and folders for team clarity.
• Manage workflows via REST API.

🚀 Launch
• Turn notebooks into dashboards or data apps, natively or with Streamlit.
• Let users explore data with interactive inputs.
• Share secure, live apps in one click.

Website
https://www.deepnote.com
Industry
Software Development
Company size
11-50 employees
Headquarters
San Francisco
Type
Privately Held

Updates

  • Deepnote reposted this

    View profile for Jakub Jurových

    Deepnote · 7K followers

    Opus 4.7 dropped yesterday. My experience so far: it's not meaningfully better than where 4.6 started, and on some benchmarks it's even worse. Better at some things, but auto mode has been shipping confident mistakes, and the reviews on Reddit and X support this. One thing I noticed: Anthropic reset everyone's usage limits on release day. It didn't matter how much you had left, which sucks for people who were saving up for the weekend (me). This happens with every major release now.

    • Chart titled “BullshitBench v2” ranking AI models by their ability to push back on incorrect claims, with Claude Sonnet 4.6 and Claude Opus models leading, showing around 83–91% clear pushback rates.
  • Deepnote reposted this

    View profile for Jakub Jurových

    Deepnote · 7K followers

    Tokenmaxxing is the new proxy for company productivity. Last night: Zero agents running. Zero tokens being processed. They could have been doing something useful. That's the new waste. Not idle servers. Idle intelligence. It’s expected that a $500K engineer would spend at least $250K on tokens. If your engineers are doing manually what an agent could handle, you're paying senior salaries for junior work. How many tokens did you burn today, and was it enough?

    • Screenshot of a post by Jyoti Mann claiming Meta employees are “tokenmaxxing” and competing on an internal leaderboard called “Claudeonomics,” with total usage reaching 60 trillion tokens in 30 days.
  • Deepnote reposted this

    View profile for Jakub Jurových

    Deepnote · 7K followers

    The cheapest way to run Claude is to make it talk like Kevin from The Office. Someone built a Claude Code skill that makes Claude talk like a caveman. It cuts token usage by 75%. A React re-rendering explanation that normally takes 69 tokens drops to 19. A web search task goes from 180 to 45. Bug explanations save up to 87%. Thinking tokens stay untouched, so reasoning quality is preserved. There was even a paper that found that forcing brevity actually improved accuracy by 26pp. Why waste many token when few do trick.
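    Taking the figures quoted in the post at face value, the savings work out as follows. This is just an arithmetic check on the numbers given (69 → 19 and 180 → 45 tokens), nothing more:

    ```python
    # Token counts quoted in the post for the caveman-style skill (source figures).
    savings = {
        "react_rerender_explanation": (69, 19),
        "web_search_task": (180, 45),
    }

    def pct_saved(before: int, after: int) -> float:
        """Percentage of output tokens saved by the terse style."""
        return round(100 * (before - after) / before, 1)

    for task, (before, after) in savings.items():
        print(f"{task}: {before} -> {after} tokens ({pct_saved(before, after)}% saved)")
    # The second figure matches the headline "cuts token usage by 75%" claim.
    ```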

  • Deepnote reposted this

    View profile for Jakub Jurových

    Deepnote · 7K followers

    Your company will be running dozens of agents by year's end. Most will be useless. All of them will be billing you. We've seen this before with dashboards. Someone creates the most important dashboard everyone should be looking at, then forgets about it 3 days later. Amplitude / Tableau / Power BI becomes a graveyard of dashboards. Messy, but harmless. Agents are different. When no one cares about an agent anymore, it keeps running. Consuming tokens. Every day. Forever. A forgotten dashboard has a viewing history. No one opened it in two years? Delete it. A forgotten agent has no equivalent. It just runs. The only way to find out if it's still useful is to turn it off and see if anyone complains. Nobody volunteers for that.

    • X post by Jakub Jurovych, talking about AI agents in companies.
  • Which AI data viz tool makes it hardest to ship a wrong answer? We gave 9 of them the same dataset and the same 4 prompts. Some nailed the math but couldn't show their work. Others produced confident charts built on wrong assumptions. Get the full breakdown with scores, screenshots, and the dataset so you can try it yourself. Link in comments.

  • Deepnote reposted this

    View profile for Jakub Jurových

    Deepnote · 7K followers

    I thought TOON was a joke, but… It works. We just merged TOON output into deepnote/cli, so you can pipe CLI results into an LLM without paying the JSON tax (we’re seeing ~30% token savings in practice). What’s TOON? A lossless encoding of the JSON data model that stays human-readable, but is far more token-efficient. It uses YAML-like indentation for nested objects and a CSV-like layout for uniform arrays (declare fields once, stream rows). That combination is unexpectedly LLM-friendly. It reads like it was designed for models: fewer distractions, clearer structure, and great ergonomics for tabular-ish outputs. Just be careful, third-party benchmarks and our experience show it can underperform on deeply nested, irregular, non-uniform structures. But it works great for tabular data, and that’s our primary use case. If you’re already using Deepnote OSS, this is immediately useful anywhere you turn CLI output into LLM context (agent tools, eval harnesses, lightweight RAG, automated debugging).

    • Token-Oriented Object Notation (TOON)
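    The tabular layout described in the post (declare fields once, then stream CSV-like rows under a YAML-like header) can be sketched roughly like this. `toonish` is a hypothetical illustration of the idea, not the official TOON encoder, and it skips the spec's quoting and escaping rules:

    ```python
    def toonish(name: str, rows: list[dict]) -> str:
        """Sketch of a TOON-style layout for a uniform array of objects:
        a header declares the array length and field names once, then each
        record becomes one comma-separated row. Illustrative only."""
        fields = list(rows[0])  # assume every row has the same keys
        lines = [f"{name}[{len(rows)}]{{{','.join(fields)}}}:"]
        for row in rows:
            lines.append("  " + ",".join(str(row[f]) for f in fields))
        return "\n".join(lines)

    users = [
        {"id": 1, "name": "alice", "role": "admin"},
        {"id": 2, "name": "bob", "role": "viewer"},
    ]
    print(toonish("users", users))
    # users[2]{id,name,role}:
    #   1,alice,admin
    #   2,bob,viewer
    ```

    Compared with the equivalent JSON, the field names appear once instead of once per record, which is where the token savings on uniform, tabular data come from.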
  • Deepnote reposted this

    View profile for Jakub Jurových

    Deepnote · 7K followers

    Anthropic just accidentally leaked their entire Claude Code source in an npm package. I went through the repo. Boy, is it messy. But it was also a fun read, and they've spoiled some upcoming releases.

    Under the hood
    - Anti-distillation: ANTI_DISTILLATION_CC injects fake tool definitions into API requests. If someone scrapes the traffic to train a competing model, they get poisoned data.
    - Undercover Mode: When Anthropic engineers use Claude on open-source repos, it prevents the AI from revealing itself or leaking codenames. Commits look human. No force-off switch.
    - Client attestation: every API request includes a computed hash to verify it came from a real install, not a wrapper or scraper.
    - They detect user frustration with a hardcoded regex chain for swear words. Not an LLM call (if anyone from Anthropic is reading this, you forgot 'jfc').

    The unreleased features
    - "Dream" system: a background memory consolidation engine. Three-gate trigger (24h since last dream + 5 sessions + consolidation lock). Four phases: orient, gather, consolidate, prune. The prompt literally says "You are performing a dream." Read-only access to your project. It's purely reflective.
    - KAIROS: always-on Claude that doesn't wait for you to type. It watches, logs, and proactively acts on things it notices. 15-second blocking budget so it doesn't annoy you. Gets exclusive tools like PushNotification and SubscribePR.
    - Coordinator Mode: turns Claude Code into a multi-agent system. Research workers run in parallel, a coordinator synthesizes, implementation workers execute per spec, verification workers test. The prompt bans lazy delegation: "Do NOT say 'based on your findings.' Read the actual findings."
    - BUDDY: a full Tamagotchi pet system. 18 species across 5 rarity tiers. Gacha mechanics. Shiny variants. Stats including CHAOS, SNARK, and DEBUGGING. A possible April Fools easter egg.
    - ULTRAPLAN: offloads complex planning to a remote Opus session with 30 minutes of thinking time.

    The internal culture
    - Model codenames: Tengu is Claude Code's internal project name (hundreds of feature flags start with tengu_). Fennec was the predecessor to Opus (migration path: fennec-latest → opus). Capybara is the model behind Opus 4.6 (hex-encoded in the buddy system to dodge build canaries). Opus-4-7 and Sonnet-4-8 are referenced as planned future versions.
    - The permission system has a mode called "yolo" which ironically means deny-all.

    Everyone's debating whether this matters. "It's just a client, there's no moat." Maybe. But every competitor now has a detailed blueprint of the most popular AI coding agent. The architecture, the feature roadmap, the multi-agent patterns, and the prompt engineering. Any team can point an agent at this repo and have a full summary in 20 minutes, just like I did. The leading AI labs now have to decide whether to look at the most useful codebase that just landed in their lap, or pretend they didn't see it. We all know what they'll do.

    • Claude Code leaked feature flags table: 16 internal flags including KAIROS (proactive agent), COORDINATOR_MODE (multi-agent orchestration), ULTRAPLAN (extended planning), BUDDY (companion easter egg), VOICE_MODE, DREAM system, and more — extracted from the accidentally published npm source map.
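    The client-attestation idea mentioned in the post (a computed hash on every API request proving it came from a real install) has a familiar general shape. A minimal sketch follows; the field name `x-client-attestation`, the HMAC-SHA256 construction, and the per-install secret are all assumptions for illustration, since the leaked client's actual algorithm is not a public spec:

    ```python
    import hashlib
    import hmac
    import json

    def attach_attestation(payload: dict, install_secret: bytes) -> dict:
        """Hypothetical client side: HMAC a canonical serialization of the
        request body with a per-install secret and attach the digest."""
        canonical = json.dumps(payload, sort_keys=True, separators=(",", ":")).encode()
        digest = hmac.new(install_secret, canonical, hashlib.sha256).hexdigest()
        return {**payload, "x-client-attestation": digest}

    def verify_attestation(request: dict, install_secret: bytes) -> bool:
        """Hypothetical server side: recompute the digest over everything
        except the attestation field and compare in constant time."""
        claimed = request.get("x-client-attestation", "")
        body = {k: v for k, v in request.items() if k != "x-client-attestation"}
        expected = attach_attestation(body, install_secret)["x-client-attestation"]
        return hmac.compare_digest(claimed, expected)
    ```

    A wrapper or scraper that doesn't hold the install secret can't produce a valid digest, which is the point of such a check; it is, of course, only as strong as the secret's protection inside the client.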
  • Deepnote reposted this

    View profile for Jakub Jurových

    Deepnote · 7K followers

    Writing code is free now. Editing it well is the new premium skill. AI is incredibly fast at generating code, we already know that. It's also incredibly fast at generating the wrong abstraction, the unnecessary edge case handler, and the verbose version of something simple. To get the most out of AI, you need to look at the output and know what's excess. That takes taste. It takes knowing the codebase. It takes understanding the problem deeply enough to spot the simpler solution underneath.

    • Screenshot of a post by Kyle Gawley saying he removed 87 lines of AI generated code and replaced them with 2 lines of human code.
