AI

General-purpose AI copilots — the cognitive layer underneath every memo, model, and WhatsApp deal triage at an Indian fund.

6 tools in this beat
Claude
Featured

Anthropic's assistant favoured for long-context reasoning and writing

ChatGPT
Featured

The most widely adopted AI product, now a general reasoning layer

Perplexity
Featured

AI answer engine that cites its sources on every response

DeepSeek

Open-source reasoning models with a free chat app and cheap API

Gemini

Google's AI assistant plugged into Workspace, Search, and the browser

Manus

Autonomous AI agent that plans, browses, and delivers finished work

About this category
The Brief

AI tools have stopped being a budget line and become the substrate. Inside a 2026 fund, the first tab opened isn't email — it's a chat window. Inbound deck triage, memo skeletons, call summaries, market maps, thesis pressure-tests, LP update drafts, even reference-call prep all start in a copilot. The funds compounding fastest aren't the ones with the cleverest prompts; they're the ones who've moved firm context — playbook, ICP, portfolio, anti-portfolio — into reusable skills and shared assistants so every analyst inherits the partners' brain.

The India reality is messier than the Twitter version. Most teams quietly run two assistants — ChatGPT for breadth, Claude for long diligence reads — on personal logins, expensed in INR. Founder decks arrive in Hinglish, Tamil-English, regional WhatsApp voice notes; reasoning quality on code-mixed input still varies sharply by model. Sensitive diligence — cap tables, founder backgrounds, MCA filings — demands enterprise plans where training is disabled and data residency is auditable.

The journey isn't about which tool. Most funds use the same handful — Claude, ChatGPT, Perplexity, Gemini. It's about how deeply you use them.

How to approach this stack

Three tiers, depending on where your firm is.

  1. Beginner
    Daily chat. Paid seats of Claude or ChatGPT for deck triage, memo skeletons, call summaries, market scans. Free tiers leak data — start paid.
  2. Intermediate
    Same models, deeper leverage. ChatGPT Projects and Custom GPTs, Claude with skills and shared assistants, Notion AI over your wiki, Perplexity for sourced research, Gemini if your fund lives in Google Workspace. The firm's playbook starts living inside the tool.
  3. Advanced
    Claude Code, Codex, the API layer. Multi-tool routing — different models for different jobs based on token cost, latency, context window, security posture. Internal platforms for sourcing, screening, and memo automation. Manus for autonomous multi-step work; DeepSeek where INR-economics matter on high-volume API calls.
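The multi-tool routing mentioned in the Advanced tier can be sketched as a simple policy: pick the cheapest model that fits the job's context size and security posture. Everything here is an illustrative assumption, not a real vendor quote; the model names, prices, and context windows are invented for the sketch.

```python
from dataclasses import dataclass

@dataclass
class Model:
    name: str
    usd_per_mtok: float   # blended cost per million tokens (hypothetical)
    context_window: int   # max input tokens (hypothetical)
    training_off: bool    # enterprise plan with training disabled

# Hypothetical catalogue — stand-ins for the fund's contracted models.
CATALOGUE = [
    Model("long-context-assistant", 3.00, 200_000, True),
    Model("general-reasoner", 2.50, 128_000, True),
    Model("cheap-bulk-api", 0.30, 64_000, False),
]

def route(job_tokens: int, sensitive: bool, high_volume: bool) -> Model:
    """Pick a model by context fit, security posture, and cost."""
    candidates = [
        m for m in CATALOGUE
        if m.context_window >= job_tokens and (m.training_off or not sensitive)
    ]
    if not candidates:
        raise ValueError("no model fits this job")
    # High-volume, cost-driven work goes to the cheapest fit;
    # diligence reads prefer the largest context window.
    if high_volume:
        return min(candidates, key=lambda m: m.usd_per_mtok)
    return max(candidates, key=lambda m: m.context_window)
```

In practice the routing dimensions are the ones named above: a 150k-token diligence read with cap-table data must land on a training-off, long-context model, while high-volume non-sensitive screening can go to the cheapest API.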
What to look for when buying

What separates a good AI tool from a bad one for a venture fund.

  1. Workspace integration.
    The assistant that wins is the one already inside Gmail, Slack, or Notion where your team spends 80% of the day. Everything else stays a tab.
  2. Team plan with shared context.
    Personal seats die when the analyst leaves. Team plans let you build skills and shared projects the whole firm inherits — and someone updates weekly.
  3. Enterprise privacy and data retention.
    Confirm training-off, audit logs, SSO, and a retention policy that matches your LP commitments before any cap table touches the model.
  4. Reliability and support.
    Even category leaders go down — Claude has had meaningful, repeated outages, and support response on consumer plans is uneven. Check the status page, support SLAs, and have a fallback model wired up.
Common pitfalls

Where AI stacks usually break.

  1. Personal accounts on firm data.
    Knowledge walks with the person. Skills, projects, and prompt libraries built on personal seats vanish when the analyst leaves — that's pure technical debt. Team plans aren't optional.
  2. Tool sprawl too early.
    Five assistants tested in parallel before one is dialled in usually leaves you with five half-used seats. Standardise on one; experiment with the rest only after the first is working.
  3. Clever prompts without team-wide skills.
    A partner's good prompts in a personal chat don't compound. Skills, projects, and shared assistants — owned by someone, updated weekly — do.
Also in the stack
Transcription (6) · Research (14) · Productivity (4) · Vibe Coding (4)
Last reviewed · April 2026 · How we curate ↗