AI is a better VC than me. Now what?

February 12, 2026

If “being a VC” means producing the artifacts — memos, models, decks, risk matrices — then yeah: AI is better. It’s faster, cleaner, and more consistent at the surface-level work.

That’s the point. A big chunk of what a junior analyst at a VC firm used to be paid to produce is now a prompt and a quick edit.

Right now, AI can write investment memos, build first-pass financial models, create presentations, and spit out risk assessment matrices faster than any junior. In a lot of cases the output is “better” in the sense that it’s cleaner, more complete, more structured, and more “consulting-grade” on the surface.

But I want to be precise about what I mean when I say “replaced,” because this is where nuance matters: AI replaced a lot of the outputs juniors were historically paid to generate. It did not replace the role in the full sense. The role is shifting. Firms still need humans for attention allocation (what even deserves time), reality-checking, relationship-building, and accountability (someone has to own the call, defend it, and live with the consequences). The “junior who formats and summarizes” is getting commoditized; the “junior who sources, verifies, and builds conviction” becomes more valuable.

Once you’ve found a good startup, the rest of the work assigned to junior analysts can often be done by AI. The raw output isn’t always good enough to present as-is, but a smart analyst with these tools at their disposal can get to something polished quickly.

I know this firsthand. Most of the tasks I do every day pass through Gemini, Claude, or ChatGPT first. They show me how the work is done, but I almost never use the output as-is. I apply my own judgment: I cherry-pick the best of what the AI gave me, adapt it to my liking, correct the parts that don’t fit the context, and finish the work myself. Class assignment or work task, doesn’t matter.

The important shift is this: groundwork used to take up 80% of the time. It doesn’t anymore. Your task becomes less “generate” and more “decide + verify + polish.”

And the “AI is better at models” thing is real, but conditional. AI can absolutely create a good template model and surface common pitfalls. But it can also be confidently wrong: it makes assumptions that are subtly broken, uses outdated or invented comps, misses accounting nuance, or produces a model that looks high-quality while the logic is off. Modeling quality is assumptions + data hygiene + context. So the human value is less in typing formulas and more in pressure-testing inputs and sanity-checking the story against reality.

What AI can’t do right now is real-world tasks and real-world truth-finding.

For instance, say you ask AI to find the 10 top startups in Tashkent looking for investment. It will give you a list, either invented or pulled from whatever’s visible online. But when you read it, the info is often outdated and biased toward startups with press presence, English-language visibility, or “internet polish.” If you know the market, you throw a lot of it away, because you know what’s actually happening on the ground, and what’s online is not the full distribution of reality.

Example: I’ve seen startups that look perfect online — polished decks, clean LinkedIn presence, warm press — and then two operator/customer conversations reveal the product is barely used and the “traction” is basically a reseller scheme. I’ve also seen the opposite: a founder with almost zero online footprint, no English content, no hype — but when you talk to customers and ex-colleagues you realize they’re quietly building something real. AI can’t reliably get you that distinction because the distinguishing data is mostly not public.

So where do you get that information?

From people. From the people around you, from friends who share things, from dinner conversations, from operator group chats, from private intros, from closed circles. This is the kind of information that either never gets posted online or gets posted in a distorted way. And a lot of the gold for VCs is exactly there: in private.

LLMs can’t search for offline pitch sessions, dinner conversations, or private messages. They can’t call the “one person who knows” and get the real story. They can’t earn trust. They can’t get someone to say the quiet part out loud.

And yes, even if an LLM can search the public internet better than I can, that doesn’t automatically translate to “better truth.” The public internet is now full of incentive-driven content, marketing, and AI-generated slop. It’s searchable, but it’s not always reliable. If anything, searchability is becoming less correlated with signal.

At the same time, I don’t want to romanticize “offline” either. Gossip is also biased. People have agendas. Social circles amplify certain narratives. “Private info” can be wrong or malicious. So the real edge is not “offline > online.” The edge is triangulation: combining offline signals, online artifacts, and direct tests (product usage, customer calls, reference checks, competitor conversations) and then making a judgment call under uncertainty. That’s where humans still win.

My default “truth-finding” loop is simple:
– Use the product for 20 minutes (or watch a live demo and ask dumb questions).
– Talk to 3 customers (not the ones the founder handpicks if you can avoid it).
– Talk to 2 people who worked with the founder (ex-colleague / investor / operator).
– Ask one competitor or adjacent operator “what’s the real story here?”
– Then compare: does the narrative survive contact with reality?

An LLM can find me 10 startups, but it will never know how legit a founder who has never given a press interview actually is. It can try to infer trustworthiness from a LinkedIn presence, but it can’t actually validate character the way humans do: through repeated interactions, consistency over time, how they treat people in the room, what their former colleagues say privately, how they respond when things go wrong, whether their claims survive basic scrutiny.

And in this age, “real” online is slippery. What you see on the internet can’t be trusted as a direct signal of intelligence, taste, or character. It can be used as a clue, but not a conclusion. The part that still matters is the choices people make and the consistency of those choices: do they choose to post clearly AI-generated slop, do they choose to claim fake traction, do they choose to be precise or vague, do they show up in real life the same way they perform online. Some decisions are still being made by humans, and those decisions still leak signal.

So, the real-life information you gather from meeting people face to face is still immensely valuable — and it’s where an edge can exist.

I sincerely believe there isn’t really a junior-level “production task” I can do better than AI. My first-pass memos won’t be as fast as AI’s. My first-pass models won’t be as fast as AI’s. Even in law, raw recall is becoming cheap; what’s expensive is judgment and accountability. The “knowledge typing” part is getting attacked hardest.

But I also think it’s misleading to conclude “so humans are doomed.” What’s happening is: the baseline is rising. The commodity work is collapsing in value. That forces humans to move up the stack.

What I do still have is judgment. I can tell the difference between what AI gives me and what counts as good work in a given firm, class, or market. I can spot when something is beautifully written but strategically wrong. I can separate “internet truth” from “real-world truth” by validating through people and doing direct checks. I can decide what to ignore, what to investigate, and what to bet on.

It’s true that knowledge work is taking the biggest hit from AI, and young grads like me are the most exposed. I feel anxious about the future too, just like everyone in my shoes looking at the job market and wondering what gets automated next.

You might argue the “AI is scary” narrative is exaggerated: another 2023-style hype cycle, like crypto and NFTs, like whatever trend people overreacted to last. Sure, hype exists. But you can’t argue that the world, the job market, and the human value equation will stay the same. Even if specific hype waves fade, the capability trend doesn’t reverse. The baseline has moved.

This won’t “pass” in the sense of things going back to how they were. The feeling won’t fade, and the world won’t revert. It changed, and it will stay changed. The only real response is to adapt.

And for me, adaptation is basically this: stop competing with AI on typing and formatting. Use AI to compress the groundwork. Spend human time on what AI can’t do well: getting real access, building trust, sourcing through people, triangulating truth, doing real diligence, and making accountable judgment calls. That’s where the edge still is.

AI drafts. Humans verify.