
I Kept Re-Explaining My Project to AI — So I Built a System Instead

March 17, 2026 · 8 min read

Two years ago I made a video about what product managers actually do. I said the core of PM work isn’t meetings, writing docs, or arguing with engineers — that’s all surface. The real value comes down to three things:

Find the right problem. Design the right solution. Make change happen.

Two years later, I still think that’s right. But something happened that made me realize there’s a fourth skill now — one I had to stumble into myself.

The moment that made me rethink everything

For the first year of using AI at work, I did what most PMs do. I’d open ChatGPT, paste in some context, get a decent output, copy it back into my doc. Useful. But every single conversation started with the same ritual:

“I’m a PM working on a consumer AI feature. Our product does X. The user base is Y. We’re in this stage of the project. The constraints are Z. We just got this feedback from leadership. The privacy requirements are…”

Ten minutes of setup before I could ask a single real question.

At first I didn’t think much of it. Then weeks turned into months, and my project accumulated layers — user research findings, compliance commitments, stakeholder feedback, design decisions, technical trade-offs. The context became so dense that I couldn’t even dump it all into a conversation anymore. I’d forget to mention a key constraint, get an answer that ignored it, then realize the gap three iterations later.

One day I was re-explaining our privacy architecture to ChatGPT for what felt like the fifth time that week, and a thought hit me that I couldn’t shake:

Why am I teaching the same thing over and over to the smartest tool I’ve ever used?

That’s when the PM instinct kicked in. Not “this tool is broken” — but “there’s a workflow problem here, and I should diagnose it.”

The diagnosis

I’m a PM. I solve workflow problems for users. So I did what I’d do for any user pain point — I mapped the actual problem.

The issue wasn’t ChatGPT. The issue was a mismatch between how AI works and how PM work works.

An engineer can paste a function into ChatGPT and say “refactor this.” The context is self-contained in the code. But PM work is the opposite — it’s all context. The right answer to “should we prioritize this feature?” depends on business goals, user research, technical constraints, stakeholder politics, timeline pressure, and what happened in last Tuesday’s review meeting. None of that is self-contained. It lives in your head, your docs, your Slack threads, your memory of that one offhand comment your VP made.

AI has an extraordinarily powerful brain. But it has no memory of your world, no access to your tools, and no understanding of your specific situation. It’s like hiring the smartest consultant on earth, then making them start from zero context every single morning.

That was the real problem. And once I framed it that way, the solution was obvious — at least in principle.

An article, and the idea that clicked

Around that time, I read an article about the concept of a “second brain.” The framing that stuck with me was simple:

AI has the brain. But it doesn’t have your context, your hands, or your personality. To make it truly yours, you need to give it all three.

  • Context: a knowledge base of your actual project materials — not summaries, but the real docs you work with
  • Hands: the ability to take action — read your messages, write files, search your tools
  • Personality: a system prompt that encodes how you think, what you value, what your role requires
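The three ingredients map naturally onto a simple configuration object. A minimal sketch in Python — every name here is my own illustration, since the article doesn't name a specific tool or API:

```python
from dataclasses import dataclass, field

@dataclass
class AgentConfig:
    """The three layers from the 'second brain' framing."""
    # Context: the real project documents, not summaries
    knowledge_base: list[str] = field(default_factory=list)
    # Hands: named actions the agent is allowed to take
    tools: list[str] = field(default_factory=list)
    # Personality: a system prompt encoding role, values, and style
    system_prompt: str = ""

# Hypothetical example of what a PM's setup might contain
config = AgentConfig(
    knowledge_base=["prd.md", "privacy_whitepaper.pdf", "uxr_memo.md"],
    tools=["read_slack", "read_email", "write_file", "search_docs"],
    system_prompt="You are a PM's second brain. Ground answers in the docs.",
)
```

The point of the structure isn't the code — it's that all three layers are explicit, inspectable things you can tune independently.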

That night, I started building.

The first experiment

I didn’t go big. I tried a few tools — Kuse, NotebookLM, ChatGPT Projects, a couple of dedicated AI knowledge base apps. I fed them a handful of documents. Nothing sensitive — just some general product specs, a few user feedback summaries. Low stakes, just to see what would happen.

The first few conversations were unremarkable. Then I asked a question that would normally require me to re-explain three documents' worth of background.

The AI just… answered. Correctly. With the right context. Without me having to set anything up.

Then it did something I didn’t ask for: it connected a point from our product spec to a piece of user feedback I’d uploaded separately, and surfaced a contradiction I hadn’t noticed.

That was the aha moment. Not “this is faster.” But: this thing can see across my documents in a way I can’t — because I’m too close to each one individually.

I went from skeptic to believer in one conversation.

Building the real system

That initial experiment was a proof of concept. But the tools I’d tried had limits — they couldn’t access my actual work channels, couldn’t take actions, couldn’t be customized deeply enough.

Then I got lucky. A dev colleague at my company had built an internal agent tool that was compliant with our enterprise security requirements. I migrated my setup there, and that’s when things got serious.

I built it in layers, over weeks:

First, the knowledge base. I loaded my real project materials — PRDs, privacy whitepapers, UXR memos, technical architecture docs, stakeholder maps. About 30 files that represent everything I know about my project. Then I had the agent generate a “consolidated summary” — one master document that synthesizes all the source materials into a single reference. That became its working memory.

Then, the skills. I taught it to read my emails and Slack messages, triage them by priority, and surface what needs my attention. I gave it the ability to draft documents in my voice. I built a skill for creating presentations. Each skill was a set of instructions — not code, just clear descriptions of what I need and how I want it done.
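Since a skill is instructions rather than code, it can live as plain structured data. One way to represent one — the field names and wording are my own sketch, not from any particular tool:

```python
# A skill as plain data: what to do, in what order, and what to produce.
triage_skill = {
    "name": "message_triage",
    "goal": "Read incoming email and Slack messages, triage by priority, "
            "and surface what needs attention.",
    "steps": [
        "Scan messages received since the last check-in.",
        "Classify each as urgent / needs-reply / FYI.",
        "Draft a suggested response for anything marked needs-reply.",
    ],
    "output": "A prioritized list with one-line summaries and draft replies.",
}
```

Keeping skills as data like this means editing a skill is editing a description, not shipping code — which is exactly what makes the approach accessible to a non-engineer.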

Then, the personality. I wrote a system prompt that doesn’t just say “you are a helpful assistant.” It encodes how I want to be challenged. Always ground your answers in the context I gave you — don’t make things up. Always be user-centric — if we’re debating a feature, start with the user’s problem, not the stakeholder’s request. Don’t be a yes-man — if my idea has a hole, say so. Play the challenger role. I’d rather be told “this doesn’t hold up” by my agent at 10pm than by my VP at 10am.
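The challenger personality can be written straight into the system prompt. A hedged sketch of what such a prompt might look like — the wording is my illustration, not the actual prompt:

```python
# Illustrative system prompt encoding the rules described above.
SYSTEM_PROMPT = """\
You are a second brain for a product manager.

Rules:
- Ground every answer in the provided project context; never make things up.
- Be user-centric: when debating a feature, start from the user's problem,
  not the stakeholder's request.
- Do not be a yes-man. If an idea has a hole, say "this doesn't hold up"
  and explain why. Play the challenger.
"""
```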

The result: I went from spending two-plus hours a day on information triage to having a prioritized action board waiting for me when I open my laptop. The agent reads what came in overnight, classifies it, flags what’s urgent, and drafts suggested responses. I spend my time on decisions, not on collecting the information needed to make them.
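The overnight triage step boils down to classify-then-sort. A toy version, with keyword rules standing in for the agent's actual judgment — the rules, categories, and messages are all hypothetical:

```python
# Toy triage: classify messages and build a prioritized action board.
PRIORITY = {"urgent": 0, "needs_reply": 1, "fyi": 2}

def classify(message: str) -> str:
    """Crude keyword rules as a stand-in for the agent's classification."""
    text = message.lower()
    if any(k in text for k in ("blocker", "asap", "deadline")):
        return "urgent"
    if "?" in message:
        return "needs_reply"
    return "fyi"

def action_board(messages: list[str]) -> list[tuple[str, str]]:
    """Tag each message with a priority and sort most-urgent first."""
    tagged = [(classify(m), m) for m in messages]
    return sorted(tagged, key=lambda t: PRIORITY[t[0]])

board = action_board([
    "FYI: design review notes attached.",
    "Can you confirm the privacy copy by Friday?",
    "Launch blocker: compliance sign-off still missing, need it ASAP.",
])
# The blocker sorts first, the question second, the FYI last.
```

The real system replaces `classify` with the agent's judgment over full message context, but the shape — classify, rank, present — is the same.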

What this taught me about PM skills in the AI era

Here’s the thing I didn’t expect: building this agent was the most PM thing I’ve done in years.

Think about what it required:

  • Finding the right problem: diagnosing that the pain wasn’t “AI is bad” but “the workflow between me and AI is broken”
  • Designing the right solution: mapping the architecture — what context does it need, what skills, what personality
  • Making change happen: iterating week after week, tuning the system, teaching it new capabilities

It’s the same three skills I talked about two years ago. But pointed inward — at my own work, instead of at a product for users.

That’s the new dimension I didn’t see coming: the most valuable PM skill in the AI era is treating yourself as a user and building a system for your own needs.

Not “using AI tools.” Everyone does that. Not “being good at prompting.” That’s necessary but not sufficient. The real shift is thinking in systems — decomposing your workflow, identifying which parts are low-value information processing that AI can do, which parts are high-value judgment that only you can do, and then building the infrastructure that lets you spend all your time on the second category.

The framework, updated

Two years ago:

  1. Find the right problem
  2. Design the right solution
  3. Make change happen

Still true. Still the core of what PMs do. But now there’s a fourth:

4. Build the system that makes you dangerous.

Not dangerous as in reckless — dangerous as in “this one person with their AI system is outperforming what used to require a small team.” The PM who has a tuned agent reading their communications, drafting their docs, surfacing insights from their data — that PM has a fundamentally unfair advantage over the one who opens ChatGPT and starts from scratch every time.

Two years ago I said “academic background, internship experience, English skills, and technical background aren’t required PM skills.” That’s still true. But I’d add: AI capability isn’t a technical skill either. It’s a way of thinking. It’s the instinct to look at your own pain points the same way you look at your users’ pain points, and build something to fix them.

The framework from my video still holds. But the PM who only applies those three skills to their product is leaving half the value on the table. The PM who also applies them to their own workflow — who builds, tunes, and evolves an AI system around themselves — that’s who’s going to be impossible to compete with.

Stay curious. Stay building. And this time, build for yourself too.