
Without Coding, Bootcamps, or Faking It in Meetings
You're in a sprint review. The data scientist is explaining why the model's precision improved but recall dropped. Everyone nods. You nod too. You write it down - "precision up, recall down" - knowing you'll never look at those notes again because you have no idea what they mean.
Later, someone asks if you have thoughts. You say something vague about "trade-offs" and move on.
You walk out wondering: Am I supposed to understand this? Is everyone else following along, or are they faking it too?
If you're a non-technical product manager trying to learn AI, this is not a capability problem. It's a resource problem. Every learning path you find is built for the wrong person.
The pressure on PMs to understand AI has never been higher. Job descriptions now list "AI/ML fluency" as a baseline requirement. Your company just announced an AI initiative and everyone is looking at you to figure out where the product fits in.
So you search "how to learn AI as a product manager" and immediately feel worse.
There are six-month bootcamps. Courses that start with Python. Articles that open with "first, understand linear algebra." You try Andrew Ng's AI for Everyone course, get through week one, and then a product launch happens. Three months later, you're back at square one, still nodding in meetings.
The problem isn't motivation. It's that almost every AI learning resource in existence is built for people who want to become data scientists or engineers. You don't want to become a data scientist. You want to stop feeling like an imposter in a room that contains one.
That requires a fundamentally different kind of learning - and a completely different standard of success.
There is a persistent, deeply unhelpful myth that to work effectively with AI, you need to understand it at a technical level. You need to code. You need to grasp the maths. You need to take a serious course before you're "ready."
This is wrong. Let's be specific about what you don't need.
You don't need to build models. You don't need to understand backpropagation. You don't need Python, TensorFlow, or PyTorch. You don't need to know the difference between a convolutional neural network and a recurrent one. You don't need to distinguish between random forests and XGBoost. Your data scientists know all of this. That's exactly why you hired them.
Your job is different. Your job is to decide whether AI is the right tool for a problem, ask your data science team the questions that matter, weigh the trade-offs a model's behaviour creates, and translate all of it for stakeholders.
That doesn't require a computer science degree. It requires AI fluency - the kind you can build in weeks, not years. And it starts with knowing exactly which concepts matter.
This is not a comprehensive AI curriculum. It is the minimum viable AI literacy that will let you hold your own in any meeting.
The vocabulary hierarchy. AI is the broad field - machines doing things that seem intelligent. Machine learning is a subset - systems that learn from data rather than following explicit rules. Deep learning is a subset of that - neural networks with many layers. Generative AI is the newest wave - models that create text, images, code, and more. You need to understand this hierarchy, not memorise every node within it.
The three types of machine learning. Supervised learning: the model learns from labelled examples, like predicting churn based on past user behaviour. Unsupervised learning: finding patterns without labels, like clustering customers into segments with no pre-defined categories. Reinforcement learning: trial and error with feedback, like training a recommendation system that improves as users engage.
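To make "learning from labelled examples" concrete, here is a toy sketch in pure Python - a 1-nearest-neighbour classifier that predicts churn for a new user by copying the label of the most similar past user. The data and features are invented for illustration; real churn models use far more signals.

```python
# Toy supervised learning: predict churn from labelled past users.
# Each user is (logins_per_week, support_tickets); label is True if they churned.
# Hypothetical data, for illustration only.
labelled_users = [
    ((12, 0), False),  # very active, no tickets -> stayed
    ((10, 1), False),
    ((2, 4), True),    # barely logs in, many tickets -> churned
    ((1, 3), True),
]

def predict_churn(new_user):
    """1-nearest-neighbour: return the label of the most similar past user."""
    def distance(a, b):
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
    _, label = min(labelled_users, key=lambda pair: distance(pair[0], new_user))
    return label

print(predict_churn((11, 0)))  # resembles the active users -> False
print(predict_churn((1, 5)))   # resembles the churned users -> True
```

The point is the shape of the process, not the algorithm: the model never saw a rule like "low logins means churn" - it inferred behaviour from labelled history, which is exactly what "supervised" means.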
The terms that will come up constantly. Training data (what the model learns from). Inference (when the model makes predictions on new data). Overfitting (when it memorises the training data instead of generalising - performs brilliantly on old data and poorly on new data). Model drift (when performance degrades over time because the real world changes). Hallucination (when a generative AI model confidently generates false information).
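Overfitting is easier to feel through a caricature: a "model" that simply memorises its training data scores perfectly on everything it has seen and learns nothing general. A minimal Python sketch (toy data, not a real model):

```python
# Caricature of overfitting: pure memorisation of the training set.
training_data = {"alice": "churn", "bob": "stay", "carol": "churn"}

def memorising_model(user):
    # Perfect on users it has seen; useless on anyone new.
    return training_data.get(user, "unknown")

# "Accuracy" on the training set is a flawless 100%...
train_correct = sum(memorising_model(u) == label for u, label in training_data.items())
print(train_correct / len(training_data))  # 1.0

# ...but the model has extracted no pattern it can apply to new users.
print(memorising_model("dave"))  # "unknown"
```

Real overfitting is subtler - the model extracts patterns, just ones too specific to the training data - but the symptom is the same: brilliant on old data, poor on new.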
Precision vs. recall - the example that makes it concrete. Imagine you're building a fraud detection system. Precision answers: of all the transactions we flagged as fraud, how many actually were? High precision means fewer false alarms - legitimate customers aren't getting their cards blocked. Recall answers: of all the actual fraud that happened, how much did we catch? High recall means fewer fraudsters slip through.
There's always a trade-off, and the right balance depends entirely on your product. A spam filter can tolerate some false positives - a mildly irritating but recoverable mistake. A cancer screening tool cannot - missing a real case is catastrophic. When your data scientist says "precision improved but recall dropped," they're telling you something important about a product decision that has user and business consequences. That's the level at which you need to understand it.
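The arithmetic behind those two words is simple enough to write down. A minimal Python sketch, using invented fraud-detection numbers purely for illustration:

```python
# Precision and recall from confusion-matrix counts (invented numbers).
flagged_and_fraud = 90   # true positives: flagged, actually fraud
flagged_but_legit = 10   # false positives: flagged, actually legitimate
missed_fraud = 60        # false negatives: fraud we never flagged

precision = flagged_and_fraud / (flagged_and_fraud + flagged_but_legit)
recall = flagged_and_fraud / (flagged_and_fraud + missed_fraud)

print(f"precision = {precision:.2f}")  # 0.90: few false alarms
print(f"recall    = {recall:.2f}")     # 0.60: but 40% of fraud slips through
```

Moving the flagging threshold typically trades one for the other: flag less aggressively and precision rises while recall falls, and vice versa. That threshold is a product decision, not just a modelling one.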
The PM-level judgment. Beyond vocabulary, you need to know when AI is the right tool and when it isn't. If simple business rules get you 90% of the way there, AI may not be worth the complexity, cost, and maintenance overhead. The questions you should always ask: Do we have the data? Is it clean and representative? What happens when the model is wrong? What's the cost of a false positive versus a false negative? Is explainability critical, or can we tolerate a black box?
You should also know what not to ask. "Can you just make it more accurate?" is not a useful question. Your data scientists are already trying. The useful question is: "What would it take to improve recall without degrading precision, and what would that cost in additional training data and compute time?"
This is not a curriculum. It is a sequence. Four weeks of focused, intentional engagement - not six months of intermittent studying.
Week 1 - Build the Foundation
Start with Andrew Ng's AI for Everyone on Coursera. It runs approximately 6 hours across four modules, is specifically designed for non-technical professionals, and gives you the vocabulary and mental models you need as a foundation. Watch at your own pace. Don't obsessively take notes you'll never review - absorb the concepts and let them settle.
The goal of week one is not mastery. It is a map of the terrain: understanding what the field looks like, what the major concepts are, and where the important questions live.
Week 2 - Go Hands-On with One Real Problem
Pick one AI technology - large language models are the easiest entry point - and use it to solve something real in your own work. Summarise user research with Claude. Draft product requirements with ChatGPT. Automate a tedious task. Write a prompt, evaluate the output, figure out why it failed, iterate.
The goal is not to become an expert. It's to feel what AI can do, where it breaks, and what it's like to work with it. That embodied understanding is more valuable than anything you can read.
Week 3 - Learn Through Immersion, Not Study
Stop actively studying. Start curating your environment. Follow three to five AI practitioners on LinkedIn whose thinking you respect. Subscribe to The Batch, DeepLearning.AI's weekly newsletter - it's written for both practitioners and business leaders and gives you a reliable signal about what matters in the field each week.
This is passive exposure - but the right kind. You're letting the language become familiar through repeated natural context, not forced memorisation.
Week 4 - Practice Translation
Explain what you've learned to a non-technical person. If you can make a friend understand the precision-recall trade-off using a real example - not by reciting a definition - you actually understand it. Ask your data scientist one question you've been afraid to ask. Most of them genuinely enjoy explaining their work to someone who is curious and respectful of their expertise.
By the end of the month, you won't be an expert. But you'll be fluent enough to contribute, to ask questions that add value, and to stop pretending.
Here is where most guides leave you: "Good luck, go learn."
But there is a structural gap between completing a course and actually retaining what you learned. Technical vocabulary is especially unforgiving. Terms like "overfitting" and "inference" don't appear in your daily life. You learn them, feel confident for a week, and then someone uses the term in a meeting and your mind goes blank.
Hermann Ebbinghaus demonstrated this over a century ago. The forgetting curve is steep: 67% of new information disappears within 24 hours without active reinforcement (Ebbinghaus, 1885; replicated by Murre & Dros, PLOS ONE, 2015). Two weeks after Andrew Ng's course, two-thirds of what you absorbed is gone.
The research on what actually prevents this is clear. Roediger & Karpicke (2006) established that active recall testing produces 80% retention after one week, compared to 34% for passive review (Psychological Science, 17(3)). The difference between knowing something and merely having encountered it is whether you have actively retrieved it - not once, but repeatedly over time.
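The forgetting curve is commonly modelled as exponential decay - retention after t days is roughly e^(-t/S), where S is a "stability" that grows each time you successfully recall the material. The sketch below is a simplified textbook model with hypothetical parameters, not Ebbinghaus's raw data:

```python
import math

def retention(days, stability):
    """Simplified forgetting curve: fraction remembered after `days`."""
    return math.exp(-days / stability)

# Calibrated so roughly two-thirds is gone after one day with no reinforcement.
base_stability = 0.91
print(f"no review, day 1:  {retention(1, base_stability):.0%}")   # ~33%
print(f"no review, day 14: {retention(14, base_stability):.0%}")  # ~0%

# Toy spaced-repetition rule: each successful recall triples stability.
stability = base_stability
for _ in range(3):  # reviews on, say, days 1, 3, and 7
    stability *= 3
print(f"3 reviews, day 14: {retention(14, stability):.0%}")       # >50%
```

The exact numbers are illustrative, but the shape of the curve is the point: without retrieval, retention collapses in days; with a few well-timed recalls, it flattens dramatically.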
This is the AI fluency gap: the space between completing a learning resource and having knowledge available when you need it in a real meeting. It's the gap that courses don't close, newsletters don't close, and passive watching doesn't close. A system that builds active recall and spaced repetition into your daily rhythm is the only thing that does.
Curo is not a course. It is not a flashcard app. It is not ChatGPT. It is a proactive AI-powered learning companion - one that structures your path, teaches you step by step, and stays with you when you get stuck. (See how Curo works →)
Here's what that means specifically for a PM building AI fluency:
It turns any content into structured learning. You can bring any material you've already found - an article your data scientist shared, a URL, a document - and Curo transforms it into a structured, whiteboard-style, interactive session tailored to your current level. You don't need to start from scratch. If you found it, Curo teaches it.
It adapts to your actual knowledge gaps. You're not being walked through a generic AI curriculum built for someone else. Curo builds the path from where you actually are - what you understand, what you've been told to learn, and what's coming up in the next sprint. If you already understand supervised learning, it moves on. If you're confused about model drift, it re-explains from a different angle until the concept resolves.
Retention is built in - not a separate habit. The spaced repetition happens within your sessions automatically. Concepts you learned last week surface at the point where they're about to fade. You're not maintaining a separate Anki deck. You're just using Curo.
It's there when you're stuck. That moment when your data scientist uses a term you've heard four times but still can't explain - Curo works through it with you. No judgment. No having to admit in a meeting that you don't know. You work it out before the meeting happens.
For the first time, having a companion who builds your AI vocabulary, structures your path, and makes sure what you learn is there when you need it - that doesn't require a $100-an-hour technical mentor or a six-week bootcamp. Curo makes that level of personalised learning available to any PM, regardless of technical background. (Explore pricing →)
| Tool | What it does well | Where it fails for PM AI fluency | What Curo does differently |
|---|---|---|---|
| Andrew Ng's AI for Everyone | Excellent conceptual foundation, accessible, free to audit | One-time passive video - no retention, no adaptation to your specific gaps, no recall practice | Active, adaptive follow-up that makes the concepts stick after the course |
| ChatGPT / Claude | On-demand answers to any question you can think to ask | Reactive - if you don't know what to ask, it can't guide you; no curriculum; no memory; no retention | Proactive - builds a path, checks understanding, returns concepts at optimal intervals |
| Coursera / bootcamps | Structured, credentialed, comprehensive | Months-long commitment, wrong level for most PMs, passive video format, knowledge fades without reinforcement | Fits in 10-20 minute gaps; adaptive to your actual level and gaps |
| Anki / flashcard apps | Spaced repetition if you maintain them | Requires manual card creation; generic; breaks down when life gets busy | Spaced repetition built in automatically - no cards to create or maintain |
| Technical mentors / data science team | Real expertise, personalised context | Expensive, scheduling friction, creates awkward dynamics when you're their PM | Always available, no awkwardness, fraction of the cost |
| Podcasts / newsletters | Passive domain exposure, keeps you current | 67% forgotten within 24 hours; no structured path; no active recall | Active, interactive, sequenced around your specific learning goals |
The honest summary: Andrew Ng's AI for Everyone is a genuinely excellent starting point - it belongs in week one of the 30-day path above. ChatGPT and Claude are useful for answering specific questions. The problem isn't any individual resource. It's the absence of a system that connects them, sequences them, and ensures that what you learn on Monday is still there on Friday. Curo is that system.
Right fit if:
- You're a non-technical PM who needs fluency - the ability to follow technical conversations, ask good questions, and make trade-off decisions - not practitioner expertise.
- You have 10-20 minute gaps in your day, not months to commit to a bootcamp.
- You've started courses before and watched the knowledge fade before you could use it.
Probably not the right fit if:
- You want to become a data scientist or ML engineer - hands-on model building and the maths call for a real technical curriculum.
- You're after a formal credential rather than working fluency.
The non-technical PMs who will be most effective in AI-era product teams aren't the ones who eventually find time for a bootcamp. They're the ones who build a daily learning practice that fits inside their existing schedule - 10 minutes before stand-up, a commute, the gap between two calls.
The vocabulary is learnable. The concepts are learnable. The judgment - knowing when to push, what to ask, how to translate - is a skill that compounds over weeks of consistent small sessions.
You don't need another course. You need a system that turns what you already find into knowledge you can use.
Start free at curohq.com → No credit card. No setup. Just bring the AI concept you need to understand before your next meeting.
How long does it take a non-technical PM to learn enough AI to be effective?
Four weeks of focused, intentional engagement gets most non-technical PMs to fluency - the ability to follow technical conversations, ask good questions, and make informed trade-off decisions. This is different from expertise. Expertise takes years of practitioner experience. Fluency - which is what you actually need - is achievable in a month. The key constraint is retention: without a system that reinforces what you learn, the knowledge fades within days.
Do I need to learn Python or coding to work with AI as a PM?
No. Your job is not to build models - it's to make decisions about what to build, why, and for whom. The skills you need are conceptual (understanding how models work and what they can't do), conversational (knowing what questions to ask your data science team), and translational (explaining AI decisions to stakeholders). None of these require code.
What is the difference between precision and recall, in plain language?
Precision answers: of everything we flagged, how often were we right? Recall answers: of everything that should have been flagged, how much did we catch? A fraud detection system with high precision has few false alarms - but might miss real fraud. High recall catches more fraud - but generates more false alerts. There's always a trade-off, and the right balance depends on the cost of each type of mistake in your specific product.
What AI concepts does a product manager actually need to know?
The essentials: the hierarchy of AI/ML/deep learning/generative AI; the three types of machine learning (supervised, unsupervised, reinforcement); key operational terms (training data, inference, overfitting, model drift, hallucination); precision vs. recall and why the trade-off matters; and when AI is the wrong tool. Beyond vocabulary, you need the judgment to ask: do we have the data? What are the failure modes? What's the cost of being wrong?
How is Curo different from just watching Andrew Ng's AI for Everyone?
Andrew Ng's course is an excellent foundation - it's genuinely the best starting resource for non-technical PMs and belongs in week one. The problem is what happens after: without active recall and spaced repetition, most of what you absorbed fades within days. Curo picks up where the course ends - reinforcing vocabulary, building adaptive paths around your specific gaps, and surfacing concepts when you're about to lose them. Think of it as the retention layer the course doesn't include.
Can I use Curo to learn specific AI concepts I keep encountering at work?
Yes. You can bring any content - a URL your data scientist shared, an article, a technical document - and Curo transforms it into a structured learning session designed around your level. If there's a specific concept you keep half-understanding (RAG, fine-tuning, model evaluation), you can start there and Curo will build outward from it.
How is Curo different from just asking ChatGPT to explain AI terms?
ChatGPT answers the questions you ask. When you don't know what to ask - which is precisely the state of a PM who's confused - it can't guide you. It has no curriculum, no memory across sessions, and no mechanism to ensure you remember what it explains. Curo is proactive: it builds your path, checks your understanding before moving on, and returns concepts at the optimal moment before you'd lose them. The difference between someone who answers questions and someone who teaches.
What if I already know some AI basics - will Curo start from scratch?
No. You tell Curo what you already understand and where your gaps are, and it builds from there. If you've done AI for Everyone and understand the broad landscape but get lost when your data scientist starts talking about evaluation metrics, Curo starts at evaluation metrics. You're not re-covering ground you've already covered.