
Memberships

La Méthode Verso Supply

62 members ‱ Free

The Build Room

2.5k members ‱ $67/month

AI Money Lab

51.8k members ‱ Free

JUSTANOTHERPM

983 members ‱ Free

3 contributions to JUSTANOTHERPM
AIPMA | Module 1 Activity | Coh 001
Please share a document with the LLM's name, the prompt, and the learning summary of the session. A visual is optional but welcome. Also share in the comments below: how would you define "good quality" in this case, and how would you measure the success of the "Online classes learning summariser" feature?
1 like ‱ 10d
Quality Control (What does “good quality” mean for this summariser?)

1. Zero hallucinations (grounding): We need strict guardrails on grounding. If Sid didn't explicitly say it in the transcript, it doesn't exist in the summary. The model absolutely cannot invent new frameworks or 'best practices' just to fill space. We want a faithful mirror of the session, not a creative writing exercise, so we might have to set a high accuracy bar to start.
2. Signal over noise (data strategy): The model needs to be smart enough to separate the signal from the noise. It should ignore the casual chit-chat, like the weather or Zoom logistics, and zero in on the core mental models like the Deterministic vs. Probabilistic shift. If it captures the fluff but misses the framework, the data strategy failed.
3. Actionable intelligence (the "so what?"): A good summary doesn't just parrot back definitions; it captures the practical implications. We need the output to connect the dots for the user: specifically, how these concepts change their day-to-day work in scoping, measuring, or designing features. If someone reads this and doesn't know what to do differently tomorrow, the tool missed the mark.
4. User-centric formatting (solution design): Let’s treat the summary format as part of our solution design. We need to optimize for scannability: bold headers, clean bullets, and tables for comparisons. A student should be able to refresh their memory on the entire session in under two minutes. If it's not digestible, it's not useful.

Measuring Success: Beyond the Thumbs Up

1. Track "edit distance" (implicit feedback): It isn't enough to check whether the user kept the summary; we should look at how much they changed it. If a user saves the summary without touching it, the model’s 'best guess' was a win. But if they have to rewrite half the text or delete whole sections, our quality is missing the mark. While supporting adjustable output, we want low edit rates; that means we actually saved them time.
2. Look for the "content lifecycle" (action taken): The strongest signal isn't a survey response; it's what users do with the content. If a student actually copies a code snippet or a comparison table out of the chat and into their notes, that’s 'super feedback.' It proves the content was accurate and useful enough to live outside the tool.
3. The "recovery" metric (user agency): We all hate hitting a dead end. We could track how effectively users can fix a bad result. If they use a 'Regenerate' or 'Focus on the 7 Questions' button and then accept the output, that’s a success story. It means our recovery tools work, letting the user steer the AI back on track without rage-quitting.
4. Benchmark against a "Golden Set" (evals): We need a reality check. We could compare the AI's output against a 'Golden Summary': a manual version written by an expert or a pool of higher-performing models. We can rate the AI on a 1–10 scale based on how well it captures the same key insights the expert or pool of models did. This gives us a concrete baseline to adjust our prompts and measure whether we're actually getting better.
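The edit-rate signal can be sketched in a few lines. This is a minimal sketch, assuming we log both the generated summary and the version the user ends up saving; `edit_rate` is a hypothetical helper for illustration, not part of any existing tool:

```python
from difflib import SequenceMatcher

def edit_rate(generated: str, saved: str) -> float:
    """Rough fraction of the generated summary the user changed.

    0.0 means the user kept the summary untouched; values near 1.0
    mean they rewrote most of it. Hypothetical metric, assuming we
    can log both versions of the text.
    """
    similarity = SequenceMatcher(None, generated, saved).ratio()
    return 1.0 - similarity

# A user who keeps the summary as-is produces an edit rate of 0.0;
# small tweaks produce a small edit rate, full rewrites a large one.
print(edit_rate("Key takeaway: ground every claim.",
                "Key takeaway: ground every claim."))
```

In practice you would aggregate this per feature release (e.g. median edit rate per cohort) and watch the trend rather than any single value.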
1 like ‱ 9d
@Sid Arora Thanks Sid, this is really useful 🙌

1. On actionable intelligence, I see the tension you’re pointing at: we want the summary to be useful, but the moment we ask for a “so what,” we risk pushing the model into interpretation instead of reflection, which is probably the hardest constraint to get right. We may need to redefine “actionable” as surfacing implications already stated, not generating new ones. Getting the model to grasp the subtleties of those instructions could prove a real fine-tuning challenge. Dunno how to solve it đŸ€·â€â™‚ïž

2. On success metrics, your questions highlight a real risk of designing things that sound good but are hard to monitor or scale. We could pressure-test each metric for objectivity and automation early. For the Golden Set, I was thinking of a small, stable expert baseline with clearly defined key points, so we compare coverage and grounding rather than subjective writing quality. Again, not sure how đŸ€”

Realising now I went full “classic PM mode” here: I set the what and the why and expected R&D to figure out the how. But in an AI PM world, the data and evals are now on US too 😅 It’s a whole new game for me đŸ€Ł
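For the Golden Set comparison, here is a deliberately naive sketch of a coverage score against a fixed expert baseline. All names here (`coverage_score`, the sample key points) are assumptions for illustration; a real eval would likely use an LLM judge or embedding similarity rather than plain substring matching:

```python
def coverage_score(summary: str, golden_points: list[str]) -> float:
    """Share of golden-set key points the summary mentions.

    Naive keyword check against a small, stable expert baseline.
    Hypothetical helper; a production eval would score semantic
    coverage, not literal substrings.
    """
    text = summary.lower()
    hits = sum(1 for point in golden_points if point.lower() in text)
    return hits / len(golden_points) if golden_points else 0.0

# Hypothetical golden set for one session:
golden = ["deterministic vs. probabilistic", "grounding", "evals"]
summary = "Sid covered grounding and why evals matter."
print(coverage_score(summary, golden))  # captures 2 of the 3 key points
```

Tracking this per session against the same baseline gives an objective, automatable number to watch as prompts change, which is exactly the monitor-and-scale property discussed above.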
Welcome Aboard - Start Here - Introduce Yourself
Hey there, and a warm welcome to our vibrant community. This is where you start the journey towards making your dreams a reality.

This community is not just about product management. It is about sharing your aspirations, your ambitions, and your goals, and then learning the things that will help you achieve them. And the best way to give back to those who helped you along the way is to pay it forward: help others who are in similar situations as you were by guiding them and sharing the lessons you learned on your journey.

So without further ado, let's do this. Let's do it together. Let's meet our professional goals and help others meet theirs.

A short intro to this community. You will find three major sections:
- Community: where you can post and read all the posts on all topics (or filter the ones that interest you most)
- Classroom: where you will find all the courses and challenges. You automatically have access to all the FREE resources and the paid courses you've already bought.
- Events: a calendar of all the upcoming (and past) events, where you can RSVP and get access to sign-up links and recordings.

With that said, enough about the community; let's get to know you a little better. Tell us:
- Where you’re from
- What you do
- What you’re looking to learn or achieve here
- A fun fact about yourself

Excited to grow and learn with you!
0 likes ‱ Dec '25
@Bahar Malekshahi that’s an amazing journey you’ve got yourself into. You’ll see there are quite a few similarities between building a client case and a product case 😉 Would love to hear more about the kind of law you practiced! All the best
1 like ‱ 10d
Hi everyone, I am Manu, currently living near Stavanger, Norway. I am a product manager with experience delivering platforms in the EdTech and EHS industries. I am really excited to be part of the program and look forward to learning from you all.
The resume is usually not the problem...
Hey everyone! Last month, I was mentoring Priya, a PM with 3 years of solid experience at a decent startup. She'd been applying for 2 months straight (47 applications sent) and got exactly 2 phone screens. She was frustrated, confused, and starting to think the market was just impossible.

Then I looked at her resume. Within 10 seconds, I spotted 4 major issues that were killing her chances before any human even saw her application. Her bullets read like job descriptions, not achievements. Her formatting was confusing ATS systems. Her best work was buried on page 2.

We spent 90 minutes rewriting it using a simple framework I've developed, turning generic task lists into sharp, compelling stories that show exactly how she thinks and what impact she drives. Result? 5 interview requests in the next 2 weeks.

The thing is, Priya isn't alone. With layoffs hitting hard and every PM role getting 200+ applications, your resume needs to be bulletproof. Most PMs think their resume is fine, but it's actually sabotaging them at every step.

That's exactly why I created Resume Booster. It's the same framework and system I used with Priya, now packaged into a step-by-step course that's helped 1000+ PMs fix their resumes and land roles at companies like Amazon, Google, Flipkart, Zomato, and dozens of high-growth startups.

Here's what you get:
✅ The exact framework that turns boring bullets into compelling stories
✅ 3 ATS-friendly templates (Junior, Mid, Senior) you can copy-paste
✅ The "Why-What-Impact" formula that makes recruiters stop scrolling
✅ Tailoring strategies that make you look perfect for each role
✅ Outreach templates that actually get responses

There are 3 ways to get access to Resume Booster:
#1 🎯 FREE (first 20 only): Comment below with your biggest struggle with creating a resume that actually gets you shortlisted for PM interviews. First 20 responses get the full course FREE.
#2 💰 $7: Grab it now before the price goes up
0 likes ‱ Jun '25
My biggest challenge is that I have been sending hundreds of applications but don’t get any replies, despite having what I was told is a strong CV. I don’t get it; I never had any problems before, but now I don’t even get a screening call. Desperate to understand what needs to improve, and how.

Manuel Velasco
1 point to level up
@manuel-velasco-2409
On a Product Management discovery journey. What does it mean to be a PM in the age of AI?

Active 4d ago
Joined Apr 18, 2025