Activity

Memberships

JUSTANOTHERPM

983 members • Free

6 contributions to JUSTANOTHERPM
AIPMA | Module 1 Activity | Coh 001
Please share a document with the LLM's name, the prompt, and the learning summary of the session. Please include a visual (optional). Also share in the comments below: how would you define "good quality" in this case, and how would you measure the success of the "Online classes learning summariser" feature?
1 like • 9d
Quality: Some facets to test quality are accuracy, completeness, readability, etc. To test these, I'll feed two different responses to an AI tool to compare:
1. What key learning did one response miss? (Completeness)
2. What key learning did one response have that contradicts the other? (Interpretation issues / hallucinations)
3. Rate the responses on complexity of language (not very sure about this, as most models do well on readability, so this could be ignored).
Manual testing is another option: ask a human reader to read the two responses and compare.
Success: Thumbs up, copy, download, and fewer follow-on prompts for revision. Preference for detailed vs. short (quick, actionable insights) summaries is subjective. The nature of the revisions users ask for can reveal that preference; better still, give users the option to select it upfront. End the summary with feedback questions like: "Was this summary helpful? Did I miss anything important?"
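The pairwise-comparison idea above can be sketched in a few lines. This is a minimal sketch, not a definitive implementation: `ask_llm` is a hypothetical stand-in for whatever LLM client you use, and the three probe questions simply mirror the facets named in the comment (completeness, contradictions, readability).

```python
from typing import Callable, Dict

# Probe questions mirroring the three facets from the comment above.
PROBES = {
    "completeness": "What key learnings does one response miss that the other covers?",
    "contradictions": "Which statements in Response A contradict Response B?",
    "readability": "Rate each response 1-5 on simplicity of language.",
}

def compare_summaries(resp_a: str, resp_b: str,
                      ask_llm: Callable[[str], str]) -> Dict[str, str]:
    """Run each probe question against the two candidate summaries.

    `ask_llm` is assumed to take a prompt string and return the model's
    answer as a string.
    """
    results = {}
    for facet, question in PROBES.items():
        prompt = f"{question}\n\nResponse A:\n{resp_a}\n\nResponse B:\n{resp_b}"
        results[facet] = ask_llm(prompt)
    return results
```

A manual reviewer could fill the same rubric by hand, which keeps the human-testing option from the comment on the table.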
Week 2 Activity 1: What tech stack does your product need
Submit your answer here. Keep it simple. Just explain in simple English. Be sure to call out "why" you think you need or don't need a specific aspect in your product. Let's go 👇
0 likes • 16d
@Phil L Great problem to solve, and it's akin to bringing a human touch to AI. The difficult part, I reckon, will be getting the personas of the hiring team.
0 likes • 13d
@Manasa Shetty thanks for the suggestions.
Week 4 Activity
Look at a product of your choice and apply the AI PM lens to it.
1 like • 16d
Week 4: Product: Claude. Job: Create a succinct Jira work item for a feature.
1. What's the real job this solves? Create a ticket and add high-level details of the feature after searching the web for best practices for that feature.
2. How does the system stay grounded? Claude asked the team some questions and clarifications, mostly with yes/no answers, and went through 2-3 rounds of questions based on the answers the team provided. Interestingly, the exact same query on ChatGPT just produced results without asking any questions.
3. What's the context the model receives? Keep it short; add risks, dependencies, and any important notes; be conversational but brief; search the web.
4. What are the failure modes? Claude never claimed that the requirements were perfect. It always ended by asking the team whether they thought anything required adjustment, and it asked very intelligent questions to begin with. By contrast, ChatGPT often claimed the requirements it generated were "excellent" and "production ready".
5. What trade-offs did the team make? The team chose accuracy over speed. Claude's results were sufficiently detailed and covered most scenarios. On follow-on prompting, it also produced very accurate model illustrations that inspired a lot of confidence.
6. What's the role of the UI? The UI was very neat, with all the necessary action buttons: copy, retry, edit. The illustrations generated were very good wireframes of what those screens could look like. It always ended its response with one or more questions about whether the response was helpful, accurate, or needed adjustments.
Week 3 Activity: Does it really need AI
For the idea that you thought of in the Week 2 activity, share the following: Deliverable #1: Share the scores on each dimension and a short description of why you rated it that way. Deliverable #2: Share the total score (Total Score = add all three). Deliverable #3: What does your score tell you about your idea?
0 likes • 16d
I earlier thought about reading invoices accurately, but over the past few days I have advanced the problem statement to building an early-warning system for an invoice-discounting lender, based on the past repayment behaviour of an SMB borrower. When an invoice-discounting company receives a new discounting request from an existing SMB customer, it can go beyond just reading the invoice and also look at the pattern of repayments from this customer and other customers to predict default risk.
1. Data Readiness (1-5): 5, because the discounting company has lots of data on past defaults. This includes detailed logs, part vs. full payments, supplier recommendations, utilisation patterns, delayed-payment behaviour, and a lot more.
2. Output Type (1-5): 3 = mix of rules and judgment-based decisions (hybrid works). There are many simple rules: increasing delays in past repayments, negative feedback from the counterparty, etc. There are also many subjective factors: changes in discounting-request patterns, delayed submission of invoices, industry headwinds, detecting fraudulent invoices.
3. Error Tolerance (1-5): 3 = errors acceptable with human review or correction (human-in-the-loop works). We can have an underwriter review the problems identified by AI and take a call.
Total Score: 11. A strong AI candidate, though at the bottom of the range (11-15).
Week 1, Activity 2: Personal Inventory
Submit your problem mapping here. 👇 How to Submit 1. Fill out the template from the essay 2. Post your response in the comments below 3. Read at least 2 other people's ideas and leave thoughtful feedback. Let's think this through. 👇
0 likes • 16d
@Peculiar Ediomo-Abasi Thanks! Great suggestions. There are many ways in which such capabilities can evolve and be offered.
0 likes • 16d
@Sid Arora Thanks for the feedback, Sid. Will make a note of these 2 uncertainties and think of more.
Akshun Gulati
@akshun-gulati-5248
Fintech co-founder, building next gen loan origination software for banks and NBFCs in India.

Active 13h ago
Joined Dec 31, 2025