
2 contributions to JUSTANOTHERPM
AIPMA | Week 2 Activity | Coh 002
This week we have got 5 activities that put everything from Module 2 into practice:
1. Fix the Prompt — Take broken prompts and rewrite them using the 5 Elements framework
2. Diagnose the Failure — Figure out why an AI product is giving bad output (hint: it's almost never the model)
3. Design the Context — Map out all 6 context components for a real product scenario
4. Classify the Approach — Decide whether a feature needs a simple prompt, RAG, an agent, or fine-tuning
5. Write a System Prompt — Write a production-quality system prompt from a product brief, then test it live

This doc has all 5 activities. Here's what to do:
→ Make a copy of the doc
→ Work through the activities
→ Link your completed copy as a comment on this post
1 like • 4d
Attached is my Week 2 activity. I was surprised this took longer than the agent activity, haha.
AI PMA | Activity | Week 1
Please share a document with the LLM's name, the prompt, and the learning summary of the session. Please include a visual as part of the learning summary. (I recommend using Google Nano Banana for it.)

Finally, please share:
- How would you define "good quality" in this case?
- How would you measure the success of the "Online classes learning summariser" feature?
0 likes • 10d
I've attached my Week 1 assignment. Model used: Claude Sonnet 4.6. To the additional questions:

1) Measuring good quality: I would primarily look at the accuracy and usefulness of the summary.
- Accuracy: how much of the summary is grounded in the actual content of the transcript, with zero or near-zero hallucinations.
- Usefulness: judged against the objective of the summary and the course. Does it capture the *key* learning points well enough that, for instance, someone who did not attend the session could grasp the most important takeaways? I believe this should be judged by both the facilitator (Sid) and the students: the facilitator has the deepest knowledge of the session and the points they were trying to make, while the students are best placed to judge how those points actually landed.

2) Measuring success: Partly related to measuring good quality, I would create a scorecard of the dimensions that make up good quality (including additional things like readability/style), ask both experts and the audience to score the summary, and reuse it across different sessions to see how consistent the scores hold up, assuming the course objectives remain unchanged.
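The scorecard idea above could be sketched in code. This is a minimal illustration only: the dimension names beyond accuracy, usefulness, and readability, the 1–5 scale, and the expert-vs-audience weighting are all assumptions, not something specified in the post.

```python
from statistics import mean

# Hypothetical rubric dimensions; the post explicitly names accuracy,
# usefulness, and readability/style.
DIMENSIONS = ["accuracy", "usefulness", "readability"]

def score_summary(expert_ratings, audience_ratings, expert_weight=0.5):
    """Combine expert and audience scorecards into one score per dimension.

    Each ratings argument is a list of dicts mapping dimension -> 1-5 score.
    expert_weight (an assumption, not from the post) controls how much the
    facilitator's view counts versus the students'.
    """
    combined = {}
    for dim in DIMENSIONS:
        expert_avg = mean(r[dim] for r in expert_ratings)
        audience_avg = mean(r[dim] for r in audience_ratings)
        combined[dim] = expert_weight * expert_avg + (1 - expert_weight) * audience_avg
    # Overall score: unweighted mean across dimensions.
    combined["overall"] = mean(combined[dim] for dim in DIMENSIONS)
    return combined

# Example: one facilitator rating and two student ratings.
expert = [{"accuracy": 5, "usefulness": 4, "readability": 4}]
students = [
    {"accuracy": 4, "usefulness": 5, "readability": 3},
    {"accuracy": 4, "usefulness": 4, "readability": 5},
]
print(score_summary(expert, students))
```

Running the same scorecard across several sessions and comparing the per-dimension averages is one way to check how consistently the scores hold up.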
Sheila L (@sheila-l-9698) • PM • Joined Apr 2, 2026