Activity

[Contribution heatmap: weekly activity levels, Mar–Feb]

Memberships

- Online Business Friends: 85.6k members • Free
- AI First Designer School: 1.4k members • $249
- JUSTANOTHERPM: 983 members • Free
- AI Agent Automation Agency: 2.7k members • Free
- Software Developer Academy: 26.6k members • Free
- AI Developer Accelerator: 11k members • Free
- University of Code: 5.3k members • Free

6 contributions to JUSTANOTHERPM
Week 4 Activity
Look at a product of your choice and apply the AI PM lens to it.
2 likes • 19d
**Product Name: GitHub Copilot**

**1. What is the real job this product solves?**

Not what the AI does technically, but what the user actually needs. The real job this product solves is speeding up code generation during the implementation phase.

Focusing on engineers: when users move from code design to implementation, they spend a significant amount of time on:
- How to use frameworks (React, Next.js, FastAPI, etc.)
- Language syntax
- Coding best practices
- Algorithm selection and implementation details

Although engineers may understand these concepts when reading documentation, they still need to refer back to the docs constantly while actually writing code. This is one of the most time-consuming parts of coding, and keeping all of these rules in mind while coding quickly is difficult even for senior engineers.

This is where GitHub Copilot comes in. It has internalized common coding patterns, best practices, and algorithms, and assists engineers during implementation, so they can write code significantly faster. In this way, GitHub Copilot improves engineer productivity by accelerating code generation during the coding phase.

**2. How does the system stay grounded?**

GitHub Copilot is a coding assistant trained on a large corpus of publicly available code on GitHub. It is designed primarily for coding-related tasks and does not aim to provide strong grounding guarantees. In practice, grounding is relatively weak: all judgment and validation of the generated code is delegated to the user. The engineer is responsible for deciding whether the output is correct, appropriate, and safe to use.

**3. What context does the model receive?**

The model receives context such as:
- The current code
- The folder / repository structure
- User prompts
- Code patterns learned from GitHub during pre-training

Additionally, when users explicitly describe how they are trying to solve a problem, this information further improves the relevance and quality of the generated code. A rough sketch of how such context might be assembled is below.
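To make point 3 concrete, here is a minimal, hypothetical sketch of how a Copilot-style assistant might bundle its context sources into a single prompt. The `CompletionContext` structure and `build_prompt` helper are illustrative assumptions, not GitHub Copilot's actual internals.

```python
from dataclasses import dataclass, field

# Hypothetical illustration only; not GitHub Copilot's real pipeline.

@dataclass
class CompletionContext:
    current_file: str                  # code around the cursor
    repo_structure: list[str]          # file paths, for project-level hints
    user_prompt: str = ""              # explicit instruction, if any
    neighboring_snippets: list[str] = field(default_factory=list)

def build_prompt(ctx: CompletionContext) -> str:
    """Flatten the gathered context into one prompt string."""
    parts = [
        "Repository layout:\n" + "\n".join(ctx.repo_structure),
        *ctx.neighboring_snippets,
        ctx.user_prompt,
        ctx.current_file,  # the model continues from here
    ]
    return "\n\n".join(p for p in parts if p)

ctx = CompletionContext(
    current_file="def parse_config(path: str):\n    ",
    repo_structure=["app/main.py", "app/config.py", "tests/test_config.py"],
    user_prompt="Parse a YAML config file and return a dict.",
)
print(build_prompt(ctx))
```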
Week 3 Activity: Does it really need AI?
For the idea you thought of in the Week 2 activity, share the following:
Deliverable #1: Share the score on each dimension, with a short description of why you rated it that way.
Deliverable #2: Share the total score (Total Score = add all three).
Deliverable #3: What does your score tell you about your idea?
0 likes • 19d
**Product Link**
https://www.skool.com/justanotherpm/week-1-activity-2-personal-inventory?p=72733c15

**Total Score: 11 / 15**

While this product does not rely on proprietary training data, it has a clear justification for using AI because it focuses on interpretation rather than prediction or automation.

**Score Breakdown**

1. Data Readiness – 1 / 5
At this stage, there is not enough historical or labeled data to train or improve a model, so a score of 1 is appropriate under the standard definition of Data Readiness. However, this product does not assume an AI system that learns from examples to optimize accuracy. Instead, it leverages the LLM's existing knowledge of human cognition, reflection, and learning behavior to interpret user-generated text in real time. User journals are not treated as training data; they function as contextual input for joint interpretation, enabling deeper and more personalized introspection rather than model improvement.

2. Output Type – 5 / 5
The output is inherently subjective and judgment-based, with no single correct answer. The value lies in surfacing patterns, points of tension, and meaningful questions from journal entries. This is an "it depends" type of problem, where interpretation and perspective matter more than deterministic correctness, which aligns strongly with the strengths of LLMs.

3. Error Tolerance – 5 / 5
The AI only provides interpretations and suggestions; it does not execute actions or enforce decisions. All final decisions and actions remain with the user. Even if an interpretation is imperfect, it can be ignored, revised, or reinterpreted, so the risk of causing serious harm is low. Compared to rule-based or autonomous systems, the impact of errors is highly limited.
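Since the rubric simply sums the three dimensions, a trivial sketch of the arithmetic (the dimension names and scores come from the breakdown above; the code itself is only illustrative):

```python
# Week 3 rubric: three dimensions, each scored 1-5.
scores = {
    "data_readiness": 1,   # little historical/labeled data today
    "output_type": 5,      # subjective, judgment-based output
    "error_tolerance": 5,  # suggestions only; user stays in control
}

total = sum(scores.values())
print(f"Total Score: {total} / {len(scores) * 5}")  # -> Total Score: 11 / 15
```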
Week 2 Activity 1: What tech stack does your product need?
Submit your answer here. Keep it simple. Just explain in simple English. Be sure to call out "why" you think you need or don't need a specific aspect in your product. Let's go šŸ‘‡
0 likes • 19d
**Product Link**
https://www.skool.com/justanotherpm/week-1-activity-2-personal-inventory?p=1a246f12

**LLMs**
LLMs are essential for this product. User journals are written in free-form text or voice and carry strong contextual, emotional, and intentional signals, which makes rule-based processing insufficient. Core functionality such as daily summarization, theme extraction, weekly reflection report generation, and the creation of reflective questions all require semantic understanding and flexible interpretation of language. In addition, the product needs to relate daily journal entries to user-defined goals, which means interpreting text as contextual meaning rather than applying fixed rules or keyword matching. This makes an LLM a fundamental building block.

**RAG**
RAG is also an important building block. All generated output is based on the user's own past and present journal data rather than general or external knowledge. To achieve this, the system must retrieve relevant information, such as historical journal entries, recent summaries, and currently defined goals, from a database and inject it into the generation step. This keeps reflections grounded in the user's actual history and current context. Because retrieval and generation are tightly coupled in this workflow, a retrieval-augmented generation architecture fits the product's requirements well.

**Embeddings**
Embeddings play an important role as well. Users often express similar thoughts, emotions, or ideas in different wording across days or weeks. To detect recurring themes and longer-term patterns, the system must capture semantic similarity instead of relying on exact keyword matches. Embeddings enable meaning-based comparison across differently worded journal entries, so related content can be grouped and analyzed even when surface-level expressions vary. This semantic layer is essential for identifying trends and shifts over time. A minimal sketch of how these three pieces could fit together is below.
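Here is a minimal sketch of the embeddings-plus-RAG flow under stated assumptions: `embed()` is a dummy stand-in for a real sentence-embedding model (it returns deterministic random vectors, so it carries no actual semantics), and `generate()` stands in for an LLM completion call. Function names and prompt wording are illustrative, not a committed design.

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    """Placeholder embedding: a deterministic unit vector per text.
    A real system would call a sentence-embedding model here."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.standard_normal(384)
    return v / np.linalg.norm(v)

def generate(prompt: str) -> str:
    """Stand-in for an LLM completion call; echoes the prompt for demo."""
    return prompt

def retrieve(query: str, entries: list[str], k: int = 3) -> list[str]:
    """Return the k entries most similar to the query (dot product of unit vectors)."""
    q = embed(query)
    return sorted(entries, key=lambda e: float(embed(e) @ q), reverse=True)[:k]

def weekly_reflection(entries: list[str], goal: str) -> str:
    """RAG flow: retrieve grounded context from the user's own journals,
    then inject it into the generation step."""
    context = "\n".join(f"- {e}" for e in retrieve(goal, entries))
    prompt = (
        f"Goal: {goal}\n\nRelevant journal entries:\n{context}\n\n"
        "Summarize recurring themes and pose one reflective question."
    )
    return generate(prompt)

entries = [
    "Felt scattered today; too many meetings.",
    "Deep work in the morning went well.",
    "Skipped my goals review; low energy.",
]
print(weekly_reflection(entries, goal="Protect two hours of deep work daily"))
```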
Week 1, Activity 2: Personal Inventory
Submit your problem mapping here. šŸ‘‡
How to Submit
1. Fill out the template from the essay
2. Post your response in the comments below
3. Read at least 2 other people's ideas and leave thoughtful feedback.
Let's think this through. šŸ‘‡
0 likes • 19d
@Sid Arora Based on your advice, we revisited both the **feature set** and the **KPIs**, and simplified the structure.

**On the feature side**, the product is organized as follows:

* **Daily**
  * Text or voice journaling to capture daily thoughts and actions
  * Free-form goal setting, shown alongside daily entries as a reference point
* **Weekly / Monthly**
  * Aggregation of journal entries over time
  * Organization and summarization of recurring themes and trends
  * Visibility into how thinking and behavior patterns change across periods
* **Insights**
  * Lightweight insights focused on repeated themes and patterns
  * No recommendations or prescriptive elements in the initial phase

Overall, the flow is intentionally simple: **daily capture → periodic organization → lightweight pattern visibility**.

**On the KPI side**, we avoid directly scoring growth or success. Instead, we look at minimal proxy signals to confirm two things: whether the product is being used consistently, and whether the output resonates at all.

* Usage-related indicators:
  * Access frequency
  * Journal entry rate
  * Streaks (see the sketch after this list)
  * Weekly reflection open rate
* For AI output specifically:
  * A simple reaction to weekly reflections indicating whether there was a realization or not

The goal here is not to judge awareness or behavior, nor to have the AI determine growth, but to confirm that the reflection experience itself is functioning and sustainable. This structure keeps the MVP lightweight, avoids heavy assumptions around data collection or integrations, and stays close to the reflective-mirror role you described.
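As a concrete illustration of one usage signal, here is a minimal sketch of computing the current journaling streak from entry dates. The function name and data shape are assumptions for illustration only; the real product's data model may differ.

```python
from datetime import date, timedelta

def current_streak(entry_dates: set[date], today: date) -> int:
    """Count consecutive days, ending today, on which the user journaled.

    entry_dates: days that have at least one journal entry (illustrative shape).
    """
    streak = 0
    day = today
    while day in entry_dates:
        streak += 1
        day -= timedelta(days=1)
    return streak

entries = {date(2026, 2, 1), date(2026, 2, 2), date(2026, 2, 3)}
print(current_streak(entries, today=date(2026, 2, 3)))  # -> 3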
0 likes • 19d
Here's a concrete view of the **product functionality we're currently considering**, focusing purely on *what the product does*.

## Product Features (Functionality Only)

### 1. Daily Journaling (Text / Voice)
* Users can freely capture thoughts and actions via text or voice
* No scoring, evaluation, or judgment
* Designed for low-friction, everyday use

### 2. Goal Setting (Optional, Flexible)
* Users can define goals in free text
* Numeric or measurable goals are optional
* Goals can be updated or redefined at any time

### 3. AI-Based Organization & Summarization
* AI summarizes daily journal entries
* Extracts and organizes themes over time
* Focuses on structuring, not advising

### 4. Weekly Reflection Reports
* Automatically generated weekly summaries (shape sketched below)
* Highlights:
  * Recurring themes
  * Patterns in thinking and behavior
  * Differences from previous weeks
* Includes reflective questions (no recommendations)

### 5. Monthly Reflection Summaries
* Aggregation of weekly reflections
* Long-term theme visualization
* Supports optional goal redefinition

### 6. Lightweight User Feedback Signal
* Simple reaction to weekly reports (e.g. "I had a realization")
* No comments or ratings required

### 7. Progressive Insight Depth (Phased)
* Initial focus: capture → summarize → reflect
* Insights remain descriptive
* Recommendations intentionally deferred to later phases
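To pin down feature 4, here is a minimal sketch of what a weekly reflection report could look like as a data structure. Every field name here is an assumption for illustration, not a committed schema.

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative shape for feature 4 (Weekly Reflection Reports);
# all field names are assumptions, not a committed schema.

@dataclass
class WeeklyReflection:
    week_start: str                        # ISO date, e.g. "2026-02-02"
    recurring_themes: list[str]
    behavior_patterns: list[str]
    diffs_from_last_week: list[str]
    reflective_questions: list[str]        # questions only, no advice
    user_reaction: Optional[str] = None    # feature 6: e.g. "had_realization"

report = WeeklyReflection(
    week_start="2026-02-02",
    recurring_themes=["energy dips after context switching"],
    behavior_patterns=["journals more on deadline days"],
    diffs_from_last_week=["fewer entries about meetings"],
    reflective_questions=["What made Tuesday feel different?"],
)
print(report.reflective_questions[0])
```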
Week 1, Activity 1: Spot the Paradox in Real Products
Submit your analysis here. šŸ‘‡
How to Submit
1. Fill out the template from the essay
2. Post your response in the comments below
Then Read & Respond: Once you've submitted, read at least 2 other people's responses and leave thoughtful feedback.
Let's go. šŸ‘‡
2 likes • Dec '25
@Peculiar Ediomo-Abasi Manasa Shetty Thank you šŸ˜€ I'll keep learning.
1 like • Dec '25
@Ajay Krishnan I really appreciated the perspective of treating AI output as a draft rather than a source of truth. That framing helped me see the design decision in a new way.

What this post made me reflect on is that current AI systems are fundamentally probabilistic. They can estimate signals reflected in audio waveforms (such as pitch, tone, and intonation) to some extent, but there is still room for growth in how those signals are interpreted. While we're not quite at that level yet, the fact that we can reasonably expect to get there says a lot about how remarkable the progress of AI has been.

It also resonates with what we already see in areas like sales enablement tools and in NLP more broadly, where modern machine learning approaches are applied to make sense of complex, context-dependent signals. In that context, it feels natural that AI is essential for making these kinds of features work in real products.

Overall, this was a very user-centered way of framing the problem, and it surfaced insights I wouldn't have arrived at on my own. Thanks for sharing.
Masahiro Teramoto (@masahiro-teramoto-8123)
SRE Engineer • Level 2 (10 points to level up)
Active 4d ago • Joined Oct 11, 2025