Activity
[contribution calendar, Mar–Feb]

Memberships

STR Ops Vault

38 members • $3,000/y

Community Builders - Elite

245 members • $99/month

AI for CRE Collective

573 members • $49/month

HOTEL Launch

72 members • Free

Automate Business AI

5.3k members • Free

AI Pioneers

8.4k members • Free

Ai Automation Vault

15.5k members • Free

Automate What Academy

2.5k members • Free

Community Lab

561 members • Free

16 contributions to Automate What Academy
🍎 Apple Taps Gemini ✦
I find it fascinating that a company as big as Apple still has not been able to figure out AI! Apple just confirmed it's teaming up with Google to power the next generation of Siri using custom Gemini models, and it sounds like the partnership goes way beyond voice commands. The short version: Apple chose Google's AI as the core tech behind future Apple Intelligence features.

A few things jumped out at me:
• Custom Gemini models powering not just Siri but more Apple Intelligence features across the whole ecosystem
• Potential for major upgrades on iPhone, iPad, Mac, Vision Pro, even Apple Watch
• Apple still keeping user data private inside its own systems
• Google providing the model and cloud tech while Apple controls the experience
• Both companies going public with the deal, which raises the stakes for performance
• A signal that even the biggest companies rely on each other for AI acceleration
• Could push more cross-platform pressure and innovation in consumer-facing AI

I'm curious what you guys think. Does this partnership make you more excited about Apple Intelligence, or more skeptical that Apple needs Google at all?

Read the full article here: https://www.theverge.com/ai-artificial-intelligence/860989/apple-google-gemini-siri-ai-deal-what-it-means
🍎 Apple Taps Gemini ✦
If that doesn’t start to blur lines, I don’t know what does. That’s kind of incredible.
AI 2027
Have you guys heard of these predictions? I made this video with Google NotebookLM to sum them up. Here is the full article: https://ai-2027.com/
AI 2027
3 likes • Aug '25
Yes 10x
Overthinking Hurts AI (Sometimes)
Turns out, giving AI models more time to "think" doesn't actually make them smarter; in fact, it can make them worse. Anthropic just dropped some eye-opening research showing that large language models perform worse on certain tasks when allowed to reason longer, a phenomenon they call "inverse scaling." This is huge because most of the AI industry assumes that throwing more compute at a problem (especially at test time) will improve accuracy and reasoning. But this research shows that more isn't always better; sometimes it just reinforces bad patterns or leads models off track.

One practical takeaway: if you're building automations or tools that rely on complex AI reasoning (and using tools like Claude or GPT), don't just assume longer processing = better performance. Test across different reasoning lengths and keep things concise when possible, especially for simple tasks. A rough sketch of that kind of test is just below.

I'm really curious... have any of you noticed "overthinking AI" behaviors in your projects? Maybe a time where more time or context actually made things worse?

Read the full article here: https://links.tldrnewsletter.com/lwwuXT
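Here's a minimal sketch of what that comparison could look like, assuming the Anthropic Python SDK's extended-thinking option; the model id, thinking budgets, and prompt are placeholders I picked for illustration, not anything from the article.

```python
# Sketch: run the same prompt at several "thinking" budgets and compare answers,
# to check whether longer reasoning actually helps on a given task.
# Assumes the Anthropic Python SDK (pip install anthropic) and an
# extended-thinking-capable model; the model id and budgets are placeholders.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

PROMPT = "Is 1,000,003 a prime number? Answer in one sentence."

for budget in (1024, 4096, 16000):
    response = client.messages.create(
        model="claude-sonnet-4-20250514",  # placeholder model id
        max_tokens=budget + 500,           # must exceed the thinking budget
        thinking={"type": "enabled", "budget_tokens": budget},
        messages=[{"role": "user", "content": PROMPT}],
    )
    # Thinking and final-answer blocks come back separately; keep only the text.
    answer = "".join(b.text for b in response.content if b.type == "text")
    print(f"budget={budget}: {answer.strip()}")
```

Nothing fancy, just enough to eyeball whether the bigger budgets change (or degrade) the answer on your own tasks.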
Overthinking Hurts AI (Sometimes)
1 like • Aug '25
I think that's a really good point, and I rarely hear anybody talk about it. Sometimes I will set things to research deeper and maybe come at it from different angles, but I've also found that you can just prompt it better and get similar results.
New Video – GPT-5 Overview
Check out this 2-minute video of the new GPT-5 model! Excited to start building AI agents with it!
2 likes • Aug '25
I still don't have access to GPT-5, but hopefully soon
OpenAI is finally...open?
OpenAI just dropped two powerful open models that anyone can download, run, and use commercially for free.
- 120B parameter model that runs on a single 80GB GPU
- 20B parameter model that works on 16GB VRAM (can run locally)
- Performance rivals o4-mini on math, MMLU, coding, and health benchmarks
- Uses a Mixture-of-Experts architecture (only ~5B active params per token)
- Supports low, medium, and high reasoning modes
- Logs full chain-of-thought for transparency
- 128K token context window (great for long docs and retrieval tasks)
- Apache 2.0 license lets you use, modify, and monetize freely
- Safety-tested against bio, cyber, and misuse risks

This levels the playing field: real reasoning power is now open to everyone.

Read more: https://openai.com/index/introducing-gpt-oss/
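For anyone who wants to try the smaller one locally, here's a minimal sketch using Hugging Face transformers; the repo id openai/gpt-oss-20b is my assumption about where the weights live, and you'd want a recent transformers release plus roughly 16 GB of VRAM per the post.

```python
# Sketch: load the open-weight 20B model through the transformers text-generation
# pipeline and ask it a question. The repo id below is assumed, not verified here.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="openai/gpt-oss-20b",  # assumed Hugging Face repo id
    torch_dtype="auto",          # let transformers pick an appropriate dtype
    device_map="auto",           # place the model on available GPU memory
)

messages = [
    {"role": "user", "content": "Explain mixture-of-experts models in two sentences."},
]

output = generator(messages, max_new_tokens=200)
# Chat-style input returns the conversation with the assistant's reply appended.
print(output[0]["generated_text"][-1]["content"])
```

If Python isn't your thing, local runners like Ollama or LM Studio usually pick up open releases like this quickly, so checking their model catalogs is another easy route.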
OpenAI is finally...open?
1 like • Aug '25
How should we take advantage of this?
Matthew McCall-Stillman
Level 3
26 points to level up
@matthew-mccall-stillman-3205
⚡️ Airbnb Coach | Broker | Proud Michigan Dad 🔥 Scale in Real Estate with Flips, BRRRRs, Airbnbs & Hotels

Active 1d ago
Joined Nov 15, 2024
ENTJ