
Memberships

AI Agents Academy • 416 members • Free
AI Automation Agency Hub • 277.3k members • Free
Ai Titus • 843 members • Free
No-Code Nation • 3.1k members • Free
Learn Voice AI • 372 members • Free
Applied AI Academy • 2.4k members • Free
Alessandro's AI Community • 1k members • Free
Brendan's AI Community • 22.2k members • Free
Tinocode • 1.9k members • Free

13 contributions to Ai Titus
Use simple text prompts to accurately separate any sound from any audio or audiovisual source.
SAM Audio separates target and residual sounds from any audio or audiovisual source, across general sound, music, and speech. Very cool! https://ai.meta.com/samaudio/
2 likes • 7d
Thanks @Titus Blair
Google Competing with N8N?
Very interesting, worth checking out: https://workspace.google.com/blog/product-announcements/introducing-google-workspace-studio-agents-for-everyday-work
2 likes • 20d
Google is awesome, as expected. This isn't the first time I've worked on projects using their products, but when you use more than one of them together, it starts to get annoying. Do you find them easy to work with?
2 likes • 20d
@Titus Blair Thanks!👍
🎅🏻 Advent of Agents 2025
25 days to master AI agents with Gemini 3, Google ADK, and production templates. Daily tutorials with copy-paste code. Start here: read the Introduction to Agents white paper, 100% free. 🙌🏻
2 likes • 23d
@Mišel Čupković Thanks, that's great!
Next Big Leap in LLM/AI...
Worth reading and keeping an eye on: Introducing Nested Learning, a new ML paradigm for continual learning.

We introduce Nested Learning, a new approach to machine learning that views models as a set of smaller, nested optimization problems, each with its own internal workflow, in order to mitigate or even completely avoid the issue of "catastrophic forgetting", where learning new tasks sacrifices proficiency on old tasks.

The last decade has seen incredible progress in machine learning (ML), primarily driven by powerful neural network architectures and the algorithms used to train them. However, despite the success of large language models (LLMs), a few fundamental challenges persist, especially around continual learning: the ability of a model to actively acquire new knowledge and skills over time without forgetting old ones.

When it comes to continual learning and self-improvement, the human brain is the gold standard. It adapts through neuroplasticity, the remarkable capacity to change its structure in response to new experiences, memories, and learning. Without this ability, a person is limited to immediate context (as in anterograde amnesia). We see a similar limitation in current LLMs: their knowledge is confined to either the immediate context of their input window or the static information they learn during pre-training.

The simple approach, continually updating a model's parameters with new data, often leads to "catastrophic forgetting" (CF), where learning new tasks sacrifices proficiency on old tasks. Researchers traditionally combat CF through architectural tweaks or better optimization rules. However, for too long we have treated the model's architecture (the network structure) and the optimization algorithm (the training rule) as two separate things, which prevents us from achieving a truly unified, efficient learning system.
2 likes • 23d
The progress is wild. AI went from sounding like a clinical schizophrenic who needed a whole team of PhD psychiatrists… to a mildly autistic, slightly forgetful guy who’s now cleared to roam freely among the general population. Thanks @Titus Blair😉
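The "catastrophic forgetting" the post describes can be seen even in a one-parameter model. Below is a minimal, hypothetical sketch (plain gradient descent, not the Nested Learning method itself): a single scalar weight is fit to task A, then naively trained further on a conflicting task B, and its task-A error blows up.

```python
# Toy illustration of catastrophic forgetting (hypothetical example,
# NOT the Nested Learning algorithm): one scalar weight w models y = w*x.

def mse(w, data):
    """Mean squared error of the linear model y = w*x on a dataset."""
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

def train(w, data, lr=0.05, steps=200):
    """Plain gradient descent on the MSE loss."""
    for _ in range(steps):
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad
    return w

task_a = [(x, 2.0 * x) for x in (1.0, 2.0, 3.0)]   # optimal w = 2
task_b = [(x, -1.0 * x) for x in (1.0, 2.0, 3.0)]  # optimal w = -1

w = train(0.0, task_a)
loss_a_before = mse(w, task_a)   # near zero: task A is learned
w = train(w, task_b)             # naive sequential training on task B
loss_a_after = mse(w, task_a)    # task-A proficiency is gone

print(loss_a_before, loss_a_after)
```

After the second training run the weight sits near task B's optimum, so the task-A loss jumps from roughly zero to around 42: learning B overwrote A. Approaches like continual-learning methods (and, per the post, Nested Learning) aim to avoid exactly this trade-off.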
Rey Bond
Level 3 • 27 points to level up
@rey-bond-1602
Merging human creativity with AI

Active 2h ago
Joined Sep 5, 2025