Towards Compact, Self-Observant AI: A Speculative Exploration — Just Around the Corner?
I’ve been wondering what might happen if everything below collapsed into one unified AI model with self-awareness, meaning we (or at least the builders) can observe and understand what’s going on inside:
  • Latent spaces (compressed, low-dimensional representations that capture underlying data features)
  • Liquid Neural Networks (LNNs), which I started building myself two years ago, based on MIT’s innovations
  • Tokenformers (which could eventually replace standard transformers)
  • Hybrid models (combining multiple AI techniques for greater robustness and versatility)
  • New lenses on paradigms
  • Some form of self-emergent empathy
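To make one of the ingredients above concrete: a liquid neural network neuron has dynamics whose effective time constant changes with the input, rather than fixed discrete-step weights. Here is a minimal, illustrative sketch of one Euler step of a liquid time-constant (LTC) cell in the spirit of MIT's work; all sizes, weights, and names below are my own illustrative choices, not a reference implementation.

```python
import numpy as np

def ltc_step(x, u, W, W_in, b, tau, A, dt=0.01):
    """One Euler step of a liquid time-constant (LTC) cell.

    x:   hidden state, shape (n,)
    u:   input at this time step, shape (m,)
    tau: base time constants, shape (n,) -- the "liquid" part: the
         effective time constant varies with input through the gate f
    A:   equilibrium vector, shape (n,)
    """
    f = np.tanh(W @ x + W_in @ u + b)   # input-dependent nonlinear gate
    dx = -x / tau + f * (A - x)         # LTC dynamics: state decays toward A, gated by f
    return x + dt * dx

# Tiny usage example with random weights (illustrative only)
rng = np.random.default_rng(0)
n, m = 4, 2
x = np.zeros(n)
W, W_in = rng.normal(size=(n, n)), rng.normal(size=(n, m))
b, tau, A = np.zeros(n), np.ones(n), np.ones(n)
for _ in range(100):
    x = ltc_step(x, np.array([1.0, 0.0]), W, W_in, b, tau, A)
```

Note how few numbers are involved: a full cell is just a handful of small matrices, which is part of why these networks can stay so compact.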
I suspect such a model could be surprisingly compact: potentially only a few million parameters, sized somewhere between 30 and 50 MB, maybe up to 100 MB. Of course, this is a rough estimate, but it’s exciting to imagine. Models that size could run comfortably on phones, and I believe they might even outperform current models from OpenAI and Anthropic.
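The size estimate is easy to sanity-check: model size is roughly parameter count times bytes per parameter. The parameter counts below are illustrative, not a claim about any specific model.

```python
def model_size_mb(params, bytes_per_param):
    """Approximate on-disk model size: parameters x bytes each, in MB."""
    return params * bytes_per_param / 1e6

print(model_size_mb(10_000_000, 4))  # 10M params at fp32 -> 40.0 MB
print(model_size_mb(10_000_000, 2))  # same model at fp16 -> 20.0 MB
print(model_size_mb(25_000_000, 4))  # 25M params at fp32 -> 100.0 MB
```

So a 30 to 50 MB model lands around 8 to 12 million fp32 parameters, and quantization would shrink it further.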
Let’s see how far off I am, and when this actually happens.
This isn’t about reaching the end of prompting, context engineering, red teaming, or agentic AI workers—those will continue evolving. But I believe the next paradigm shifts are just around the corner.
What are your thoughts on this?
P.S. If you’re wondering where the research papers or articles on this are: I don’t have any to share yet. These are my own thoughts, exploring the future possibilities of AI.
Holger Morlok
Digital Roadmap AI Academy
skool.com/digital-income-streams-8409