📰 AI News: Tokyo Startup Claims It Built A Brain-Inspired AGI That Teaches Itself
📝 TL;DR
A little-known startup led by a former Google AI veteran says it has built the first AGI-capable system that can learn new skills on its own, without human data or hand-holding. The model is said to mirror how the brain’s neocortex works, but outside experts are extremely skeptical and there is no public proof yet.

🧠 Overview
A company called Integral AI, founded by ex-Google researcher Jad Tarifi, has announced what it calls the first AGI-capable model. The system is designed to learn new skills autonomously in both digital environments and with robots in the physical world, using an architecture explicitly modeled on the layered structure of the human neocortex. The claims are bold, and they land at a moment when big players openly say AGI is still ahead of us, which is why the announcement is being met with a mix of curiosity, side-eye, and memes.

📜 The Announcement
On December 8, 2025, Integral AI publicly claimed it has successfully tested a model that meets its own definition of AGI-capable. The startup says its system can teach itself entirely new tasks in unfamiliar domains, without pre-existing datasets or human intervention, while remaining safe and energy efficient. The founders frame this as a foundational step toward embodied superintelligence and position their architecture as a fundamental leap beyond current large language models. At the same time, there is no peer-reviewed paper, open benchmark, or independent verification yet, so for now this is a marketing claim rather than an accepted scientific milestone.

⚙️ How It Works
• Brain-inspired architecture - Integral says its model grows, abstracts, plans, and acts in a layered way that mirrors the human neocortex, with higher levels building increasingly abstract world models on top of raw sensory data (see the sketch after this list for a rough picture of what such a hierarchy could look like).
• Universal simulators - The first piece is a simulator that learns a unified internal model of different environments from vision, language, audio, and sensor data, then uses that internal model to reason and predict across many domains.
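Integral AI has not released code, papers, or an architecture spec, so nothing below reflects its actual system. Purely as a hypothetical sketch of the general idea in the bullets above, where raw sensory input is compressed through successive layers into increasingly abstract codes that are then used for prediction, a toy version might look like this (every class and variable name is invented for illustration):

```python
# Hypothetical illustration only: a toy "layered world model" in the spirit of the
# neocortex-inspired description above. None of these names come from Integral AI.
import numpy as np

class AbstractionLayer:
    """One level of the hierarchy: compresses its input into a smaller, more abstract code."""
    def __init__(self, in_dim: int, out_dim: int, rng: np.random.Generator):
        self.w = rng.normal(scale=0.1, size=(in_dim, out_dim))

    def encode(self, x: np.ndarray) -> np.ndarray:
        return np.tanh(x @ self.w)  # nonlinear compression to a more abstract representation

class ToyWorldModel:
    """Stack of abstraction layers plus a simple linear predictor at the top."""
    def __init__(self, dims: list[int], seed: int = 0):
        rng = np.random.default_rng(seed)
        self.layers = [AbstractionLayer(a, b, rng) for a, b in zip(dims[:-1], dims[1:])]
        self.predictor = rng.normal(scale=0.1, size=(dims[-1], dims[-1]))

    def abstract(self, observation: np.ndarray) -> np.ndarray:
        h = observation
        for layer in self.layers:  # raw sensory data -> increasingly abstract codes
            h = layer.encode(h)
        return h

    def predict_next(self, observation: np.ndarray) -> np.ndarray:
        return self.abstract(observation) @ self.predictor  # predict the next abstract state

# Usage: a 64-dim "sensor" input compressed through three levels, then used for prediction.
model = ToyWorldModel(dims=[64, 32, 16, 8])
obs = np.random.default_rng(1).normal(size=64)
print(model.predict_next(obs).shape)  # (8,)
```

A real system of the kind Integral describes would learn such representations from multimodal data and add planning and action on top; this toy only shows the layered-abstraction structure itself.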
📰 AI News: OpenAI Says Enterprise AI Is Now Saving Workers An Hour A Day
📝 TL;DR
OpenAI just released its first State of Enterprise AI 2025 report, and the numbers are big. Across nearly 100 enterprises, workers using AI say they save 40 to 60 minutes a day, and 75 percent report better speed or quality in their work.

🧠 Overview
This new report looks at how real companies are actually using tools like ChatGPT Enterprise and the OpenAI API, not just what is theoretically possible. It blends a survey of 9,000 workers with live product usage data to show where AI is driving value and where organizations are getting stuck. For anyone building an AI-powered business or career, it’s a clear signal that AI is moving from experiments to embedded workflows.

📜 The Announcement
On December 8, 2025, OpenAI released The State of Enterprise AI 2025, its first deep dive into how enterprises are adopting AI at scale. The report covers nearly 100 companies and 9,000 workers across industries, focusing on ChatGPT Enterprise and API usage patterns. It finds that 75 percent of workers using AI say it improves the speed or quality of their work, with typical users saving 40 to 60 minutes daily and heavy users gaining more than 10 hours a week.

⚙️ How It Works
• Survey plus usage data - OpenAI combined a survey of 9,000 employees with real ChatGPT Enterprise and API telemetry to understand not just what people think about AI, but what they actually do with it day to day.
• Time and quality impact - Across the sample, 75 percent of workers say AI improves speed or quality, with most saving 40 to 60 minutes per day and heavy users reporting more than 10 hours a week reclaimed from routine tasks (a quick back-of-the-envelope check of those figures follows this list).
• Cross-functional adoption - IT, marketing, product, HR, and engineering teams all report gains, from 87 percent of IT workers seeing faster issue resolution to 73 percent of engineers shipping code faster and 75 percent of HR professionals improving employee engagement.
• Frontier users pulling ahead - The top 5 percent of workers send around 6 times more AI messages than the median, and frontier firms send roughly 2 times more messages per seat, showing that deeper engagement with AI compounds performance gains.
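As a quick sanity check on those headline figures, assuming a standard five-day work week (an assumption the summary above does not spell out), the typical savings come to roughly 3 to 5 hours per week, which puts the heavy-user figure at about twice the top of that range:

```python
# Rough sanity check of the reported time savings (assumes a 5-day work week).
typical_minutes_per_day = (40, 60)   # reported range for typical users
heavy_hours_per_week = 10            # reported floor for heavy users

typical_hours_per_week = tuple(m * 5 / 60 for m in typical_minutes_per_day)
print(f"Typical users: {typical_hours_per_week[0]:.1f}-{typical_hours_per_week[1]:.1f} hours/week")
print(f"Heavy users:   {heavy_hours_per_week}+ hours/week "
      f"(~{heavy_hours_per_week / typical_hours_per_week[1]:.1f}x the top of the typical range)")
```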
📰 AI News: Apple Will Pay Google $1 Billion Annually to Fix Siri—Because Its Own AI Isn't Ready
Apple is finalizing a deal to pay Google roughly $1 billion per year for a custom 1.2 trillion-parameter Gemini model to power the long-promised overhaul of Siri, according to Bloomberg. The agreement, codenamed Project Glenwood, represents Apple's admission that its in-house AI models are not competitive enough to deliver the Siri upgrade promised at WWDC 2024. The revamped Siri is expected to launch in spring 2026 with iOS 26.4, but Apple plans this as a temporary solution while it develops its own 1 trillion-parameter model.

The announcement: On November 5, 2025, Bloomberg reported that Apple is finalizing an agreement to license Google's 1.2 trillion-parameter Gemini AI model for approximately $1 billion annually to power Siri's upcoming overhaul. The custom Gemini model will handle Siri's summarizer and planner functions, the components that synthesize information and execute complex multi-step tasks, while some features will continue using Apple's in-house models. The partnership, overseen by Apple executive Mike Rockwell under Project Glenwood, follows an extensive evaluation period in which Apple also tested models from OpenAI and Anthropic before selecting Google, primarily on cost. The upgraded Siri is targeted for a spring 2026 release alongside iOS 26.4.

What's happening: Google's 1.2 trillion-parameter Gemini model dwarfs Apple's current AI capabilities, which rely on a 150 billion-parameter cloud-based model and a 3 billion-parameter on-device model. The parameter difference, roughly 8x versus Apple's cloud model, represents a fundamental gap in AI capability that Apple cannot close quickly (the arithmetic is worked out below). For context, parameter count is a rough measure of how an AI model understands and responds to queries, with more parameters generally indicating greater capability. The Gemini model will run on Apple's Private Cloud Compute infrastructure, meaning user data will not be shared with Google even though Google provides the underlying AI technology. This architecture lets Apple maintain its privacy-first positioning while leveraging Google's stronger AI capabilities. Apple has emphasized that its Private Cloud Compute servers process AI workloads without exposing user data to third parties.
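The size gap is straightforward to check from the figures Bloomberg reported:

```python
# Parameter counts from the Bloomberg report cited above.
gemini_params = 1.2e12        # custom Gemini model: 1.2 trillion parameters
apple_cloud_params = 150e9    # Apple's cloud-based model: 150 billion parameters
apple_device_params = 3e9     # Apple's on-device model: 3 billion parameters

print(f"Gemini vs Apple cloud:     {gemini_params / apple_cloud_params:.0f}x larger")   # 8x
print(f"Gemini vs Apple on-device: {gemini_params / apple_device_params:.0f}x larger")  # 400x
```

Parameter count is only a rough proxy for capability, as noted above, but the 8x gap against Apple's cloud model (and roughly 400x against the on-device model) is the context for treating the Gemini deal as a stopgap.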
📰 AI News: “We’re creatives, this is what AI has done to our jobs”
📝 TL;DR
AI is already changing creative work at the client level, not just in headlines. Real creatives are seeing their roles, rates, and daily work reshaped by tools like ChatGPT, image generators, and AI video.

🧠 Overview
A recent feature follows a group of creative professionals talking honestly about how AI has changed their jobs, for better and for worse. Some are using AI to work faster and win more business; others are watching clients cut budgets or try to DIY with AI tools instead. Underneath it all is a bigger question: what does a sustainable creative career look like in the age of AI?

📜 The Announcement
The piece shares stories from working creatives who have already had to adapt their day-to-day work because of AI. They describe shifts in client expectations, pricing pressure, and how much of their time is now spent directing or editing AI rather than creating everything from scratch. It is not theory or future predictions; it is a snapshot of what AI is doing to real creative jobs right now.

⚙️ How It Works
• Clients expect “AI speed” - Turnaround times that used to feel fast now seem slow compared to a prompt and a click, so creatives are under pressure to deliver more, quicker.
• More work starts with AI - Briefs that once began with a blank page now often start from an AI draft or AI moodboard that a human then shapes, edits, or rebuilds.
• Prices are being squeezed - When clients see AI output that is “good enough,” some push for lower fees or fewer billable hours for human work.
• Roles are quietly shifting - Creatives are spending more time curating, editing, and art-directing AI output instead of crafting every element by hand.
• New skills are rewarded - Those who can combine taste, strategy, and smart prompting are turning AI into a competitive edge rather than a threat.
• Emotional impact is real - Many report feeling both excited and unsettled, proud of what they can build with AI and worried about long-term job security.
📰 AI News: Meta Signs Big AI Deals With Major News Publishers
📝 TL;DR
Meta just cut multiple AI licensing deals with big-name news publishers so its AI assistant can answer news questions using their content in close to real time. This is a big shift in how news gets distributed and how AI gets its information.

🧠 Overview
Meta has signed a wave of commercial agreements with news organizations including USA Today, People Inc, CNN, Fox News, The Daily Caller, Washington Examiner, and Le Monde. These deals let Meta AI pull in and link to fresh articles when users ask news-related questions. It is another sign that AI assistants are becoming a primary gateway to information, not just a fun add-on.

📜 The Announcement
On December 5, 2025, Meta confirmed it has struck several AI data and content licensing agreements with multiple news publishers. The goal is to feed Meta AI with timely, trusted reporting so it can respond to user questions with current information and links to original stories. Financial terms were not disclosed, but Meta says more partnerships and features are coming as it races to boost engagement with its AI products.

⚙️ How It Works
→ Meta AI plugs into publisher content - When a user asks a news-related question, Meta AI can now pull from the partnered outlets, summarize key points, and surface links back to their articles (a rough sketch of this retrieve-and-cite pattern follows this list). That makes the chatbot feel more like a live news briefing than a static encyclopedia.
→ Real-time style updates for users - Instead of relying only on older training data, Meta can now serve fresher information via these feeds. It is a move to keep its answers relevant as news changes hour by hour.
→ Publishers get paid and amplified - Rather than scraping content and hoping no one complains, Meta is paying for structured access. In return, publishers get licensing revenue and prominent placement inside one of the most widely used consumer AI systems.
→ More deals are likely coming - Meta has signaled this is only the start. Expect more verticals, more regions, and more niche outlets to be added as competition in AI assistants heats up.
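Meta has not published technical details of how the licensed feeds plug into Meta AI, so the snippet below is only a generic, hypothetical illustration of the retrieve-summarize-and-link pattern the announcement describes; the feed, the ranking, and every name in it are invented for the example:

```python
# Hypothetical illustration of a retrieve-then-cite flow; not Meta's actual implementation.
from dataclasses import dataclass

@dataclass
class Article:
    outlet: str
    headline: str
    url: str
    summary: str

# Stand-in for a licensed, regularly refreshed publisher feed.
LICENSED_FEED = [
    Article("Example Outlet", "Example headline about today's news",
            "https://example.com/story", "One-sentence summary of the story."),
]

def retrieve(question: str, feed: list[Article], limit: int = 3) -> list[Article]:
    """Naive keyword match standing in for whatever ranking the real system uses."""
    terms = question.lower().split()
    scored = [(sum(t in (a.headline + a.summary).lower() for t in terms), a) for a in feed]
    return [a for score, a in sorted(scored, key=lambda s: -s[0]) if score > 0][:limit]

def answer_with_links(question: str) -> str:
    """Summarize matching partnered coverage and link back to the original stories."""
    hits = retrieve(question, LICENSED_FEED)
    if not hits:
        return "No partnered coverage found for that question."
    bullets = "\n".join(f"- {a.summary} ({a.outlet}: {a.url})" for a in hits)
    return f"Here's what partnered outlets report:\n{bullets}"

print(answer_with_links("example news headline"))
```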
The AI Advantage
skool.com/the-ai-advantage
Founded by Tony Robbins & Dean Graziosi - AI Advantage is your go-to hub to simplify AI, gain "AI Confidence" and unlock real & repeatable results.