These MIT hackathon winners built an AI that can control your body
Now this is news ... At MIT Hard Mode 2026, a six-person team built an AI system that can temporarily move your hand for you. Called Human Operator, it combines a vision-language model, voice input, and electrical muscle stimulation – a technology that sends small currents through the skin to contract specific muscles – to physically guide a user’s hand and wrist through unfamiliar movements. The team describes it as a “human augmentation tool” designed to help people learn or perform actions they couldn’t manage on their own. Rather than displaying instructions or providing feedback after the fact, the system intervenes in the moment, nudging the body directly toward the right motion.

Peter He, Ashley Neall, Valdemar Danry, Daniel Kaijzer, Yutong Wu, and Sean Lewis built the project and took first place in the “Learn Track.” The Hard Mode hackathon runs for 48 hours at MIT Media Lab, focused on intelligent physical systems that can sense, adapt, and respond to people in real time.

How the Human Operator works

What the team built is essentially a careful assembly of technologies that already existed, just never quite in this combination. A camera captures what the user sees. Voice input runs through Anthropic’s Claude API, which figures out what motion is needed and maps it to a sequence of muscle commands. Those commands then travel through an Arduino-based hardware stack to EMS electrodes on the wrist and fingers.

In the demo footage, a hand waves back at someone, fingers find the right keys for a melody, and fingers curl into an OK sign. Each motion is guided by the system, which reads the situation and decides it is time to move.

The engineering stack reflects a deliberate choice about where AI sits in the interaction. Most consumer AI systems stop at text, voice, or screen output.
Human Operator goes a layer deeper, into motion itself.

https://www.youtube.com/watch?v=fCLxENGs7CY&t=13s

The system tries to move with the body rather than merely surface instructions, an approach the team frames as helping users “learn and do things you normally cannot do.” Where that leads in terms of physical learning, accessibility, or new kinds of interfaces, however, is an open question.
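The voice-to-muscle pipeline described above can be sketched in a few lines. Everything specific here is an assumption for illustration: the muscle names, the intensity cap, and the serial command format are invented, since the team’s actual firmware protocol is not public. The model’s output is represented as a JSON motion plan that gets translated into Arduino-style serial commands for the EMS electrodes.

```python
import json

# Hypothetical channel map and safety cap; the real hardware layout is not public.
EMS_CHANNELS = {"wrist_flexor": 0, "wrist_extensor": 1, "finger_flexor": 2}
MAX_INTENSITY = 100  # percent of a calibrated per-user maximum

def plan_to_commands(plan_json: str) -> list[str]:
    """Turn a JSON motion plan into serial command strings for the Arduino.

    Each step looks like {"muscle": ..., "intensity": 0-100, "ms": duration}.
    Output lines use an invented format: "S <channel> <intensity> <ms>".
    """
    commands = []
    for step in json.loads(plan_json):
        channel = EMS_CHANNELS[step["muscle"]]
        # Clamp intensity so a malformed model output can't over-stimulate.
        intensity = min(int(step["intensity"]), MAX_INTENSITY)
        commands.append(f"S {channel} {intensity} {int(step['ms'])}")
    return commands

# Example: a plan the language model might emit for a small wrist-and-finger motion.
plan = json.dumps([
    {"muscle": "wrist_flexor", "intensity": 40, "ms": 300},
    {"muscle": "finger_flexor", "intensity": 55, "ms": 200},
])
print(plan_to_commands(plan))  # ['S 0 40 300', 'S 2 55 200']
```

In a real system the command list would be written to a serial port and the Arduino would drive the stimulation channels; the clamping step stands in for the kind of safety layer any EMS controller needs between model output and the body.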