MIT Unveils Speech-to-Reality System for Instant Furniture Creation with Robotic Assembly
December 6, 2025
To scale the system and improve its practicality, the team is focusing on stronger, non-magnetic connections to boost weight-bearing capacity, and on voxel-to-assembly pipelines that would let small, distributed mobile robots build larger structures.
The team aims to broaden interaction modes by adding gesture recognition and augmented reality alongside speech, drawing inspiration from sci-fi concepts like the Star Trek replicator to make matter creation faster, more accessible, and sustainable.
The project stemmed from Alexander Htet Kyaw's participation in MIT's How to Make Almost Anything course and has continued at the MIT Center for Bits and Atoms, in collaboration with researchers Se Hwan Jeon and Miana Smith.
MIT researchers have developed a speech-to-reality system that turns spoken prompts into physical objects within minutes using a robotic arm and modular components, enabling on-demand production of stools, chairs, shelves, tables, and decorative pieces.
The technology is designed to democratize design and manufacturing, making it accessible to non-experts and offering faster production than 3D printing by delivering objects in minutes instead of hours or days.
The workflow begins with speech recognition, then uses 3D generative AI to produce a digital mesh of the requested object. The mesh is voxelized into modular components, geometrically processed to ensure fabrication feasibility, and passed to an automated path planner that directs the robotic arm through the assembly.
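The voxelization and assembly-planning steps above can be illustrated with a minimal sketch. This is not the authors' implementation: the function names and the bottom-up ordering heuristic are assumptions chosen for illustration.

```python
# Illustrative sketch: map mesh sample points to occupied voxels, then
# order the voxels bottom-up so each module rests on already-placed ones.

def voxelize(points, voxel_size=1.0):
    """Map 3D points to the set of occupied voxel grid cells."""
    return {tuple(int(c // voxel_size) for c in p) for p in points}

def assembly_order(voxels):
    """Sort voxels by height first (z, then y, then x) for stable placement."""
    return sorted(voxels, key=lambda v: (v[2], v[1], v[0]))

# Example: a 2x2 footprint with one module stacked on top.
points = [(0.2, 0.1, 0.0), (1.4, 0.3, 0.0), (0.1, 1.2, 0.0),
          (1.3, 1.4, 0.0), (0.5, 0.5, 1.2)]
plan = assembly_order(voxelize(points))
# The elevated module (z = 1) is placed last, after its supports.
```

A real pipeline would voxelize a watertight generative-AI mesh and add reachability and collision checks for the arm, but the same occupancy-then-ordering structure applies.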
The work, titled "Speech to Reality: On-Demand Production using Natural Language, 3D Generative AI, and Discrete Robotic Assembly," was presented at the ACM Symposium on Computational Fabrication (SCF '25) at MIT on November 21, with the MIT Center for Bits and Atoms leading the project.
An automated path-planning step guides the robotic arm as it assembles the modular parts, enabling on-demand production without the hours-long build times of 3D printing.
The system emphasizes accessibility for non-experts and waste reduction through modular, reassemblable components, enabling reconfiguration of objects (for example, turning a sofa into a bed).
Looking ahead, the team envisions a prompt like "I want a chair" producing a physical chair within minutes, with speech, gesture control, and AR combined to broaden access to fabrication.
Summary based on 2 sources
Sources

MIT News | Massachusetts Institute of Technology • Dec 5, 2025
MIT researchers “speak objects into existence” using AI and robotics
Mirage News • Dec 6, 2025
MIT AI, Robotics Speak Objects Into Existence