Robbyant Open-Sources LingBot-Depth to Revolutionize Embodied AI with Advanced Spatial Perception

January 27, 2026
  • In a strategic move to democratize advanced spatial perception, Robbyant’s CEO announced the open-sourcing of LingBot-Depth and partnerships with hardware leaders, lowering the barriers to embodied AI across homes, factories, and warehouses.

  • Robbyant, an embodied AI company under Ant Group, released LingBot-Depth, a model for high-precision depth sensing and 3D environment understanding intended to improve robots’ performance in complex real-world settings.

  • LingBot-Depth achieves robustness by leveraging Orbbec’s Gemini 330 stereo cameras and MX6800 depth engine, reconstructing depth data lost under challenging lighting and cutting sensor latency through on-device computation.

  • The model is designed to be compatible with existing sensor hardware, requiring no form-factor changes and thereby easing adoption.

  • Len Zhong of Orbbec emphasized the close coupling between LingBot-Depth and the depth data from Orbbec’s Gemini 330, underscoring that stable, high-fidelity sensor data is foundational.

  • Robbyant trained LingBot-Depth on roughly 10 million raw samples, curating a dataset of 2 million RGB-depth pairs that it plans to open-source to spur broader community innovation.

  • The model tackles depth gaps caused by transparent or reflective surfaces with Masked Depth Modeling, using RGB texture, contours, and scene context to reconstruct the missing information; a minimal illustrative sketch follows this list.

  • Robbyant plans a strategic partnership with Orbbec to embed LingBot-Depth into Orbbec’s next‑generation depth cameras for embodied-intelligence applications.

  • Beyond Orbbec, Robbyant seeks to broaden its ecosystem by sharing spatial perception capabilities with additional hardware partners to enable real-world deployment of intelligent robots in dynamic environments.

  • Key resources include the LingBot-Depth codebase on GitHub, a technical report, and a HuggingFace page detailing the project.

  • Orbbec contributed hardware resources and expertise, with LingBot-Depth co-optimized on Orbbec platforms and validated in Orbbec’s Depth Vision Laboratory.

  • On standard benchmarks such as NYUv2 and ETH3D, LingBot-Depth outperformed major rivals, cutting indoor-scene relative error by over 70% and lowering RMSE by about 47% on sparse Structure-from-Motion tasks (the metrics are defined in the second sketch below).
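
The source describes Masked Depth Modeling only at a high level, so the PyTorch sketch below is a hypothetical illustration of the general pattern rather than Robbyant’s implementation: random patches of the depth map are hidden, mimicking sensor dropout on glass and mirrors, and a network learns to fill them in from the RGB image plus the surviving depth. All names here (DepthCompletionNet, mask_depth_patches) are assumptions for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DepthCompletionNet(nn.Module):
    """Toy encoder-decoder: takes RGB (3 ch) + masked depth (1 ch) +
    validity mask (1 ch) and predicts a dense depth map."""
    def __init__(self, width=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(5, width, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(width, width * 2, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(width * 2, width, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(width, 1, 4, stride=2, padding=1),
        )

    def forward(self, rgb, depth_masked, valid_mask):
        x = torch.cat([rgb, depth_masked, valid_mask], dim=1)
        return self.decoder(self.encoder(x))

def mask_depth_patches(depth, patch=16, drop_ratio=0.5):
    """Zero out random patch-aligned squares of the depth map, imitating
    the holes a depth sensor leaves on transparent/reflective surfaces."""
    b, _, h, w = depth.shape
    keep = (torch.rand(b, 1, h // patch, w // patch,
                       device=depth.device) > drop_ratio).float()
    mask = F.interpolate(keep, size=(h, w), mode="nearest")
    return depth * mask, mask

def training_step(model, optimizer, rgb, depth_gt):
    depth_masked, mask = mask_depth_patches(depth_gt)
    pred = model(rgb, depth_masked, mask)
    # Supervise only the hidden regions, forcing reconstruction from
    # RGB texture, contours, and scene context.
    hole = 1.0 - mask
    loss = (F.l1_loss(pred, depth_gt, reduction="none") * hole).sum() \
           / hole.sum().clamp(min=1.0)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In this setup the validity mask doubles as the model’s hint about where real measurements exist, which is also how such a model would be applied at inference time to genuine sensor holes.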

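The reported figures rest on two standard depth-estimation metrics: absolute relative error (AbsRel), the “relative error” typically cited for dense indoor benchmarks like NYUv2, and root-mean-square error (RMSE), cited here for the sparse Structure-from-Motion setting on ETH3D. The definitions below are the standard ones from the depth literature, shown as a generic NumPy snippet rather than Robbyant’s evaluation code.

```python
import numpy as np

def depth_metrics(pred, gt):
    """Standard depth metrics over pixels with a ground-truth measurement.

    AbsRel = mean(|pred - gt| / gt)    -- the "relative error" for NYUv2
    RMSE   = sqrt(mean((pred - gt)^2)) -- used for sparse ETH3D evaluation
    """
    valid = gt > 0  # sparse ground truth: only measured pixels count
    p, g = pred[valid], gt[valid]
    abs_rel = float(np.mean(np.abs(p - g) / g))
    rmse = float(np.sqrt(np.mean((p - g) ** 2)))
    return {"AbsRel": abs_rel, "RMSE": rmse}
```

On this scale, a cut of over 70% in AbsRel means, for example, dropping from 0.10 to below 0.03.
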
Summary based on 1 source

