Combining accurate geometry with rich semantics has proven highly effective for language-guided robotic manipulation. Existing methods for dynamic scenes either fail to update in real time or rely on additional depth sensors for simple scene editing, limiting their applicability in real-world settings. In this paper, we introduce MSGField, a representation that uses a collection of 2D Gaussians for high-quality reconstruction, further enhanced with attributes that encode semantic and motion information. Specifically, we represent the motion field compactly by decomposing each primitive's motion into a combination of a limited set of motion bases. Leveraging the differentiable real-time rendering of Gaussian splatting, we can quickly optimize object motion, even complex non-rigid motion, with image supervision from only two camera views. Additionally, we design a pipeline that uses object priors to efficiently obtain well-defined semantics. On our challenging dataset, which includes flexible and extremely small objects, our method achieves a success rate of 79.2% in static environments and 63.3% in dynamic environments for language-guided manipulation. For grasping of specified objects, we achieve a success rate of 90%, on par with point cloud-based methods.
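To make the motion-basis idea concrete, below is a minimal sketch of how such a decomposition might look in PyTorch. All names (`weights`, `basis_R`, `basis_t`, `displaced_centers`), shapes, and the choice of rigid transforms as bases are hypothetical illustrations of the decomposition described above, not the paper's actual implementation.

```python
import torch

# Illustrative sketch: each 2D Gaussian primitive moves as a weighted
# combination of K shared motion bases. Shapes and names are assumptions.

N, K = 10000, 16  # number of primitives, number of motion bases

# Per-primitive basis weights (learnable in practice).
weights = torch.softmax(torch.randn(N, K), dim=-1)          # (N, K)

# Each basis is a rigid transform at the current time step:
# a rotation matrix and a translation (both learnable in practice).
basis_R = torch.eye(3).expand(K, 3, 3).clone()              # (K, 3, 3)
basis_t = torch.zeros(K, 3)                                 # (K, 3)

def displaced_centers(mu: torch.Tensor) -> torch.Tensor:
    """Blend the K rigid motions into one motion per primitive.

    mu: (N, 3) Gaussian centers at the canonical time step.
    Returns the centers after applying the blended motion.
    """
    # Transform every center under every basis -> (K, N, 3).
    per_basis = torch.einsum('kij,nj->kni', basis_R, mu) + basis_t[:, None, :]
    # Weighted sum over bases -> (N, 3).
    return torch.einsum('kn,kni->ni', weights.t(), per_basis)
```

Because each primitive's motion is a linear blend of a small set of bases, the field stays compact while still expressing non-rigid deformation, and the basis parameters and weights can be optimized directly through the differentiable rasterizer from image supervision.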
@article{sheng2024msgfield,
title={MSGField: A Unified Scene Representation Integrating Motion, Semantics, and Geometry for Robotic Manipulation},
author={Sheng, Yu and Lin, Runfeng and Wang, Lidian and Qiu, Quecheng and Zhang, YanYong and Zhang, Yu and Hua, Bei and Ji, Jianmin},
journal={arXiv preprint arXiv:2410.15730},
year={2024}
}