
ConceptGraphs = Open-Vocabulary 3D Scene Graphs for Perception and Planning
Clio = Real-time Task-Driven Open-Set 3D Scene Graphs
Contributions:
Points out that different tasks require semantic information at different granularities; the paper realizes this by combining SAM with the [[CLIP多模态预训练模型]]. However, it ignores predicate relations and parent-child relations between objects, and in essence it can still only support the basic operations of navigating, picking up, and putting down (a minimal sketch of the SAM + CLIP pairing follows).
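A minimal sketch, assuming the standard `segment_anything` and OpenAI `clip` packages, of how class-agnostic SAM masks can be paired with CLIP embeddings to get open-vocabulary, per-object semantics. The checkpoint and image paths are placeholders, and cropping the bounding box before encoding is only one of several ways such systems fuse region features.

```python
# Sketch: SAM proposes class-agnostic regions, CLIP embeds each region, and a free-form
# text query is matched against the per-region embeddings (in the spirit of Clio/ConceptGraphs).
import numpy as np
import torch
import clip  # OpenAI CLIP
from PIL import Image
from segment_anything import sam_model_registry, SamAutomaticMaskGenerator

device = "cuda" if torch.cuda.is_available() else "cpu"
clip_model, preprocess = clip.load("ViT-B/32", device=device)
sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h.pth")  # placeholder checkpoint path
mask_generator = SamAutomaticMaskGenerator(sam)

image = np.array(Image.open("frame.png").convert("RGB"))  # placeholder RGB frame
masks = mask_generator.generate(image)                     # class-agnostic region proposals

# One CLIP embedding per SAM region: crop the region's bounding box and encode it.
object_feats = []
for m in masks:
    x, y, w, h = map(int, m["bbox"])
    crop = Image.fromarray(image[y:y + h, x:x + w])
    with torch.no_grad():
        feat = clip_model.encode_image(preprocess(crop).unsqueeze(0).to(device))
    object_feats.append(feat / feat.norm(dim=-1, keepdim=True))
object_feats = torch.cat(object_feats)  # (num_regions, 512)

# Open-vocabulary query: rank regions against a free-form text prompt.
text = clip.tokenize(["a coffee mug"]).to(device)
with torch.no_grad():
    text_feat = clip_model.encode_text(text)
text_feat = text_feat / text_feat.norm(dim=-1, keepdim=True)
best_region = (object_feats @ text_feat.T).squeeze(1).argmax().item()
print("best matching SAM region:", best_region)
```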
By combining [[DINO]] with grounded pre-training, arbitrary objects can be detected from human input such as category names or referring expressions.
Open-Vocabulary Detection
An open-set object detector that can detect arbitrary objects given a free-form text prompt. The model was trained on over 10 million images, including detection data, visual grounding data, and image-text pairs, and it shows strong zero-shot detection performance. However, it needs text as input and can only output boxes paired with the corresponding phrases (a hedged query sketch follows).
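A hedged sketch of querying such a text-prompted open-set detector, using the Grounding DINO checkpoint published on the Hugging Face Hub. The model id, thresholds, and the post-processing call are assumptions and may differ across `transformers` versions.

```python
# Sketch: zero-shot detection from a free-form text prompt (phrases separated by periods).
import torch
from PIL import Image
from transformers import AutoProcessor, AutoModelForZeroShotObjectDetection

model_id = "IDEA-Research/grounding-dino-tiny"        # assumed checkpoint id
processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForZeroShotObjectDetection.from_pretrained(model_id)

image = Image.open("kitchen.jpg")                     # placeholder image
text = "a red mug. a wooden chair."                   # prompt phrases, period-separated

inputs = processor(images=image, text=text, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Convert raw logits/boxes into phrase-labelled boxes in image coordinates.
# Note: newer transformers releases rename `box_threshold` to `threshold`.
results = processor.post_process_grounded_object_detection(
    outputs,
    inputs.input_ids,
    box_threshold=0.35,
    text_threshold=0.25,
    target_sizes=[image.size[::-1]],                  # (height, width)
)[0]
for box, score, label in zip(results["boxes"], results["scores"], results["labels"]):
    print(label, round(score.item(), 3), [round(v) for v in box.tolist()])
```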
What is feature fusion?
- In the multimodal setting, feature fusion refers to techniques that fuse features from different modalities (e.g. vision, text, audio). CLIP can be viewed as a form of middle fusion: alignment happens right after feature extraction.

#### Large-scale grounded pre-training for concept generalization
Reformulating **object detection** as a **phrase grounding task** and introducing **contrastive training** between object regions and language phrases on large-scale data; a toy sketch of this region-phrase alignment follows.
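As a toy illustration of the grounded pre-training objective described above, the sketch below scores region features against phrase embeddings and applies a contrastive (cross-entropy) alignment loss. All shapes, the temperature, and the random features are illustrative and not the actual GLIP/Grounding DINO loss.

```python
# Toy region-phrase contrastive alignment: detection "logits" become similarities between
# region features (image branch) and phrase embeddings (text branch).
import torch
import torch.nn.functional as F

num_regions, num_phrases, dim = 8, 5, 256
region_feats = F.normalize(torch.randn(num_regions, dim), dim=-1)   # from the image branch
phrase_feats = F.normalize(torch.randn(num_phrases, dim), dim=-1)   # from the text branch

# Ground-truth assignment: which phrase each region should match; -1 marks background.
targets = torch.tensor([0, 2, -1, 1, -1, 4, 3, -1])

logits = region_feats @ phrase_feats.T / 0.07    # temperature-scaled similarity matrix
matched = targets >= 0
# Cross-entropy over phrases for matched regions plays the role of the
# region-to-phrase alignment loss used in grounded pre-training.
loss = F.cross_entropy(logits[matched], targets[matched])
print("alignment loss:", loss.item())
```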
OK-Robot = What Really Matters in Integrating Open-Knowledge Models for Robotics
Creating a general-purpose robot has been a longstanding dream of the robotics community.
Current systems aiming at this goal are brittle and closed, and they fail when they encounter unseen situations. Even the largest robot models can typically only be deployed in environments they have seen before [5, 6]. In settings with little robot data, such as unstructured home environments, this brittleness gets even worse.
Large vision models have demonstrated semantic understanding, detection, and the ability to ground visual representations in language, while at the same time basic robot skills such as navigation, grasping, and rearrangement are already fairly mature.
Yet robot systems that simply combine modern vision models with robot-specific primitives perform very poorly.
This is likely because naively composing several uncertain modules causes accuracy to degrade sharply (see the back-of-the-envelope numbers below).
So what is needed is a carefully designed framework that ties VLMs together with robot primitives (navigation, grasping, placing): this is OK-Robot.
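A back-of-the-envelope calculation of the compounding effect; the per-module success rates below are made up for illustration and are not OK-Robot measurements.

```python
# If each stage of a chained pipeline succeeds independently ~90-95% of the time,
# the end-to-end success rate already drops to roughly 70%.
module_success = {"detect": 0.90, "navigate": 0.90, "grasp": 0.90, "place": 0.95}

end_to_end = 1.0
for name, p in module_success.items():
    end_to_end *= p
print(f"end-to-end success ≈ {end_to_end:.2f}")   # ≈ 0.69
```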
"Pick up A (from B) and drop it on/in C", where A is an object and B and C are places in a real-world environment such as a home. A schematic decomposition of this request into primitives is sketched below.
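A hedged sketch of how such a request could be broken down into navigation, picking, and dropping primitives. Every function name here (`locate`, `navigate_to`, `pick`, `drop`, `pick_and_drop`) is a hypothetical placeholder, not OK-Robot's actual API.

```python
# Schematic decomposition of "Pick up A (from B) and drop it on/in C" into primitives.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Pose:
    x: float
    y: float
    theta: float

def locate(query: str) -> Pose:
    """Look up a rough pose for `query` in the open-vocabulary map (stub)."""
    raise NotImplementedError

def navigate_to(pose: Pose) -> None:
    """Drive the base to `pose` (stub for the navigation primitive)."""

def pick(obj_query: str) -> None:
    """Grasp the object described by `obj_query` (stub for the grasping primitive)."""

def drop() -> None:
    """Release the held object at the current location (stub for the dropping primitive)."""

def pick_and_drop(obj_a: str, place_c: str, place_b: Optional[str] = None) -> None:
    """Execute 'Pick up A (from B) and drop it on/in C' as a fixed primitive sequence."""
    navigate_to(locate(f"{obj_a} in {place_b}" if place_b else obj_a))
    pick(obj_a)                    # language-conditioned grasping
    navigate_to(locate(place_c))
    drop()                         # simple heuristic drop at the destination
```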
Responsible for spatial reconstruction, identifying the rough locations of objects, and robot navigation (a hedged sketch of text-queried localization follows).
Methods used:
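A hedged sketch of what "identifying rough object locations" could look like if each voxel in the reconstructed map stores a fused CLIP image feature (in the spirit of a semantic voxel map). The map contents here are random placeholders and the fusion details are assumptions, not the exact OK-Robot implementation; it also gives one possible concrete body for the `locate` stub above.

```python
# Sketch: rank voxels of a semantic 3D map by similarity to a CLIP text embedding;
# the best-scoring voxel gives a rough navigation target for the query.
import torch
import clip

device = "cuda" if torch.cuda.is_available() else "cpu"
model, _ = clip.load("ViT-B/32", device=device)

# Placeholder map: N voxel centres (metres) with one normalized CLIP feature each.
num_voxels = 10_000
voxel_xyz = torch.rand(num_voxels, 3) * 10.0
voxel_feat = torch.nn.functional.normalize(torch.randn(num_voxels, 512), dim=-1).to(device)

def locate(query: str) -> torch.Tensor:
    """Return the XYZ of the voxel whose stored feature best matches the text query."""
    with torch.no_grad():
        t = model.encode_text(clip.tokenize([query]).to(device)).float()
    t = torch.nn.functional.normalize(t, dim=-1)
    scores = voxel_feat.float() @ t.squeeze(0)      # cosine similarity per voxel
    return voxel_xyz[scores.argmax().item()]

print(locate("a blue water bottle"))  # rough 3D target handed to the navigation planner
```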