
Scene Graph Generation: A Comprehensive Survey
SceneGraphFusion: Incremental 3D Scene Graph Prediction from RGB-D Sequences
>Overview of the proposed SceneGraphFusion framework. Our method takes a stream of RGB-D images a) as input to create an incremental geometric segmentation b). Then, the properties of each segment and a neighbor graph between segments are constructed. The properties d) and neighbor graph e) of the segments that have been updated in the current frame c) are used as the inputs to compute node and edge features f) and to predict a 3D scene graph g). Finally, the predictions are h) fused back into a globally consistent 3D graph.
Reconstruct Anything Literature Review
Papers covered:
By building a part-level scene graph and combining it with LLM-based reasoning, a robot can carry out richer interactions and thereby complete more complex tasks.
Visual scene understanding has long been regarded as the holy grail of computer vision.
Visual scene understanding can be divided into two kinds of tasks:
- recognition tasks
- application tasks
However, the works above generally focus on the localization of objects; higher-level tasks emphasize exploring the rich semantic relationships between objects, as well as how objects interact with their surroundings.
Beyond this, there is also the direction of combining NLP and CV, mainly VLMs.
Perceiving the overall scene and representing its information effectively remain the bottleneck.
This is why Fei-Fei Li's group proposed the scene graph in [[Image Retrieval using Scene Graphs]].
Structured representations stand in contrast to latent representations.
A scene graph is a structural representation, which can capture detailed semantics by explicitly modeling objects, the attributes of objects, and the relations between paired objects.
A scene graph is a set of visual relationship triplets in the form of <subject, relation, object> or <object, is, attribute>
Scene graphs should serve as an objective semantic representation of the state of the scene
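To make the triplet view concrete, here is a toy Python sketch of a scene graph as a set of triplets; the objects, relations, and attributes are invented for illustration:

```python
# A toy scene graph as a set of <subject, relation, object> and
# <object, is, attribute> triplets (all names invented for illustration).
scene_graph = {
    ("man", "riding", "horse"),   # <subject, relation, object>
    ("man", "wearing", "hat"),
    ("hat", "is", "red"),         # <object, is, attribute>
}

# Because the representation is structured, it supports direct queries:
relations_of_man = {(rel, obj) for subj, rel, obj in scene_graph if subj == "man"}
print(relations_of_man)  # {('riding', 'horse'), ('wearing', 'hat')}
```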
Scene graphs have an inherent potential to support and improve other vision tasks.
Vision tasks they can help address include:
The goal of scene graph generation is to parse an image, or a sequence of images, and generate a structured representation, so as to bridge the gap between visual and semantic perception and ultimately reach a complete understanding of visual scenes.
The essence of the task is detecting visual relationships.
Visual relationship detection was proposed early on by Fei-Fei Li's group in [[Visual Relationship Detection with Language Priors]], along with Visual Genome, a dataset annotated with object relationships.
Detects objects first and then solves a classification task to determine the relationship between each pair of objects
**General:** a) From the image, obtain subject/object and union box proposals (ROIs, regions of interest). b) Extract features for each region: for objects, the appearance, spatial information, label, depth, and mask; for predicates, the appearance, spatial information, depth, and mask.
c) These multimodal features are vectorized, combined, and refined. This can be done via:
d) A classifier predicts the predicate category. (A schematic sketch of the whole a)-d) pipeline follows below.)
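A schematic, heavily stubbed Python sketch of this two-stage pipeline, purely to show where steps a)-d) sit; the detector, feature extractor, and classifier below are stand-ins, not any paper's actual components:

```python
# Schematic two-stage SGG pipeline; every component here is a stub.
from itertools import permutations
import random

def detect_objects(image):
    """a) Stub detector: (label, box) proposals for subjects/objects."""
    return [("man", (10, 10, 50, 120)), ("horse", (40, 30, 160, 150))]

def union_box(b1, b2):
    """a) Union box over a subject/object pair, used for predicate features."""
    return (min(b1[0], b2[0]), min(b1[1], b2[1]),
            max(b1[2], b2[2]), max(b1[3], b2[3]))

def extract_features(image, box):
    """b) Stub: stands in for appearance/spatial/depth/mask features."""
    return [random.random() for _ in range(8)]

def classify_predicate(subj_feat, obj_feat, union_feat):
    """c)+d) Stub: fuse the multimodal features, then classify the predicate."""
    fused = subj_feat + obj_feat + union_feat  # naive concatenation
    return "riding" if sum(fused) > len(fused) / 2 else "near"

image = object()  # placeholder input image
objects = detect_objects(image)
triplets = []
for (s_lbl, s_box), (o_lbl, o_box) in permutations(objects, 2):
    feats = (extract_features(image, s_box),
             extract_features(image, o_box),
             extract_features(image, union_box(s_box, o_box)))
    triplets.append((s_lbl, classify_predicate(*feats), o_lbl))
print(triplets)  # e.g. [('man', 'riding', 'horse'), ('horse', 'near', 'man')]
```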
Methods based on visual translation embeddings (see the formulation below).
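The representative method here is VTransE; its core formulation (restated here for reference, not taken from these notes) embeds subjects and objects into a common space and models each predicate as a translation vector:

$$\mathbf{v}_s + \mathbf{t}_p \approx \mathbf{v}_o$$

where $\mathbf{v}_s$ and $\mathbf{v}_o$ are the projected subject and object features and $\mathbf{t}_p$ is a learned per-predicate vector; a predicate is scored by how well $\mathbf{v}_o - \mathbf{v}_s$ matches $\mathbf{t}_p$.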
Simultaneously detects and recognizes objects and relations
Compared with two-stage methods:
These are mostly built on large models such as LLMs or VLMs.
All of the works here are about predicting the predicate relation between two separate objects (e.g., riding, holding, ...); none of them involve part-level relationships. Part-level parent-child relations (e.g., <handle, part-of, cup>) are very different from object-level predicate relations (e.g., <person, holding, cup>).
That is, the scene information is stored in a neural network with no explicit structure, and a planner (possibly an LLM) obtains information by querying this model.
The core idea is to compose an interactable substitute scene out of models that support richer interaction.
Mainly used to model the kinematic relationships between objects.
Uses scene graphs for robot task understanding and planning.
DETR is an object detection model that uses a transformer as its basic architecture.
Object queries (learned embeddings; a minimal sketch follows the quote below):
>Visualization of all box predictions on all images from COCO 2017 val set for 20 out of total N = 100 prediction slots in DETR decoder. Each box prediction is represented as a point with the coordinates of its center in the 1-by-1 square normalized by each image size. The points are color-coded so that green color corresponds to small boxes, red to large horizontal boxes and blue to large vertical boxes. We observe that each slot learns to specialize on certain areas and box sizes with several operating modes. We note that almost all slots have a mode of predicting large image-wide boxes that are common in COCO dataset.
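To make "object queries" concrete, here is a minimal PyTorch sketch of the mechanism: N learned query embeddings decoded against encoded image features, with each slot emitting one class/box prediction. This mirrors DETR's idea but is not the official implementation; the dimensions and heads below are illustrative.

```python
# Minimal sketch of DETR-style learned object queries (not the official code).
import torch
import torch.nn as nn

d_model, num_queries, num_classes = 256, 100, 91

# The N = 100 "prediction slots": one learned embedding per query.
object_queries = nn.Embedding(num_queries, d_model)

decoder = nn.TransformerDecoder(
    nn.TransformerDecoderLayer(d_model=d_model, nhead=8, batch_first=True),
    num_layers=6,
)
class_head = nn.Linear(d_model, num_classes + 1)  # +1 for the "no object" class
box_head = nn.Linear(d_model, 4)                  # normalized (cx, cy, w, h)

# Encoded image features from the backbone + transformer encoder
# (random stand-in here), shape: (batch, H*W, d_model).
memory = torch.randn(2, 400, d_model)

# Every image gets the same 100 queries; training specializes each slot,
# which is what the visualization in the quote above shows.
queries = object_queries.weight.unsqueeze(0).expand(2, -1, -1)
hs = decoder(queries, memory)                     # (batch, 100, d_model)
class_logits, boxes = class_head(hs), box_head(hs).sigmoid()
print(class_logits.shape, boxes.shape)  # (2, 100, 92) and (2, 100, 4)
```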