
Image Retrieval using Scene Graphs
Building an efficient structured representation that captures comprehensive semantic knowledge is a crucial step towards a deeper understanding of visual scenes
SGTR+: End-to-end Scene Graph Generation with Transformer
SGTR is a top-down approach: it first uses a Transformer-based generator to produce a set of learnable triplet queries (subject–predicate–object), then uses a cascaded triplet detector to progressively refine these queries and produce the final scene graph. It also proposes an entity-aware relation representation built with a structured generator, which exploits the compositional property of relations.
Top-down approach (SGTR):
- Starts with higher-level structures (triplet queries) and refines them
- Begins by generating complete subject-predicate-object triplet candidates
- Then progressively refines these triplets to match the image content
- Works with the complete structural units from the beginning
- Analogous to starting with a rough sketch of the entire tree and then refining each branch
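The top-down control flow described above can be sketched in plain Python; `init_query`, `refine_step`, and `decode_triplet` are hypothetical placeholders standing in for SGTR's learnable queries, cascaded detector, and prediction heads, not its actual API:

```python
# Control-flow sketch of a top-down (SGTR-style) generator; all helpers are placeholders.
def top_down_sgg(image_features, num_queries, init_query, refine_step, decode_triplet,
                 num_stages=3):
    # Start from a fixed set of learnable triplet queries, each of which will
    # become one complete <subject, predicate, object> candidate.
    queries = [init_query(i) for i in range(num_queries)]
    # Cascaded refinement: every stage updates all queries against the image content.
    for _ in range(num_stages):
        queries = [refine_step(q, image_features) for q in queries]
    # Decode each refined query into a final triplet.
    return [decode_triplet(q) for q in queries]
```

The point of the sketch is the order of operations: complete structural units (triplets) exist from the first step and are only refined, never assembled from detected parts.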
Scene Graph Generation: A Comprehensive Survey
See [[Reconstruct-Anything Literature Review]]
SceneGraphFusion: Incremental 3D Scene Graph Prediction from RGB-D Sequences
Overview of the proposed SceneGraphFusion framework. Our method takes a stream of RGB-D images a) as input to create an incremental geometric segmentation b). Then, the properties of each segment and a neighbor graph between segments are constructed. The properties d) and neighbor graph e) of the segments that have been updated in the current frame c) are used as the inputs to compute node and edge features f) and to predict a 3D scene graph g). Finally, the predictions are h) fused back into a globally consistent 3D graph.
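The incremental loop a)–h) above can be sketched as follows, assuming each segment carries a "dirty" flag when it was touched by the current frame; all helper functions are hypothetical placeholders, not SceneGraphFusion's actual interfaces:

```python
# Sketch of an incremental SceneGraphFusion-style loop; every helper is a placeholder.
def incremental_scene_graph(rgbd_stream, segmenter, build_properties,
                            build_neighbor_graph, predict_graph, fuse):
    global_graph = {}                               # globally consistent 3D graph
    for frame in rgbd_stream:                       # a) RGB-D stream
        segments = segmenter(frame)                 # b) incremental geometric segmentation
        updated = [s for s in segments if s.get("dirty")]  # c) segments updated this frame
        props = build_properties(updated)           # d) per-segment properties
        neighbors = build_neighbor_graph(updated)   # e) neighbor graph between segments
        local = predict_graph(props, neighbors)     # f) node/edge features -> g) scene graph
        global_graph = fuse(global_graph, local)    # h) fuse back into the global graph
    return global_graph
```

Only the updated segments are re-processed each frame, which is what makes the prediction incremental rather than a full re-run per frame.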
Reconstruct Anything Literature Review
Articles covered:
By building a part-level scene graph and combining it with LLM-based reasoning, a robot can carry out richer interactions and thereby accomplish more complex tasks.
Visual scene understanding has long been regarded as the holy grail of computer vision.
Visual scene understanding can be divided into two kinds of tasks:
recognition task
application task
However, these works generally focus on the localization of objects, whereas higher-level tasks emphasize exploring the rich semantic relationships between objects and their interactions with the surrounding environment.
There is also a line of work combining NLP and CV, mainly VLMs.
Perceiving the overall scene and representing its information effectively remain the bottleneck.
Hence Fei-Fei Li's group proposed the scene graph in [[Image Retrieval using Scene Graphs]].
The counterpart of a structured representation is a latent representation.
A scene graph is a structural representation, which can capture detailed semantics by explicitly modeling objects, their attributes, and the relationships between paired objects.
A scene graph is a set of visual relationship triplets in the form of <subject, relation, object> or <object, is, attribute>
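As a minimal illustration of this triplet form (the data here is made up):

```python
from collections import namedtuple

# A scene graph as a set of <subject, relation, object> triplets;
# attributes use the same form, <object, is, attribute>.
Triplet = namedtuple("Triplet", ["subject", "relation", "obj"])

scene_graph = {
    Triplet("man", "riding", "horse"),
    Triplet("man", "wearing", "hat"),
    Triplet("horse", "is", "brown"),   # attribute triplet
}

# The explicit structure makes simple semantic queries trivial,
# e.g. every relation whose subject is "man":
man_edges = {t for t in scene_graph if t.subject == "man"}
```

This explicitness is what distinguishes the structured representation from a latent one: the relations can be enumerated and queried directly rather than decoded from network activations.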
Scene graphs should serve as an objective semantic representation of the state of the scene
Scene graphs have an inherent potential to support and improve other vision tasks.
Vision tasks they can help with include:
The goal of scene graph generation is to parse an image (or a sequence of images) and produce a structured representation, thereby bridging the gap between visual and semantic perception and ultimately reaching a complete understanding of visual scenes.
The essence of the task is detecting visual relationships.
Earlier, Fei-Fei Li's group proposed a visual relationship detection method in [[Visual Relationship Detection with Language Priors]],
as well as Visual Genome, a dataset containing object relationships.
Detects objects first and then solves a classification task to determine the relationship between each pair of objects
**General:** a) obtain subject/object and union box proposals (ROIs, regions of interest) from the image; b) extract features for each region: for objects, appearance, spatial information, label, depth, and mask; for predicates, appearance, spatial information, depth, and mask.
c) These multimodal features are vectorized, combined, and refined, which can be done via:
d) A classifier predicts the predicate category.
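Steps a)–d) can be sketched as one loop over ordered object pairs; `detector`, `extract_features`, `fuse`, and `predicate_classifier` are hypothetical stand-ins, and the union-box computation is deliberately elided:

```python
from itertools import permutations

# Sketch of the generic two-stage SGG pipeline; every helper is a placeholder.
def two_stage_sgg(image, detector, extract_features, fuse, predicate_classifier):
    objects = detector(image)                      # a) object proposals [(box, label), ...]
    triplets = []
    for subj, obj in permutations(objects, 2):     # ordered subject/object pairs
        union_box = ...                            # union of the two boxes (elided here)
        # b) multimodal features per region (appearance, spatial, label, depth, mask)
        feats = [extract_features(image, region) for region in (subj, obj, union_box)]
        joint = fuse(feats)                        # c) vectorize / combine / refine
        predicate = predicate_classifier(joint)    # d) classify the predicate
        triplets.append((subj, predicate, obj))
    return triplets
```

The quadratic loop over pairs is the classic weakness of this design: the second stage must score every ordered pair the detector produces.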
Methods based on visual translation embedding
Simultaneously detects and recognizes objects and relations
Compared with two-stage methods:
These are mostly based on large models such as LLMs or VLMs.
All the work above concerns how to determine the predicate relation between two independent objects (e.g. riding, holding…); none of it addresses part-level relationships. Part-level parent–child relations are very different from object-level predicate relations.
That is, the scene information is stored in a neural network with no explicit structure; a planner (which can be an LLM) obtains information by querying this model.
The core idea is to compose an interactable surrogate scene out of models with richer interaction capabilities.
Mainly used to model kinematic relationships between objects.
Applies scene graphs to robotic task understanding and planning.