
Part-level Dataset Available for Augmentation
Single Instance
Complicated Scene
- Real
- Synthetic
Feature Pyramid Networks for Object Detection
Recognizing objects at different scales is a fundamental challenge in object detection, and feature pyramids have long been a basic component of multi-scale detection. However, feature pyramids are computationally expensive and slow down the whole detector, so most methods avoid them for the sake of speed and predict only from high-level features. High-level features contain rich semantic information, but their low resolution makes it hard to preserve accurate object locations; low-level features, by contrast, carry less semantic information but, thanks to their high resolution, encode object locations precisely. If low-level and high-level features can be fused, the result is a detection system that is accurate at both recognition and localization. This paper therefore aims to design such a structure so that detection is both accurate and fast.
To make the features at every scale semantically rich without making the computation too costly, the authors use a top-down pathway and lateral connections to fuse the low-level, high-resolution, semantically weak features with the high-level, low-resolution, semantically strong features, so that every resulting feature map in the pyramid carries rich semantic information.
The bottom-up pathway is the feed-forward computation of the backbone ConvNet on the input image. Some backbone layers keep the feature-map size unchanged while others shrink it by a factor of 2; the layers whose outputs share the same size are grouped into one stage, and the output of the last layer of each stage is extracted. Taking ResNet as an example, the outputs of the convolutional blocks conv2, conv3, conv4, conv5 are defined as {$C_2, C_3, C_4, C_5$}; these are the outputs of the last residual block of each stage, with sizes of {$\frac{1}{4}, \frac{1}{8}, \frac{1}{16}, \frac{1}{32}$} of the input image, so adjacent feature maps differ in size by a factor of 2.
The top-down pathway upsamples the feature map obtained at a higher level and passes it downwards. Because high-level features contain rich semantic information, propagating them top-down spreads that semantic information to the lower-level features, so the low-level features also become semantically rich. In this paper, the upsampling method is nearest-neighbor upsampling, which enlarges the feature map by a factor of 2. Upsampling enlarges an image by inserting new pixels between the existing ones using a suitable interpolation algorithm; nearest-neighbor interpolation, used here, is the simplest such method and requires no computation: among the four neighbors of the pixel being filled in, the value of the nearest one is copied directly.
Nearest-neighbor interpolation is cheap, but it can introduce discontinuities in the interpolated image's intensity and produce visible jagged artifacts where the intensity changes.
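As an illustration, here is a minimal PyTorch sketch of this merge step (the channel widths and module names are my own assumptions, not taken from the paper): each lateral connection is a 1x1 convolution, the coarser map is upsampled 2x with nearest-neighbor interpolation and added element-wise, and a 3x3 convolution smooths the merged map.

```python
import torch.nn as nn
import torch.nn.functional as F

class FPNTopDown(nn.Module):
    """Sketch of FPN's top-down pathway with lateral connections (channel widths are assumed)."""

    def __init__(self, in_channels=(256, 512, 1024, 2048), out_channels=256):
        super().__init__()
        # 1x1 lateral convs project C2..C5 to a common channel width
        self.lateral = nn.ModuleList([nn.Conv2d(c, out_channels, 1) for c in in_channels])
        # 3x3 convs smooth each merged map into P2..P5
        self.smooth = nn.ModuleList([nn.Conv2d(out_channels, out_channels, 3, padding=1) for _ in in_channels])

    def forward(self, feats):
        # feats = [C2, C3, C4, C5] with strides 4/8/16/32; each finer map is assumed to be exactly 2x larger
        laterals = [lat(f) for lat, f in zip(self.lateral, feats)]
        for i in range(len(laterals) - 1, 0, -1):
            # upsample the coarser map 2x with nearest-neighbor interpolation and add it to the lateral map
            laterals[i - 1] = laterals[i - 1] + F.interpolate(laterals[i], scale_factor=2, mode="nearest")
        return [conv(x) for conv, x in zip(self.smooth, laterals)]  # [P2, P3, P4, P5]
```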
The figure below shows the RPN structure in Faster R-CNN. It takes a single-scale feature map as input, applies a 3x3 convolution, and generates 9 anchors at every position of the feature map (3 sizes, each with 3 aspect ratios); two parallel 1x1 convolution branches then classify and regress the anchors. This is the RPN with a single-scale feature input.
When FPN is combined with RPN, the RPN input becomes multi-scale feature maps, so an RPN head (one 3x3 convolution followed by two 1x1 convolutions) is attached after every level of the pyramid, as shown below, where $P_6$ is obtained by downsampling $P_5$. Formally, we define the anchors to have areas of {$32^2, 64^2, 128^2, 256^2, 512^2$} pixels on {$P_2, P_3, P_4, P_5, P_6$} respectively.
When generating anchors, because the input is already multi-scale, there is no need to use 3 different anchor scales at every level; instead, each level is assigned a single anchor size (the green numbers in the figure give each level's anchor size), while each size still uses 3 aspect ratios, so there are 15 anchors in total. The ground-truth labels of the anchors are defined as in Faster R-CNN: an anchor is a positive sample if it has the highest IoU with a ground-truth box or an IoU above 0.7, and a negative sample if its IoU is below 0.3. Note also that the RPN heads of all levels share parameters.
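A small sketch of the resulting 5 × 3 = 15 anchor shapes (the aspect ratios {1:2, 1:1, 2:1} follow the Faster R-CNN convention; the helper below is illustrative, not the paper's code):

```python
# One anchor area per pyramid level (P2..P6); three aspect ratios shared by every level.
anchor_areas = {"P2": 32 ** 2, "P3": 64 ** 2, "P4": 128 ** 2, "P5": 256 ** 2, "P6": 512 ** 2}
aspect_ratios = (0.5, 1.0, 2.0)  # height / width

def anchor_shapes(area, ratios=aspect_ratios):
    """Return (width, height) pairs of the given area, one per aspect ratio."""
    shapes = []
    for r in ratios:
        w = (area / r) ** 0.5
        h = w * r  # so that w * h == area and h / w == r
        shapes.append((w, h))
    return shapes

# 5 levels x 3 aspect ratios = 15 anchor shapes in total
all_shapes = {level: anchor_shapes(area) for level, area in anchor_areas.items()}
```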
Deformable Convolutional Networks
Used in [[CenterNet]]
pre: https://www.youtube.com/watch?v=HRLMSrxw2To&t=308s
Modeling spatial transformations is a long-standing problem in computer vision.
Traditional approaches:
- Has the same inputs and outputs as a conventional CNN
- Can be trained end-to-end without any extra supervision signal
- Can simply be treated as a plug-and-play module for object detection (see the sketch below)
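A minimal sketch of that plug-and-play usage, built on `torchvision.ops.DeformConv2d` (the block structure, initialization, and names are my own assumptions, not the paper's reference implementation): an ordinary convolution predicts the sampling offsets from the same input, so the whole block is trained with the task loss alone and can replace a regular 3x3 convolution.

```python
import torch.nn as nn
from torchvision.ops import DeformConv2d

class DeformableConvBlock(nn.Module):
    """Sketch: a deformable convolution used as a drop-in replacement for a regular 3x3 conv."""

    def __init__(self, in_ch, out_ch, k=3, padding=1):
        super().__init__()
        # 2 offsets (dy, dx) per kernel sampling location, predicted from the input itself
        self.offset_conv = nn.Conv2d(in_ch, 2 * k * k, kernel_size=k, padding=padding)
        nn.init.zeros_(self.offset_conv.weight)  # zero offsets at init -> behaves like a plain conv
        nn.init.zeros_(self.offset_conv.bias)
        self.deform_conv = DeformConv2d(in_ch, out_ch, kernel_size=k, padding=padding)

    def forward(self, x):
        offsets = self.offset_conv(x)  # learned with the task loss, no extra supervision signal
        return self.deform_conv(x, offsets)
```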
Associative Embedding: End-to-End Learning for Joint Detection and Grouping
What is standard dense supervised learning? Mentioned in [[CenterNet]].
Standard dense supervised learning typically refers to a supervised learning setup where the supervision signal covers every relevant output location rather than only a sparse subset of them.
In contrast to sparse supervision, where only a subset of the input (e.g., bounding boxes, keypoints) is labeled, dense supervision provides full annotations for every relevant part of the input.
Example: in semantic segmentation, every pixel carries a ground-truth class label, so the loss is computed over the entire output map rather than over a sparse set of annotated locations.
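A tiny PyTorch sketch of what dense supervision means in loss terms (shapes and the class count are arbitrary assumptions): the cross-entropy is evaluated at every pixel rather than at a sparse set of annotated locations.

```python
import torch
import torch.nn as nn

# Dense supervision: every output location has a ground-truth label.
# Shapes are arbitrary (batch 2, 21 classes, 64x64 resolution).
logits = torch.randn(2, 21, 64, 64)           # per-pixel class scores from a segmentation head
labels = torch.randint(0, 21, (2, 64, 64))    # one ground-truth class per pixel
loss = nn.CrossEntropyLoss()(logits, labels)  # averaged over all pixels, not a sparse subset
```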
FCSGG (Fully Convolutional Scene Graph Generation) is a PyTorch implementation of the paper “Fully Convolutional Scene Graph Generation” published in CVPR 2021. The project focuses on scene graph generation, which is the task of detecting objects in an image and identifying the relationships between them.
Architecture:
Key Features:
Dataset:
Model Components:
Utilities:
- `fcsgg/`: Main module containing model implementation
- `configs/`: Configuration files for different model variants
- `tools/`: Training, evaluation, and visualization scripts
- `GraphViz/`: Visualization tools for scene graphs
The project implements a fully convolutional approach to scene graph generation, which differs from traditional two-stage methods. Instead of first detecting objects and then predicting relationships, it uses a one-stage detector to simultaneously predict objects and their relationships in a fully convolutional manner.
The repository provides several pre-trained models with different backbones:
These models achieve competitive performance on the Visual Genome dataset for scene graph generation tasks.
The project provides tools for training, evaluation, and visualization of scene graphs. It requires the Visual Genome dataset and can be run using Docker or directly with PyTorch.
In summary, FCSGG is a comprehensive implementation of a state-of-the-art approach to scene graph generation using fully convolutional networks, offering various model architectures and training configurations.
FCSGG is built on top of Detectron2, Facebook’s object detection framework, and leverages many of its components while extending it for scene graph generation. Here’s a detailed breakdown:
Meta Architecture: FCSGG registers a custom meta architecture called “CenterNet” with Detectron2's `META_ARCH_REGISTRY`. This extends Detectron2's modular architecture system while maintaining compatibility.
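A sketch of what that registration looks like (the class body is a placeholder, not FCSGG's actual implementation):

```python
import torch.nn as nn
from detectron2.modeling import META_ARCH_REGISTRY

@META_ARCH_REGISTRY.register()
class CenterNet(nn.Module):
    """Placeholder body; the real class builds the backbone and prediction heads from cfg."""

    def __init__(self, cfg):
        super().__init__()
        ...

    def forward(self, batched_inputs):
        # returns a dict of losses during training, detections/relations during inference
        ...

# The architecture is then selected in the YAML config via MODEL.META_ARCHITECTURE: "CenterNet"
```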
Backbone Networks: FCSGG uses Detectron2’s backbone networks (ResNet, etc.) directly and also implements custom backbones like HRNet while following Detectron2’s backbone interface.
Feature Pyramid Networks (FPN): The repository uses Detectron2’s FPN implementation and extends it with custom variants like BiFPN and HRFPN.
YAML Configuration: FCSGG adopts Detectron2's YAML-based configuration system, extending it with custom configurations for scene graph generation through `add_fcsgg_config()`.
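The usual flow then mirrors Detectron2's standard config handling; the import path of `add_fcsgg_config` below is an assumption on my part:

```python
from detectron2.config import get_cfg
from fcsgg.config import add_fcsgg_config  # assumed import path for the helper named above

cfg = get_cfg()        # Detectron2's default config tree
add_fcsgg_config(cfg)  # add the scene-graph-specific keys
cfg.merge_from_file("configs/quick_schedules/Quick-FCSGG-HRNet-W32.yaml")
cfg.freeze()
```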
Command Line Arguments: The training script uses Detectron2's `default_argument_parser()` to maintain the same command-line interface.
Dataset Registration: The Visual Genome dataset is registered with Detectron2's `DatasetCatalog` and `MetadataCatalog`, making it available through Detectron2's data loading pipeline.
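Registration with Detectron2 generally follows the pattern below; the dataset names, loader function, and category list here are hypothetical, not FCSGG's exact ones:

```python
from detectron2.data import DatasetCatalog, MetadataCatalog

def load_vg_dicts(split):
    """Hypothetical loader returning a list of dicts in Detectron2's standard dataset format."""
    ...

object_classes = ["person", "dog", "surfboard"]  # illustrative subset of Visual Genome categories

for split in ("train", "val"):
    DatasetCatalog.register(f"vg_{split}", lambda split=split: load_vg_dicts(split))
    MetadataCatalog.get(f"vg_{split}").set(thing_classes=object_classes)
```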
Custom Dataset Mapper: FCSGG implements a custom `DatasetMapper` class that extends Detectron2's mapper to handle scene graph annotations.
Data Loaders: The repository uses Detectron2's `build_detection_train_loader` and `build_detection_test_loader` with custom mappers.
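A sketch of building the loaders with a custom mapper; the mapper's import path and the dataset name are assumptions:

```python
from detectron2.config import get_cfg
from detectron2.data import build_detection_train_loader, build_detection_test_loader
from fcsgg.data import DatasetMapper  # assumed import path for the custom mapper

cfg = get_cfg()  # plus add_fcsgg_config(cfg) and cfg.merge_from_file(...) as shown earlier

train_loader = build_detection_train_loader(cfg, mapper=DatasetMapper(cfg, is_train=True))
test_loader = build_detection_test_loader(cfg, "vg_val", mapper=DatasetMapper(cfg, is_train=False))
```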
Trainer Class: FCSGG extends Detectron2's `DefaultTrainer` class to customize the training loop, evaluation metrics, and data loading.
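A sketch of the extension points a `DefaultTrainer` subclass typically overrides; the class name is hypothetical, and a built-in evaluator and the default mapper stand in for FCSGG's custom ones:

```python
from detectron2.data import DatasetMapper, build_detection_train_loader
from detectron2.engine import DefaultTrainer
from detectron2.evaluation import COCOEvaluator  # stand-in; FCSGG returns its VGEvaluator here

class SceneGraphTrainer(DefaultTrainer):  # hypothetical name, for illustration only
    @classmethod
    def build_train_loader(cls, cfg):
        # FCSGG would pass its scene-graph-aware mapper; the default mapper stands in here
        return build_detection_train_loader(cfg, mapper=DatasetMapper(cfg, is_train=True))

    @classmethod
    def build_evaluator(cls, cfg, dataset_name, output_folder=None):
        # FCSGG's version constructs its custom scene-graph evaluator instead
        return COCOEvaluator(dataset_name, output_dir=output_folder)
```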
Checkpointing: The repository uses Detectron2's `DetectionCheckpointer` for model saving and loading.
Distributed Training: FCSGG leverages Detectron2's distributed training utilities through `detectron2.utils.comm` and the `launch` function.
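The standard Detectron2 launch pattern, for reference (this mirrors Detectron2's own `tools/train_net.py`, not FCSGG-specific code):

```python
from detectron2.engine import default_argument_parser, launch

def main(args):
    ...  # build cfg, construct the trainer, call trainer.train()

if __name__ == "__main__":
    args = default_argument_parser().parse_args()
    # spawns one process per GPU and sets up the distributed process group
    launch(
        main,
        args.num_gpus,
        num_machines=args.num_machines,
        machine_rank=args.machine_rank,
        dist_url=args.dist_url,
        args=(args,),
    )
```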
Custom Evaluators: The repository implements a custom `VGEvaluator` for scene graph evaluation while following Detectron2's evaluator interface.
Event Storage: FCSGG uses Detectron2’s event storage system for logging metrics during training.
Visualization Tools: The repository leverages Detectron2’s visualization utilities for debugging and result analysis.
Custom Heads: While using Detectron2’s architecture, FCSGG implements custom prediction heads for relationship detection.
Scene Graph Structures: The repository defines custom data structures for scene graphs that integrate with Detectron2's `Instances` class.
Loss Functions: FCSGG implements specialized loss functions for scene graph generation while maintaining compatibility with Detectron2’s loss computation framework.
Submodule Integration: Detectron2 is included as a Git submodule, ensuring version compatibility.
Build Process: The installation process includes building Detectron2 from source to ensure proper integration.
In summary, FCSGG uses Detectron2 as its foundation, leveraging its modular architecture, data handling, training infrastructure, and configuration system while extending it with custom components for scene graph generation. This approach allows FCSGG to benefit from Detectron2’s robust implementation and optimizations while adding specialized functionality for relationship detection between objects.
Official repo:
https://github.com/liuhengyue/fcsgg
Our repo:
https://github.com/PSGBOT/KAF-Generation
My venv: fcsgg
```bash
git clone git@github.com:liuhengyue/fcsgg.git
```
Datasets:
```bash
cd ~/Reconst
```
Download the scene graphs and extract them to `datasets/vg/VG-SGG-with-attri.h5`.
```
AttributeError: module 'PIL.Image' has no attribute 'LINEAR'. Did you mean: 'BILINEAR'?
```
Fix: replace `LINEAR` with `BILINEAR`: commit
While attempting to train, the following error occurred:
```
File "/home/cyl/Reconst/fcsgg/fcsgg/data/detection_utils.py", line 432, in generate_score_map
```
Fix: modify `detection_utils.py`: commit
First, modify the training config file `./configs/quick_schedules/Quick-FCSGG-HRNet-W32.yaml` (the original file uses pretrained weights):
```yaml
MODEL:
```
Change it to train from scratch:
```yaml
MODEL:
```
Then run:
```bash
python tools/train_net.py --num-gpus 1 --config-file configs/quick_schedules/Quick-FCSGG-HRNet-W32.yaml
```
Training runs successfully ✌
```
...
```
See [[FCSGG Repo Explanation]]
Repository:
official:
```bash
conda env create -f conda.yaml
```
Using the official `conda.yaml` is not recommended; use the modified `conda_cyl.yaml` instead.
```bash
pip install torch==2.0.0 torchvision==0.15.1 torchaudio==2.0.1
```
The official repo provides notebooks for depth estimation and segmentation; worth taking some time to understand them.
The dataset used is ImageNet-mini.
```
imagenet-mini
```
Note: an additional `label.txt` needs to be added.
Use a script to generate the dataset's metadata: