Posted 2026-02-06 · Updated 2026-04-16 · Review · 10 minutes read (About 1443 words)
BAGEL: Unified Multimodal Pretraining
Paper link | Project page
#Research-paper #Multi-modal #VLM #Diffusion #Transformer #MoE #Unified-Multimodal #FoundationModel #Image-generation #Image2Text
Posted 2026-02-03 · Updated 2026-04-16 · Review · 15 minutes read (About 2285 words)
UniDiffuser
Paper link | GitHub
#CV #Research-paper #Multi-modal #Transformer #Image2Text #Diffusion-Model #ImgGen
Posted 2025-03-18 · Updated 2026-04-16 · Note · a few seconds read (About 83 words)
(UVtransE) Contextual Translation Embedding for Visual Relationship Detection and Scene Graph Generation
#CV #Research-paper #Image2Text #Scene-graph #Visual-Relation #Translation-Embedding
Posted 2025-03-18 · Updated 2026-04-16 · Review · a few seconds read (About 7 words)
(VTransE) Visual Translation Embedding Network for Visual Relation Detection
#CV #Research-paper #Image2Text #Scene-graph #Visual-Relation #Translation-Embedding
Posted 2025-03-04 · Updated 2026-04-16 · Review · a minute read (About 154 words)
ALBEF
ALBEF's text backbone is BERT (pretrained with MLM). The authors argue that the image encoder should be larger than the text encoder, so only the first six self-attention layers are used as the text encoder, while the remaining six layers, equipped with cross-attention, serve as the multimodal encoder.
#CV #Research-paper #Image2Text #MultiModal #Contrastive-Learning #VLP #Image-Text
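The 6+6 split described in the ALBEF excerpt can be sketched as follows. This is a minimal illustration, not the official ALBEF code: the layer classes and the `build_albef_text_side` helper are hypothetical stand-ins that only show how a 12-layer BERT stack is divided between the text encoder and the multimodal encoder.

```python
# Hypothetical sketch of ALBEF's text-side layer split:
# the first 6 BERT layers (self-attention only) form the text encoder;
# the remaining 6 layers, augmented with cross-attention over image
# features, form the multimodal encoder.

NUM_BERT_LAYERS = 12
TEXT_LAYERS = 6  # per the excerpt: six self-attention layers for text

class SelfAttnLayer:
    """Placeholder for a BERT layer with self-attention only."""

class CrossAttnLayer:
    """Placeholder for a BERT layer that also cross-attends to
    image-encoder outputs (the multimodal part)."""

def build_albef_text_side(num_layers=NUM_BERT_LAYERS, split=TEXT_LAYERS):
    text_encoder = [SelfAttnLayer() for _ in range(split)]
    multimodal_encoder = [CrossAttnLayer() for _ in range(num_layers - split)]
    return text_encoder, multimodal_encoder
```

The point of the split is parameter budget: reusing the top half of BERT as the fusion module keeps the text side small relative to the ViT image encoder, as the post argues it should be.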
Posted 2025-03-04 · Updated 2026-04-16 · Review · a few seconds read (About 3 words)
ViLT
#CV #Research-paper #Transformer #Image2Text #MultiModal #VLP #Image-Text
Posted 2025-01-06 · Updated 2026-04-16 · Note · a minute read (About 197 words)
CLIP
https://blog.csdn.net/h661975/article/details/135116957
#CV #Research-paper #Image2Text #MultiModal #CLIP #Contrastive-Learning #VLP #Image-Text