[Paper Close Reading] Community-Aware Transformer for Autism Prediction in fMRI Connectome

Paper: [2307.10181] Community-Aware Transformer for Autism Prediction in fMRI Connectome (arxiv.org)

Code: GitHub - ubc-tea/Com-BrainTF: The official PyTorch implementation of the paper "Community-Aware Transformer for Autism Prediction in fMRI Connectome", accepted by MICCAI 2023

The English here is typed entirely by hand, summarizing and paraphrasing the original paper. Spelling and grammar mistakes are hard to avoid; if you spot any, corrections in the comments are welcome! This post leans toward personal notes, so read with caution!

1. TL;DR

1.1. Thoughts

(1) Wow, the paper opens by calling autism a lifelong disorder. I looked it up and it is true: depression can be cured, but autism largely cannot. Why? Fascinating. I have never actually met anyone with autism.

1.2. Paper Summary Figure

2. Section-by-Section Close Reading

2.1. Abstract

①Treating every ROI equally overlooks the community relationships between them. Thus, the authors propose the Com-BrainTF model to learn both local and global representations

②They share the local transformer parameters across communities but provide a unique prompt token for each community

2.2. Introduction

①ASD patients show abnormalities in the default mode network (DMN) and are affected by significant connectivity changes between the dorsal attention network (DAN) and the DMN

②Com-BrainTF is a hierarchical transformer that contains a local transformer to learn community embeddings and a global transformer to aggregate whole-brain information

③Sharing the local transformer parameters avoids over-parameterization

2.3. Method

2.3.1. Overview

(1)Problem Definition

①They adopt Pearson correlation coefficients to obtain functional connectivity matrices

②Then they divide the ROIs into communities

③The transformers output the learned embeddings

④Next, the following pooling layer and MLPs predict the labels
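Step ① above (Pearson-correlation functional connectivity) can be sketched as follows; the sizes and the toy time series are hypothetical, not the actual ABIDE preprocessing:

```python
import numpy as np

def functional_connectivity(timeseries):
    """Pearson-correlation FC matrix from ROI time series.

    timeseries: (n_rois, n_timepoints) array.
    Returns an (n_rois, n_rois) symmetric matrix with a unit diagonal.
    """
    return np.corrcoef(timeseries)

# Toy example (hypothetical sizes): 4 ROIs, 100 time points
rng = np.random.default_rng(0)
ts = rng.standard_normal((4, 100))
fc = functional_connectivity(ts)
```

Each row of `fc` then serves as the feature vector of one ROI node in the brain graph.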

(2)Overview of our Pipeline

①Their local-global transformer architecture consists of a local transformer, a global transformer, and a pooling layer

②The overall framework is illustrated in the paper's pipeline figure

2.3.2. Local-global transformer encoder

①With the input FC matrix $X$, the learned node feature matrix is calculated by the transformer encoder, $H=\mathrm{TransEncoder}(X)$

②In the transformer encoder module,

$$\mathrm{TransEncoder}(X)=\Big\Vert_{m=1}^{M}\,\mathrm{softmax}\!\left(\frac{Q^{m}(K^{m})^{\top}}{\sqrt{d_{k}}}\right)V^{m}$$

where $Q^{m}=XW_{Q}^{m}$, $K^{m}=XW_{K}^{m}$, $V^{m}=XW_{V}^{m}$, $\Vert$ denotes concatenation over heads, and $M$ is the number of heads
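The multi-head self-attention above can be sketched in NumPy (hypothetical sizes; a single layer, omitting residual connections, the feed-forward sublayer, and the output projection for brevity):

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_self_attention(X, Wq, Wk, Wv):
    """X: (N, d) node features; Wq/Wk/Wv: per-head lists of (d, d_k) weights.
    Returns the concatenation of the M head outputs, shape (N, M*d_k)."""
    heads = []
    for Wq_m, Wk_m, Wv_m in zip(Wq, Wk, Wv):
        Q, K, V = X @ Wq_m, X @ Wk_m, X @ Wv_m
        A = softmax(Q @ K.T / np.sqrt(K.shape[1]))  # (N, N) attention weights
        heads.append(A @ V)
    return np.concatenate(heads, axis=1)

rng = np.random.default_rng(1)
N, d, M, dk = 6, 8, 2, 4  # toy sizes: 6 nodes, 8-dim features, 2 heads
X = rng.standard_normal((N, d))
Wq = [rng.standard_normal((d, dk)) for _ in range(M)]
Wk = [rng.standard_normal((d, dk)) for _ in range(M)]
Wv = [rng.standard_normal((d, dk)) for _ in range(M)]
H = multi_head_self_attention(X, Wq, Wk, Wv)  # (N, M*dk)
```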

(1)Local Transformer

①They apply the same local transformer to every input community, but use a unique learnable prompt token for each community
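This weight-sharing idea can be sketched as follows; a toy sketch, not the authors' implementation — a single shared weight matrix stands in for the shared local transformer, and each community carries its own prompt token:

```python
import numpy as np

rng = np.random.default_rng(2)
d = 8
W_shared = rng.standard_normal((d, d))  # one set of weights, reused by every community

# Hypothetical communities: 10 ROIs and 5 ROIs, each ROI with d-dim features
communities = {0: rng.standard_normal((10, d)),
               1: rng.standard_normal((5, d))}
# ...and a unique (learnable, in the real model) prompt token per community
prompts = {k: rng.standard_normal((1, d)) for k in communities}

outputs = {}
for k, X_k in communities.items():
    inp = np.vstack([prompts[k], X_k])    # prepend the community-specific token
    outputs[k] = np.tanh(inp @ W_shared)  # same shared weights for all communities
```

Because `W_shared` is reused, adding more communities only adds one token each, which is the over-parameterization argument from the Introduction.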

(2)Global Transformer

①The global transformer takes the outputs of the local transformers (the node embeddings together with the learned prompt tokens) and fuses them into a whole-brain representation

2.3.3. Graph Readout Layer

①They aggregate the node embeddings with OCRead.

②The graph-level embedding is calculated by $Z_{G}=P^{\top}Z$, where $P$ is a learnable assignment matrix computed by the OCRead layer

③Afterwards, the graph-level embedding is flattened and fed into an MLP for the final prediction

④Loss: CrossEntropy (CE) loss
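Steps ②–③ can be sketched as soft-assignment pooling followed by flattening; a sketch in the spirit of OCRead with hypothetical shapes, not the official implementation:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def ocread_style_pool(Z, centers):
    """Z: (N, d) node embeddings; centers: (K, d) cluster centers.
    P softly assigns the N nodes to K clusters; the graph-level
    embedding is P^T Z, with shape (K, d)."""
    P = softmax(Z @ centers.T)  # (N, K); learnable in the real model
    return P.T @ Z

rng = np.random.default_rng(3)
Z = rng.standard_normal((200, 16))      # 200 ROIs, 16-dim node embeddings
centers = rng.standard_normal((7, 16))  # K = 7 clusters (random here)
Z_G = ocread_style_pool(Z, centers)     # (7, 16) graph-level embedding
flat = Z_G.reshape(-1)                  # flattened input to the prediction MLP
```

The flattened vector `flat` is what the MLP classifies, trained with the cross-entropy loss above.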

2.4. Experiments

2.4.1. Datasets and Experimental Settings

(1)ABIDE

(2)Experimental Settings

2.4.2. Quantitative and Qualitative Results

2.4.3. Ablation studies

(1)Input: node features vs. class tokens of local transformers

(2)Output: Cross Entropy loss on the learned node features vs. prompt token

2.5. Conclusion

2.6. Supplementary Materials

2.6.1. Variations on the Number of Prompts

2.6.2. Attention Scores of ASD vs. HC in Comparison between Com-BrainTF (ours) and BNT (baseline)

2.6.3. Decoded Functional Group Differences of ASD vs. HC

3. Knowledge Supplement

4. Reference List

Bannadabhavi, A. et al. (2023) 'Community-Aware Transformer for Autism Prediction in fMRI Connectome', 26th International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI 2023), doi: https://doi.org/10.48550/arXiv.2307.10181
