[Paper Close Reading] Community-Aware Transformer for Autism Prediction in fMRI Connectome

Paper link: [2307.10181] Community-Aware Transformer for Autism Prediction in fMRI Connectome (arxiv.org)

Paper code: GitHub - ubc-tea/Com-BrainTF: The official Pytorch implementation of paper "Community-Aware Transformer for Autism Prediction in fMRI Connectome" accepted by MICCAI 2023

The English here is typed entirely by hand, summarizing and paraphrasing the original paper. Some spelling and grammar mistakes are hard to avoid; if you spot any, corrections in the comments are welcome! This post is written as personal notes, so read with caution!

1. TL;DR

1.1. Thoughts

(1) Wow, the paper opens by stating that autism is a lifelong condition. I looked it up and it is true: depression can be cured, but autism mostly cannot. Why? Fascinating. I have never actually met anyone with autism.

1.2. Paper Summary Figure

2. Section-by-Section Close Reading

2.1. Abstract

①Treating each ROI equally overlooks the community relationships among them. The authors therefore propose the Com-BrainTF model to learn both local and global representations

②They share parameters across different communities but provide a specific prompt token for each community

2.2. Introduction

①ASD patients show abnormal activity in the default mode network (DMN) and are affected by significant connectivity changes involving the dorsal attention network (DAN) and the DMN

②Com-BrainTF is a hierarchical transformer containing a local transformer that learns community embeddings and a global transformer that aggregates whole-brain information

③Sharing the local transformer parameters across communities avoids over-parameterization

2.3. Method

2.3.1. Overview

(1)Problem Definition

①They adopt the Pearson correlation coefficient to obtain functional connectivity (FC) matrices

②Then the ROIs are divided into communities

③The local-global transformer encoder outputs the learned node embeddings

④Next, the following pooling layer and MLPs predict the labels
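As a concrete illustration of step ① (this is my sketch, not the authors' preprocessing code, and the ROI/community counts are made-up toy values), the FC matrix for one subject can be computed from ROI time series with Pearson correlation:

```python
import numpy as np

def functional_connectivity(timeseries: np.ndarray) -> np.ndarray:
    """Pearson-correlation FC matrix from ROI time series.

    timeseries: (num_rois, num_timepoints) BOLD signal per ROI.
    Returns a symmetric (num_rois, num_rois) matrix with unit diagonal.
    """
    return np.corrcoef(timeseries)

rng = np.random.default_rng(0)
ts = rng.standard_normal((200, 120))   # toy data: 200 ROIs, 120 time points
fc = functional_connectivity(ts)

# Step 2 (toy version): assign each ROI to one of 8 functional communities.
communities = rng.integers(0, 8, size=200)
rois_in_community_0 = np.where(communities == 0)[0]
```

In practice the community assignment would come from a functional atlas rather than random labels as here.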

(2)Overview of our Pipeline

①Their local-global transformer architecture consists of a local transformer, a global transformer, and a pooling layer

②The overall framework is illustrated in the paper's pipeline figure

2.3.2. Local-global transformer encoder

①With the input FC matrix $X \in \mathbb{R}^{N \times N}$, the learned node feature matrix $H$ is calculated by the transformer encoder

②In the transformer encoder module,

$$H = \left( \Vert_{m=1}^{M} h^{m} \right) W^{O}, \qquad h^{m} = \operatorname{softmax}\!\left( \frac{Q^{m} (K^{m})^{\top}}{\sqrt{d_{k}}} \right) V^{m}$$

where $Q^{m} = X W_{Q}^{m}$, $K^{m} = X W_{K}^{m}$, $V^{m} = X W_{V}^{m}$, $\Vert$ denotes concatenation, and $M$ is the number of heads
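The multi-head self-attention above can be sketched in plain NumPy (a minimal illustration with random weights, not the repo's PyTorch module; all names here are mine):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_self_attention(X, num_heads, rng):
    """Vanilla multi-head self-attention over node features X: (N, d)."""
    n, d = X.shape
    dk = d // num_heads
    heads = []
    for _ in range(num_heads):
        Wq, Wk, Wv = (rng.standard_normal((d, dk)) / np.sqrt(d) for _ in range(3))
        Q, K, V = X @ Wq, X @ Wk, X @ Wv
        A = softmax(Q @ K.T / np.sqrt(dk))     # (N, N) attention scores
        heads.append(A @ V)                    # (N, dk) per-head output
    Wo = rng.standard_normal((d, d)) / np.sqrt(d)
    return np.concatenate(heads, axis=1) @ Wo  # concat heads, project: (N, d)

rng = np.random.default_rng(0)
X = rng.standard_normal((8, 16))   # 8 nodes, feature dim 16 (toy sizes)
H = multi_head_self_attention(X, num_heads=4, rng=rng)
```

In the paper the attention scores also carry interpretability weight, since they indicate which ROIs attend to which.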

(1)Local Transformer

①They apply the same local transformer to all input communities, but use a unique learnable prompt token for each community
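A minimal sketch of this weight-sharing idea (names and the single shared weight matrix standing in for the full transformer layer are my assumptions, not the repo's code): one set of encoder parameters serves every community, while each community $k$ prepends its own learnable prompt token to that community's node features.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 16                  # embedding dim (toy value)
num_communities = 3

# One SHARED "encoder": a single weight matrix standing in for the
# full local-transformer layer whose parameters are shared.
W_shared = rng.standard_normal((d, d)) / np.sqrt(d)

# One UNIQUE learnable prompt token per community (not shared).
prompts = rng.standard_normal((num_communities, d))

def local_encode(nodes: np.ndarray, k: int) -> np.ndarray:
    """Prepend community k's prompt token, then apply the shared weights."""
    x = np.vstack([prompts[k][None, :], nodes])  # (1 + n_k, d)
    return x @ W_shared                          # shared parameters for all k

out0 = local_encode(rng.standard_normal((10, d)), k=0)  # community 0: 10 ROIs
out1 = local_encode(rng.standard_normal((7, d)), k=1)   # community 1: 7 ROIs
```

The point of the design is that only the small prompt tokens grow with the number of communities, which is how sharing avoids over-parameterization.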

(2)Global Transformer

①The global operation is:

2.3.3. Graph Readout Layer

①They aggregate node embeddings with the OCRead layer

②The graph-level embedding is calculated by $Z_G = P^{\top} Z$, where $P$ is a learnable assignment matrix computed by the OCRead layer

③Afterwards, the embedding is flattened and fed into an MLP for the final prediction

④Loss: CrossEntropy (CE) loss
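The readout-to-loss stage can be sketched as follows (a hedged illustration: the softmax-normalized assignment, the toy shapes, and the two-layer MLP head are my assumptions; see the OCRead/BNT papers for the exact formulation):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(0)
n, d, k, n_classes = 200, 16, 8, 2        # nodes, dim, clusters, ASD vs HC

Z = rng.standard_normal((n, d))           # node embeddings from the encoder
P = softmax(rng.standard_normal((n, k)), axis=1)  # learnable soft assignment
Z_G = P.T @ Z                             # (k, d) graph-level embedding
flat = Z_G.reshape(-1)                    # flatten for the MLP head

W1 = rng.standard_normal((k * d, 32)) / np.sqrt(k * d)
W2 = rng.standard_normal((32, n_classes)) / np.sqrt(32)
logits = np.maximum(flat @ W1, 0.0) @ W2  # two-layer MLP with ReLU

# Cross-entropy loss against a toy label (class 1 = ASD).
probs = softmax(logits)
loss = -np.log(probs[1])
```

In training, $P$ and the MLP weights would be learned jointly with the transformer by minimizing this CE loss over subjects.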

2.4. Experiments

2.4.1. Datasets and Experimental Settings

(1)ABIDE

(2)Experimental Settings

2.4.2. Quantitative and Qualitative Results

2.4.3. Ablation studies

(1)Input: node features vs. class tokens of local transformers

(2)Output: Cross Entropy loss on the learned node features vs. prompt token

2.5. Conclusion

2.6. Supplementary Materials

2.6.1. Variations on the Number of Prompts

2.6.2. Attention Scores of ASD vs. HC in Comparison between Com-BrainTF (ours) and BNT (baseline)

2.6.3. Decoded Functional Group Differences of ASD vs. HC

3. Supplementary Knowledge

4. Reference List

Bannadabhavi, A. et al. (2023) 'Community-Aware Transformer for Autism Prediction in fMRI Connectome', 26th International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI 2023). doi: https://doi.org/10.48550/arXiv.2307.10181
