Paper Reading --- REALISE Model

REALISE model:

1. Utilizes multiple encoders to capture the semantic, phonetic, and graphic information of Chinese characters, so that similar characters can be distinguished and spelling errors corrected.

2. Then, a selective modality fusion module combines these signals into context-aware multimodal representations.

3. Finally, the output layer predicts the probabilities of the corrected characters.

Encoders:

Semantic encoder:

BERT, which provides rich contextual word representations through unsupervised pretraining on large corpora.

from transformers import BertTokenizer
tokenizer = BertTokenizer.from_pretrained('bert-base-chinese')

A tokenizer is a text-processing tool that splits text into individual words (called tokens) or other units such as punctuation marks and numbers. In natural language processing, tokenizers are typically used to break sentences into words or subword units for text analysis and machine-learning tasks. Common tokenizers are either rule-based or learned; learned tokenizers can automatically detect word and phrase boundaries and split text into individual tokens.
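As an illustration of what BERT's Chinese tokenizer effectively does (split text into single characters and map them to vocabulary IDs), here is a toy character-level tokenizer; the vocabulary and its IDs are made up for the example:

```python
# Toy sketch of character-level tokenization for Chinese text.
# BERT's Chinese tokenizer splits into single characters; the
# vocabulary IDs below are invented for illustration only.
def char_tokenize(text, vocab):
    tokens = list(text)  # one token per Chinese character
    ids = [vocab.get(tok, vocab["[UNK]"]) for tok in tokens]
    return tokens, ids

vocab = {"[UNK]": 0, "我": 1, "爱": 2, "你": 3}
tokens, ids = char_tokenize("我爱你", vocab)
```

Characters missing from the vocabulary fall back to the `[UNK]` ID, mirroring how real tokenizers handle out-of-vocabulary input.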

Phonetic encoder

Pinyin: initial (21) + final (39) + tone (5)
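The initial/final/tone decomposition can be sketched in plain Python; the `split_pinyin` helper and the tone-digit notation (e.g. "zhong1") are illustrative assumptions, not the paper's code:

```python
# Hedged sketch: split a tone-digit pinyin string such as "zhong1"
# into (initial, final, tone). The 21 standard initials are listed
# below; a syllable with no initial (e.g. "an4") gets "".
INITIALS = sorted(
    ["b", "p", "m", "f", "d", "t", "n", "l", "g", "k", "h",
     "j", "q", "x", "zh", "ch", "sh", "r", "z", "c", "s"],
    key=len, reverse=True)  # try two-letter initials first

def split_pinyin(syllable):
    # trailing digit = tone; no digit = neutral tone (5)
    tone = int(syllable[-1]) if syllable[-1].isdigit() else 5
    body = syllable.rstrip("012345")
    for ini in INITIALS:
        if body.startswith(ini):
            return ini, body[len(ini):], tone
    return "", body, tone  # zero-initial syllable
```

For example, `split_pinyin("zhong1")` yields `("zh", "ong", 1)`.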

Hierarchical phonetic encoder: a character-level encoder and a sentence-level encoder.

Character-level encoder

GRU:

GRU (Gated Recurrent Unit) is a type of recurrent neural network (RNN). Like LSTM (Long Short-Term Memory), it was proposed to address long-term dependencies and the gradient problems of backpropagation.

GRU and LSTM perform comparably in many settings, so why use the newer GRU (proposed in 2014) instead of the more battle-tested LSTM (proposed in 1997)? The authors choose GRU because its empirical performance is similar to LSTM's while it is cheaper to compute.
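A minimal GRU cell written out in NumPy shows the update/reset gating that makes GRU cheaper than LSTM (one fewer gate, no separate cell state); this is an illustrative sketch, not the paper's encoder:

```python
import numpy as np

# Minimal GRU cell (illustrative). z: update gate, r: reset gate.
def gru_cell(x, h_prev, Wz, Uz, Wr, Ur, Wh, Uh):
    sigmoid = lambda a: 1.0 / (1.0 + np.exp(-a))
    z = sigmoid(Wz @ x + Uz @ h_prev)               # update gate
    r = sigmoid(Wr @ x + Ur @ h_prev)               # reset gate
    h_tilde = np.tanh(Wh @ x + Uh @ (r * h_prev))   # candidate state
    return (1 - z) * h_prev + z * h_tilde           # new hidden state
```

The update gate `z` interpolates between the previous hidden state and the candidate, which is how the GRU carries information over long spans without a separate memory cell.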

Sentence-level encoder: obtains the contextualized phonetic representation for each Chinese character.

A 4-layer Transformer with the same hidden size as the semantic encoder.

Because the independent phonetic vectors carry no order information, a positional embedding is added to each vector; the vectors are then packed together and passed through the Transformer layers to compute the contextualized representations in the acoustic modality.
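The "add positional embeddings, then pack into the Transformer" step can be sketched as follows, assuming standard sinusoidal positional embeddings (the paper may instead use learned ones):

```python
import numpy as np

# Sinusoidal positional embeddings (Vaswani et al. style) added to the
# per-character phonetic vectors before the Transformer layers.
# Assumes an even d_model.
def positional_embedding(seq_len, d_model):
    pos = np.arange(seq_len)[:, None]        # (seq_len, 1)
    i = np.arange(d_model // 2)[None, :]     # (1, d_model/2)
    angles = pos / np.power(10000.0, 2 * i / d_model)
    pe = np.zeros((seq_len, d_model))
    pe[:, 0::2] = np.sin(angles)             # even dims: sine
    pe[:, 1::2] = np.cos(angles)             # odd dims: cosine
    return pe

phonetic_vecs = np.random.randn(8, 16)       # 8 characters, hidden size 16
inputs = phonetic_vecs + positional_embedding(8, 16)  # Transformer input
```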

Graphic Encoder

ResNet

Three fonts correspond to the three channels of the character images, whose size is set to 32×32 pixels.
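The three-fonts-as-three-channels input can be sketched as follows; the renders here are random placeholders standing in for actual 32×32 glyph images:

```python
import numpy as np

# One character rendered in three different fonts yields a 3x32x32
# tensor, each font occupying one channel of the ResNet input.
# np.random.rand stands in for an actual glyph-rendering step.
font_renders = [np.random.rand(32, 32) for _ in range(3)]  # one per font
image = np.stack(font_renders, axis=0)  # (channels, height, width)
```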

Selective Modality Fusion Module

Ht, Ha, Hv: the textual, acoustic, and visual representations.

Fuses information from the different modalities.

Selective gate unit: selects how much information from each modality flows into the mixed multimodal representation.

Gate values: computed by a fully-connected layer followed by a sigmoid function.
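A minimal sketch of the selective gate, using a scalar gate per modality for brevity (the paper learns its gates end-to-end, and they may be vector-valued); `selective_fuse` and its weights are illustrative names:

```python
import numpy as np

# Sketch: for each modality m, a gate value
#   g_m = sigmoid(w_m . [h_t; h_a; h_v] + b_m)
# controls how much of that modality enters the mixed representation.
def selective_fuse(h_t, h_a, h_v, W, b):
    sigmoid = lambda a: 1.0 / (1.0 + np.exp(-a))
    concat = np.concatenate([h_t, h_a, h_v])   # [h_t; h_a; h_v]
    g = sigmoid(W @ concat + b)                # one gate per modality
    return g[0] * h_t + g[1] * h_a + g[2] * h_v  # gated mixture
```

With zero weights every gate is 0.5, so the output is an even blend of the three modalities; training moves the gates toward whichever modality is most informative for each character.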

Acoustic and Visual Pretraining

aims to learn the acoustic-textual and visual-textual relationships

Phonetic encoder: input-method pretraining objective.

Graphic encoder: OCR pretraining objective.

Data and Metrics

Data: the SIGHAN benchmarks, converted to Simplified Chinese using the OpenCC tool.

Two levels are used to evaluate the model: the detection level and the correction level.
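The detection/correction distinction can be sketched at the character level: detection only requires flagging the same error positions as the gold sentence, while correction also requires the predicted characters to be right. This is a simplified illustration, not the official SIGHAN scorer:

```python
# Sentence-level detection vs. correction check (simplified sketch).
def evaluate(src, pred, gold):
    err_positions = {i for i, (s, g) in enumerate(zip(src, gold)) if s != g}
    pred_positions = {i for i, (s, p) in enumerate(zip(src, pred)) if s != p}
    detect_ok = pred_positions == err_positions  # flagged the right spots?
    correct_ok = pred == gold                    # and fixed them correctly?
    return detect_ok, correct_ok

# Model flags positions 2-3 and fixes them: detection and correction both pass.
assert evaluate("他在地理", "他在这里", "他在这里") == (True, True)
```

A model that edits the right positions with the wrong characters scores on detection but not on correction.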
