Paper reading --- REALISE model

REALISE model:

1. Utilizes multiple encoders to obtain the semantic, phonetic, and graphic information of Chinese characters, so that similar-looking or similar-sounding characters can be distinguished and spelling errors corrected.

2. Develops a selective modality fusion module to obtain context-aware multimodal representations.

3. Finally, the output layer predicts the probabilities of the corrected characters.

Encoders:

Semantic encoder:

BERT, which provides rich contextual word representations thanks to unsupervised pretraining on large corpora.

# Load the tokenizer matching the Chinese BERT used as the semantic encoder
from transformers import BertTokenizer
tokenizer = BertTokenizer.from_pretrained('bert-base-chinese')

A tokenizer is a text-processing tool that splits text into individual words (called tokens) or other units such as punctuation marks and numbers. In natural language processing, a tokenizer is typically used to break a sentence into words or word pieces for text analysis and machine-learning tasks. Common tokenizers are either rule-based or learned; learned tokenizers can automatically identify word and phrase boundaries and split them into individual tokens.
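As a quick illustration (using the tokenizer loaded above): bert-base-chinese tokenizes Chinese text into individual characters.

print(tokenizer.tokenize('纠正拼写错误'))
# expected output: ['纠', '正', '拼', '写', '错', '误']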

Phonetic encoder:

Pinyin: initials (21) + finals (39) + tones (5)
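For example, the pinyin of 中 (zhōng) decomposes into initial zh, final ong, and tone 1. A quick way to reproduce this decomposition with the third-party pypinyin package (not necessarily what the paper used):

from pypinyin import pinyin, Style

print(pinyin('中', style=Style.INITIALS))  # [['zh']]     -- the initial
print(pinyin('中', style=Style.FINALS))    # [['ong']]    -- the final
print(pinyin('中', style=Style.TONE3))     # [['zhong1']] -- syllable with tone digit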

Hierarchical phonetic encoder: a character-level encoder and a sentence-level encoder.

Character-level encoder

GRU:

GRU (Gated Recurrent Unit) is a type of recurrent neural network (RNN). Like LSTM (Long Short-Term Memory), it was proposed to address long-term memory and the gradient problems of backpropagation through time.

GRU and LSTM perform about the same in many cases, so why use the newcomer GRU (proposed in 2014) rather than the more battle-tested LSTM (proposed in 1997)? The authors chose GRU in their experiments because its empirical performance is similar to LSTM's while being cheaper to compute.
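A minimal sketch of the character-level encoder, assuming a single-layer GRU that reads the pinyin letter sequence of one character and uses the last hidden state as that character's phonetic vector (class name and dimensions are illustrative, not from the paper's code):

import torch
import torch.nn as nn

class CharPhoneticEncoder(nn.Module):
    """Hypothetical character-level phonetic encoder built on a GRU."""
    def __init__(self, num_symbols=30, embed_dim=64, hidden_dim=768):
        super().__init__()
        self.embed = nn.Embedding(num_symbols, embed_dim)  # pinyin letter embeddings
        self.gru = nn.GRU(embed_dim, hidden_dim, batch_first=True)

    def forward(self, letter_ids):        # (batch, seq_len) ids of pinyin letters
        x = self.embed(letter_ids)        # (batch, seq_len, embed_dim)
        _, h_n = self.gru(x)              # h_n: (1, batch, hidden_dim)
        return h_n.squeeze(0)             # one phonetic vector per character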

Sentence-level encoder: obtains the contextualized phonetic representation for each Chinese character.

A 4-layer Transformer with the same hidden size as the semantic encoder.

Because the independent character-level phonetic vectors carry no order information, a positional embedding is added to each vector; the vectors are then packed together and passed through the Transformer layers to compute the contextualized representations in the acoustic modality.
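A rough sketch of this step using PyTorch's built-in Transformer encoder (hyperparameters are illustrative): positional embeddings are added to the per-character phonetic vectors before the 4-layer stack.

import torch
import torch.nn as nn

class SentencePhoneticEncoder(nn.Module):
    """Hypothetical sentence-level encoder: positions + 4 Transformer layers."""
    def __init__(self, hidden_dim=768, n_heads=12, n_layers=4, max_len=512):
        super().__init__()
        self.pos_embed = nn.Embedding(max_len, hidden_dim)
        layer = nn.TransformerEncoderLayer(hidden_dim, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)

    def forward(self, char_vecs):                 # (batch, n_chars, hidden_dim)
        pos = torch.arange(char_vecs.size(1), device=char_vecs.device)
        x = char_vecs + self.pos_embed(pos)       # inject order information
        return self.encoder(x)                    # contextualized acoustic repr.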

Graphic Encoder

ResNet

Three fonts correspond to the three channels of the character images, whose size is set to 32×32 pixels.
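A hedged sketch of such a graphic encoder: a small ResNet-style CNN over 32×32 images whose 3 channels hold the three fonts, projected to the shared hidden size (layer sizes are illustrative, not the paper's exact architecture):

import torch
import torch.nn as nn

class ResBlock(nn.Module):
    """One residual block: two 3x3 convolutions with a skip connection."""
    def __init__(self, ch):
        super().__init__()
        self.conv1 = nn.Conv2d(ch, ch, 3, padding=1)
        self.conv2 = nn.Conv2d(ch, ch, 3, padding=1)
        self.relu = nn.ReLU()

    def forward(self, x):
        return self.relu(x + self.conv2(self.relu(self.conv1(x))))

class GraphicEncoder(nn.Module):
    def __init__(self, hidden_dim=768):
        super().__init__()
        self.stem = nn.Conv2d(3, 64, 3, stride=2, padding=1)  # 32x32 -> 16x16
        self.block = ResBlock(64)
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.proj = nn.Linear(64, hidden_dim)

    def forward(self, imgs):              # (batch, 3, 32, 32): one channel per font
        h = self.pool(self.block(self.stem(imgs))).flatten(1)
        return self.proj(h)               # one graphic vector per character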

Selective Modality Fusion Module

Ht, Ha, Hv = the textual, acoustic, and visual representations.

Fuses information from different modalities.

Selective gate unit: selects how much information from each modality flows into the mixed multimodal representation.

Gate values: computed by a fully-connected layer followed by a sigmoid function.
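Putting this together, a minimal sketch of the fusion, assuming each gate sees the concatenation of all three modality vectors (the exact gating inputs may differ from the paper):

import torch
import torch.nn as nn

class SelectiveFusion(nn.Module):
    """Sketch: per-modality gates (linear + sigmoid) weight each representation."""
    def __init__(self, hidden_dim=768):
        super().__init__()
        self.gate_t = nn.Linear(3 * hidden_dim, hidden_dim)  # textual gate
        self.gate_a = nn.Linear(3 * hidden_dim, hidden_dim)  # acoustic gate
        self.gate_v = nn.Linear(3 * hidden_dim, hidden_dim)  # visual gate

    def forward(self, ht, ha, hv):             # (batch, n_chars, hidden_dim) each
        ctx = torch.cat([ht, ha, hv], dim=-1)  # shared context for the gates
        gt = torch.sigmoid(self.gate_t(ctx))
        ga = torch.sigmoid(self.gate_a(ctx))
        gv = torch.sigmoid(self.gate_v(ctx))
        return gt * ht + ga * ha + gv * hv     # mixed multimodal representation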

Acoustic and Visual Pretraining

Aims to learn the acoustic-textual and visual-textual relationships.

Phonetic encoder: input-method pretraining objective (predict the typed characters from their pinyin).

Graphic encoder: OCR pretraining objective (recognize the character from its glyph image).
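As a hedged illustration of the input-method objective (pinyin in, characters out): the phonetic encoder's outputs feed a classifier over the character vocabulary, trained with cross-entropy. This sketch uses dummy tensors, and the sizes are illustrative:

import torch
import torch.nn as nn

vocab_size, hidden_dim = 21128, 768            # bert-base-chinese vocab; illustrative
char_classifier = nn.Linear(hidden_dim, vocab_size)
loss_fn = nn.CrossEntropyLoss()

# phonetic_repr: outputs of the phonetic encoder; char_ids: gold characters
phonetic_repr = torch.randn(2, 10, hidden_dim)             # dummy batch
char_ids = torch.randint(0, vocab_size, (2, 10))
logits = char_classifier(phonetic_repr)                    # (2, 10, vocab_size)
loss = loss_fn(logits.view(-1, vocab_size), char_ids.view(-1))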

Data and Metrics

Data: SIGHAN, converted to simplified Chinese using the OpenCC tool.
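A minimal conversion example with the OpenCC Python bindings (the config name may be 't2s' or 't2s.json' depending on the installed package):

import opencc

converter = opencc.OpenCC('t2s')   # traditional -> simplified
print(converter.convert('漢字'))    # -> '汉字'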

The model is tested at two levels: the detection level and the correction level.
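A hedged sketch of sentence-level scoring under these two levels: detection credits a sentence when exactly the gold error positions are found, correction when the full corrected sentence matches the gold one (helper names are mine, not from the paper):

def detection_hit(src, pred, gold):
    """All (and only) the gold error positions were detected."""
    pred_pos = {i for i, (s, p) in enumerate(zip(src, pred)) if s != p}
    gold_pos = {i for i, (s, g) in enumerate(zip(src, gold)) if s != g}
    return pred_pos == gold_pos

def correction_hit(pred, gold):
    """Every error was corrected to the right character."""
    return pred == gold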
