CE314 Computer Science NLP

Deadline: please follow the deadline on FASER.

Build a text classifier on the IMDB sentiment classification dataset. You may use any classification method, but you must train your model on the first 40,000 instances and test it on the last 10,000 instances. The IMDB dataset will be uploaded to the Moodle page for you to download.

Your code should include:

1: Read the file and split the instances into the training set and the test set (a minimal sketch of steps 1 and 2 is given after this list).

2: Pre-process the text; you can choose whether you need stemming, stop-word removal, or removal of non-alphabetical tokens. (Not all classification models need this step; it is fine to skip it if you think your model performs better without it, as long as you give some justification in the report.)

3: Analyse the training set and report its linguistic features (see the sketch after this list).

4: Build a text classification model, train it on the training set, and test it on the test set (see the sketch after this list).

5: Summarize the performance of your model (you can gain additional marks for graph visualizations).

6: (Optional) You can speculate on how you could improve your work based on your proposed model.
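
A minimal sketch of steps 1 and 2 follows. It assumes the Moodle file is a CSV named "IMDB Dataset.csv" with "review" and "sentiment" columns (adjust the file name and column names to match the file you actually download), and the particular pre-processing choices shown (lower-casing, keeping only alphabetic tokens, stop-word removal, Porter stemming) are illustrative, not required.

```python
# Sketch of steps 1-2: load the data, split 40,000 / 10,000, and pre-process.
# Assumption: the Moodle file is a CSV named "IMDB Dataset.csv" with columns
# "review" and "sentiment"; adjust the names to match the file you download.
import re
import pandas as pd
import nltk
from nltk.corpus import stopwords
from nltk.stem import PorterStemmer

nltk.download("stopwords", quiet=True)

df = pd.read_csv("IMDB Dataset.csv")

# Step 1: first 40,000 instances -> training set, last 10,000 -> test set.
train_df, test_df = df.iloc[:40000], df.iloc[40000:]

stop_words = set(stopwords.words("english"))
stemmer = PorterStemmer()

def preprocess(text):
    """Lower-case, keep alphabetic tokens, drop stop words, then stem."""
    tokens = re.findall(r"[a-z]+", text.lower())
    return " ".join(stemmer.stem(t) for t in tokens if t not in stop_words)

# Step 2: apply the (optional) pre-processing to both splits.
train_texts = train_df["review"].apply(preprocess)
test_texts = test_df["review"].apply(preprocess)
train_labels, test_labels = train_df["sentiment"], test_df["sentiment"]
```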
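
For step 3, one possible set of linguistic features is the class distribution, vocabulary size, average review length and most frequent tokens; the sketch below assumes the train_texts and train_labels variables from the previous snippet.

```python
# Sketch of step 3: simple linguistic statistics of the training set.
# Assumes train_texts / train_labels from the previous snippet.
from collections import Counter

token_lists = [text.split() for text in train_texts]
token_counts = Counter(t for tokens in token_lists for t in tokens)

print("Class distribution:", train_labels.value_counts().to_dict())
print("Vocabulary size:", len(token_counts))
print("Average review length (tokens):",
      sum(len(tokens) for tokens in token_lists) / len(token_lists))
print("20 most frequent tokens:", token_counts.most_common(20))
```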
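
For steps 4 and 5, one permissible baseline (it is neither Naïve Bayes nor VADER) is TF-IDF features with a scikit-learn logistic regression classifier; the sketch below reuses the variables from the first snippet and plots a confusion matrix as a simple graph visualization. Treat it as an illustrative baseline, not the model you are required to use.

```python
# Sketch of steps 4-5: TF-IDF + logistic regression baseline and its evaluation.
# Assumes train_texts / test_texts / train_labels / test_labels from above.
import matplotlib.pyplot as plt
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report, ConfusionMatrixDisplay

# Step 4: vectorize the text and train the classifier on the training split only.
vectorizer = TfidfVectorizer(max_features=50000, ngram_range=(1, 2))
X_train = vectorizer.fit_transform(train_texts)
X_test = vectorizer.transform(test_texts)

clf = LogisticRegression(max_iter=1000)
clf.fit(X_train, train_labels)

# Step 5: evaluate on the held-out 10,000 test instances.
pred = clf.predict(X_test)
print(classification_report(test_labels, pred, digits=3))

# A simple graph visualization: the confusion matrix.
ConfusionMatrixDisplay.from_predictions(test_labels, pred)
plt.title("Confusion matrix on the 10,000 test reviews")
plt.savefig("confusion_matrix.png")
```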

After you have built the model and tested it on the test set, you should write a report (no longer than three A4 pages, in 11 pt Arial) to summarize your work.

(You can use existing algorithms from GitHub or Kaggle, but you must not directly copy and paste their code!

However, you are not allowed to use the Naïve Bayes algorithm or the VADER classifier, which were practised in Lab 4.)

Suggestions for some bonus points:

Have the necessary comments in your code

Have proper references in your report

Have graph visualizations in your report

Investigate more evaluation methods: not only report the P/R/F scores, but also run your model multiple times and report the standard deviation of P/R/F (I am sure you can find more evaluation methods; a sketch of this is given at the end of this list.)

Write your report like a mini conference paper (you can learn from the paper below):

  • Zichao Yang, Diyi Yang, Chris Dyer, Xiaodong He, Alex Smola, and Eduard Hovy. 2016. Hierarchical Attention Networks for Document Classification. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1480–1489, San Diego, California. Association for Computational Linguistics.
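
As one way to act on the "run multiple times" suggestion above, the sketch below retrains a stochastic linear classifier (scikit-learn's SGDClassifier) with several random seeds and reports the mean and standard deviation of the macro-averaged precision, recall and F1 on the fixed test split; it assumes the X_train, X_test, train_labels and test_labels variables from the earlier sketches.

```python
# Sketch: repeat training with different random seeds and report the mean and
# standard deviation of precision / recall / F1 on the fixed 10,000-review test set.
# Assumes X_train / X_test / train_labels / test_labels from the earlier sketches.
import numpy as np
from sklearn.linear_model import SGDClassifier
from sklearn.metrics import precision_recall_fscore_support

scores = []
for seed in range(5):
    clf = SGDClassifier(random_state=seed)  # stochastic training, so runs differ
    clf.fit(X_train, train_labels)
    p, r, f, _ = precision_recall_fscore_support(
        test_labels, clf.predict(X_test), average="macro"
    )
    scores.append((p, r, f))

scores = np.array(scores)
for name, column in zip(["precision", "recall", "F1"], scores.T):
    print(f"{name}: mean={column.mean():.3f}, std={column.std():.3f}")
```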