[CE314] Computer Science NLP

Deadline: please follow the deadline on FASER.

Build a text classifier on the IMDB sentiment classification dataset. You may use any classification method, but you must train your model on the first 40,000 instances and test it on the last 10,000 instances. The IMDB dataset will be uploaded to the Moodle page for you to download.
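As a concrete point of reference, the required split could be loaded as in the minimal sketch below. It assumes the Moodle file is the common single-CSV IMDB dump with a 'review' text column and a 'sentiment' label column; the actual file name and column names may differ, so adjust them to the file that is provided.

```python
import pandas as pd

# Assumption: a single CSV with 'review' and 'sentiment' columns
# (the common 50,000-row IMDB dump). Adjust to the file from Moodle.
df = pd.read_csv("IMDB Dataset.csv")

# Required split: first 40,000 instances for training, last 10,000 for testing.
train_df = df.iloc[:40000]
test_df = df.iloc[40000:]
```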

Your code should include:

1: Read the file and load the instances into the training set and the test set.

2: Pre-process the text; you can choose whether to apply stemming, stop-word removal, or removal of non-alphabetical tokens. (Not all classification models need this step; it is fine to skip it if you think your model performs better without it, and you can give some justification in the report.)

3: Analyse the training set and report its linguistic features (for example, vocabulary size, review length, or class distribution).

4: Build a text classification model, train it on the training set, and test it on the test set (a minimal end-to-end sketch follows this list).

5: Summarize the performance of your model (you can gain additional marks for including graph visualizations).

6: (Optional) Discuss how you could improve your work based on your proposed model.
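To make these steps concrete, here is a minimal end-to-end sketch under the same assumptions as the loading snippet above (single CSV with 'review'/'sentiment' columns). It uses TF-IDF features with logistic regression purely as an example of an allowed classifier (it is neither Naïve Bayes nor VADER); any other method that respects the 40,000/10,000 split would do.

```python
import re

import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.pipeline import Pipeline

# Step 1: load and split as in the sketch above (file name and columns assumed).
df = pd.read_csv("IMDB Dataset.csv")
train_df, test_df = df.iloc[:40000], df.iloc[40000:]

def clean(text):
    """Optional step 2: lower-case and keep purely alphabetic tokens."""
    return " ".join(re.findall(r"[a-z]+", text.lower()))

X_train = train_df["review"].map(clean)
X_test = test_df["review"].map(clean)
y_train, y_test = train_df["sentiment"], test_df["sentiment"]

# Step 3: a few simple linguistic features of the training set.
lengths = X_train.str.split().map(len)
vocab = {tok for doc in X_train for tok in doc.split()}
print("Training documents:", len(X_train))
print("Mean tokens per review:", lengths.mean())
print("Vocabulary size:", len(vocab))
print("Class distribution:\n", y_train.value_counts())

# Steps 4-5: TF-IDF features + logistic regression
# (an example of an allowed classifier; not Naive Bayes or VADER).
model = Pipeline([
    ("tfidf", TfidfVectorizer(max_features=50000, ngram_range=(1, 2))),
    ("clf", LogisticRegression(max_iter=1000)),
])
model.fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))
```

The logistic-regression choice is only illustrative; swapping in an SVM, a gradient-boosted model, or a neural classifier changes only the "clf" step of the pipeline.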

After you have built the model and evaluated it on the test set, write a report (no longer than three A4 pages, in 11-point Arial) to summarize your work.

(You may use existing algorithms from GitHub or Kaggle, but you must not directly copy and paste their code!

However, you are not allowed to use the Naïve Bayes algorithm or the VADER classifier, which were practiced in Lab 4.)

Suggestions for bonus points:

Include the necessary comments in your code

Include proper references in your report

Include graph visualizations in your report

Investigate more evaluation methods: do not only report the precision, recall, and F1 (P/R/F) scores, but also run the experiment multiple times and report the standard deviation of P/R/F (I am sure you can find more evaluation methods; a sketch of one way to do this follows this list)

Write your report like a mini conference paper. You can learn from this paper:

  • Zichao Yang, Diyi Yang, Chris Dyer, Xiaodong He, Alex Smola, and Eduard Hovy. 2016. Hierarchical Attention Networks for Document Classification. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1480–1489, San Diego, California. Association for Computational Linguistics.
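For the suggestion about reporting the standard deviation of P/R/F over multiple runs, one possible approach (under the same CSV assumptions as above) is to bootstrap-resample the training set with a different seed in each run, so that repeated runs produce slightly different models while the test set stays fixed. The file name, column names, and choice of classifier are illustrative assumptions, not requirements.

```python
import numpy as np
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import precision_recall_fscore_support
from sklearn.pipeline import Pipeline

df = pd.read_csv("IMDB Dataset.csv")  # assumed file name and columns
train_df, test_df = df.iloc[:40000], df.iloc[40000:]

scores = []
for seed in range(5):
    # Bootstrap-resample the training data so each run sees a different sample.
    boot = train_df.sample(frac=1.0, replace=True, random_state=seed)
    model = Pipeline([
        ("tfidf", TfidfVectorizer(max_features=50000)),
        ("clf", LogisticRegression(max_iter=1000)),
    ])
    model.fit(boot["review"], boot["sentiment"])
    pred = model.predict(test_df["review"])
    p, r, f, _ = precision_recall_fscore_support(
        test_df["sentiment"], pred, average="macro"
    )
    scores.append((p, r, f))

# Report mean and standard deviation of macro precision, recall, and F1.
mean, std = np.mean(scores, axis=0), np.std(scores, axis=0)
for name, m, s in zip(["precision", "recall", "F1"], mean, std):
    print(f"{name}: {m:.3f} +/- {s:.3f}")
```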