FIT3152 Data Analytics


Instructions and data

The objective of this assignment is to gain familiarity with classification models using R. We want to create models that may be used to predict whether a website is legitimate or designed for phishing, that is, for stealing personal data from users.
You will be using a modified version of the PhiUSIIL Phishing data, hosted by the UCI Machine Learning Repository https://archive.ics.uci.edu/dataset/967/phiusiil+phishing+url+dataset . A research paper based on this data is available at https://doi.org/10.1016/j.cose.2023.103545 .
There are two options for compiling your written report:
(1) Create your report using any word processor, with your R code pasted in as machine-readable text in an appendix, and save it as a PDF, or
(2) create your report as an R Markdown document that contains the R code with the discussion/text interleaved.
Render this as an HTML file and save it as a PDF.
Your video report should be less than 100 MB in size. You may need to reduce the resolution of your original recording to achieve this. Use a standard file format such as .mp4 or .mov for submission.
Creating your data set
Clear your workspace, set the number of significant digits to a sensible value, and use 'Phish' as the default data frame name for the whole data set. Read your data into R and create your individual data set using the following code:
rm(list = ls())                       # clear the workspace
Phish <- read.csv("PhishingData.csv") # read the full data set
set.seed(XXXXXXXX) # Your Student ID is the random seed
L <- as.data.frame(c(1:50))           # candidate values 1 to 50 of attribute A01
L <- L[sample(nrow(L), 10, replace = FALSE),]   # select 10 of these values at random
Phish <- Phish[(Phish$A01 %in% L),]   # keep only rows whose A01 is one of the 10 values
PD <- Phish[sample(nrow(Phish), 2000, replace = FALSE),] # sample of 2000 rows
Questions (10 Marks)

  1. Explore the data: What is the proportion of phishing sites to legitimate sites? Obtain descriptions of the predictor (independent) variables: means, standard deviations, etc. for real-valued attributes. Is there anything noteworthy in the data? Are there any attributes you need to consider omitting from your analysis? (1 Mark)
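    As a starting point, a minimal sketch of the kind of summaries that help here; it assumes the class label in PD is a column named Class coded 0/1, so check names(PD) for the actual column names:
      table(PD$Class)                       # counts of legitimate (0) vs phishing (1)
      prop.table(table(PD$Class))           # the same counts as proportions
      summary(PD)                           # means, quartiles and missing values per attribute
      num <- sapply(PD, is.numeric)         # identify the real-valued attributes
      sapply(PD[ , num], sd, na.rm = TRUE)  # their standard deviations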
  2. Document any pre-processing required to make the data set suitable for the model fitting that follows. (1 Mark)
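    Typical steps, sketched under the assumption that the label column is named Class and that the choice of columns to drop (here A01) is hypothetical; adapt to what you actually find in Question 1:
      PD$Class <- as.factor(PD$Class)            # the classifiers below expect a factor response
      PD <- PD[ , !(names(PD) %in% c("A01"))]    # drop attributes you decided to omit (hypothetical choice)
      PD <- na.omit(PD)                          # or impute missing values instead of dropping rows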
  3. Divide your data into a 70% training set and a 30% test set by adapting the following code (written for the iris data). Use your student ID as the random seed.
    set.seed(XXXXXXXX) #Student ID as random seed
    train.row = sample(1:nrow(iris), 0.7*nrow(iris))
    iris.train = iris[train.row,]
    iris.test = iris[-train.row,]
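    Adapted to the assignment data this becomes, assuming your pre-processed data frame is still called PD:
      set.seed(XXXXXXXX) # Student ID as random seed
      train.row = sample(1:nrow(PD), 0.7*nrow(PD))
      PD.train = PD[train.row,]
      PD.test = PD[-train.row,]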
  4. Implement a classification model using each of the following techniques. For this question you may use each of the R functions at their default settings if suitable. (5 Marks)
    • Decision Tree
    • Naïve Bayes
    • Bagging
    • Boosting
    • Random Forest
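    At their default settings, each fit can be a single call; a sketch assuming the packages below, a factor label named Class, and the PD.train set from Question 3:
      library(tree)
      library(e1071)
      library(adabag)
      library(randomForest)

      fit.tree  <- tree(Class ~ ., data = PD.train)          # decision tree
      fit.nb    <- naiveBayes(Class ~ ., data = PD.train)    # naive Bayes
      fit.bag   <- bagging(Class ~ ., data = PD.train)       # bagging
      fit.boost <- boosting(Class ~ ., data = PD.train)      # boosting
      fit.rf    <- randomForest(Class ~ ., data = PD.train)  # random forest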
  5. Using the test data, classify each of the test cases as 'phishing (1)' or 'legitimate (0)'.
    Create a confusion matrix and report the accuracy of each model. (1 Mark)
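    The same pattern works for every model once you have class predictions on PD.test; a sketch for the decision tree only (predict() output differs between packages, e.g. the adabag models return a list with a $class element):
      pred.tree <- predict(fit.tree, PD.test, type = "class")        # predicted labels
      cm.tree <- table(actual = PD.test$Class, predicted = pred.tree)
      cm.tree                                                        # confusion matrix
      accuracy.tree <- sum(diag(cm.tree)) / sum(cm.tree)             # overall accuracy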
  6. Using the test data, calculate the confidence of predicting 'phishing' for each case and construct an ROC curve for each classifier. You should be able to plot all the curves on the same axis. Use a different colour for each classifier. Calculate the AUC for each classifier. (1 Mark)
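    One possible approach uses the ROCR package; a sketch for the decision tree, assuming the positive class level is "1" (repeat for each classifier, re-plotting with add = TRUE and a different col):
      library(ROCR)

      conf.tree <- predict(fit.tree, PD.test, type = "vector")[ , "1"]  # P(phishing) for each test case
      pred.tree.rocr <- prediction(conf.tree, PD.test$Class)
      roc.tree <- performance(pred.tree.rocr, "tpr", "fpr")
      plot(roc.tree, col = "blue")                   # use add = TRUE for the remaining classifiers
      abline(0, 1, lty = 2)                          # random-classifier reference line
      auc.tree <- performance(pred.tree.rocr, "auc")@y.values[[1]]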
  7. Create a table comparing the results in Questions 5 and 6 for all classifiers. Is there a single "best" classifier? (1 Mark)
Investigative Tasks (18 Marks)
  8. Examining each of the models, determine the most important variables in predicting whether a website will be phishing or legitimate. Which variables could be omitted from the data with very little effect on performance? Give reasons. (2 Marks)
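    Each fitted model exposes its own importance measure; a sketch of where to look, using the object names assumed in the Question 4 sketch (naive Bayes has no built-in importance score):
      summary(fit.tree)        # attributes actually used in the tree's splits
      fit.bag$importance       # bagging: relative importance scores
      fit.boost$importance     # boosting: relative importance scores
      importance(fit.rf)       # random forest: mean decrease in Gini
      varImpPlot(fit.rf)       # plot of the random forest importance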
  9. Starting with one of the classifiers you created in Question 4, create a classifier that is simple enough for a person to be able to classify whether a site is phishing or legitimate by hand. Describe your model with either a diagram or written explanation. What factors were important in your decision? State why you chose the attributes you used. Using the test data created in Question 3, evaluate model performance using the measures you calculated for Questions 5 and 6. How does it compare to those in Question 4? (4 Marks)
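    One possible starting point (not the only one) is to prune the Question 4 decision tree down to a handful of leaves; the value of best below is hypothetical and should come from inspecting your own tree:
      simple.tree <- prune.misclass(fit.tree, best = 4)  # keep roughly 4 leaves (hypothetical size)
      plot(simple.tree)
      text(simple.tree, pretty = 0)                      # small enough to apply by hand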
  10. Create the best tree-based classifier you can. You may do this by adjusting the parameters of, and/or cross-validating, the basic models in Question 4. Show that your model is better than the others using the measures you calculated for Questions 5 and 6.
    Describe how you created your improved model, and why you chose that model. What factors were important in your decision? State why you chose the attributes you used. (4 Marks)
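    Possible entry points, sketched rather than prescribed: cross-validation over tree size, and a search over the random forest mtry parameter.
      cv.fit <- cv.tree(fit.tree, FUN = prune.misclass)  # misclassification CV over tree sizes
      cv.fit                                             # pick the size with the lowest error, then prune

      tuneRF(PD.train[ , names(PD.train) != "Class"], PD.train$Class,
             ntreeTry = 500)                             # search over mtry for the random forest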
  11. Using the insights from your analysis so far, implement an Artificial Neural Network classifier and report its performance. Comment on attributes used and your data preprocessing required. How does this classifier compare with the others? Can you give any reasons? (4 Marks)
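    A minimal sketch using the nnet package; the hidden-layer size is hypothetical, Class is assumed to be a factor, and the numeric attributes are standardised with the training-set statistics because neural networks are sensitive to scale:
      library(nnet)

      num.cols <- names(PD.train)[sapply(PD.train, is.numeric)]
      mu  <- sapply(PD.train[num.cols], mean)
      sdv <- sapply(PD.train[num.cols], sd)
      train.nn <- PD.train
      test.nn  <- PD.test
      train.nn[num.cols] <- scale(PD.train[num.cols], center = mu, scale = sdv)
      test.nn[num.cols]  <- scale(PD.test[num.cols],  center = mu, scale = sdv)

      fit.nn  <- nnet(Class ~ ., data = train.nn, size = 5, maxit = 500)  # one hidden layer of 5 units
      pred.nn <- predict(fit.nn, test.nn, type = "class")                 # predicted labels for PD.test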
  12. Fit a new classifier to the data, test and report its performance in the same way as for the previous models. You can choose a new type of classifier not covered in the course, or a new version of any of the classifiers we have studied. Either way, you will be working with a new R package. As a starting point, you might refer to James et al. (2021),
    or look online. When writing up, state the new classifier and package used. Include a web link to the package details. Give a brief description of the model type and how it works.
    Comment on the performance of your new model. (4 Marks)
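    Purely as an illustration of the pattern (the choice of classifier is yours), a sketch using the xgboost package (https://cran.r-project.org/package=xgboost), which assumes all predictors are numeric and the label is coded 0/1:
      library(xgboost)

      x.train <- as.matrix(PD.train[ , names(PD.train) != "Class"])  # numeric predictor matrix
      y.train <- as.numeric(as.character(PD.train$Class))            # 0/1 label vector
      fit.xgb <- xgboost(data = x.train, label = y.train, nrounds = 50,
                         objective = "binary:logistic", verbose = 0)

      x.test   <- as.matrix(PD.test[ , names(PD.test) != "Class"])
      prob.xgb <- predict(fit.xgb, x.test)      # probabilities of class 1 (phishing)
      pred.xgb <- ifelse(prob.xgb > 0.5, 1, 0)  # threshold at 0.5 for class labels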