UCL-ELEC0136: Data Acquisition and Processing Systems

Submission deadline:
Please check the Moodle page of the course.
1 Objectives
The objective of this assignment is to simulate a real-life data science scenario that aligns with the process discussed in class. This process involves:

  1. Finding and acquiring a source of data.
  2. Storing the acquired data.
  3. Cleaning and pre-processing the data.
  4. Extracting meaningful visualizations.
  5. Building a model for inference.
You are encouraged to use any additional methods you deem suitable for solving the problem. The assignment comprises two main deliverables:

  1. A written report presented in the format of an academic paper.
  2. The accompanying codebase to support your report.

While exchanging ideas and discussing the assignment with your peers is allowed, it is essential to emphasize that your code, experiments, and report must be the result of your individual effort.
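As a rough illustration only, the five-step process above can be sketched as a minimal pipeline skeleton. All function names, the static records, and the in-memory dictionary "store" are illustrative placeholders, not a required structure:

```python
# Minimal sketch of the five-step data-science pipeline described above.
# Every function is a stub with illustrative logic only.

def acquire():
    # Step 1: fetch raw records from a data source (stubbed with static data).
    return [{"date": "2019-04-01", "close": 47.81},
            {"date": "2019-04-02", "close": None},   # a missing value
            {"date": "2019-04-03", "close": 48.84}]

def store(records):
    # Step 2: persist the data; a dict keyed by date stands in for a database.
    return {r["date"]: r for r in records}

def clean(db):
    # Step 3: forward-fill missing closing prices.
    last = None
    for date in sorted(db):
        if db[date]["close"] is None:
            db[date]["close"] = last
        last = db[date]["close"]
    return db

def visualise(db):
    # Step 4: a real project would produce plots; here we just summarise
    # the series so the sketch stays self-contained.
    closes = [db[d]["close"] for d in sorted(db)]
    return {"min": min(closes), "max": max(closes)}

def model(db):
    # Step 5: a naive "model" that predicts the last observed close.
    return db[max(db)]["close"]

db = clean(store(acquire()))
summary = visualise(db)
prediction = model(db)
```

In a real submission each stub would become a module of its own, with the storage layer backed by an actual database or files.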
2 Overview

Assume you are a junior Data Scientist at Money, a UK investment company, and your project manager, Melanie, provides you with the following list of public companies:

  • Apple Inc. (AAPL)
  • Microsoft Corp. (MSFT)
  • American Airlines Group Inc. (AAL)
  • Zoom Video Communications Inc. (ZM)

You must select ONE of these companies and study its market trends to ultimately be able to advise on when and whether Money should (I) buy, (II) hold, or (III) sell this stock.

Melanie asked you to follow the company guidelines, which advise this process:
  1. Select a company and acquire stock data from the beginning of April 2019 up to the end of March 2023.
  2. Collect any other data on external events (e.g., seasonal trends, world news, etc.) that might have an impact on the company's stocks.
  3. Choose the storage strategy that most efficiently supports the upcoming data analysis.
  4. Check for any missing/noisy/outlier data, and clean it only if necessary.
  5. Process the data, extracting features that you believe are meaningful for forecasting the trend of the stock.
  6. Provide useful visualisations of the data, exploiting patterns you might find.
  7. Train a model to predict the closing stock price.
Details for each task are provided in Section 3. Details of how each task is marked are included in Section 5.
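To make the check for missing/noisy/outlier data concrete, one robust heuristic, sketched here on toy numbers, flags values far from the median in units of the median absolute deviation (MAD). The threshold of 5 and the price list are made up for the example; a real analysis would tune these against the actual series:

```python
import statistics

def flag_outliers(values, threshold=5.0):
    # Flag values lying more than `threshold` MADs from the median.
    # MAD is used instead of the standard deviation because it is
    # robust to the very outliers we are trying to detect.
    med = statistics.median(values)
    mad = statistics.median(abs(v - med) for v in values)
    return [abs(v - med) > threshold * mad for v in values]

closes = [47.8, 48.1, 47.9, 48.3, 490.0, 48.0]  # 490.0 is a likely data error
flags = flag_outliers(closes)
```

Note that a naive z-score test would miss this point: with only six observations, a single extreme value inflates the standard deviation enough to mask itself.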
3 Task Details

[IMPORTANT NOTE]
Tasks 1.2, 2.2, 4.2, and 6 are more advanced, but based on the scoring criteria provided in Section 5, you can pass this assignment without solving these tasks. However, you would need to solve them to achieve a mark in the top-distinction range.
The percentage given in each task description is the weight of that task within the 70% of the mark allocated to the report, as defined in Section 5.
    Task 1: Data Acquisition
    You will first have to acquire the necessary data to conduct your study.
    Task 1.1 [5%]
One essential type of data that you will need is the stock prices for the company you have chosen, spanning from the 1st of April 2019 to the 31st of March 2023, as described in Section 2. Since these companies are public, the data is made available online. Note that any data sources are to be accessed exclusively through a web API rather than by downloading files manually. The first task is to search for and collect stock prices, finding the best way to access and acquire them through a web API.
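Accessing a price API typically comes down to assembling a parameterised HTTP request. The sketch below builds such a request URL; the endpoint `api.example.com` and its parameter names are hypothetical — real providers (e.g., Alpha Vantage or Yahoo Finance) each define their own endpoints, parameters, and authentication:

```python
from urllib.parse import urlencode

def build_price_query(symbol, start, end, api_key):
    # Assemble the query string for a hypothetical daily-prices endpoint.
    params = {"symbol": symbol, "start": start, "end": end, "apikey": api_key}
    return "https://api.example.com/v1/daily?" + urlencode(params)

url = build_price_query("AAPL", "2019-04-01", "2023-03-31", "YOUR_KEY")
# The response would then be fetched and parsed, e.g. with
# requests.get(url).json(), and the records passed to the storage step.
```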
    Task 1.2 [7%]
Search for and collect more/different data relevant to this task. There are many valuable sources of information for analysing the stock market. In addition to time series depicting the evolution of stock prices, acquire auxiliary data that is likely to be useful for the forecast, such as:

  1. Social media, e.g., Twitter: this can be used to understand the public's sentiment towards the stock market;
  2. Financial reports: these can help explain what kinds of factors are likely to affect the stock market the most;
  3. News: this can be used to draw links between current affairs and the stock market;
  4. Meteorological data: sometimes climate or weather data is directly correlated with some companies' stock prices and should therefore be taken into account in financial analysis;
  5. Others: anything that can justifiably support your analysis.

Remember, you are looking for historical data, not live data, and any data sources must be accessed through a web API rather than by downloading files manually.
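As an illustration of how auxiliary text data such as news headlines might feed the analysis, here is a toy lexicon-based sentiment score. The word lists are invented for the example; a real project would use an established sentiment lexicon or library rather than this sketch:

```python
# Tiny made-up sentiment lexicons for illustration only.
POSITIVE = {"beats", "growth", "record", "surge", "upgrade"}
NEGATIVE = {"misses", "lawsuit", "decline", "recall", "downgrade"}

def sentiment(headline):
    # Score = (# positive words) - (# negative words) in the headline.
    words = headline.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

scores = [sentiment(h) for h in [
    "Apple beats expectations as services growth hits record",
    "Airline faces lawsuit after fleet-wide recall",
]]
```

Daily aggregates of such scores could then be joined to the price series as an additional feature for the forecasting model.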