Submission deadline:
Please check the Moodle page of the course.
1 Objectives
The objective of this assignment is to simulate a real-life data science scenario that aligns with the process discussed in class. This process involves:
- Finding and acquiring a source of data.
- Storing the acquired data.
- Cleaning and pre-processing the data.
- Extracting meaningful visualizations.
- Building a model for inference.
You are encouraged to utilize any additional methods you deem suitable for solving the problem. The assignment comprises two main deliverables:
- A written report presented in the format of an academic paper.
- The accompanying codebase to support your report.
While exchanging ideas and discussing the assignment with your peers is allowed, it is essential to emphasize that your code, experiments, and report must be the result of your individual effort.
2 Overview
Assume you are a junior Data Scientist at Money, a UK investment company, and that your project manager, Melanie, provides you with the following list of public companies:
• Apple Inc. (AAPL),
• Microsoft Corp. (MSFT),
• American Airlines Group Inc. (AAL),
• Zoom Video Communications Inc. (ZM)
You must select ONE of these companies and study its market trends, so that you can ultimately advise on when and whether Money should (I) buy, (II) hold, or (III) sell this stock.
Melanie has asked you to follow the company guidelines, which advise the following process (a minimal code sketch of these steps is given after the list):
- Select a company and acquire stock data from the beginning of April 2019 up to the end of March 2023.
- Collect any other data on external events (e.g., seasonal trends, world news, etc.) that might have an impact on the company's stock.
- Choose the storage strategy that most efficiently supports the upcoming data analysis.
- Check for any missing/noisy/outlier data, and clean it only if necessary.
- Process the data, extracting features that you believe are meaningful for forecasting the trend of the stock.
- Provide useful visualisations of the data, exploiting any patterns you might find.
- Train a model to predict the closing stock price.
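To make the intended workflow concrete, here is a minimal, self-contained sketch of these steps in Python. It uses synthetic prices in place of real stock data, and the lag/rolling-mean features and linear model are illustrative assumptions, not requirements of the brief:

```python
# A minimal end-to-end sketch of the suggested pipeline, using synthetic
# prices so that it runs offline. Feature and model choices are examples only.
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression

# Synthetic daily closing prices standing in for the acquired stock data.
rng = np.random.default_rng(0)
dates = pd.bdate_range("2019-04-01", "2023-03-31")
df = pd.Series(100 + rng.normal(0, 1, len(dates)).cumsum(),
               index=dates, name="Close").to_frame()

# Cleaning: drop missing values (only if necessary on real data).
df = df.dropna()

# Feature extraction: lagged price and a rolling mean as simple predictors.
df["lag_1"] = df["Close"].shift(1)
df["ma_5"] = df["Close"].rolling(5).mean().shift(1)
df = df.dropna()

# Chronological train/test split -- never shuffle a time series.
split = int(len(df) * 0.8)
train, test = df.iloc[:split], df.iloc[split:]

model = LinearRegression().fit(train[["lag_1", "ma_5"]], train["Close"])
print("R^2 on held-out period:", model.score(test[["lag_1", "ma_5"]], test["Close"]))
```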
Details for each task are provided in Section 3. Details of how each task is marked are included in Section 5.
3 Task Details
[IMPORTANT NOTE] Tasks 1.2, 2.2, 4.2 and 6 are more advanced, but based on the scoring criteria provided in Section 5, you can pass this assignment without solving them. However, you would need to solve them to achieve a mark in the top-distinction range.
The percentage given in each task description is the weight of that task within the 70% of the overall mark allocated to the report, as defined in Section 5.
Task 1: Data Acquisition
You will first have to acquire the necessary data to conduct your study.
Task 1.1 [5%]
One essential type of data that you will need is the stock prices for the company you have chosen, spanning from the 1st of April 2019 to the 31st of March 2023, as described in Section 2. Since these companies are public, this data is made available online. Note that all data sources must be accessed exclusively through a web API rather than by downloading files manually. Your first task is to search for and collect the stock prices, finding the best way to access and acquire them through a web API.
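As one possible starting point, the sketch below acquires the series through a web API using the yfinance package, which wraps Yahoo Finance's public endpoints. The choice of yfinance, the AAPL ticker, and the output filename are assumptions for illustration; any web API that serves the required date range is acceptable.

```python
# A minimal sketch of Task 1.1 using the yfinance package.
import yfinance as yf

# yf.download treats `end` as exclusive, so pass the day after 31 March 2023.
prices = yf.download("AAPL", start="2019-04-01", end="2023-04-01")

print(prices.head())  # inspect the returned OHLCV DataFrame
prices.to_csv("aapl_2019-04_to_2023-03.csv")  # persist with whatever storage strategy you chose
```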
Task 1.2 [7%]
Search for and collect additional data relevant to this task. There are many valuable sources of information for analysing the stock market. In addition to the time series depicting the evolution of stock prices, acquire auxiliary data that is likely to be useful for the forecast, such as:
- **Social media, e.g., Twitter:** this can be used to understand the public's sentiment towards the stock market;
- **Financial reports:** these can help explain which factors are likely to affect the stock market the most;
- **News:** this can be used to draw links between current affairs and the stock market;
- **Meteorological data:** climate or weather data is sometimes directly correlated with some companies' stock prices and should therefore be taken into account in financial analysis;
- **Others:** anything that can justifiably support your analysis.
Remember that you are looking for historical data, not live data, and that all data sources must be accessed through a web API rather than by downloading files manually.
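As an illustration of the auxiliary-data case, the sketch below queries a news provider over HTTP. NewsAPI's /v2/everything endpoint, the query string, the date range, and the API key are all placeholder assumptions; whichever provider you pick, check how far back its historical coverage extends before committing to it.

```python
# A hedged sketch of pulling historical news data over a web API.
# Query, dates, and key are placeholders you would substitute.
import requests

resp = requests.get(
    "https://newsapi.org/v2/everything",
    params={
        "q": "Apple AAPL",         # hypothetical query for the chosen company
        "from": "2023-03-01",      # date range within the provider's limits
        "to": "2023-03-31",
        "language": "en",
        "sortBy": "publishedAt",
        "apiKey": "YOUR_API_KEY",  # placeholder -- register with the provider to obtain one
    },
    timeout=30,
)
resp.raise_for_status()
articles = resp.json().get("articles", [])
print(f"Fetched {len(articles)} articles")
```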