Computer Vision COMP90086

Introduction
Finding correspondences between keypoints is a critical step in many computer vision applications. It is used to align images when constructing a panorama from many separate photographs, and to match keypoints detected across multiple views of a scene.
This assignment uses a dataset generated from many views of the Trevi fountain in Rome. Finding correspondences between detected keypoints is a critical step in the pipeline for reconstructing a 3D representation of the fountain from individual photographs.
The dataset in this assignment consists of pairs of image patches extracted centred at detected keypoints. Each patch is 64x64 pixels, and each training sample is made of two patches placed side by side to form a 128x64 image. For half the training set (10,000 examples in the '1good' subdirectory) the two patches come from two separate views of the same keypoint. For the other half (10,000 examples in the '0bad' subdirectory) the two patches come from two different keypoints. Figure 1 shows an example of each of these. The validation directory is structured similarly but contains four times as many non-matching pairs (2000 examples in '0bad') as matching pairs (500 examples in '1good').
Figure 1: Corresponding (left) and non-corresponding (right) pairs of image patches

Your task is to create and train some neural networks that can tackle the problem of determining whether the two patches correspond or not.
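As a minimal sketch of how such a directory of labelled patch pairs could be loaded with Keras utilities (the directory names, colour mode, and batch size below are assumptions; the provided notebook may load the data differently):

```python
import tensorflow as tf

# Assumed layout: a 'train' and a 'validation' folder, each containing the
# '0bad' and '1good' subdirectories described above. Colour mode is assumed
# to be grayscale; adjust to match the provided notebook.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "train",
    labels="inferred",
    label_mode="binary",     # 0 = non-matching pair, 1 = matching pair
    color_mode="grayscale",
    image_size=(64, 128),    # two 64x64 patches side by side (height, width)
    batch_size=32,
)
val_ds = tf.keras.utils.image_dataset_from_directory(
    "validation",
    labels="inferred",
    label_mode="binary",
    color_mode="grayscale",
    image_size=(64, 128),
    batch_size=32,
    shuffle=False,
)
```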
1. Baseline Neural Network [2 pt]
Run the baseline neural network implementation in the provided Python notebook. In your report, include the loss and accuracy curves for the training and validation sets, and discuss what these imply about the baseline model.
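A minimal sketch of producing these curves from the Keras History object returned by model.fit() (the variable names and metric keys here are illustrative and depend on how the notebook compiles the model):

```python
import matplotlib.pyplot as plt

# Assumes `history = model.fit(train_ds, validation_data=val_ds, epochs=...)`
# has already been run in the notebook.
def plot_curves(history):
    fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))
    # Loss for training and validation
    ax1.plot(history.history["loss"], label="train")
    ax1.plot(history.history["val_loss"], label="validation")
    ax1.set_xlabel("epoch"); ax1.set_ylabel("loss"); ax1.legend()
    # Accuracy for training and validation (key depends on compile metrics)
    ax2.plot(history.history["accuracy"], label="train")
    ax2.plot(history.history["val_accuracy"], label="validation")
    ax2.set_xlabel("epoch"); ax2.set_ylabel("accuracy"); ax2.legend()
    plt.tight_layout()
    plt.show()
```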
The validation set contains more bad examples than good. Why might this be a sensible way of
testing for the task of finding feature correspondences? Should the training environment also reflect
this imbalance?
2. Regularizing Your Neural Network [2pt]
To regularize the network, you should try adding a regularization layer (see the Keras documentation for these layers). Try adding a Dropout() layer after Flatten() and experiment with different rate values to see the effect of this parameter. Include the loss and accuracy plots for three different choices of the rate parameter in your report. Describe the changes you see in these loss and accuracy plots, and suggest which of the three rate values you reported is the best choice.
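A minimal sketch of where such a Dropout() layer could sit, assuming a baseline of the form LayerNormalization -> Flatten -> Dense (the layer sizes, normalization axes, and input shape below are assumptions, not the provided baseline):

```python
from tensorflow import keras
from tensorflow.keras import layers

# Illustrative architecture only; the provided baseline may differ.
def build_regularized_model(rate=0.5):
    return keras.Sequential([
        layers.Input(shape=(64, 128, 1)),           # one 128x64 patch pair, grayscale assumed
        layers.LayerNormalization(axis=(1, 2, 3)),  # assumption: normalise each input image
        layers.Flatten(),
        layers.Dropout(rate),                        # regularization layer under study
        layers.Dense(128, activation="relu"),
        layers.Dense(1, activation="sigmoid"),       # matching vs non-matching
    ])

# Try several rates, e.g. 0.2, 0.5, 0.8, and compare the resulting curves.
```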
3. Convolutional Neural Network [3pt]
Design a convolutional neural network to solve this challenge. If you use Conv2D() layers immediately after the LayerNormalization layer, these convolutions will apply identically to both image patches in each input sample. Try using one or two Conv2D() layers with relu activations. You should explore the effect of different numbers of filters, kernel sizes, and strides before the Flatten() layer.
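An illustrative sketch of one such design follows; the filter counts, kernel sizes, strides, and dense head are placeholders for the values you should explore, not a recommended configuration:

```python
from tensorflow import keras
from tensorflow.keras import layers

# Placeholder hyperparameters; vary filters, kernel_size, and strides in your experiments.
def build_cnn(filters=(16, 32), kernel_size=3, strides=2, rate=0.5):
    return keras.Sequential([
        layers.Input(shape=(64, 128, 1)),           # grayscale patch pair assumed
        layers.LayerNormalization(axis=(1, 2, 3)),  # assumption: normalise each input image
        layers.Conv2D(filters[0], kernel_size, strides=strides, activation="relu"),
        layers.Conv2D(filters[1], kernel_size, strides=strides, activation="relu"),
        layers.Flatten(),
        layers.Dropout(rate),
        layers.Dense(128, activation="relu"),
        layers.Dense(1, activation="sigmoid"),
    ])

model = build_cnn()
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```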
Briefly describe the set of settings you tried in your report in a table (this should be around 10 settings).
For each setting, report the final training loss and accuracy as well as the validation loss and accuracy.
Include a discussion of the results of these experiments in your report. Identify your best performing
design and discuss why you think this may have been best.
