Introduction
Finding correspondences between keypoints is a critical step in many computer vision applications. It can be used to align images when constructing a panorama from many separate photographs, and it is
used to find point correspondences between keypoints detected in multiple views of a scene.
This assignment uses a dataset generated from many views of the Trevi fountain in Rome. Finding correspondences between detected keypoints is a critical step in the pipeline for reconstructing a 3D representation of the fountain from individual photographs.
The dataset in this assignment is generated as a set of pairs of image patches centred on detected keypoints. The image patches are 64x64 pixels each, and each training sample consists of two patches placed side by side to form a 128x64 image. For half the training set (10,000 examples in the '1good' subdirectory), the two patches come from two separate views of the same keypoint. For the other half (10,000 examples in the '0bad' subdirectory), the two patches come from two different keypoints. Figure
1 shows an example of each of these. The validation directory is similarly structured but contains four times as many non-matching pairs (2,000 examples in '0bad') as matching pairs (500 examples in '1good').
Figure 1: Corresponding (left) and non-corresponding (right) pairs of image patches
Your task is to create and train some neural networks that can tackle the problem of determining whether the two patches correspond or not.
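The '0bad'/'1good' directory layout maps directly onto Keras's directory-based image loader. The following is a minimal loading sketch, not part of the assignment handout: the 'train' and 'validation' folder names, the greyscale colour mode, and the 64-high by 128-wide orientation are assumptions, and the provided notebook may already handle this step.

```python
import tensorflow as tf

# Hypothetical paths; point these at wherever the dataset is unpacked.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "train",                   # contains the '0bad' and '1good' subdirectories
    labels="inferred",         # 0 = non-matching ('0bad'), 1 = matching ('1good')
    label_mode="binary",
    color_mode="grayscale",    # assumption: patches are greyscale
    image_size=(64, 128),      # assumption: 64 pixels high, 128 wide (two 64x64 patches)
    batch_size=32)

val_ds = tf.keras.utils.image_dataset_from_directory(
    "validation",
    labels="inferred",
    label_mode="binary",
    color_mode="grayscale",
    image_size=(64, 128),
    batch_size=32)
```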
1. Baseline Neural Network [2 pt]
Run the baseline neural network implementation in the provided Python notebook. In your report, include the loss and accuracy curves for the training and validation sets, and discuss what these imply about the baseline model.
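If you need to reproduce these curves yourself, the History object returned by model.fit() holds them. A minimal plotting sketch follows; the metric key names assume the model was compiled with metrics=['accuracy'], which may differ from the provided notebook.

```python
import matplotlib.pyplot as plt

# history = model.fit(train_ds, validation_data=val_ds, epochs=20)
def plot_curves(history):
    """Plot training/validation loss and accuracy from a Keras History object."""
    fig, (ax_loss, ax_acc) = plt.subplots(1, 2, figsize=(10, 4))
    ax_loss.plot(history.history["loss"], label="train loss")
    ax_loss.plot(history.history["val_loss"], label="validation loss")
    ax_loss.set_xlabel("epoch")
    ax_loss.legend()
    ax_acc.plot(history.history["accuracy"], label="train accuracy")
    ax_acc.plot(history.history["val_accuracy"], label="validation accuracy")
    ax_acc.set_xlabel("epoch")
    ax_acc.legend()
    plt.show()
```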
The validation set contains more bad examples than good. Why might this be a sensible way of
testing for the task of finding feature correspondences? Should the training environment also reflect
this imbalance?
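One practical angle on this question: when four out of five validation pairs are non-matching, accuracy alone can look deceptively good (always predicting "non-matching" already scores 80%), so per-class metrics are worth tracking. A hedged sketch of adding them at compile time; the baseline notebook may only track accuracy.

```python
from tensorflow import keras

def compile_with_imbalance_metrics(model: keras.Model) -> keras.Model:
    """Compile a binary classifier with precision and recall alongside accuracy,
    which is more informative when the validation split is 4:1 bad to good."""
    model.compile(
        optimizer="adam",
        loss="binary_crossentropy",
        metrics=[
            "accuracy",
            keras.metrics.Precision(name="precision"),  # of pairs predicted matching, how many are
            keras.metrics.Recall(name="recall"),        # of true matching pairs, how many are found
        ],
    )
    return model
```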
2. Regularizing your Neural Network [2pt]
To regularize the network, you should try adding a regularization layer (see the Keras documentation for these layers). Try adding a Dropout() layer after Flatten() and experiment with different rate values to see what effect this parameter has. Include the loss and accuracy plots for three different
choices of the rate parameter in your report. Describe the changes you see in these plots and suggest which of the three rate values you reported is the best choice.
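A minimal sketch of the kind of experiment being asked for is shown below. The layers surrounding the Dropout() are assumptions standing in for the provided baseline, and the three rate values are only examples.

```python
from tensorflow import keras
from tensorflow.keras import layers

def build_regularized_model(rate: float) -> keras.Model:
    """Baseline-style classifier with a Dropout layer inserted after Flatten()."""
    model = keras.Sequential([
        keras.Input(shape=(64, 128, 1)),            # assumption: 64x128 greyscale input
        layers.LayerNormalization(axis=[1, 2, 3]),  # normalize each sample over all its pixels
        layers.Flatten(),
        layers.Dropout(rate),                       # the regularization layer under test
        layers.Dense(128, activation="relu"),
        layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model

# Example rate values; train each and compare the resulting loss/accuracy curves.
# for rate in (0.2, 0.4, 0.6):
#     history = build_regularized_model(rate).fit(train_ds, validation_data=val_ds, epochs=20)
```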
3. Convolutional Neural Network [3pt]
Design a Convolutional Neural Network to solve this challenge. If you use Conv2D() layers immediately after the LayerNormalization layer, these convolutions will apply identically to both image patches in each input sample. Try using one or two Conv2D() layers with relu activations. You should explore the value of using different numbers of filters, kernel sizes, and strides before the Flatten() layer.
Briefly describe the set of settings you tried in your report in a table (this should be around 10 settings).
For each setting, report the final training loss and accuracy as well as the validation loss and accuracy.
Include a discussion of the results of these experiments in your report. Identify your best performing
design and discuss why you think this may have been best.
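A hedged sketch of one such design, not a reference solution: the filter counts, kernel sizes, strides, and dropout rate below are placeholders for exactly the kind of values your table of settings should explore.

```python
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    keras.Input(shape=(64, 128, 1)),            # assumption: 64x128 greyscale input
    layers.LayerNormalization(axis=[1, 2, 3]),
    # Convolution weights are shared spatially, so these filters are applied
    # identically to both 64x64 patches inside each 64x128 sample.
    layers.Conv2D(16, kernel_size=5, strides=2, activation="relu"),
    layers.Conv2D(32, kernel_size=3, strides=2, activation="relu"),
    layers.Flatten(),
    layers.Dropout(0.4),                        # carry over the regularization from part 2
    layers.Dense(128, activation="relu"),
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```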