

Given a pair of images like the ones above, we want to stitch them to create a panoramic scene. It is important to note that both images need to share some common region. Moreover, our solution has to be robust even if the pictures differ in aspects such as scale and rotation.

Keypoints Detection

An initial and probably naive approach would be to extract key points using an algorithm such as Harris Corners. Then, we could try to match the corresponding key points based on some measure of similarity, like Euclidean distance. As we know, corners have one nice property: they are invariant to rotation. It means that once we detect a corner, if we rotate the image, that corner will still be there. However, what if we rotate and then scale an image? In this situation, we would have a hard time, because corners are not invariant to scale. That is to say, if we zoom in on an image, a previously detected corner might become a line!

In summary, we need features that are invariant to rotation and scaling. That is where more robust methods like SIFT, SURF, and ORB come in. Methods like SIFT and SURF try to address the limitations of corner detection algorithms. Usually, corner detector algorithms use a fixed-size kernel to detect regions of interest (corners) in images. It is easy to see that when we scale an image, this kernel might become too small or too big. To address this limitation, SIFT uses the Difference of Gaussians (DoG). The idea is to apply DoG to differently scaled versions of the same image. It also uses neighboring pixel information to find and refine key points and the corresponding descriptors.

To start, we need to load 2 images: a query image and a training image. Initially, we extract key points and descriptors from both. We can do it in one step by using the OpenCV detectAndCompute() function. Note that in order to use detectAndCompute() we need an instance of a keypoint detector and descriptor object. Also, before feeding the images to detectAndCompute(), we convert them to grayscale.

Detection of key points and descriptors using ORB and Hamming distances.
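
Here is a minimal sketch of this step, assuming OpenCV's Python bindings; the file names query.jpg and train.jpg and all variable names are placeholders, not values from the original text:

```python
import cv2

# Load the query and training images (file names are placeholders).
query_img = cv2.imread('query.jpg')
train_img = cv2.imread('train.jpg')

# Convert both images to grayscale before feature extraction.
query_gray = cv2.cvtColor(query_img, cv2.COLOR_BGR2GRAY)
train_gray = cv2.cvtColor(train_img, cv2.COLOR_BGR2GRAY)

# Create an ORB keypoint detector/descriptor object, then extract
# key points and descriptors from both images in a single step.
orb = cv2.ORB_create()
kps_query, des_query = orb.detectAndCompute(query_gray, None)
kps_train, des_train = orb.detectAndCompute(train_gray, None)
```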

Feature Matching

As we can see, we have a large number of features from both images. Now, we would like to compare the 2 sets of features and stick with the pairs that show more similarity. With OpenCV, feature matching requires a Matcher object. The BruteForce (BF) Matcher does exactly what its name suggests: given 2 sets of features (from image A and image B), each feature from set A is compared against all features from set B. By default, the BF Matcher computes the Euclidean distance between two points. Thus, for every feature in set A, it returns the closest feature from set B. For SIFT and SURF, OpenCV recommends using Euclidean distance. For binary feature extractors like ORB and BRISK, Hamming distance is suggested. To create a BruteForce Matcher using OpenCV, we only need to specify 2 parameters. The first is the distance metric. The second is the crossCheck boolean parameter, which indicates whether the two features have to match each other to be considered valid. In other words, for a pair of features (f1, f2) to be considered valid, f1 needs to match f2 and f2 has to match f1 as the closest match as well. This procedure ensures a more robust set of matching features and is described in the original SIFT paper.
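
A sketch of the matcher in action, continuing with the ORB descriptors and the placeholder variable names from the snippet above:

```python
# Brute-force matcher with Hamming distance (appropriate for ORB's
# binary descriptors) and cross-checking enabled, so only mutual
# best matches are kept.
bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

# For every feature in the query set, find its closest feature
# in the training set.
matches = bf.match(des_query, des_train)

# Sort the matches by distance so the most similar pairs come first.
matches = sorted(matches, key=lambda m: m.distance)
```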

However, for cases where we want to consider more than one candidate match, we can use a KNN-based matching procedure. Instead of returning the single best match for a given feature, KNN returns the k best matches. Note that the value of k has to be pre-defined by the user. As we expect, KNN provides a larger set of candidate features.
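
A minimal sketch of KNN matching under the same assumptions; note that cross-checking is disabled here, since knnMatch() returns k candidates per feature rather than a single mutual best match:

```python
# Brute-force matcher without cross-checking, so we can ask for the
# k best candidates per query feature.
bf_knn = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=False)

# For each query descriptor, return its 2 best matches from the
# training set (the value of k is chosen by the user).
knn_matches = bf_knn.knnMatch(des_query, des_train, k=2)
```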
However, we need to ensure that all these matching pairs are robust before going further. To make sure the features returned by KNN are well comparable, the author of the SIFT paper suggests a technique called the ratio test. Basically, we iterate over each of the pairs returned by KNN and perform a distance test: for each pair of features (f1, f2), if the distance to the best match f1 is within a certain ratio of the distance to the second-best match f2, we keep the pair; otherwise, we throw it away. Also, the ratio value must be chosen manually. In essence, ratio testing does the same job as the cross-checking option from the BruteForce Matcher. Both ensure that a pair of detected features is indeed close enough to be considered similar.
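
A sketch of the ratio test applied to the KNN matches from the snippet above; the 0.75 threshold is a commonly used value chosen here for illustration, not one mandated by the text:

```python
# Keep a match only when the best candidate is significantly closer
# than the second-best one. The 0.75 ratio is a common manual choice.
good_matches = []
for m, n in knn_matches:
    if m.distance < 0.75 * n.distance:
        good_matches.append(m)
```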
