OpenCV Template Matching

Updated 2018-09-25 15:01

Goal

In this tutorial you will learn how to:

  • Use the OpenCV function matchTemplate() to search for matches between an image patch and an input image
  • Use the OpenCV function minMaxLoc() to find the maximum and minimum values (and their locations) in a given array

Theory

What is template matching?

Template matching is a technique for finding the areas of an image that match (are similar to) a template image (patch).

While the patch must be a rectangle, not all of the rectangle may be relevant. In such a case, a mask can be used to isolate the portion of the patch that should be used to find the match.

How does it work?

  • We need two primary components:
  1. Source image (I): the image in which we expect to find a match to the template image
  2. Template image (T): the patch image that will be compared to the source image

Our goal is to detect the highest matching area:

[figure: OpenCV template matching]

  • To identify the matching area, we have to compare the template image against the source image by sliding it:

[figure: sliding the template over the source image]

  • By sliding, we mean moving the patch one pixel at a time (left to right, top to bottom). At each location, a metric is computed that represents how "good" or "bad" the match at that location is (that is, how similar the patch is to that particular area of the source image).

For each location of T over I, the metric is stored in the result matrix R; each location (x,y) in R contains the match metric.

[figure: result matrix R]

The image above is the result R of sliding the patch with the metric TM_CCORR_NORMED. The brightest locations indicate the highest matches. As you can see, the location marked by the red circle is probably the one with the highest value, so that location (the rectangle formed by that point as a corner, with width and height equal to the patch image) is considered the match.
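The sliding computation described above can be sketched in a few lines of NumPy. This is a naive, SQDIFF-style illustration with a hypothetical helper (`match_sqdiff`), not OpenCV's optimized implementation:

```python
import numpy as np

def match_sqdiff(I, T):
    """Slide template T over image I and record the sum of squared
    differences at every top-left position, as in CV_TM_SQDIFF."""
    ih, iw = I.shape
    th, tw = T.shape
    R = np.empty((ih - th + 1, iw - tw + 1), dtype=np.float64)
    for y in range(R.shape[0]):
        for x in range(R.shape[1]):
            patch = I[y:y + th, x:x + tw]
            R[y, x] = np.sum((patch - T) ** 2.0)
    return R

# Embed the template at a known position and recover it:
rng = np.random.default_rng(0)
I = rng.random((30, 30))
T = I[12:20, 7:15].copy()        # exact copy -> SQDIFF is 0 there
R = match_sqdiff(I, T)
y, x = np.unravel_index(np.argmin(R), R.shape)
print(y, x)  # -> 12 7
```

Because the template is an exact copy of a region, the squared difference is exactly zero at that location and the argmin recovers it.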

  • In practice, we use the function minMaxLoc() to locate the highest value (or the lowest, depending on the type of matching method) in the R matrix.
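What minMaxLoc() returns can be emulated in NumPy; `min_max_loc` below is a hypothetical helper mirroring the OpenCV report, not the OpenCV API itself:

```python
import numpy as np

def min_max_loc(R):
    """Return (minVal, maxVal, minLoc, maxLoc), with locations as (x, y)
    pairs, mirroring what cv::minMaxLoc() reports for a 2-D matrix."""
    mi = np.unravel_index(np.argmin(R), R.shape)
    ma = np.unravel_index(np.argmax(R), R.shape)
    return (float(R[mi]), float(R[ma]),
            (int(mi[1]), int(mi[0])),   # OpenCV points are (x, y)
            (int(ma[1]), int(ma[0])))

R = np.array([[3.0, 9.0],
              [1.0, 4.0]])
print(min_max_loc(R))  # -> (1.0, 9.0, (0, 1), (1, 0))
```

Note the (x, y) ordering: OpenCV points use column-first coordinates, while NumPy indices are row-first.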

How does the mask work?

  • If masking is needed for the match, three components are required:
  1. Source image (I): the image in which we expect to find a match to the template image
  2. Template image (T): the patch image that will be compared to the source image
  3. Mask image (M): the mask, a grayscale image that masks the template
  • Only two matching methods currently accept a mask: CV_TM_SQDIFF and CV_TM_CCORR_NORMED (see below for an explanation of all the matching methods available in OpenCV).
  • The mask must have the same dimensions as the template.
  • The mask should have a CV_8U or CV_32F depth and the same number of channels as the template image. In the CV_8U case, the mask values are treated as binary, i.e. zero and non-zero. In the CV_32F case, the values should fall in the [0..1] range and the template pixels are multiplied by the corresponding mask pixel values. Since the input images in the sample have the CV_8UC3 type, the mask is also read as a color image.
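The weighting idea behind a CV_32F mask can be sketched for the squared-difference case. This is a simplified single-channel sketch with a hypothetical helper (`masked_sqdiff`); OpenCV's exact masked formulas differ in detail:

```python
import numpy as np

def masked_sqdiff(I, T, M):
    """Masked squared-difference score at each top-left position:
    differences are weighted by the float mask M (values in [0, 1]),
    so masked-out template pixels (M == 0) never affect the score."""
    th, tw = T.shape
    R = np.empty((I.shape[0] - th + 1, I.shape[1] - tw + 1))
    for y in range(R.shape[0]):
        for x in range(R.shape[1]):
            d = (I[y:y + th, x:x + tw] - T) * M
            R[y, x] = np.sum(d ** 2.0)
    return R

# A template with a corrupted corner still matches perfectly once that
# corner is masked out:
I = np.zeros((10, 10))
I[3:6, 3:6] = 1.0
T = I[3:6, 3:6].copy()
T[0, 0] = 99.0            # wrong pixel...
M = np.ones((3, 3))
M[0, 0] = 0.0             # ...ignored by the mask
R = masked_sqdiff(I, T, M)
y, x = np.unravel_index(np.argmin(R), R.shape)
print(y, x, R[y, x])  # -> 3 3 0.0
```

Without the mask, the corrupted corner would make the score at the true location nonzero; with it, the true location scores a perfect zero.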

[figure: template matching with a mask]

Which matching methods are available in OpenCV?

OpenCV implements template matching in the function matchTemplate(). The available methods are these six:

  • method=CV_TM_SQDIFF

R(x,y) = sum over (x',y') of [ T(x',y') - I(x+x',y+y') ]^2

  • method=CV_TM_SQDIFF_NORMED

R(x,y) = sum over (x',y') of [ T(x',y') - I(x+x',y+y') ]^2
         / sqrt( sum T(x',y')^2 * sum I(x+x',y+y')^2 )

  • method=CV_TM_CCORR

R(x,y) = sum over (x',y') of [ T(x',y') * I(x+x',y+y') ]

  • method=CV_TM_CCORR_NORMED

R(x,y) = sum over (x',y') of [ T(x',y') * I(x+x',y+y') ]
         / sqrt( sum T(x',y')^2 * sum I(x+x',y+y')^2 )

  • method=CV_TM_CCOEFF

R(x,y) = sum over (x',y') of [ T'(x',y') * I'(x+x',y+y') ]

where

T'(x',y') = T(x',y') - (1/(w*h)) * sum over (x'',y'') of T(x'',y'')
I'(x+x',y+y') = I(x+x',y+y') - (1/(w*h)) * sum over (x'',y'') of I(x+x'',y+y'')

  • method=CV_TM_CCOEFF_NORMED

R(x,y) = sum over (x',y') of [ T'(x',y') * I'(x+x',y+y') ]
         / sqrt( sum T'(x',y')^2 * sum I'(x+x',y+y')^2 )
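As a sanity check of the CV_TM_CCORR_NORMED formula above, a plain NumPy sketch (`ccorr_normed` is a hypothetical helper, not OpenCV's implementation) shows that the normalized score peaks at exactly 1 where the patch matches the image perfectly:

```python
import numpy as np

def ccorr_normed(I, T):
    """CV_TM_CCORR_NORMED per the formula above: the cross-correlation
    at each position, divided by the L2 norms of template and patch."""
    th, tw = T.shape
    tn = np.sqrt(np.sum(T ** 2.0))
    R = np.empty((I.shape[0] - th + 1, I.shape[1] - tw + 1))
    for y in range(R.shape[0]):
        for x in range(R.shape[1]):
            patch = I[y:y + th, x:x + tw]
            R[y, x] = np.sum(T * patch) / (tn * np.sqrt(np.sum(patch ** 2.0)))
    return R

rng = np.random.default_rng(1)
I = rng.random((20, 20)) + 0.1       # strictly positive image
T = I[5:11, 8:14].copy()             # exact copy of a region
R = ccorr_normed(I, T)
y, x = np.unravel_index(np.argmax(R), R.shape)
print(y, x)  # -> 5 8  (and R[y, x] is 1.0 up to rounding)
```

By the Cauchy-Schwarz inequality the normalized score can only reach 1 where the patch is proportional to the template, which is why the normalized methods are robust to overall brightness.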

Code 

C++

What does this program do?

  • Loads an input image, an image patch (template), and optionally a mask
  • Performs a template-matching procedure by using the OpenCV function matchTemplate() with any of the six matching methods described before. The user can choose the method by entering its selection in the Trackbar. If a mask is supplied, it will only be used for the methods that support masking
  • Normalizes the output of the matching procedure
  • Localizes the location with the highest matching probability
  • Draws a rectangle around the area corresponding to the highest match
  • Downloadable code: click here
  • Code at a glance:
#include "opencv2/imgcodecs.hpp"
#include "opencv2/highgui.hpp"
#include "opencv2/imgproc.hpp"
#include <iostream>
using namespace std;
using namespace cv;
bool use_mask;
Mat img; Mat templ; Mat mask; Mat result;
const char* image_window = "Source Image";
const char* result_window = "Result window";
int match_method;
int max_Trackbar = 5;
void MatchingMethod( int, void* );
int main( int argc, char** argv )
{
  if (argc < 3)
  {
    cout << "Not enough parameters" << endl;
    cout << "Usage:\n./MatchTemplate_Demo <image_name> <template_name> [<mask_name>]" << endl;
    return -1;
  }
  img = imread( argv[1], IMREAD_COLOR );
  templ = imread( argv[2], IMREAD_COLOR );
  if(argc > 3) {
    use_mask = true;
    mask = imread( argv[3], IMREAD_COLOR );
  }
  if(img.empty() || templ.empty() || (use_mask && mask.empty()))
  {
    cout << "Can't read one of the images" << endl;
    return -1;
  }
  namedWindow( image_window, WINDOW_AUTOSIZE );
  namedWindow( result_window, WINDOW_AUTOSIZE );
  const char* trackbar_label = "Method: \n 0: SQDIFF \n 1: SQDIFF NORMED \n 2: TM CCORR \n 3: TM CCORR NORMED \n 4: TM COEFF \n 5: TM COEFF NORMED";
  createTrackbar( trackbar_label, image_window, &match_method, max_Trackbar, MatchingMethod );
  MatchingMethod( 0, 0 );
  waitKey(0);
  return 0;
}
void MatchingMethod( int, void* )
{
  Mat img_display;
  img.copyTo( img_display );
  int result_cols =  img.cols - templ.cols + 1;
  int result_rows = img.rows - templ.rows + 1;
  result.create( result_rows, result_cols, CV_32FC1 );
  bool method_accepts_mask = (CV_TM_SQDIFF == match_method || match_method == CV_TM_CCORR_NORMED);
  if (use_mask && method_accepts_mask)
    { matchTemplate( img, templ, result, match_method, mask); }
  else
    { matchTemplate( img, templ, result, match_method); }
  normalize( result, result, 0, 1, NORM_MINMAX, -1, Mat() );
  double minVal; double maxVal; Point minLoc; Point maxLoc;
  Point matchLoc;
  minMaxLoc( result, &minVal, &maxVal, &minLoc, &maxLoc, Mat() );
  if( match_method  == TM_SQDIFF || match_method == TM_SQDIFF_NORMED )
    { matchLoc = minLoc; }
  else
    { matchLoc = maxLoc; }
  rectangle( img_display, matchLoc, Point( matchLoc.x + templ.cols , matchLoc.y + templ.rows ), Scalar::all(0), 2, 8, 0 );
  rectangle( result, matchLoc, Point( matchLoc.x + templ.cols , matchLoc.y + templ.rows ), Scalar::all(0), 2, 8, 0 );
  imshow( image_window, img_display );
  imshow( result_window, result );
  return;
}

Code explanation

  • Declare some global variables, such as the image, template and result matrices, as well as the match method and the window names:
bool use_mask;
Mat img; Mat templ; Mat mask; Mat result;
const char* image_window = "Source Image";
const char* result_window = "Result window";
int match_method;
int max_Trackbar = 5;
  • Load the source image, the template, and optionally, if supported by the matching method, a mask:
  img = imread( argv[1], IMREAD_COLOR );
  templ = imread( argv[2], IMREAD_COLOR );
  if(argc > 3) {
    use_mask = true;
    mask = imread( argv[3], IMREAD_COLOR );
  }
  if(img.empty() || templ.empty() || (use_mask && mask.empty()))
  {
    cout << "Can't read one of the images" << endl;
    return -1;
  }
  • Create the Trackbar to enter the kind of matching method to be used. When a change is detected, the callback function is called.
  const char* trackbar_label = "Method: \n 0: SQDIFF \n 1: SQDIFF NORMED \n 2: TM CCORR \n 3: TM CCORR NORMED \n 4: TM COEFF \n 5: TM COEFF NORMED";
  createTrackbar( trackbar_label, image_window, &match_method, max_Trackbar, MatchingMethod );
  • Let's check out the callback function. First, it makes a copy of the source image:
  Mat img_display;
  img.copyTo( img_display );
  • Perform the template matching operation. The arguments are naturally the input image I, the template T, the result R and the match_method (given by the Trackbar), and optionally the mask image M:
  bool method_accepts_mask = (CV_TM_SQDIFF == match_method || match_method == CV_TM_CCORR_NORMED);
  if (use_mask && method_accepts_mask)
    { matchTemplate( img, templ, result, match_method, mask); }
  else
    { matchTemplate( img, templ, result, match_method); }
  • We normalize the results:
  normalize( result, result, 0, 1, NORM_MINMAX, -1, Mat() );
  • We localize the minimum and maximum values in the result matrix R by using minMaxLoc():
  double minVal; double maxVal; Point minLoc; Point maxLoc;
  Point matchLoc;
  minMaxLoc( result, &minVal, &maxVal, &minLoc, &maxLoc, Mat() );
  • For the first two methods (TM_SQDIFF and TM_SQDIFF_NORMED) the best match is the lowest value; for all the others, higher values represent better matches. So we save the corresponding location in the matchLoc variable:
  if( match_method  == TM_SQDIFF || match_method == TM_SQDIFF_NORMED )
    { matchLoc = minLoc; }
  else
    { matchLoc = maxLoc; }
  • Display the source image and the result matrix. Draw a rectangle around the area with the highest match:
  rectangle( img_display, matchLoc, Point( matchLoc.x + templ.cols , matchLoc.y + templ.rows ), Scalar::all(0), 2, 8, 0 );
  rectangle( result, matchLoc, Point( matchLoc.x + templ.cols , matchLoc.y + templ.rows ), Scalar::all(0), 2, 8, 0 );
  imshow( image_window, img_display );
  imshow( result_window, result );

Java code at a glance

import org.opencv.core.*;
import org.opencv.core.Point;
import org.opencv.imgcodecs.Imgcodecs;
import org.opencv.imgproc.Imgproc;
import javax.swing.*;
import javax.swing.event.ChangeEvent;
import javax.swing.event.ChangeListener;
import java.awt.*;
import java.awt.image.BufferedImage;
import java.awt.image.DataBufferByte;
import java.util.*;
class MatchTemplateDemoRun implements ChangeListener{
    Boolean use_mask = false;
    Mat img = new Mat(), templ = new Mat();
    Mat mask = new Mat();
    int match_method;
    JLabel imgDisplay = new JLabel(), resultDisplay = new JLabel();
    public void run(String[] args) {
        if (args.length < 2)
        {
            System.out.println("Not enough parameters");
            System.out.println("Program arguments:\n<image_name> <template_name> [<mask_name>]");
            System.exit(-1);
        }
        img = Imgcodecs.imread( args[0], Imgcodecs.IMREAD_COLOR );
        templ = Imgcodecs.imread( args[1], Imgcodecs.IMREAD_COLOR );
        if(args.length > 2) {
            use_mask = true;
            mask = Imgcodecs.imread( args[2], Imgcodecs.IMREAD_COLOR );
        }
        if(img.empty() || templ.empty() || (use_mask && mask.empty()))
        {
            System.out.println("Can't read one of the images");
            System.exit(-1);
        }
        matchingMethod();
        createJFrame();
    }
    private void matchingMethod() {
        Mat result = new Mat();
        Mat img_display = new Mat();
        img.copyTo( img_display );
        int result_cols =  img.cols() - templ.cols() + 1;
        int result_rows = img.rows() - templ.rows() + 1;
        result.create( result_rows, result_cols, CvType.CV_32FC1 );
        Boolean method_accepts_mask = (Imgproc.TM_SQDIFF == match_method ||
                match_method == Imgproc.TM_CCORR_NORMED);
        if (use_mask && method_accepts_mask)
        { Imgproc.matchTemplate( img, templ, result, match_method, mask); }
        else
        { Imgproc.matchTemplate( img, templ, result, match_method); }
        Core.normalize( result, result, 0, 1, Core.NORM_MINMAX, -1, new Mat() );
        double minVal; double maxVal;
        Point matchLoc;
        Core.MinMaxLocResult mmr = Core.minMaxLoc( result );
        //  For all the other methods, the higher the better
        if( match_method  == Imgproc.TM_SQDIFF || match_method == Imgproc.TM_SQDIFF_NORMED )
        { matchLoc = mmr.minLoc; }
        else
        { matchLoc = mmr.maxLoc; }
        Imgproc.rectangle(img_display, matchLoc, new Point(matchLoc.x + templ.cols(),
                matchLoc.y + templ.rows()), new Scalar(0, 0, 0), 2, 8, 0);
        Imgproc.rectangle(result, matchLoc, new Point(matchLoc.x + templ.cols(),
                matchLoc.y + templ.rows()), new Scalar(0, 0, 0), 2, 8, 0);
        Image tmpImg = toBufferedImage(img_display);
        ImageIcon icon = new ImageIcon(tmpImg);
        imgDisplay.setIcon(icon);
        result.convertTo(result, CvType.CV_8UC1, 255.0);
        tmpImg = toBufferedImage(result);
        icon = new ImageIcon(tmpImg);
        resultDisplay.setIcon(icon);
    }
    public void stateChanged(ChangeEvent e) {
        JSlider source = (JSlider) e.getSource();
        if (!source.getValueIsAdjusting()) {
            match_method = (int)source.getValue();
            matchingMethod();
        }
    }
    public Image toBufferedImage(Mat m) {
        int type = BufferedImage.TYPE_BYTE_GRAY;
        if ( m.channels() > 1 ) {
            type = BufferedImage.TYPE_3BYTE_BGR;
        }
        int bufferSize = m.channels()*m.cols()*m.rows();
        byte [] b = new byte[bufferSize];
        m.get(0,0,b); // get all the pixels
        BufferedImage image = new BufferedImage(m.cols(),m.rows(), type);
        final byte[] targetPixels = ((DataBufferByte) image.getRaster().getDataBuffer()).getData();
        System.arraycopy(b, 0, targetPixels, 0, b.length);
        return image;
    }
    private void createJFrame() {
        String title = "Source image; Control; Result image";
        JFrame frame = new JFrame(title);
        frame.setLayout(new GridLayout(2, 2));
        frame.add(imgDisplay);
        int min = 0, max = 5;
        JSlider slider = new JSlider(JSlider.VERTICAL, min, max, match_method);
        slider.setPaintTicks(true);
        slider.setPaintLabels(true);
        // Set the spacing for the minor tick mark
        slider.setMinorTickSpacing(1);
        // Customizing the labels
        Hashtable labelTable = new Hashtable();
        labelTable.put( new Integer( 0 ), new JLabel("0 - SQDIFF") );
        labelTable.put( new Integer( 1 ), new JLabel("1 - SQDIFF NORMED") );
        labelTable.put( new Integer( 2 ), new JLabel("2 - TM CCORR") );
        labelTable.put( new Integer( 3 ), new JLabel("3 - TM CCORR NORMED") );
        labelTable.put( new Integer( 4 ), new JLabel("4 - TM COEFF") );
        labelTable.put( new Integer( 5 ), new JLabel("5 - TM COEFF NORMED : (Method)") );
        slider.setLabelTable( labelTable );
        slider.addChangeListener(this);
        frame.add(slider);
        frame.add(resultDisplay);
        frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
        frame.pack();
        frame.setVisible(true);
    }
}
public class MatchTemplateDemo
{
    public static void main(String[] args) {
        // load the native OpenCV library
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
        // run code
        new MatchTemplateDemoRun().run(args);
    }
}

Code explanation

  • Declare some global variables, such as the image, template and result matrices, as well as the match method and the display labels:
    Boolean use_mask = false;
    Mat img = new Mat(), templ = new Mat();
    Mat mask = new Mat();
    int match_method;
    JLabel imgDisplay = new JLabel(), resultDisplay = new JLabel();
  • Load the source image, the template, and optionally, if supported by the matching method, a mask:
        img = Imgcodecs.imread( args[0], Imgcodecs.IMREAD_COLOR );
        templ = Imgcodecs.imread( args[1], Imgcodecs.IMREAD_COLOR );
  • Create the slider to enter the kind of matching method to be used. When a change is detected, the callback function is called.
        int min = 0, max = 5;
        JSlider slider = new JSlider(JSlider.VERTICAL, min, max, match_method);
  • Let's check out the callback function. First, it makes a copy of the source image:
        Mat img_display = new Mat();
        img.copyTo( img_display );
  • Perform the template matching operation. The arguments are naturally the input image I, the template T, the result R and the match_method (given by the slider), and optionally the mask image M:
        Boolean method_accepts_mask = (Imgproc.TM_SQDIFF == match_method ||
                match_method == Imgproc.TM_CCORR_NORMED);
        if (use_mask && method_accepts_mask)
        { Imgproc.matchTemplate( img, templ, result, match_method, mask); }
        else
        { Imgproc.matchTemplate( img, templ, result, match_method); }
  • We normalize the results:
        Core.normalize( result, result, 0, 1, Core.NORM_MINMAX, -1, new Mat() );
  • We localize the minimum and maximum values in the result matrix R by using minMaxLoc():
        double minVal; double maxVal;
        Point matchLoc;
        Core.MinMaxLocResult mmr = Core.minMaxLoc( result );
  • For the first two methods (TM_SQDIFF and TM_SQDIFF_NORMED) the best match is the lowest value; for all the others, higher values represent better matches. So we save the corresponding location in the matchLoc variable:
        //  For all the other methods, the higher the better
        if( match_method  == Imgproc.TM_SQDIFF || match_method == Imgproc.TM_SQDIFF_NORMED )
        { matchLoc = mmr.minLoc; }
        else
        { matchLoc = mmr.maxLoc; }
  • Display the source image and the result matrix. Draw a rectangle around the area with the highest match:
        Imgproc.rectangle(img_display, matchLoc, new Point(matchLoc.x + templ.cols(),
                matchLoc.y + templ.rows()), new Scalar(0, 0, 0), 2, 8, 0);
        Imgproc.rectangle(result, matchLoc, new Point(matchLoc.x + templ.cols(),
                matchLoc.y + templ.rows()), new Scalar(0, 0, 0), 2, 8, 0);
        Image tmpImg = toBufferedImage(img_display);
        ImageIcon icon = new ImageIcon(tmpImg);
        imgDisplay.setIcon(icon);
        result.convertTo(result, CvType.CV_8UC1, 255.0);
        tmpImg = toBufferedImage(result);
        icon = new ImageIcon(tmpImg);
        resultDisplay.setIcon(icon);

Python code at a glance

import sys
import cv2
use_mask = False
img = None
templ = None
mask = None
image_window = "Source Image"
result_window = "Result window"
match_method = 0
max_Trackbar = 5
def main(argv):
    if (len(sys.argv) < 3):
        print('Not enough parameters')
        print('Usage:\nmatch_template_demo.py <image_name> <template_name> [<mask_name>]')
        return -1
    
    global img
    global templ
    img = cv2.imread(sys.argv[1], cv2.IMREAD_COLOR)
    templ = cv2.imread(sys.argv[2], cv2.IMREAD_COLOR)
    if (len(sys.argv) > 3):
        global use_mask
        use_mask = True
        global mask
        mask = cv2.imread( sys.argv[3], cv2.IMREAD_COLOR )
    if ((img is None) or (templ is None) or (use_mask and (mask is None))):
        print('Can\'t read one of the images')
        return -1
    
    
    cv2.namedWindow( image_window, cv2.WINDOW_AUTOSIZE )
    cv2.namedWindow( result_window, cv2.WINDOW_AUTOSIZE )
    
    
    trackbar_label = 'Method: \n 0: SQDIFF \n 1: SQDIFF NORMED \n 2: TM CCORR \n 3: TM CCORR NORMED \n 4: TM COEFF \n 5: TM COEFF NORMED'
    cv2.createTrackbar( trackbar_label, image_window, match_method, max_Trackbar, MatchingMethod )
    
    MatchingMethod(match_method)
    
    cv2.waitKey(0)
    return 0
    
def MatchingMethod(param):
    global match_method
    match_method = param
    
    img_display = img.copy()
    
    method_accepts_mask = (cv2.TM_SQDIFF == match_method or match_method == cv2.TM_CCORR_NORMED)
    if (use_mask and method_accepts_mask):
        result = cv2.matchTemplate(img, templ, match_method, None, mask)
    else:
        result = cv2.matchTemplate(img, templ, match_method)
    
    
    cv2.normalize( result, result, 0, 1, cv2.NORM_MINMAX, -1 )
    
    _minVal, _maxVal, minLoc, maxLoc = cv2.minMaxLoc(result, None)
    
    
    if (match_method == cv2.TM_SQDIFF or match_method == cv2.TM_SQDIFF_NORMED):
        matchLoc = minLoc
    else:
        matchLoc = maxLoc
    
    
    # shape[1] is the width (x offset), shape[0] the height (y offset)
    cv2.rectangle(img_display, matchLoc, (matchLoc[0] + templ.shape[1], matchLoc[1] + templ.shape[0]), (0,0,0), 2, 8, 0 )
    cv2.rectangle(result, matchLoc, (matchLoc[0] + templ.shape[1], matchLoc[1] + templ.shape[0]), (0,0,0), 2, 8, 0 )
    cv2.imshow(image_window, img_display)
    cv2.imshow(result_window, result)
    
    pass
if __name__ == "__main__":
    main(sys.argv[1:])

Code explanation

  • Declare some global variables, such as the image, template and result matrices, as well as the match method and the window names:
use_mask = False
img = None
templ = None
mask = None
image_window = "Source Image"
result_window = "Result window"
match_method = 0
max_Trackbar = 5
  • Load the source image, the template, and optionally, if supported by the matching method, a mask:
    global img
    global templ
    img = cv2.imread(sys.argv[1], cv2.IMREAD_COLOR)
    templ = cv2.imread(sys.argv[2], cv2.IMREAD_COLOR)
    if (len(sys.argv) > 3):
        global use_mask
        use_mask = True
        global mask
        mask = cv2.imread( sys.argv[3], cv2.IMREAD_COLOR )
    if ((img is None) or (templ is None) or (use_mask and (mask is None))):
        print('Can\'t read one of the images')
        return -1
  • Create the Trackbar to enter the kind of matching method to be used. When a change is detected, the callback function is called.
    trackbar_label = 'Method: \n 0: SQDIFF \n 1: SQDIFF NORMED \n 2: TM CCORR \n 3: TM CCORR NORMED \n 4: TM COEFF \n 5: TM COEFF NORMED'
    cv2.createTrackbar( trackbar_label, image_window, match_method, max_Trackbar, MatchingMethod )
  • Let's check out the callback function. First, it makes a copy of the source image:
    img_display = img.copy()
  • Perform the template matching operation. The arguments are naturally the input image I, the template T, the result R and the match_method (given by the Trackbar), and optionally the mask image M:
    method_accepts_mask = (cv2.TM_SQDIFF == match_method or match_method == cv2.TM_CCORR_NORMED)
    if (use_mask and method_accepts_mask):
        result = cv2.matchTemplate(img, templ, match_method, None, mask)
    else:
        result = cv2.matchTemplate(img, templ, match_method)
  • We normalize the results:
    cv2.normalize( result, result, 0, 1, cv2.NORM_MINMAX, -1 )
  • We localize the minimum and maximum values in the result matrix R by using minMaxLoc():
    _minVal, _maxVal, minLoc, maxLoc = cv2.minMaxLoc(result, None)
  • For the first two methods (TM_SQDIFF and TM_SQDIFF_NORMED) the best match is the lowest value; for all the others, higher values represent better matches. So we save the corresponding location in the matchLoc variable:
    if (match_method == cv2.TM_SQDIFF or match_method == cv2.TM_SQDIFF_NORMED):
        matchLoc = minLoc
    else:
        matchLoc = maxLoc
  • Display the source image and the result matrix. Draw a rectangle around the area with the highest match:
    # shape[1] is the width (x offset), shape[0] the height (y offset)
    cv2.rectangle(img_display, matchLoc, (matchLoc[0] + templ.shape[1], matchLoc[1] + templ.shape[0]), (0,0,0), 2, 8, 0 )
    cv2.rectangle(result, matchLoc, (matchLoc[0] + templ.shape[1], matchLoc[1] + templ.shape[0]), (0,0,0), 2, 8, 0 )
    cv2.imshow(image_window, img_display)
    cv2.imshow(result_window, result)

Results

  • Testing our program with an input image such as:

[figure: input image]

and a template image:

[figure: template image]

  • It generates the following result matrices (the first row are the standard methods SQDIFF, CCORR and CCOEFF; the second row the same methods in their normalized versions). In the first column, the darkest location is the better match; for the other two columns, the brighter a location, the higher the match.

[figure: Result_0]
[figure: Result_1]
[figure: Result_2]
[figure: Result_3]
[figure: Result_4]
[figure: Result_5]

The right match is shown below (the black rectangle around the face of the guy on the right). Notice that CCORR and CCOEFF gave erroneous best matches, while their normalized versions got it right. This may be because we are only considering the "highest match" and not the other possible high matches.
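The failure mode described above, unnormalized correlation preferring a merely bright region, can be reproduced with a NumPy sketch of the CCORR formulas on synthetic data (`ccorr` is a hypothetical helper, not OpenCV's implementation):

```python
import numpy as np

def ccorr(I, T, normed):
    """CV_TM_CCORR / CV_TM_CCORR_NORMED per the formulas above (sketch)."""
    th, tw = T.shape
    tn = np.sqrt(np.sum(T ** 2.0))
    R = np.empty((I.shape[0] - th + 1, I.shape[1] - tw + 1))
    for y in range(R.shape[0]):
        for x in range(R.shape[1]):
            p = I[y:y + th, x:x + tw]
            v = np.sum(T * p)
            R[y, x] = v / (tn * np.sqrt(np.sum(p ** 2.0))) if normed else v
    return R

T = np.arange(1.0, 17.0).reshape(4, 4)   # non-constant 4x4 template
I = np.full((20, 20), 0.01)              # dim background
I[3:7, 3:7] = T                          # true match at (3, 3)
I[12:16, 12:16] = 255.0                  # bright uniform distractor
ny, nx = np.unravel_index(np.argmax(ccorr(I, T, True)), (17, 17))
ry, rx = np.unravel_index(np.argmax(ccorr(I, T, False)), (17, 17))
print(ny, nx, ry, rx)  # -> 3 3 12 12  (normed is right, raw CCORR is fooled)
```

The raw correlation grows with patch brightness, so the 255-valued block dominates; dividing by the patch norm removes that bias, and the normalized score peaks at the true match.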

[figure: detected match]
