
Mastering OpenCV with Practical Computer Vision Projects, Part 1: A First Look at the Cartoonifier and Skin Changer

Every few days I check OpenCV.org for updates, and today I stumbled on something interesting: http://opencv.org/mastering-opencv-with-practical-computer-vision-projects.html. The OpenCV people have published a book called Mastering OpenCV with Practical Computer Vision Projects, a collection of interesting projects built with OpenCV.

 

Mastering OpenCV with Practical Computer Vision Projects

It contains the following nine chapters:

 

Chapters:

  • Ch1) Cartoonifier and Skin Changer for Android, by Shervin Emami.
  • Ch2) Marker-based Augmented Reality on iPhone or iPad, by Khvedchenia Ievgen.
  • Ch3) Marker-less Augmented Reality, by Khvedchenia Ievgen.
  • Ch4) Exploring Structure from Motion using OpenCV, by Roy Shilkrot.
  • Ch5) Number Plate Recognition using SVM and Neural Networks, by David Escrivá.
  • Ch6) Non-rigid Face Tracking, by Jason Saragih.
  • Ch7) 3D Head Pose Estimation using AAM and POSIT, by Daniel Lélis Baggio.
  • Ch8) Face Recognition using Eigenfaces or Fisherfaces, by Shervin Emami.
  • Ch9) Developing Fluid Wall using the Microsoft Kinect, by Naureen Mahmood.

Per-chapter Requirements:

  • Ch1: webcam (for desktop app), or Android development system (for Android app).
  • Ch2: iOS development system (to build an iOS app).
  • Ch3: OpenGL built into OpenCV.
  • Ch4: PCL (http://pointclouds.org/) and SSBA (http://www.inf.ethz.ch/personal/chzach/opensource.html).
  • Ch5: nothing.
  • Ch6: nothing, but requires training data for execution.
  • Ch7: nothing.
  • Ch8: webcam.
  • Ch9: Kinect depth sensor.

 

Screenshots:

  • The book page shows one screenshot per chapter (Ch1–Ch9, same titles as listed above); the images are not reproduced here.

 

 

Looking at the list, they really do cover some of today's hottest computer vision topics, including the Kinect (which I love) and even areas I know well such as face recognition, face tracking, and head pose estimation (so much about faces!), plus augmented reality and the like. I should find time to read it. The book is available in print or as an e-book; the purchase page is at PacktPub. That said, most readers here probably won't buy it; books from abroad are never cheap, at $44.99.

The good news is that all of the book's accompanying project source code is freely available: https://github.com/MasteringOpenCV/code

Project 1: A First Look at the Cartoonifier and Skin Changer

I tried building the first example on Windows (the project ships both Android code and a PC version). Screenshots below:

The first and fourth images show the cartoon effect, the second is the "evil" mode (so it looks a bit rough), and the third is the sketch mode. I haven't read the algorithm in detail yet; I've put up my VS2010 project for this first example for download. Note that the Debug build produces a working exe, while the Release build oddly fails to detect the webcam so the exe won't run. Keep that in mind when building.

Here is the implementation of the main cartoonifier functions:

 

/*****************************************************************************
*   cartoon.cpp
*   Create a cartoon-like or painting-like image filter.
******************************************************************************
*   by Shervin Emami, 5th Dec 2012 (shervin.emami@gmail.com)
*   http://www.shervinemami.info/
******************************************************************************
*   Ch1 of the book "Mastering OpenCV with Practical Computer Vision Projects"
*   Copyright Packt Publishing 2012.
*   http://www.packtpub.com/cool-projects-with-opencv/book
*****************************************************************************/

#include "cartoon.h"
#include "ImageUtils.h" // Handy functions for debugging OpenCV images, by Shervin Emami.

// Convert the given photo into a cartoon-like or painting-like image.
// Set sketchMode to true if you want a line drawing instead of a painting.
// Set alienMode to true if you want alien skin instead of human.
// Set evilMode to true if you want an "evil" character instead of a "good" character.
// Set debugType to 1 to show where skin color is taken from, and 2 to show the skin mask in a new window (for desktop).
void cartoonifyImage(Mat srcColor, Mat dst, bool sketchMode, bool alienMode, bool evilMode, int debugType)
{
    // Convert from BGR color to Grayscale
    Mat srcGray;
    cvtColor(srcColor, srcGray, CV_BGR2GRAY);

    // Remove the pixel noise with a good Median filter, before we start detecting edges.
    medianBlur(srcGray, srcGray, 7);

    Size size = srcColor.size();
    Mat mask = Mat(size, CV_8U);
    Mat edges = Mat(size, CV_8U);
    if (!evilMode) {
        // Generate a nice edge mask, similar to a pencil line drawing.
        Laplacian(srcGray, edges, CV_8U, 5);
        threshold(edges, mask, 80, 255, THRESH_BINARY_INV);
        // Mobile cameras usually have lots of noise, so remove small
        // dots of black noise from the black & white edge mask.
        removePepperNoise(mask);
    }
    else {
        // Evil mode, making everything look like a scary bad guy.
        // (Where "srcGray" is the original grayscale image plus a medianBlur of size 7x7).
        Mat edges2;
        Scharr(srcGray, edges, CV_8U, 1, 0);
        Scharr(srcGray, edges2, CV_8U, 1, 0, -1);
        edges += edges2;
        threshold(edges, mask, 12, 255, THRESH_BINARY_INV);
        medianBlur(mask, mask, 3);
    }
    //imshow("edges", edges);
    //imshow("mask", mask);

    // For sketch mode, we just need the mask!
    if (sketchMode) {
        // The output image has 3 channels, not a single channel.
        cvtColor(mask, dst, CV_GRAY2BGR);
        return;
    }

    // Do the bilateral filtering at a shrunken scale, since it
    // runs so slowly but doesn't need full resolution for a good effect.
    Size smallSize;
    smallSize.width = size.width/2;
    smallSize.height = size.height/2;
    Mat smallImg = Mat(smallSize, CV_8UC3);
    resize(srcColor, smallImg, smallSize, 0,0, INTER_LINEAR);

    // Perform many iterations of weak bilateral filtering, to enhance the edges
    // while blurring the flat regions, like a cartoon.
    Mat tmp = Mat(smallSize, CV_8UC3);
    int repetitions = 7;        // Repetitions for strong cartoon effect.
    for (int i=0; i<repetitions; i++) {
        int size = 9;           // Filter size. Has a large effect on speed.
        double sigmaColor = 9;  // Filter color strength.
        double sigmaSpace = 7;  // Positional strength. Affects speed.
        bilateralFilter(smallImg, tmp, size, sigmaColor, sigmaSpace);
        bilateralFilter(tmp, smallImg, size, sigmaColor, sigmaSpace);
    }

    if (alienMode) {
        // Apply an "alien" filter, when given a shrunken image and the full-res edge mask.
        // Detects the color of the pixels in the middle of the image, then changes the color of that region to green.
        changeFacialSkinColor(smallImg, edges, debugType);
    }

    // Go back to the original scale.
    resize(smallImg, srcColor, size, 0,0, INTER_LINEAR);

    // Clear the output image to black, so that the cartoon line drawings will be black (ie: not drawn).
    memset((char*)dst.data, 0, dst.step * dst.rows);

    // Use the blurry cartoon image, except for the strong edges that we will leave black.
    srcColor.copyTo(dst, mask);
}

// Apply an "alien" filter, when given a shrunken BGR image and the full-res edge mask.
// Detects the color of the pixels in the middle of the image, then changes the color of that region to green.
void changeFacialSkinColor(Mat smallImgBGR, Mat bigEdges, int debugType)
{
        // Convert to Y'CrCb color-space, since it is better for skin detection and color adjustment.
        Mat yuv = Mat(smallImgBGR.size(), CV_8UC3);
        cvtColor(smallImgBGR, yuv, CV_BGR2YCrCb);

        // The floodFill mask has to be 2 pixels wider and 2 pixels taller than the small image.
        // The edge mask is the full src image size, so we will shrink it to the small size,
        // storing into the floodFill mask data.
        int sw = smallImgBGR.cols;
        int sh = smallImgBGR.rows;
        Mat maskPlusBorder = Mat::zeros(sh+2, sw+2, CV_8U);
        Mat mask = maskPlusBorder(Rect(1,1,sw,sh));  // mask is a ROI in maskPlusBorder.
        resize(bigEdges, mask, smallImgBGR.size());

        // Make the mask values just 0 or 255, to remove weak edges.
        threshold(mask, mask, 80, 255, THRESH_BINARY);
        // Connect the edges together, if there was a pixel gap between them.
        dilate(mask, mask, Mat());
        erode(mask, mask, Mat());
        //imshow("constraints for floodFill", mask);

        // YCrCb Skin detector and color changer using multiple flood fills into a mask.
        // Apply flood fill on many points around the face, to cover different shades & colors of the face.
        // Note that these values are dependent on the face outline, drawn in drawFaceStickFigure().
        int const NUM_SKIN_POINTS = 6;
        Point skinPts[NUM_SKIN_POINTS];
        skinPts[0] = Point(sw/2,          sh/2 - sh/6);
        skinPts[1] = Point(sw/2 - sw/11,  sh/2 - sh/6);
        skinPts[2] = Point(sw/2 + sw/11,  sh/2 - sh/6);
        skinPts[3] = Point(sw/2,          sh/2 + sh/16);
        skinPts[4] = Point(sw/2 - sw/9,   sh/2 + sh/16);
        skinPts[5] = Point(sw/2 + sw/9,   sh/2 + sh/16);
        // Skin might be fairly dark, or slightly less colorful.
        // Skin might be very bright, or slightly more colorful but not much more blue.
        const int LOWER_Y = 60;
        const int UPPER_Y = 80;
        const int LOWER_Cr = 25;
        const int UPPER_Cr = 15;
        const int LOWER_Cb = 20;
        const int UPPER_Cb = 15;
        Scalar lowerDiff = Scalar(LOWER_Y, LOWER_Cr, LOWER_Cb);
        Scalar upperDiff = Scalar(UPPER_Y, UPPER_Cr, UPPER_Cb);
        // Instead of drawing into the "yuv" image, just draw 1's into the "maskPlusBorder" image, so we can apply it later.
        // The "maskPlusBorder" is initialized with the edges, because floodFill() will not go across non-zero mask pixels.
        Mat edgeMask = mask.clone();    // Keep a duplicate copy of the edge mask.
        for (int i=0; i<NUM_SKIN_POINTS; i++) {
            // Use the floodFill() mode that stores to an external mask, instead of the input image.
            const int flags = 4 | FLOODFILL_FIXED_RANGE | FLOODFILL_MASK_ONLY;
            floodFill(yuv, maskPlusBorder, skinPts[i], Scalar(), NULL, lowerDiff, upperDiff, flags);
            if (debugType >= 1)
                circle(smallImgBGR, skinPts[i], 5, CV_RGB(0, 0, 255), 1, CV_AA);
        }
        if (debugType >= 2)
            imshow("flood mask", mask*120); // Draw the edges as white and the skin region as grey.

        // After the flood fill, "mask" contains both edges and skin pixels, whereas
        // "edgeMask" just contains edges. So to get just the skin pixels, we can remove the edges from it.
        mask -= edgeMask;
        // "mask" now just contains 1's in the skin pixels and 0's for non-skin pixels.

        // Change the color of the skin pixels in the given BGR image.
        int Red = 0;
        int Green = 70;
        int Blue = 0;
        add(smallImgBGR, Scalar(Blue, Green, Red), smallImgBGR, mask);
}

// Remove black dots (up to 4x4 in size) of noise from a pure black & white image.
// ie: The input image should be mostly white (255) and just contains some black (0) noise
// in addition to the black (0) edges.
void removePepperNoise(Mat &mask)
{
    // For simplicity, ignore the top & bottom row border.
    for (int y=2; y<mask.rows-2; y++) {
        // Get access to each of the 5 rows near this pixel.
        uchar *pThis = mask.ptr(y);
        uchar *pUp1 = mask.ptr(y-1);
        uchar *pUp2 = mask.ptr(y-2);
        uchar *pDown1 = mask.ptr(y+1);
        uchar *pDown2 = mask.ptr(y+2);

        // For simplicity, ignore the left & right row border.
        pThis += 2;
        pUp1 += 2;
        pUp2 += 2;
        pDown1 += 2;
        pDown2 += 2;
        for (int x=2; x<mask.cols-2; x++) {
            uchar v = *pThis;   // Get the current pixel value (either 0 or 255).
            // If the current pixel is black, but all the pixels on the 2-pixel-radius-border are white
            // (ie: it is a small island of black pixels, surrounded by white), then delete that island.
            if (v == 0) {
                bool allAbove = *(pUp2 - 2) && *(pUp2 - 1) && *(pUp2) && *(pUp2 + 1) && *(pUp2 + 2);
                bool allLeft = *(pUp1 - 2) && *(pThis - 2) && *(pDown1 - 2);
                bool allBelow = *(pDown2 - 2) && *(pDown2 - 1) && *(pDown2) && *(pDown2 + 1) && *(pDown2 + 2);
                bool allRight = *(pUp1 + 2) && *(pThis + 2) && *(pDown1 + 2);
                bool surroundings = allAbove && allLeft && allBelow && allRight;
                if (surroundings == true) {
                    // Fill the whole 5x5 block as white. Since we know the 5x5 borders
                    // are already white, just need to fill the 3x3 inner region.
                    *(pUp1 - 1) = 255;
                    *(pUp1 + 0) = 255;
                    *(pUp1 + 1) = 255;
                    *(pThis - 1) = 255;
                    *(pThis + 0) = 255;
                    *(pThis + 1) = 255;
                    *(pDown1 - 1) = 255;
                    *(pDown1 + 0) = 255;
                    *(pDown1 + 1) = 255;
                }
                // Since we just covered the whole 5x5 block with white, we know the next 2 pixels
                // won't be black, so skip the next 2 pixels on the right.
                pThis += 2;
                pUp1 += 2;
                pUp2 += 2;
                pDown1 += 2;
                pDown2 += 2;
            }
            // Move to the next pixel.
            pThis++;
            pUp1++;
            pUp2++;
            pDown1++;
            pDown2++;
        }
    }
}

// Draw an anti-aliased face outline, so the user knows where to put their face.
// Note that the skin detector for "alien" mode uses points around the face based on the face
// dimensions shown by this function.
void drawFaceStickFigure(Mat dst)
{
    Size size = dst.size();
    int sw = size.width;
    int sh = size.height;

    // Draw the face onto a color image with black background.
    Mat faceOutline = Mat::zeros(size, CV_8UC3);
    Scalar color = CV_RGB(255,255,0);   // Yellow
    int thickness = 4;
    // Use 70% of the screen height as the face height.
    int faceH = sh/2 * 70/100;  // "faceH" is actually half the face height (ie: radius of the ellipse).
    // Scale the width to be the same nice shape for any screen width (based on screen height).
    int faceW = faceH * 72/100; // Use a face with an aspect ratio of 0.72
    // Draw the face outline.
    ellipse(faceOutline, Point(sw/2, sh/2), Size(faceW, faceH), 0, 0, 360, color, thickness, CV_AA);
    // Draw the eye outlines, as 2 half ellipses.
    int eyeW = faceW * 23/100;
    int eyeH = faceH * 11/100;
    int eyeX = faceW * 48/100;
    int eyeY = faceH * 13/100;
    // Set the angle and shift for the eye half ellipses.
    int eyeA = 15; // angle in degrees.
    int eyeYshift = 11;
    // Draw the top of the right eye.
    ellipse(faceOutline, Point(sw/2 - eyeX, sh/2 - eyeY), Size(eyeW, eyeH), 0, 180+eyeA, 360-eyeA, color, thickness, CV_AA);
    // Draw the bottom of the right eye.
    ellipse(faceOutline, Point(sw/2 - eyeX, sh/2 - eyeY - eyeYshift), Size(eyeW, eyeH), 0, 0+eyeA, 180-eyeA, color, thickness, CV_AA);
    // Draw the top of the left eye.
    ellipse(faceOutline, Point(sw/2 + eyeX, sh/2 - eyeY), Size(eyeW, eyeH), 0, 180+eyeA, 360-eyeA, color, thickness, CV_AA);
    // Draw the bottom of the left eye.
    ellipse(faceOutline, Point(sw/2 + eyeX, sh/2 - eyeY - eyeYshift), Size(eyeW, eyeH), 0, 0+eyeA, 180-eyeA, color, thickness, CV_AA);

    // Draw the bottom lip of the mouth.
    int mouthY = faceH * 53/100;
    int mouthW = faceW * 45/100;
    int mouthH = faceH * 6/100;
    ellipse(faceOutline, Point(sw/2, sh/2 + mouthY), Size(mouthW, mouthH), 0, 0, 180, color, thickness, CV_AA);

    // Draw anti-aliased text.
    int fontFace = FONT_HERSHEY_COMPLEX;
    float fontScale = 1.0f;
    int fontThickness = 2;
    putText(faceOutline, "Put your face here", Point(sw * 23/100, sh * 10/100), fontFace, fontScale, color, fontThickness, CV_AA);
    //imshow("faceOutline", faceOutline);

    // Overlay the outline with alpha blending.
    addWeighted(dst, 1.0, faceOutline, 0.7, 0, dst, CV_8UC3);
}
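
For reference, here is a minimal sketch of how cartoonifyImage() might be driven from a desktop webcam loop. This is my own example, not the book's main.cpp, and it assumes that cartoon.cpp (and its ImageUtils dependency) from the Chapter 1 sources are compiled into the project and that cartoon.h is on the include path. Note that the output Mat has to be allocated up front, because the function writes directly into the buffer it is handed:

#include <opencv2/opencv.hpp>
#include "cartoon.h"   // from the Chapter 1 sources (assumed to be on the include path)

int main()
{
    cv::VideoCapture camera(0);              // open the default webcam
    if (!camera.isOpened())
        return 1;

    cv::Mat frame, cartoon;
    while (true) {
        camera >> frame;                     // grab a BGR frame
        if (frame.empty())
            break;
        cartoon.create(frame.size(), CV_8UC3);   // dst must already be allocated
        // sketchMode = false, alienMode = false, evilMode = false, debugType = 0
        cartoonifyImage(frame, cartoon, false, false, false, 0);
        cv::imshow("Cartoonifier", cartoon);
        if (cv::waitKey(20) == 27)           // Esc quits
            break;
    }
    return 0;
}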

 

These projects are both fun and very practical, and they could easily serve as the basis for many more interesting projects. I hope more people will study this code and share what they learn.


[Semi-original] Image Fingerprinting and Google-style Similar-Image Search in C++

A while ago I came across this page: http://topic.csdn.net/u/20120417/15/edbf86f8-cfec-45c3-93e1-67bd555c684a.html. It looked fun and the method seemed simple, so I had long wanted to implement it in C++, but I kept putting it off until today, when I finally got around to it. Here is a free download of the Java source code: http://download.csdn.net/detail/yjflinchong/4239243 (you can use their images).

For the program below, just use whatever images you like.

Excerpt of the main idea:

Google "Search by Image": given one picture, you can search the web for all pictures similar to it.
Open the Google Image Search page,
upload an original photo of Angelababy,
and after you click search, Google returns similar images, with the most similar ones ranked first.

How does this work? How does a computer know that two images are similar?

According to Dr. Neal Krawetz, the key technique behind similar-image search is the "perceptual hash algorithm": it generates a "fingerprint" string for every image, and images are compared by comparing their fingerprints. The closer the fingerprints, the more similar the images.
Below is the simplest version of the idea (the original excerpt gives it in Java):
Preprocessing: read the image.
Step 1: shrink the image.

Resize the image down to 8×8, 64 pixels in total. This strips out detail and keeps only basic structure and brightness, removing the differences caused by size and aspect ratio.
Step 2: simplify the colors.

Convert the shrunken image to 64 gray levels, so that all pixels together use only 64 values.
Step 3: compute the average.
Compute the mean gray value of the 64 pixels.
Step 4: compare each pixel to the average.
Compare every pixel's gray value to the mean: greater than or equal to the mean becomes 1, below the mean becomes 0.
Step 5: compute the hash.
Concatenating the comparison results from the previous step gives a 64-bit integer: the image's fingerprint. The bit order does not matter, as long as every image uses the same order.
Once you have the fingerprints, you can compare images by counting how many of the 64 bits differ; this is simply the Hamming distance. If no more than 5 bits differ, the two images are very similar; if more than 10 bits differ, they are different images.
You can take several images, compute their pairwise Hamming distances, and see which ones are similar.
The strength of this algorithm is that it is simple, fast, and unaffected by scaling; the weakness is that the image content must not change. Add a few words of text to the picture and it no longer matches, so its best use is finding the original image from a thumbnail.
In practice, the stronger pHash and SIFT algorithms are usually used instead, because they tolerate image deformation: as long as the deformation stays under about 25%, they can still match the original. They are more complex, but the principle is the same as the simple method above: convert the image into a hash string first, then compare.
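
As a rough illustration of the DCT-based idea behind pHash (my own sketch following the commonly described recipe, not code from the excerpt or from the program below): shrink to 32×32 grayscale, take the DCT, keep the low-frequency 8×8 corner, and threshold each coefficient against the block's mean.

#include <opencv2/opencv.hpp>
#include <string>
using namespace cv;
using namespace std;

// Rough DCT-based pHash sketch: 32x32 grayscale -> DCT -> keep the low-frequency
// 8x8 corner -> threshold each coefficient against the block's mean.
// (Some variants exclude the DC term at (0,0) from the mean; this sketch keeps it.)
string pHashValue(const Mat &src)
{
    Mat img;
    if (src.channels() == 3)
        cvtColor(src, img, CV_BGR2GRAY);
    else
        img = src.clone();
    resize(img, img, Size(32, 32));
    img.convertTo(img, CV_32F);          // dct() needs a floating-point matrix
    Mat freq;
    dct(img, freq);
    Mat block = freq(Rect(0, 0, 8, 8));  // low-frequency corner
    float avg = (float)mean(block)[0];
    string hash(64, '0');
    for (int i = 0; i < 8; i++)
        for (int j = 0; j < 8; j++)
            if (block.at<float>(i, j) > avg)
                hash[8 * i + j] = '1';
    return hash;                         // compare with the same Hamming distance as before
}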

I use OpenCV to load the images (it seems I can't get anywhere without OpenCV these days...).

 

// Win32TestPure.cpp : entry point for the console application.
 #include "stdafx.h"
 //#include <atlstr.h>  // CString, CEdit
 #include "opencv2/opencv.hpp"
 #include <hash_map>
 //----------------------------------------------------
 using namespace std;
 using namespace cv;
 class PhotoFingerPrint
 {
 public:
 	int		Distance(string &str1,string &str2);
 	string	HashValue(Mat &src);		//主要功能函数
 	void    Insert(Mat &src, const string &val);
 	void	Find(Mat &src);
 private:
 	Mat		m_imgSrc;
 	hash_map<string,string> m_hashMap;

 };
 string PhotoFingerPrint::HashValue(Mat &src)
 {
 	string rst(64, '0');
 	Mat img;
 	if(src.channels()==3)
 		cvtColor(src,img,CV_BGR2GRAY);
 	else
 		img=src.clone();
 	// Step 1: shrink the image.
 	/* Resize the image down to 8x8, 64 pixels in total. This strips out the detail and keeps
 	only basic structure and brightness, removing differences caused by size and aspect ratio. */
 	resize(img,img,Size(8,8)); // shrink to 8x8
 	// Step 2: simplify the colors.
 	// Convert the shrunken image to 64 gray levels, i.e. all pixels together use only 64 values.
 	uchar *pData;
 	for(int i=0;i<img.rows;i++)
 	{
 		pData = img.ptr<uchar>(i);
 		for(int j=0;j<img.cols;j++)
 		{
 			pData[j]=pData[j]/4;   //0~255--->0~63
 		}
 	}
 	// Step 3: compute the average.
 	// Compute the mean gray value of all 64 pixels.
 	int average = mean(img).val[0];
 	// Step 4: compare each pixel to the average.
 	// Pixels greater than or equal to the mean become 1; pixels below it become 0.
 	Mat mask = (img >= (uchar)average);
 	// Step 5: build the hash.
 	/* Concatenating the 64 comparison results gives a 64-bit value: the image's fingerprint.
 	The order of the bits does not matter, as long as every image uses the same order. */
 	int index = 0;
 	for(int i=0;i<mask.rows;i++)
 	{
 		pData = mask.ptr<uchar>(i);
 		for(int j=0;j<mask.cols;j++)
 		{
 			if(pData[j]==0)
 				rst[index++]='0';
 			else
 				rst[index++]='1';
 		}
 	}
 	return rst;
 }
 void    PhotoFingerPrint::Insert(Mat &src, const string &val)
 {
 	string strVal = HashValue(src);
 	m_hashMap.insert(pair<string,string>(strVal,val));
 	cout<<"insert one value:"<<strVal<<"   string:"<<val<<endl;
 }
 void    PhotoFingerPrint::Find(Mat &src)
 {
 	string strVal=HashValue(src);
 	hash_map<string,string>::iterator it=m_hashMap.find(strVal);
 	if(it==m_hashMap.end())
 		{cout<<"no photo---------"<<strVal<<endl;}
 	else
 		cout<<"find one , key:  "<<it->first<<"   value:"<<it->second<<endl;

 /*	return *it;*/
 }
 int PhotoFingerPrint::Distance(string &str1,string &str2)
 {
 	if((str1.size()!=64)||(str2.size()!=64))
 		return -1;
 	int difference = 0;
 	for(int i=0;i<64;i++)
 	{
 		if(str1[i]!=str2[i])
 			difference++;
 	}
 	return difference;
 }
 int main(int argc, char* argv[] )
 {
 	PhotoFingerPrint pfp;
 	Mat m1=imread("images/example3.jpg",0);
 	Mat m2=imread("images/example4.jpg",0);
 	Mat m3=imread("images/example5.jpg",0);
 	Mat m4=imread("images/example6.jpg",0);
 	Mat m5;
 	resize(m3,m5,Size(100,100));
 	string str1 = pfp.HashValue(m1);
 	string str2 = pfp.HashValue(m2);
 	string str3 = pfp.HashValue(m3);
 	string str4 = pfp.HashValue(m4);
 	pfp.Insert(m1,string("str1"));
 	pfp.Insert(m2,string("str2"));
 	pfp.Insert(m3,string("str3"));
 	pfp.Insert(m4,string("str4"));
 	pfp.Find(m5);
 // 	cout<<pfp.Distance(str1,str1)<<endl;
 // 	cout<<pfp.Distance(str1,str2)<<endl;
 // 	cout<<pfp.Distance(str1,str3)<<endl;
 // 	cout<<pfp.Distance(str1,str4)<<endl;

 	return 0;
 }

Well, this hash table only becomes meaningful once enough images have been inserted. The program above is only a rough model; I haven't worked out the details (this is my first time using hash_map). Comments and suggestions are welcome.
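
One direction that might be worth exploring (my own sketch, not part of the program above): instead of the exact-match lookup in Find(), scan the table for the stored fingerprint with the smallest Hamming distance and accept it only if the distance stays under a threshold, e.g. 5 bits as in the excerpt. A minimal sketch, assuming the fingerprints are the 64-character '0'/'1' strings produced by HashValue() and that an ordinary std::map is used in place of hash_map:

#include <iostream>
#include <map>
#include <string>
using namespace std;

// Count differing characters between two fingerprint strings (-1 if lengths differ).
static int hammingDistance(const string &a, const string &b)
{
    if (a.size() != b.size())
        return -1;
    int diff = 0;
    for (size_t i = 0; i < a.size(); i++)
        if (a[i] != b[i])
            diff++;
    return diff;
}

// Return the label of the stored image whose fingerprint is closest to "query",
// or an empty string if nothing is within "maxDist" differing bits.
static string findNearest(const map<string, string> &table, const string &query, int maxDist = 5)
{
    string best;
    int bestDist = maxDist + 1;
    for (map<string, string>::const_iterator it = table.begin(); it != table.end(); ++it) {
        int d = hammingDistance(it->first, query);
        if (d >= 0 && d < bestDist) {
            bestDist = d;
            best = it->second;
        }
    }
    return best;
}

int main()
{
    map<string, string> table;
    string a(64, '0'), b(64, '0');
    a[3] = '1';                      // stored fingerprint
    b[3] = '1'; b[10] = '1';         // query: differs from "a" in one bit
    table[a] = "example3";
    cout << findNearest(table, b) << endl;   // prints: example3 (distance 1)
    return 0;
}

This turns the lookup into a linear scan, which is fine for a small collection; for a large one you would want an index structure suited to Hamming-space search rather than a plain map.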