OpenCV camera calibration

   I downloaded this code somewhere a few months ago; the original copyright is not mine and I cannot credit the source.

   A common problem when calibrating a camera with OpenCV is that corner detection fails on very large images. My small modification: shrink the image first, detect the corners there, then scale the corner coordinates back up to the original resolution.

   Input parameters:

boardSize: the pattern size, i.e. the number of inner corners (not the number of squares)

squareSize: the physical size of one square, in millimeters
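To make the boardSize convention concrete: a board with 8×6 printed squares exposes 7×5 inner corners, which is what findChessboardCorners expects. A quick sketch of the relation (plain Python; the board dimensions are just examples):

```python
def inner_corner_count(squares_x, squares_y):
    """A board of squares_x by squares_y printed squares has
    (squares_x - 1) by (squares_y - 1) inner corners."""
    return (squares_x - 1, squares_y - 1)

# An 8x6 checkerboard matches boardSize = cv::Size(7, 5) in the code below.
print(inner_corner_count(8, 6))  # (7, 5)
```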

CMakeLists:

cmake_minimum_required(VERSION 2.8)
project( Calibrate )
find_package( OpenCV REQUIRED )
include_directories(${CMAKE_CURRENT_SOURCE_DIR})
add_executable( Calibrate camera.cpp toolFunction.cpp)
target_link_libraries( Calibrate ${OpenCV_LIBS} )
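Assuming the three files sit in one directory alongside this CMakeLists.txt, a typical out-of-source build and run looks like this (paths are illustrative; OpenCV must be installed and discoverable by CMake):

```shell
mkdir build && cd build
cmake ..
make
./Calibrate
```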

camera.cpp

#include <iostream>
#include <vector>
#include <string>
#include <cstdio>
#include <cmath>

#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>   // cvtColor, resize
#include <opencv2/calib3d/calib3d.hpp>
#include "toolFunction.h"
#define DEBUG_OUTPUT_INFO

using namespace std;
using namespace cv;

int main()
{
    char mypath[256];
    CalibrationAssist calAssist;
    cv::Size msize(600, 400); // detection resolution, for 6000x4000 input images
    int downsize = 10;        // downscale factor (must match the resize ratio)

        std::cout << "Start corner detection ..." << std::endl;

        cv::Mat curGraph;  // current image
        cv::Mat gray;      // grayscale version of the current image
        cv::Mat small;     // downscaled copy used for corner detection

        int imageCount = 12;
        int imageCountSuccess = 0;
        cv::Size image_size;
        cv::Size boardSize  = cv::Size(7, 5);   // pattern size: inner corners, not squares!
        cv::Size squareSize = cv::Size(30, 30); // physical square size in mm, sets the scale

        std::vector<cv::Point2f> corners;                  // corners of one image
        std::vector<std::vector<cv::Point2f> > seqCorners; // corners of all images

        for ( int i=1; i<=imageCount; i++ )
        {
            sprintf( mypath, "/home/jst/Data/gezi/%03d.jpg", i );
            std::cout << mypath << std::endl;
            curGraph = cv::imread( mypath );
            if ( curGraph.empty() )
            {
                std::cout << mypath << " could not be read, skipping." << std::endl;
                continue;
            }
            cv::resize( curGraph, small, msize );

            if ( curGraph.channels() == 3 )
                cv::cvtColor( curGraph, gray, CV_BGR2GRAY );
            else
                curGraph.copyTo( gray );

            // empty the corner list for every image
            std::vector<cv::Point2f>().swap( corners );

            // detect corners on the downscaled image
            bool success = cv::findChessboardCorners( small, boardSize, corners );

            if ( success ) // succeed
            {
                std::cout << i << " " << mypath << " succeed"<< std::endl;
                int row = curGraph.rows;
                int col = curGraph.cols;

                imageCountSuccess ++;

                image_size = cv::Size( col, row );
                // scale the corners back up to full resolution
                for ( size_t j=0; j<corners.size(); j++ )
                {
                    corners[j].x *= downsize;
                    corners[j].y *= downsize;
                }
                // find sub-pixels
                cv::cornerSubPix(
                    gray,
                    corners,
                    cv::Size( 11, 11 ),
                    cv::Size( -1, -1 ),
                    cv::TermCriteria( CV_TERMCRIT_EPS + CV_TERMCRIT_ITER, 30, 0.1 ) );
                seqCorners.push_back( corners );

                // draw corners and show them in current image
                cv::Mat imageDrawCorners;
                if ( curGraph.channels() == 3 )
                    curGraph.copyTo( imageDrawCorners );
                else
                    cv::cvtColor( curGraph, imageDrawCorners, CV_GRAY2RGB );

                for ( size_t j = 0; j < corners.size(); j++ )
                {
                    cv::Point2f dotPoint = corners[j];
                    cv::circle( imageDrawCorners, dotPoint, 3, cv::Scalar( 0, 255, 0 ), -1 );
                    cv::Point2f pt_m = dotPoint + cv::Point2f( 4, 4 );
                    char text[100];
                    sprintf( text, "%d", (int)j + 1 );  // corner indexes start from 1
                    cv::putText( imageDrawCorners, text, pt_m, 1, 0.5, cv::Scalar( 255, 0, 255 ) );
                }

                sprintf( mypath, "./corners_%d.jpg", i );
                // save the image with drawn corners and index labels
                cv::imwrite( mypath, imageDrawCorners );
            }
            else // failed
            {
                std::cout << mypath << " corner detect failed!" << std::endl;
            }

        }
        std::cout << "Corner detection done! "
            << imageCountSuccess << " images succeeded." << std::endl;

        if ( imageCountSuccess < 3 )
        {
            std::cout << "Only " << imageCountSuccess
                << " images succeeded; at least 3 are required. Aborting." << std::endl;
            return 0;
        }
        else
        {
            std::cout << "Start calibration ..." << std::endl;
            cv::Point3f point3D;
            std::vector<cv::Point3f> objectPoints;
            std::vector<double> distCoeffs;
            std::vector<double> rotation;
            std::vector<double> translation;

            std::vector<std::vector<cv::Point3f> > seqObjectPoints;
            std::vector<std::vector<double> > seqRotation;
            std::vector<std::vector<double> > seqTranslation;
            cv::Mat_<double> cameraMatrix;

            // calibration pattern points in the calibration pattern coordinate space
            for ( int t=0; t<imageCountSuccess; t++ )
            {
                objectPoints.clear();
                for ( int i=0; i<boardSize.height; i++ )
                {
                    for ( int j=0; j<boardSize.width; j++ )
                    {
                        // corners are detected row by row, left to right,
                        // so x must follow the column index j and y the row index i
                        point3D.x = j * squareSize.width;
                        point3D.y = i * squareSize.height;
                        point3D.z = 0;
                        objectPoints.push_back( point3D );
                    }
                }
                seqObjectPoints.push_back( objectPoints );
            }

            double reprojectionError = calibrateCamera(
                seqObjectPoints,
                seqCorners,
                image_size,
                cameraMatrix,
                distCoeffs,
                seqRotation,
                seqTranslation,
                CV_CALIB_FIX_ASPECT_RATIO|CV_CALIB_FIX_PRINCIPAL_POINT );

            std::cout << "Calibration done!" << std::endl;
            // calculate the calibration pattern points with the camera model
            std::vector<cv::Mat_<double> > projectMats;

            for ( int i=0; i<imageCountSuccess; i++ )
            {
                cv::Mat_<double> R, T;
                // translate rotation vector to rotation matrix via Rodrigues transformation
                cv::Rodrigues( seqRotation[i], R );
                T = cv::Mat( cv::Matx31d(
                    seqTranslation[i][0],
                    seqTranslation[i][1],
                    seqTranslation[i][2]) );

                cv::Mat_<double> P = cameraMatrix * cv::Mat( cv::Matx34d(
                    R(0,0), R(0,1), R(0,2), T(0),
                    R(1,0), R(1,1), R(1,2), T(1),
                    R(2,0), R(2,1), R(2,2), T(2) ) ); 

                projectMats.push_back(P);
            }

            std::vector<cv::Point2d> PointSet;
            int pointNum = boardSize.width*boardSize.height;
            std::vector<cv::Point3d> objectClouds;
            for ( int i=0; i<pointNum; i++ )
            {
                PointSet.clear();
                for ( int j=0; j<imageCountSuccess; j++ )
                {
                    cv::Point2d tempPoint = seqCorners[j][i];
                    PointSet.push_back(tempPoint);
                }
                // calculate calibration pattern points
                cv::Point3d objectPoint = calAssist.triangulate(projectMats,PointSet);
                objectClouds.push_back(objectPoint);
            }
            std::string pathTemp_point;
            pathTemp_point = ".";
            pathTemp_point += "/point.txt";
            calAssist.save3dPoint(pathTemp_point,objectClouds);

            std::string pathTemp_calib;
            pathTemp_calib = ".";
            pathTemp_calib += "/calibration.txt";

            FILE* fp = fopen( pathTemp_calib.c_str(), "w" );
            fprintf( fp, "Average re-projection error : %lf\n", reprojectionError );
            for ( int i=0; i<imageCountSuccess; i++ )
            {
                std::vector<cv::Point2f> errorList;
                cv::projectPoints(
                    seqObjectPoints[i],
                    seqRotation[i],
                    seqTranslation[i],
                    cameraMatrix,
                    distCoeffs,
                    errorList );

                corners = seqCorners[i];

                double meanError( 0.0 );
                for ( size_t j=0; j<corners.size(); j++ )
                {
                    meanError += std::sqrt(
                        (errorList[j].x - corners[j].x)*(errorList[j].x - corners[j].x) +
                        (errorList[j].y - corners[j].y)*(errorList[j].y - corners[j].y) );
                }
                rotation = seqRotation[i];
                translation = seqTranslation[i];
                fprintf( fp, "Re-projection error of image %d : %lf\n", i+1, meanError/corners.size() );
                fprintf( fp, "Rotation vector :\n" );
                fprintf( fp, "%lf %lf %lf\n", rotation[0], rotation[1], rotation[2] );
                fprintf( fp, "Translation vector :\n" );
                fprintf( fp, "%lf %lf %lf\n\n", translation[0], translation[1], translation[2] );
            }
            fprintf( fp, "Camera intrinsic matrix :\n" );
            fprintf( fp, "%lf %lf %lf\n%lf %lf %lf\n%lf %lf %lf\n",
                cameraMatrix(0,0), cameraMatrix(0,1), cameraMatrix(0,2),
                cameraMatrix(1,0), cameraMatrix(1,1), cameraMatrix(1,2),
                cameraMatrix(2,0), cameraMatrix(2,1), cameraMatrix(2,2));
            fprintf( fp, "Distortion coefficients :\n" );
            for ( size_t k=0; k<distCoeffs.size(); k++ )
                fprintf( fp, "%lf ", distCoeffs[k] );
            fclose( fp );
            std::cout << "Results are saved!" << std::endl;
        }

	return 0;
}
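The object-point loops above generate the ideal board coordinates that calibrateCamera pairs with the detected corners; since findChessboardCorners reports corners row by row, x must follow the column index and y the row index. A small Python sketch of the same grid, with the board and square size taken from the code above:

```python
def board_object_points(board_w, board_h, square_w, square_h):
    """Ideal checkerboard corner coordinates on the Z=0 plane,
    ordered row by row to match findChessboardCorners output."""
    points = []
    for i in range(board_h):      # row index -> y
        for j in range(board_w):  # column index -> x
            points.append((j * square_w, i * square_h, 0.0))
    return points

pts = board_object_points(7, 5, 30, 30)
print(pts[0], pts[1], pts[7])  # (0, 0, 0.0) (30, 0, 0.0) (0, 30, 0.0)
```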

toolFunction.cpp

#include "toolFunction.h"

cv::Point3d CalibrationAssist::triangulate(
    std::vector<cv::Mat_<double> > &ProjectMats,
    std::vector<cv::Point2d> &imagePoints)
{
    int frameSum = ProjectMats.size();
    // A and B must be CV_64F: the Matx13d rows copied below are double,
    // and copyTo into a float submatrix would silently reallocate the row
    // header instead of writing into A
    cv::Mat A( 2*frameSum, 3, CV_64FC1 );
    cv::Mat B( 2*frameSum, 1, CV_64FC1 );
    cv::Point2d u;
    cv::Mat_<double> P;
    int k = 0;
    for ( int i = 0; i < frameSum; i++ )     // build the coefficient matrices A and B
    {
        u = imagePoints[i];
        P = ProjectMats[i];
        cv::Mat( cv::Matx13d(
            u.x*P(2,0)-P(0,0),
            u.x*P(2,1)-P(0,1),
            u.x*P(2,2)-P(0,2) ) ).copyTo( A.row(k) );

        cv::Mat( cv::Matx13d(
            u.y*P(2,0)-P(1,0),
            u.y*P(2,1)-P(1,1),
            u.y*P(2,2)-P(1,2) ) ).copyTo( A.row(k+1) );

        B.at<double>(k)   = -( u.x*P(2,3) - P(0,3) );
        B.at<double>(k+1) = -( u.y*P(2,3) - P(1,3) );
        k += 2;
    }
    cv::Mat X;
    cv::solve( A, B, X, cv::DECOMP_SVD );   // least-squares solution
    return cv::Point3d( X.at<double>(0), X.at<double>(1), X.at<double>(2) );
}

void CalibrationAssist::save3dPoint( std::string path_, std::vector<cv::Point3d> &Point3dLists )
{
    FILE* fp = fopen( path_.c_str(), "w" );
    for ( size_t i = 0; i < Point3dLists.size(); i++ )
    {
        fprintf( fp, "%lf %lf %lf\n",
            Point3dLists[i].x, Point3dLists[i].y, Point3dLists[i].z );
    }
    fclose( fp );
    std::cout << "Point cloud saved!" << std::endl;
}
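triangulate() above solves the same overdetermined linear system as the classic DLT method: each view contributes two equations in the unknown 3D point. The idea can be checked with a synthetic example, sketched here in Python/NumPy (the two projection matrices are made up for the test, not taken from a real calibration):

```python
import numpy as np

def triangulate(proj_mats, image_points):
    """Linear triangulation: stack two equations per view into A @ X = b
    and solve in the least-squares sense, mirroring the C++ code above."""
    rows, rhs = [], []
    for P, (u, v) in zip(proj_mats, image_points):
        rows.append(u * P[2, :3] - P[0, :3])
        rows.append(v * P[2, :3] - P[1, :3])
        rhs.append(-(u * P[2, 3] - P[0, 3]))
        rhs.append(-(v * P[2, 3] - P[1, 3]))
    X, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    return X

# Two synthetic cameras: identity pose and a 1-unit baseline along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

def project(P, X):
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

point = np.array([0.5, 0.2, 4.0])
recovered = triangulate([P1, P2], [project(P1, point), project(P2, point)])
print(np.round(recovered, 6))  # close to [0.5 0.2 4.]
```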

toolFunction.h

#ifndef TOOL_FUNCTION_H
#define TOOL_FUNCTION_H

#include <iostream>
#include <cmath>
#include <fstream>
#include <vector>
#include <string>

#include <opencv2/core/core.hpp>

using namespace cv;
using namespace std;

class CalibrationAssist
{
public:
    CalibrationAssist() {}
    ~CalibrationAssist() {}


    cv::Point3d triangulate( std::vector<cv::Mat_<double> > &ProjectMats,
        std::vector<cv::Point2d> &imagePoints );

    void save3dPoint( std::string path_, std::vector<cv::Point3d> &Point3dLists );
};
#endif // TOOL_FUNCTION_H

Camera intrinsic matrix :
11964.095146 0.000000 2999.500000
0.000000 11964.095146 1999.500000
0.000000 0.000000 1.000000
Distortion coefficients :
0.163781 6.243557 -0.000678 0.000548 -190.849777
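For reference, OpenCV orders a 5-element distortion vector as (k1, k2, p1, p2, k3): radial, tangential, radial. Labeling the output above (values copied from it):

```python
coeffs = [0.163781, 6.243557, -0.000678, 0.000548, -190.849777]
labels = ["k1", "k2", "p1", "p2", "k3"]  # OpenCV's 5-element ordering
named = dict(zip(labels, coeffs))
print(named["k3"])  # -190.849777
```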

This was a 6000×4000 image; corner detection was done at a 10× downscale.
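A quick plausibility check on these intrinsics: with fx ≈ 11964 px over a 6000 px image width, the implied horizontal field of view is about 28°, i.e. a fairly long lens, which is at least self-consistent. The arithmetic (values copied from the output above):

```python
import math

def horizontal_fov_deg(fx_px, image_width_px):
    """Horizontal field of view implied by a pinhole focal length in pixels."""
    return math.degrees(2.0 * math.atan(image_width_px / (2.0 * fx_px)))

fov = horizontal_fov_deg(11964.095146, 6000)
print(round(fov, 1))  # about 28.2 degrees
```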

Time: 2024-10-31 23:03:49
