6D pose estimation problem. How to estimate the rotation matrix?
I have been trying to estimate the 6D pose of a moving camera using the OpenCV library. My steps are:
- use goodFeaturesToTrack to get features
- calculate optical flow
- track the features into the next frame
- get the fundamental matrix
- get the essential matrix
- check the four combinations of R and t to determine the true R and t using triangulation (couldn't find a better way to do this; I had to combine R and t and then extract them again later; see the sketch after this list)
- use the height information (from an altitude sensor) to get the scale factor and scale the translation
- update the transformation
- save the new feature set and image for the next loop
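As an aside, newer OpenCV releases (3.0 and later) wrap steps 4 to 6: findEssentialMat() estimates E directly from pixel correspondences, and recoverPose() performs the triangulation-based cheirality check internally. A minimal sketch of that route, assuming OpenCV >= 3.0 (estimatePose is just an illustrative wrapper, not part of my code):

#include <opencv2/opencv.hpp>
using namespace cv;

// Sketch: estimate the relative pose from matched pixel coordinates p1/p2.
// f is the focal length in pixels, pp the principal point.
void estimatePose(const std::vector<Point2f>& p1, const std::vector<Point2f>& p2,
                  double f, Point2d pp, Mat& R, Mat& t)
{
  Mat mask;
  // RANSAC directly on the essential matrix (no separate F step needed)
  Mat E = findEssentialMat(p1, p2, f, pp, RANSAC, 0.999, 1.0, mask);
  // recoverPose picks the one (R, t) of the four candidates that places the
  // points in front of both cameras; t comes back with unit norm, so it still
  // has to be scaled with external metric information (e.g. altitude).
  recoverPose(E, p1, p2, R, t, f, pp, mask);
}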
In any case, the results are far from what I expect. I verified that R is orthogonal, but I can't figure out where I went wrong. My code is as follows:
void imageCb(const sensor_msgs::ImageConstPtr& image)
{
  try
  {
    if(start_flag_)
    {
      cv_ptr_prev_ = cv_bridge::toCvCopy(image, enc::BGR8);
    }
    cv_ptr_new_ = cv_bridge::toCvCopy(image, enc::BGR8);
  }
  catch (cv_bridge::Exception& e)
  {
    ROS_ERROR("Conversion error: %s", e.what());
    return;
  }

  if(start_flag_)
  {
    image_prev_.create(cv_ptr_prev_->image.rows, cv_ptr_prev_->image.cols, CV_8UC1);
    image_new_.create(cv_ptr_new_->image.rows, cv_ptr_new_->image.cols, CV_8UC1);
  }
  cvtColor(cv_ptr_new_->image, image_new_, CV_BGR2GRAY);
  cvtColor(cv_ptr_prev_->image, image_prev_, CV_BGR2GRAY);
  Mat F, E;                    // fundamental and essential matrices
  vector<Point2f> p1;          // common feature points in the first image
  vector<Point2f> p2;          // common feature points in the second image
  double f = 320;              // focal length in pixels
  double Cx = 320.5;           // principal point (image centre of 640x480)
  double Cy = 240.5;
  Mat K = (Mat_<double>(3,3) << f, 0, -Cx, 0, f, -Cy, 0, 0, 1); // camera intrinsic matrix
  Mat W = (Mat_<double>(3,3) << 0, -1, 0, 1, 0, 0, 0, 0, 1);    // for the SVD decomposition of E
  Mat Z = (Mat_<double>(3,3) << 0, 1, 0, -1, 0, 0, 0, 0, 1);    // for the SVD decomposition of E
  Mat w, u, vt;                // SVD of the essential matrix
  Mat tx, Ra, Rb;              // candidate [t]x and rotations to be checked
  Mat t_unscaled, t_scaled, R; // true R and t for the current step

  // Lucas-Kanade optical flow
  vector<uchar> features_found;
  if(!start_flag_)
  {
    calcOpticalFlowPyrLK(image_prev_, image_new_, features_prev_, features_new_,
                         features_found, noArray(),
                         Size(win_size_*4+1, win_size_*4+1), 5,
                         TermCriteria(CV_TERMCRIT_ITER|CV_TERMCRIT_EPS, 20, 0.3));
    for(int i = 0; i < (int)features_new_.size(); i++)
    {
      circle(cv_ptr_prev_->image, features_new_[i], 1, CV_RGB(255,255,255));
      circle(cv_ptr_prev_->image, features_prev_[i], 1, CV_RGB(0,0,0));
      if(!features_found[i])
        continue;
      else
      {
        line(cv_ptr_prev_->image, features_prev_[i], features_new_[i], Scalar(255,0,0), 1, CV_AA);
        p1.push_back(Point2f(features_prev_[i].x-Cx, features_prev_[i].y-Cy));
        p2.push_back(Point2f(features_new_[i].x-Cx, features_new_[i].y-Cy));
      }
    }
    image_pub_.publish(cv_ptr_prev_->toImageMsg()); // publish the image with features drawn

    // Estimation
    vector<uchar> inliers;
    if(p1.size() == p2.size())
    {
      F = findFundamentalMat(p1, p2, CV_FM_RANSAC, 1.0, 0.999, inliers);
      // cout << "Number of inliers: " << sum(inliers) << endl;
      E = K.t()*F*K;
      // Decompose E = U diag(1,1,0) V^T into the two rotation candidates
      // Ra = U W V^T, Rb = U W^T V^T and the translation skew [t]x = U Z U^T
      SVD::compute(E, w, u, vt);
      tx = u * Z * u.t();
      Ra = u * W * vt;
      Rb = u * W.t() * vt;
      if(determinant(Ra) <= 0) Ra = -Ra;
      if(determinant(Rb) <= 0) Rb = -Rb;
      t_unscaled = (Mat_<double>(1,3) << tx.at<double>(2,1), tx.at<double>(0,2), tx.at<double>(1,0));
      Mat tminus = -(t_unscaled.clone());
      // Check which combination of Ra/Rb and t/-t is the true solution
      vector<Mat> R_vec ...
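The check I am describing here is the usual cheirality test: triangulate the correspondences with each of the four (R, t) candidates and keep the candidate that puts the points in front of both cameras. A standalone sketch of one way to write it (my illustration, not the code that is cut off above; it assumes R is 3x3 and t is 3x1, both CV_64F, and that x1/x2 are normalized coordinates, i.e. pixel coordinates premultiplied by K^-1):

#include <opencv2/opencv.hpp>
using namespace cv;

// Count how many triangulated points land in front of both cameras for one
// candidate (R, t), with projection matrices P1 = [I | 0] and P2 = [R | t].
int countInFront(const Mat& R, const Mat& t,
                 const std::vector<Point2f>& x1, const std::vector<Point2f>& x2)
{
  Mat P1 = Mat::eye(3, 4, CV_64F);
  Mat P2(3, 4, CV_64F);
  R.copyTo(P2(Range(0,3), Range(0,3)));
  t.copyTo(P2.col(3));
  Mat X; // 4xN homogeneous points
  triangulatePoints(P1, P2, x1, x2, X);
  X.convertTo(X, CV_64F);
  int count = 0;
  for(int i = 0; i < X.cols; i++)
  {
    Mat Xi = X.col(i) / X.at<double>(3, i); // dehomogenise
    double z1 = Xi.at<double>(2);           // depth in camera 1
    double z2 = Mat(P2 * Xi).at<double>(2); // depth in camera 2
    if(z1 > 0 && z2 > 0) count++;
  }
  return count;
}
// The candidate among {(Ra, t), (Ra, -t), (Rb, t), (Rb, -t)} with the
// largest count is the physically valid pose.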
Have you considered using the function cv::findHomography()? Maybe I misunderstood your problem, but it might be a straightforward solution to use it with the keypoint matches you calculated. A good example of its use is brief_match_test.cpp in the OpenCV samples.
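For example, a minimal sketch (pts1/pts2 stand in for your matched point coordinates):

// Fit a homography to the matched points with RANSAC; mask marks the inliers
std::vector<cv::Point2f> pts1, pts2; // filled from your keypoint matches
cv::Mat mask;
cv::Mat H = cv::findHomography(pts1, pts2, CV_RANSAC, 3.0, mask);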
I guess cv::findHomography() is for feature points associated with a plane in the image. In my case the feature points are not necessarily associated with a single plane. In fact, the image sequences are of rough terrain and the camera undergoes both translation and rotation.
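To spell out why: a homography only explains points lying on one plane. For a plane with normal n at distance d from the first camera, the induced homography is (up to sign conventions) H = K (R + t*n^T/d) K^(-1), so correspondences from general 3D structure cannot all satisfy a single H, whereas they do all satisfy the epipolar constraint of E = [t]x R.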