convert pcl point cloud to CV_32FC3

asked 2017-06-14 07:11:42 -0600 by waschbaer

updated 2017-06-14 10:50:24 -0600 by lucasw

Hi,

I am writing a callback function for a point cloud. I want to convert a pcl::PointCloud<pcl::PointXYZ> to a cv::Mat in CV_32FC3 format, i.e. three channels for the X, Y, Z coordinates; I don't need any color information. pcl::toROSMsg can convert the point cloud to a ROS image, but that gives bgr8.

I have tried converting it explicitly with something like this in a for loop:

pcl_mat.at<cv::Vec3f>(i,j)[0] = curr_cloud_.at(i,j).x;
pcl_mat.at<cv::Vec3f>(i,j)[1] = curr_cloud_.at(i,j).y;
pcl_mat.at<cv::Vec3f>(i,j)[2] = curr_cloud_.at(i,j).z;

However, it crashes with a core dump when the for loop finishes. Can anyone suggest a way to convert it properly?

Best, waschbaer


Comments


If you add the code that iterates through curr_cloud_ and creates pcl_mat, the cause of the crash could be determined. Most likely the width and height of the point cloud don't match the width and height of pcl_mat.

— lucasw (2017-06-14 10:58:40 -0600)

Yes, you are correct. I give more details in my answer below.

— waschbaer (2017-06-14 12:18:30 -0600)

2 Answers


answered 2017-06-14 08:51:20 -0600 by Haibo

To convert from a point cloud to an image, maybe you can check this tutorial.

If my understanding is correct, you are trying to store the point cloud's X, Y, Z coordinates in a three-channel image. I am not sure that is the proper way of doing it; I have no experience converting from a point cloud to an image, only the other direction, from an image to a point cloud.

If we don't care about color information, a point cloud and a depth image are equivalent. Given a depth image and the camera model, a coordinate (x, y) in the image plane can be converted to X, Y in the world frame, and the depth stored at pixel (x, y) is the corresponding Z value. My understanding is that to convert from a point cloud to an image you should go through that projection in reverse, rather than storing the X, Y, Z values directly as a vector. It can be done, but I am not sure storing X, Y, Z directly in a 3-channel image data structure is appropriate.
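The image-to-point-cloud direction described above can be sketched with a pinhole camera model. All names here are illustrative: fx, fy are the focal lengths and cx, cy the principal point from the camera intrinsics.

```cpp
struct Point3 { float x, y, z; };

// Pinhole back-projection: pixel (u, v) with depth z maps to
// camera-frame coordinates via the intrinsics fx, fy, cx, cy.
Point3 backProject(float u, float v, float z,
                   float fx, float fy, float cx, float cy)
{
  Point3 p;
  p.x = (u - cx) * z / fx;
  p.y = (v - cy) * z / fy;
  p.z = z;
  return p;
}
```

Going from a point cloud back to an image would invert this: project each point onto the image plane and store its Z as the pixel value.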


answered 2017-06-14 12:17:35 -0600 by waschbaer

updated 2017-06-14 12:19:15 -0600

I solved the problem. The core dump came from the point cloud's indexing convention: pcl::PointCloud::at(column, row) takes the column first, while cv::Mat::at(row, col) takes the row first, so my loop had the indices swapped and accessed the matrix out of bounds.

The conversion itself is functionally fine; the only open question is whether it is efficient.

Thanks to Haibo, though I didn't read your answer in much detail. :)

