
So after @ahendrix helped explain a few things, I ended up using the sensor_msgs::fillImage() function to store my left/right camera buffer data in a sensor_msgs::Image.

Here is my code:

    // Requires <sensor_msgs/fill_image.h> and <sensor_msgs/image_encodings.h>
    ros::Time       timeNow         = ros::Time::now();
    std::string     frame           = "camera";

    leftImage.header.stamp          = timeNow;
    leftImage.header.frame_id       = frame;

    rightImage.header.stamp         = timeNow;
    rightImage.header.frame_id      = frame;

    // Fill the left image message
    sensor_msgs::fillImage( leftImage,
                            sensor_msgs::image_encodings::MONO8,
                            240, // height
                            320, // width
                            320, // stepSize
                            pFrameData->leftData);

    // Fill the right image message
    sensor_msgs::fillImage( rightImage,
                            sensor_msgs::image_encodings::MONO8,
                            240, // height
                            320, // width
                            320, // stepSize
                            pFrameData->rightData);

    leftImagePub.publish(leftImage);
    rightImagePub.publish(rightImage);

    sensor_msgs::clearImage(leftImage);
    sensor_msgs::clearImage(rightImage);
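
For context, here is a minimal sketch of how the surrounding node might look. The topic names, the FrameData struct, and the node name are assumptions for illustration, not part of my actual driver code:

    #include <ros/ros.h>
    #include <sensor_msgs/Image.h>
    #include <sensor_msgs/fill_image.h>
    #include <sensor_msgs/image_encodings.h>

    // Hypothetical container for the raw camera buffers (320x240 MONO8).
    struct FrameData {
        uint8_t leftData[240 * 320];
        uint8_t rightData[240 * 320];
    };

    int main(int argc, char** argv)
    {
        ros::init(argc, argv, "stereo_publisher");
        ros::NodeHandle nh;

        // Assumed topic names; use whatever your pipeline expects.
        ros::Publisher leftImagePub  = nh.advertise<sensor_msgs::Image>("left/image_raw", 1);
        ros::Publisher rightImagePub = nh.advertise<sensor_msgs::Image>("right/image_raw", 1);

        sensor_msgs::Image leftImage, rightImage;
        // ... grab pFrameData from the camera driver, then run the code above ...
    }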

If you look at sensor_msgs::fillImage(), the step size (in my case) was simply the width of the image, i.e. the total number of columns the image has. That only works because MONO8 stores one byte per pixel; in general the step is the number of bytes per row, as the sketch below shows.
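
A quick illustration of that relationship; the BGR8 line is an assumption about a hypothetical color variant, included only to show why step is measured in bytes rather than pixels:

    // Step size = width * bytes per pixel.
    uint32_t width     = 320;
    uint32_t stepMono8 = width * 1;   // MONO8: 1 byte/pixel -> 320 (what this answer uses)
    uint32_t stepBgr8  = width * 3;   // BGR8:  3 bytes/pixel -> 960 (hypothetical color case)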

Inside fillImage() you will see that a size_t st0 is computed by multiplying the step size (which, as we said, is the width here) by the number of rows (the height of the image). That product is the total buffer size in bytes, and the message's data vector is resized to hold it, which for MONO8 means one byte per pixel.
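
For reference, a simplified paraphrase of what fillImage() does internally; the real version in sensor_msgs/fill_image.h is a template, and this sketch just mirrors its logic:

    #include <cstring>
    #include <sensor_msgs/Image.h>

    // Simplified paraphrase of sensor_msgs::fillImage() internals.
    static bool fillImageSketch(sensor_msgs::Image& image,
                                const std::string& encoding_arg,
                                uint32_t rows_arg,
                                uint32_t cols_arg,
                                uint32_t step_arg,
                                const void* data_arg)
    {
        image.encoding = encoding_arg;
        image.height   = rows_arg;
        image.width    = cols_arg;
        image.step     = step_arg;
        size_t st0 = step_arg * rows_arg;       // bytes/row * rows = total buffer size
        image.data.resize(st0);                 // allocate room for every pixel
        memcpy(&image.data[0], data_arg, st0);  // copy the raw camera buffer in
        image.is_bigendian = 0;
        return true;
    }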