
how to publish tf in gazebo simulation with depth camera for octomap_server

asked 2020-02-04 18:50:19 -0600 by zhefan

updated 2020-02-04 20:33:14 -0600

I am using a Gazebo simulation for a drone, with a depth camera mounted on the drone in Gazebo. It publishes its odometry and PointCloud2 correctly. I would like to use octomap_mapping.launch to get the octree for planning, so I changed frame_id to /odom and cloud_in to the point cloud topic. However, nothing shows up when I check in RViz.
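For reference, here is a minimal sketch of how I launch octomap_server (the remapped topic name and resolution here are just examples; substitute whatever your depth camera actually publishes and the settings you need):

<launch>
  <node pkg="octomap_server" type="octomap_server_node" name="octomap_server">
    <!-- fixed frame the map is built in; I set this to my odometry frame -->
    <param name="frame_id" type="string" value="odom" />
    <param name="resolution" value="0.1" />
    <!-- PointCloud2 input from the depth camera -->
    <remap from="cloud_in" to="/camera/depth/points" />
  </node>
</launch>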

I think the reason is that my Gazebo simulation does not publish any tf, but I am not sure how to fix that. Can anyone tell me which tf I should publish and how to do it? Here is the depth camera plugin from my model:

<plugin name='camera_plugin' filename='libgazebo_ros_openni_kinect.so'>
        <baseline>0.2</baseline>
        <alwaysOn>1</alwaysOn>
        <updateRate>0.0</updateRate>
        <cameraName>camera_ir</cameraName>
        <imageTopicName>/camera/color/image_raw</imageTopicName>
        <cameraInfoTopicName>/camera/color/camera_info</cameraInfoTopicName>
        <depthImageTopicName>/camera/depth/image_raw</depthImageTopicName>
        <depthImageCameraInfoTopicName>/camera/depth/camera_info</depthImageCameraInfoTopicName>
        <pointCloudTopicName>/camera/depth/points</pointCloudTopicName>
        <frameName>camera_link</frameName>
        <pointCloudCutoff>0.3</pointCloudCutoff>
        <pointCloudCutoffMax>20</pointCloudCutoffMax>
        <distortionK1>0</distortionK1>
        <distortionK2>0</distortionK2>
        <distortionK3>0</distortionK3>
        <distortionT1>0</distortionT1>
        <distortionT2>0</distortionT2>
        <CxPrime>0</CxPrime>
        <Cx>0</Cx>
        <Cy>0</Cy>
        <focalLength>0</focalLength>
        <hackBaseline>0</hackBaseline>
      </plugin>

1 Answer


answered 2020-02-04 20:11:39 -0600

updated 2020-02-04 22:08:46 -0600

Hi,

I don't know how you are attaching your depth camera to the drone, but I assume you are doing it with a URDF. In that URDF you should define the parent link of camera_link correctly. If you attach the depth camera to the correct link on the drone, the tf tree should be complete. Did you check that your tf tree is well connected?

Let's see what your tf tree looks like with:

rosrun tf view_frames

A PDF file will be created in the directory from which you executed the above command.
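You can also inspect the live transforms directly with the standard tf tooling (rqt_tf_tree may need to be installed separately):

rostopic echo -n 1 /tf
rosrun rqt_tf_tree rqt_tf_tree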

Edit:

Attaching sensors directly to the Gazebo world file is not recommended. The sensors should be attached to the body (URDF) of the robot (the drone in this case), and that URDF should then be spawned into Gazebo.

For instance, I can attach a depth camera to the base_link of the robot as follows:

<joint name="camera_joint" type="fixed">
    <parent link="base_link" />
    <child link="camera_link" />
    <origin rpy="0 1.57 0" xyz="0.6 -0.9 1.6" />
    <axis xyz="0 0 0" />
</joint>

<link name="camera_link">

</link>


    <gazebo reference="camera_link">
          <sensor name="camera" type="depth">
            <pose>0.0085 0.0105 0.0197 0 0 0</pose>
            <update_rate>30</update_rate>
            <camera>
              <horizontal_fov>1.01229</horizontal_fov>
              <image>
                <width>1024</width>
                <height>1024</height>
                <format>R8G8B8</format>
              </image>
              <clip>
                <near>0.2</near>
                <far>1.5</far>
              </clip>
            </camera>
            <plugin name="camera_plugin" filename="libgazebo_ros_openni_kinect.so">
              <baseline>0.2</baseline>
              <alwaysOn>true</alwaysOn>
              <!-- Keep this zero, update_rate in the parent <sensor> tag
                will control the frame rate. -->
              <updateRate>0.0</updateRate>
              <cameraName>camera_ir</cameraName>
              <imageTopicName>/camera/color/image_raw</imageTopicName>
              <cameraInfoTopicName>/camera/color/camera_info</cameraInfoTopicName>
              <depthImageTopicName>/camera/depth/image_raw</depthImageTopicName>
              <depthImageCameraInfoTopicName>/camera/depth/camera_info</depthImageCameraInfoTopicName>
              <pointCloudTopicName>/camera/depth/points</pointCloudTopicName>
              <frameName>camera_link</frameName>
              <pointCloudCutoff>0.2</pointCloudCutoff>
              <pointCloudCutoffMax>1.5</pointCloudCutoffMax>
              <distortionK1>0</distortionK1>
              <distortionK2>0</distortionK2>
              <distortionK3>0</distortionK3>
              <distortionT1>0</distortionT1>
              <distortionT2>0</distortionT2>
              <CxPrime>0</CxPrime>
              <Cx>0</Cx>
              <Cy>0</Cy>
              <focalLength>0</focalLength>
              <hackBaseline>0</hackBaseline>
            </plugin>
          </sensor>
    </gazebo>
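To actually get the tf published, the URDF then needs to be loaded, spawned into Gazebo, and fed to the state publishers. A minimal launch sketch (the package name drone_description and the file paths are placeholders for illustration, not taken from your repo):

<launch>
  <!-- load the URDF onto the parameter server -->
  <param name="robot_description"
         command="$(find xacro)/xacro $(find drone_description)/urdf/drone.urdf.xacro" />

  <!-- spawn the model into Gazebo -->
  <node pkg="gazebo_ros" type="spawn_model" name="spawn_drone"
        args="-urdf -param robot_description -model drone" />

  <!-- publish joint states and the resulting tf tree -->
  <node pkg="joint_state_publisher" type="joint_state_publisher" name="joint_state_publisher" />
  <node pkg="robot_state_publisher" type="robot_state_publisher" name="robot_state_publisher" />
</launch>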

Comments

I have actually checked that PDF file. It simply says there is no tf. I've also checked with rostopic, and there is no /tf topic. Someone else gave me the .world file, saying the depth camera is mounted inside. Below is the depth camera definition I found in the .world file:

<sensor name='camera2' type='depth'>
          <camera name='__default__'>
            <horizontal_fov>1.8</horizontal_fov>
            <image>
              <width>640</width>
              <height>480</height>
              <format>R8G8B8</format>
            </image>
            <clip>
              <near>0.02</near>
              <far>100</far>
            </clip>
            <noise>
              <type>gaussian</type>
              <mean>0</mean>
              <stddev>0.005</stddev>
            </noise>
          </camera>

I also added extra info to the question.

zhefan ( 2020-02-04 20:24:00 -0600 )

Are you calling joint_state_publisher and robot_state_publisher somewhere in your launch file? It is hard to tell what is going on without seeing the code. Do you have a GitHub repo for this project?

Fetullah Atas ( 2020-02-04 20:34:56 -0600 )

Sure, I have added it to my GitHub repo: https://github.com/Zhefan-Xu/drone. Basically, these are the files I want to use: launch/tunnel_drone.launch and worlds/tunnel_1.world.

zhefan ( 2020-02-04 21:00:02 -0600 )

Why did you comment out the spawn_drone node? My system is Ubuntu 18.04 with Gazebo 9, so I couldn't try the package.

Fetullah Atas ( 2020-02-04 21:57:02 -0600 )

Because somehow that node causes an error.

Anyway, if I mount the sensor correctly (as you mentioned, attached to the base_link of the robot), the tf would also be published, right?

Thanks a lot.

zhefan ( 2020-02-04 22:30:41 -0600 )

Yes, once the robot body and its sensors are defined correctly in the URDF, joint_state_publisher and robot_state_publisher will publish the transforms between all existing links, which is simply the /tf topic.
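A quick way to verify once everything is running (assuming odom is your fixed frame):

rosrun tf tf_echo odom camera_link

This should print the transform between the two frames if the tree is connected.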

Fetullah Atas ( 2020-02-04 23:10:09 -0600 )
