Try recording the topics rgbdslam needs:
rosbag record /tf /camera/depth/image_rect /camera/depth/camera_info /camera/rgb/camera_info /camera/rgb/image_rect_color
Don't forget to use these topics in your launchfile for rgbdslam (and set config/topic_points to an empty string), e.g.:
<launch>
  <node pkg="rgbdslam" type="rgbdslam" name="rgbdslam" cwd="node" required="true" output="log">
    <!-- Input data settings -->
    <param name="config/topic_image_mono"       value="/camera/rgb/image_rect_color"/> <!-- could also be color -->
    <param name="config/topic_image_depth"      value="/camera/depth/image_rect"/>
    <param name="config/topic_points"           value=""/> <!-- if empty, the point cloud will be reconstructed from image and depth -->
    <param name="config/feature_detector_type"  value="SURF"/>
    <param name="config/feature_extractor_type" value="SURF"/>
    <param name="config/max_keypoints"          value="600"/>
  </node>
</launch>
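To feed a recorded bag into rgbdslam afterwards, something like the following should work (a sketch only; the bag and launch file names are placeholders, adjust them to your setup):

# Make nodes use the bag's recorded time instead of wall-clock time.
rosparam set use_sim_time true

# Start rgbdslam with the launch file above (assuming you saved it as rgbdslam.launch).
roslaunch rgbdslam rgbdslam.launch

# In another terminal: replay the bag and publish its timestamps on /clock.
rosbag play --clock my_kinect_data.bag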
Can you post more information about what you've tried so far, and what's not working for you?
I can now get the color image and the depth image from the Kinect on the TurtleBot, but I don't know how to record this information into a rosbag. I would like to record the rosbag on the TurtleBot, send it to the desktop, and have RGBDSLAM on the desktop process it to finish building the three-dimensional map of the environment.
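In case it helps, here is a rough sketch of the record-on-the-robot, process-on-the-desktop workflow (the output file name, hostname, and path are made up; substitute your own):

# On the TurtleBot: record the topics rgbdslam needs (same topics as in the answer above).
rosbag record -O kinect_run.bag /tf /camera/depth/image_rect /camera/depth/camera_info /camera/rgb/camera_info /camera/rgb/image_rect_color
# Stop recording with Ctrl-C when you are done driving around.

# Copy the bag to the desktop (hypothetical user/host/path).
scp kinect_run.bag user@desktop:~/bags/

# On the desktop: set use_sim_time, launch rgbdslam, and replay the bag with
# "rosbag play --clock", as shown after the launch file above.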