Yes, the resolution of the Kinect can be decreased. At the default 640x480 you get 307,200 points per scan, which was too much data for me to process, so in my launch files I set up my Kinect like this:
<rosparam command="load" file="$(find openni_camera)/info/openni_params.yaml" />
<param name="rgb_frame_id" value="/openni_rgb_optical_frame" />
<param name="depth_frame_id" value="/openni_depth_optical_frame" />
<param name="use_indices" value="false" />
<param name="depth_registration" value="false" />
<param name="image_mode" value="8" />
<param name="depth_mode" value="8" />
<param name="debayering" value="0" />
<param name="depth_time_offset" value="0" />
<param name="image_time_offset" value="0" />
Note that here I am NOT using the RGB camera for point clouds at all, only for the image. This was the minimum amount of information I was able to obtain from the Kinect, and it corresponds to roughly 19,200 points per scan.
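As a quick sanity check on those numbers, the point count per scan is just the depth image resolution (assuming mode 8 here is QQVGA, 160x120, which matches the ~19,200 figure):

```python
# Default VGA depth mode vs. the low-resolution mode used above.
vga_points = 640 * 480    # 307,200 points per scan at the default
qqvga_points = 160 * 120  # 19,200 points per scan at 160x120
```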
However, this was STILL too much information. I tried pointcloud_to_laserscan along with voxel_grid filters, but I wasn't very happy with the results.
The node I wrote is very similar to pointcloud_to_laserscan, but it still publishes a PointCloud2. It extracts a configurable number of "slices" spaced evenly across a range, which gives the appearance of multiple laser scans. I haven't written it as a nodelet, but feel free to try it.
https://github.com/cwru-robotics/smart_wheelchair/tree/master/pcl_decimator
Video of this node mapping: http://www.youtube.com/watch?v=_rIAGKA6uS8
Video of this node mapping with full scan comparison: http://www.youtube.com/watch?v=Rl6bi5Udfz4
Be sure to run dynamic_reconfigure on this node; it makes tuning very easy! Here are my default values:
<node pkg="pcl_decimator" type="pcl_decimator" respawn="true" name="pcl_slicer" output="screen">
<rosparam>
field_name: y
num_slices: 5
slice_width: 0.01
start_threshold: -1.0
end_threshold: 1.0
</rosparam>
</node>
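The slicing the node does can be sketched in plain Python. The actual node is C++ and works on PointCloud2 messages; `slice_points` below is a hypothetical stand-in that just illustrates the math: slice centers are spaced evenly between the two thresholds, and a point is kept if its chosen field lands within half a slice width of any center.

```python
def slice_points(points, field_idx, num_slices, slice_width,
                 start_threshold, end_threshold):
    """Keep only points whose chosen field falls inside one of
    num_slices evenly spaced slices between the thresholds."""
    # Evenly spaced slice centers across [start_threshold, end_threshold].
    step = (end_threshold - start_threshold) / (num_slices - 1)
    centers = [start_threshold + i * step for i in range(num_slices)]
    half = slice_width / 2.0
    return [p for p in points
            if any(abs(p[field_idx] - c) <= half for c in centers)]

# With the defaults above (field y, 5 slices 0.01 m wide spanning
# -1.0 m to 1.0 m), the slice centers are -1.0, -0.5, 0.0, 0.5, 1.0.
cloud = [(0.2, -0.5, 1.1),    # on the -0.5 m slice -> kept
         (0.2, -0.3, 1.1),    # between slices      -> dropped
         (0.2, 0.002, 0.9)]   # within 5 mm of 0.0  -> kept
kept = slice_points(cloud, field_idx=1, num_slices=5, slice_width=0.01,
                    start_threshold=-1.0, end_threshold=1.0)
```

Narrow slices with a small slice_width are what keep the output cloud sparse enough for mapping while still covering the full vertical range.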