Like tfoote said, there are two different ideas going on here.
If YOU need to know the positions of the obstacles in Gazebo (for testing or something), there's a get_model_state service in the Gazebo ROS API that you can call from a plugin, or from a script using rospy or roscpp (C++), to get the world/global/ground-truth coordinates of models in the Gazebo simulation.
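For example, with rospy you could call the /gazebo/get_model_state service directly. This is just a minimal sketch - the model name "unit_box" is a placeholder for whatever obstacle model you actually spawned:

    #!/usr/bin/env python
    # Query Gazebo for the ground-truth pose of a model via the
    # /gazebo/get_model_state service (provided by gazebo_ros).
    import rospy
    from gazebo_msgs.srv import GetModelState

    rospy.init_node('model_state_query')
    rospy.wait_for_service('/gazebo/get_model_state')
    get_model_state = rospy.ServiceProxy('/gazebo/get_model_state', GetModelState)

    # 'unit_box' is a placeholder model name; 'world' asks for world-frame coordinates
    resp = get_model_state(model_name='unit_box', relative_entity_name='world')
    if resp.success:
        p = resp.pose.position
        rospy.loginfo("unit_box is at x=%.2f y=%.2f z=%.2f", p.x, p.y, p.z)
    else:
        rospy.logwarn("get_model_state failed: %s", resp.status_message)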
However, it appears you need your ROBOT to find the positions of the obstacles. That's more difficult.
I use the Turtlebot with Gazebo, and when I need to drive from one point to the next, I first build a map of the environment (the Turtlebot tutorials use gmapping for that), then plug the map into AMCL and use the RViz GUI to set a goal, and it drives the Turtlebot to where I want it. In this case, you're making a map of a room by collecting enough laser scan data to outline entire objects.
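Setting the goal from RViz is the easy path, but if you'd rather send the goal from code, the same thing can be done through move_base with actionlib. A minimal sketch, assuming the standard Turtlebot navigation stack (move_base) is already running:

    #!/usr/bin/env python
    # Send a navigation goal to move_base - the programmatic equivalent of
    # clicking "2D Nav Goal" in RViz.
    import rospy
    import actionlib
    from move_base_msgs.msg import MoveBaseAction, MoveBaseGoal

    rospy.init_node('send_goal')
    client = actionlib.SimpleActionClient('move_base', MoveBaseAction)
    client.wait_for_server()

    goal = MoveBaseGoal()
    goal.target_pose.header.frame_id = 'map'
    goal.target_pose.header.stamp = rospy.Time.now()
    goal.target_pose.pose.position.x = 3.0   # example coordinates in the map frame
    goal.target_pose.pose.position.y = 4.0
    goal.target_pose.pose.orientation.w = 1.0  # facing along +x

    client.send_goal(goal)
    client.wait_for_result()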
AMCL also publishes where it thinks the robot is in the environment as you drive, on the /amcl_pose topic.
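If you want that estimate in your own node, just subscribe to it. Another minimal rospy sketch:

    #!/usr/bin/env python
    # Listen to AMCL's pose estimate on /amcl_pose.
    import rospy
    from geometry_msgs.msg import PoseWithCovarianceStamped

    def pose_cb(msg):
        p = msg.pose.pose.position
        rospy.loginfo("AMCL thinks the robot is at x=%.2f y=%.2f", p.x, p.y)

    rospy.init_node('amcl_pose_listener')
    rospy.Subscriber('/amcl_pose', PoseWithCovarianceStamped, pose_cb)
    rospy.spin()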
Here's more on AMCL: http://wiki.ros.org/amcl
The tough part - and what I don't know the answer to, though the object recognition link from tfoote might help - is figuring out that the blob at (3,4) is a couch, or a chair, etc.
Here are links to the Turtlebot/Gazebo/AMCL/RViz tutorials. To be clear, I'm not suggesting that they are a solution to your problem. But they're a good demonstration of how localization in Gazebo - using the Kinect data as a makeshift laserscan - works.
http://wiki.ros.org/turtlebot_navigat...
http://wiki.ros.org/turtlebot_navigat...