For that you'll need two things: some form of controller that operates in Cartesian end-effector space (like a Jacobian-transpose or Jacobian-inverse controller), and a simple interactive_marker application that just sends the current pose of the interactive marker to the controller as a set point.
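To make the Jacobian-transpose idea concrete, here is a minimal sketch for a hypothetical 2-link planar arm (the link lengths, gain, and marker set point are all made up for the example; on a real robot you'd use its own kinematics and run this inside the controller's servo loop, with the set point updated from the marker feedback):

```python
import numpy as np

L1, L2 = 1.0, 1.0  # assumed link lengths for the toy arm

def fk(q):
    """End-effector position of the 2-link planar arm."""
    return np.array([L1*np.cos(q[0]) + L2*np.cos(q[0]+q[1]),
                     L1*np.sin(q[0]) + L2*np.sin(q[0]+q[1])])

def jacobian(q):
    s1, c1 = np.sin(q[0]), np.cos(q[0])
    s12, c12 = np.sin(q[0]+q[1]), np.cos(q[0]+q[1])
    return np.array([[-L1*s1 - L2*s12, -L2*s12],
                     [ L1*c1 + L2*c12,  L2*c12]])

def jt_step(q, set_point, gain=1.0, dt=0.01):
    """One Jacobian-transpose servo step toward the marker set point."""
    error = set_point - fk(q)
    q_dot = gain * jacobian(q).T @ error  # J^T: no matrix inversion needed
    return q + dt * q_dot

# Pretend the interactive marker was dragged to (1.2, 0.8) and servo there:
q = np.array([0.3, 0.5])
set_point = np.array([1.2, 0.8])
for _ in range(5000):
    q = jt_step(q, set_point)
```

The appeal of the transpose form is that it never inverts the Jacobian, so it degrades gracefully near singularities (it just slows down) instead of commanding huge joint velocities the way a naive J-inverse controller can.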
We have both implemented for the PR2, and the pr2_marker_control package in the pr2_object_manipulation stack is a good starting point. For a generic robot, I expect the Cartesian controller to be the most significant hurdle; the interactive_marker-based app should be relatively straightforward.
Alternatively, you could just compute inverse kinematics on the pose of the interactive marker and feed the result directly to your regular joint-space controller. You run the risk of sudden movements in the redundant space of the arm (if your arm is redundant), or of being unable to get close to locations where IK cannot find solutions. We do not have anything generic like that implemented, but if your IK is fast enough, it should be a relatively straightforward implementation, at least for a proof of concept.
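To illustrate that shortcut, here is a toy sketch using closed-form IK for a hypothetical 2-link planar arm (link lengths and the marker pose are invented for the example); a real implementation would call your robot's own IK solver on each marker update and forward the resulting joint angles to the joint-space controller, skipping updates where IK fails:

```python
import numpy as np

L1, L2 = 1.0, 1.0  # assumed link lengths for the toy arm

def ik(target):
    """Closed-form IK for the 2-link planar arm; None if unreachable."""
    x, y = target
    c2 = (x*x + y*y - L1*L1 - L2*L2) / (2.0 * L1 * L2)
    if abs(c2) > 1.0:
        return None  # marker was dragged outside the workspace
    q2 = np.arccos(c2)  # one elbow branch; a redundant arm has infinitely many
    q1 = np.arctan2(y, x) - np.arctan2(L2*np.sin(q2), L1 + L2*np.cos(q2))
    return np.array([q1, q2])

def fk(q):
    return np.array([L1*np.cos(q[0]) + L2*np.cos(q[0]+q[1]),
                     L1*np.sin(q[0]) + L2*np.sin(q[0]+q[1])])

# On each marker update: solve IK and, if it succeeds, hand the joint
# angles to the joint-space controller (here we just keep the result).
marker_pose = np.array([1.2, 0.8])
q_goal = ik(marker_pose)
```

Note how two consecutive marker poses can yield IK solutions on different branches (elbow-up vs. elbow-down here, or a different null-space posture on a redundant arm), which is exactly the source of the sudden joint-space jumps mentioned above.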