How to treat global visual odometry in the robot_localization package
Greetings,
I'm using the ekf node from the robot_localization package not to localize the robot, but to estimate the pose of an external object with respect to the robot's base_link frame. I currently have one camera, one odometer, and one IMU on the robot.
Edited:
The camera detects a fixed object and provides an initial transformation from the odom frame to the object frame (odom -> map). I'm then thinking of fusing the data from the odometer, IMU, and camera to track the object pose with a single ekf node. In this case, how should I fuse the discontinuous pose data (due to temporary occlusion) from the camera, which provides a transformation from base_link to map (basically a global position of the robot)? I think I can treat this as something similar to GPS? Here is my current configuration:
frequency: 15
silent_tf_failure: false
sensor_timeout: 0.1
two_d_mode: false
transform_time_offset: 0.0
transform_timeout: 0.0
print_diagnostics: true
debug: false
debug_out_file: /path/to/debug/file.txt
publish_tf: true
publish_acceleration: false
permit_corrected_publication: false
map_frame: map # Defaults to "map" if unspecified
odom_frame: odom # Defaults to "odom" if unspecified
base_link_frame: base_link # Defaults to "base_link" if unspecified
world_frame: map # Defaults to the value of odom_frame if unspecified
odom0: /odom
odom0_config: [true,  true,  false,
               false, false, true,
               false, false, false,
               false, false, false,
               false, false, false]
odom0_queue_size: 2
odom0_nodelay: false
odom0_differential: false
odom0_relative: false
odom0_pose_rejection_threshold: 5
odom0_twist_rejection_threshold: 1
# Pose data from the global visual odometry
pose0: /globpose
pose0_config: [true,  true,  false,
               false, false, true,
               false, false, false,
               false, false, false,
               false, false, false]
pose0_differential: true
pose0_relative: false
pose0_queue_size: 5
pose0_rejection_threshold: 2 # Note the difference in parameter name
pose0_nodelay: false
imu0: /imu
imu0_config: [false, false, false,
              false, false, false,
              false, false, false,
              true,  true,  true,
              true,  true,  true]
imu0_nodelay: false
imu0_differential: false
imu0_relative: true
imu0_queue_size: 5
imu0_pose_rejection_threshold: 0.8 # Note the difference in parameter names
imu0_twist_rejection_threshold: 0.8
imu0_linear_acceleration_rejection_threshold: 0.8
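For reference, robot_localization's poseN inputs take geometry_msgs/PoseWithCovarianceStamped messages, and the filter simply predicts through gaps in the measurement stream, much like GPS dropouts. Below is a minimal sketch of what such a publisher could look like; detect_object() is a hypothetical placeholder for the real detection code:

import rospy
from geometry_msgs.msg import PoseWithCovarianceStamped

def detect_object():
    # Hypothetical placeholder: return the robot's pose in the map frame
    # as a geometry_msgs/Pose, or None while the object is occluded.
    return None

rospy.init_node('globpose_publisher')
pub = rospy.Publisher('/globpose', PoseWithCovarianceStamped, queue_size=5)
rate = rospy.Rate(15)
while not rospy.is_shutdown():
    pose = detect_object()
    if pose is not None:
        msg = PoseWithCovarianceStamped()
        msg.header.stamp = rospy.Time.now()
        msg.header.frame_id = 'map'  # must match the EKF's world_frame
        msg.pose.pose = pose
        pub.publish(msg)
    # When pose is None (occlusion), nothing is published and the
    # filter keeps predicting from the odometer and IMU.
    rate.sleep()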
I don't understand how the robot_localization package helps solve this problem. What sensors are providing information about the position of the external object? The robot's odometry and IMU would seem to provide no data to determine that.
An EKF has no idea what kind of data it is processing, but the inputs need to be transformed into a common frame before they can be fused. Trying to subvert this will only produce garbage output.
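To make that concrete: every measurement has to be expressed in (or be transformable to) one of the filter's frames before it is fused. A minimal sketch, assuming ROS 1 and a geometry_msgs/PoseStamped measurement arriving on a hypothetical /camera/object_pose topic:

import rospy
import tf2_ros
import tf2_geometry_msgs  # registers geometry_msgs types with tf2
from geometry_msgs.msg import PoseStamped

def camera_cb(pose_in_camera):
    try:
        # Express the measurement in the map frame before fusing it.
        pose_in_map = tf_buffer.transform(pose_in_camera, 'map', rospy.Duration(0.1))
    except (tf2_ros.LookupException, tf2_ros.ConnectivityException,
            tf2_ros.ExtrapolationException):
        return  # transform unavailable: drop this sample
    # ... hand pose_in_map to the filter input here ...

rospy.init_node('to_common_frame')
tf_buffer = tf2_ros.Buffer()
tf_listener = tf2_ros.TransformListener(tf_buffer)
rospy.Subscriber('/camera/object_pose', PoseStamped, camera_cb)
rospy.spin()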
I should've explained it in a bit more detail. The camera initially detects an object and provides its pose with respect to the base_link frame. Using this initial pose, the odometer and IMU data are transformed so that the object can be tracked with respect to base_link. Finally, I'm trying to fuse the pose data from the camera with the transformed odometer and IMU data.
After thinking about this issue as Mike pointed out, I understood that I should treat the object pose data from the camera as global visual odometry. This pose data is just the inverse of the camera's base_link -> map measurement, i.e., a global pose of the robot in the map frame. Hence, I should leave everything as it is (i.e., not worry about transforming the odometer and IMU data) and feed this pose information to the filter with world_frame set to map.
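A minimal sketch of that inversion, assuming the camera reports the object's pose in base_link as a geometry_msgs/PoseStamped on a hypothetical /camera/object_pose topic, and that the fixed object's frame is taken as the map frame:

import rospy
import numpy as np
import tf.transformations as tft
from geometry_msgs.msg import PoseStamped, PoseWithCovarianceStamped

def object_cb(obj_in_base):
    # Object pose in base_link as a 4x4 homogeneous transform.
    p, q = obj_in_base.pose.position, obj_in_base.pose.orientation
    T = tft.quaternion_matrix([q.x, q.y, q.z, q.w])
    T[0:3, 3] = [p.x, p.y, p.z]
    # Inverting it yields base_link expressed in the object's (map) frame,
    # i.e. a global robot pose suitable for the pose0 input.
    T_inv = np.linalg.inv(T)
    out = PoseWithCovarianceStamped()
    out.header.stamp = obj_in_base.header.stamp
    out.header.frame_id = 'map'  # must match the EKF's world_frame
    out.pose.pose.position.x, out.pose.pose.position.y, out.pose.pose.position.z = T_inv[0:3, 3]
    qx, qy, qz, qw = tft.quaternion_from_matrix(T_inv)
    out.pose.pose.orientation.x = qx
    out.pose.pose.orientation.y = qy
    out.pose.pose.orientation.z = qz
    out.pose.pose.orientation.w = qw
    cov = [0.0] * 36
    for i in range(6):
        cov[i * 7] = 0.05  # hypothetical variances; tune for the real camera
    out.pose.covariance = cov
    pub.publish(out)

rospy.init_node('object_pose_inverter')
pub = rospy.Publisher('/globpose', PoseWithCovarianceStamped, queue_size=5)
rospy.Subscriber('/camera/object_pose', PoseStamped, object_cb)
rospy.spin()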