I guess your question comes from the confusion about why localization algorithms like amcl or gmapping publish the /map -> /odom transform instead of publishing /map -> /base_link (the robot's position and orientation in the map) directly. Some answers use a static transform publisher for /map -> /odom, which I can't say is incorrect: if the odometry never drifted, /map -> /odom could indeed be static, since it would always be the fixed displacement between the map's origin and the robot's starting pose. In that case you wouldn't need amcl or gmapping at all, because Tmap->odom * Todom->base_link = Tmap->base_link would always give you an accurate estimate of the robot's location.
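Just to make the no-drift case concrete, here is a minimal sketch (ROS 1 Python, hypothetical offset values) of publishing such a fixed /map -> /odom transform with tf2_ros; in practice you would more likely run the static_transform_publisher tool instead:

```python
#!/usr/bin/env python
# Sketch: publish a fixed map -> odom transform. Only valid in the
# hypothetical case where odometry never drifts. Offsets are made up.
import rospy
import tf2_ros
from geometry_msgs.msg import TransformStamped

rospy.init_node("static_map_to_odom")

t = TransformStamped()
t.header.stamp = rospy.Time.now()
t.header.frame_id = "map"        # parent frame
t.child_frame_id = "odom"        # child frame
t.transform.translation.x = 2.0  # hypothetical: robot started 2 m from map origin
t.transform.translation.y = 1.0
t.transform.translation.z = 0.0
t.transform.rotation.w = 1.0     # identity rotation (no initial heading offset)

broadcaster = tf2_ros.StaticTransformBroadcaster()
broadcaster.sendTransform(t)     # latched, published once
rospy.spin()
```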
But odometry does drift over time, so we need an extra localization mechanism such as lidar scan matching, a visual fiducial system, GPS, etc. With amcl and a lidar we can estimate the /map -> /base_link transform directly, BUT publishing it would break the tf rule that a child frame can have only one parent. If /odom -> /base_link already exists, we can NOT add another direct /map -> /base_link transform to the tf tree. So a typical tf tree is /world -> /map -> /odom -> /base_link, where /map -> /odom comes from amcl and /odom -> /base_link comes from your wheel encoder and IMU fusion. amcl therefore takes its internal /map -> /base_link estimate and converts it into a /map -> /odom correction, as sketched below.
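Here is a rough sketch of that conversion (plain numpy plus the tf.transformations helpers; all pose values are hypothetical): the localizer estimates Tmap->base_link, reads Todom->base_link from the tf tree, and broadcasts Tmap->odom = Tmap->base_link * inverse(Todom->base_link), which absorbs the accumulated drift:

```python
#!/usr/bin/env python
# Sketch of the correction a localizer like amcl effectively computes,
# using 4x4 homogeneous matrices. Input poses are hypothetical.
import numpy as np
import tf.transformations as tft

def to_matrix(translation, quaternion):
    """Build a 4x4 homogeneous transform from (x, y, z) and (qx, qy, qz, qw)."""
    m = tft.quaternion_matrix(quaternion)
    m[0:3, 3] = translation
    return m

# Hypothetical inputs:
T_map_base = to_matrix([5.0, 3.0, 0.0],
                       tft.quaternion_from_euler(0, 0, 0.80))   # localizer's estimate
T_odom_base = to_matrix([4.6, 3.3, 0.0],
                        tft.quaternion_from_euler(0, 0, 0.75))  # drifting wheel odometry

# The correction broadcast as map -> odom:
T_map_odom = np.dot(T_map_base, tft.inverse_matrix(T_odom_base))

print(tft.translation_from_matrix(T_map_odom))  # translation part
print(tft.quaternion_from_matrix(T_map_odom))   # rotation part as quaternion
```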
So to answer your question "where can I get /map -> /odom": it is an output of a localization algorithm like amcl. Chaining /map -> /odom with /odom -> /base_link then gives you a more accurate /map -> /base_link transform.
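And as a consumer you never have to multiply the two yourself; a tf2 listener chains them for you when you ask for map -> base_link. A minimal sketch (ROS 1 Python, node name made up):

```python
#!/usr/bin/env python
# Sketch: query the robot's pose in the map frame. tf2 composes
# map -> odom (from amcl) with odom -> base_link (from odometry).
import rospy
import tf2_ros

rospy.init_node("where_am_i")
buf = tf2_ros.Buffer()
listener = tf2_ros.TransformListener(buf)

rate = rospy.Rate(1.0)
while not rospy.is_shutdown():
    try:
        # rospy.Time(0) = latest available; raises until both links exist
        t = buf.lookup_transform("map", "base_link", rospy.Time(0))
        p = t.transform.translation
        rospy.loginfo("robot at (%.2f, %.2f) in the map frame", p.x, p.y)
    except (tf2_ros.LookupException, tf2_ros.ConnectivityException,
            tf2_ros.ExtrapolationException):
        pass  # tree not complete yet, try again
    rate.sleep()
```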