Image processing pipeline for a stereo camera feed
My first real ROS project involves detecting lines in a video feed coming from a ZED stereo camera.
I know how I would do this on a single camera feed; the opencv_apps package does most of the work for me. And while I could simply create a pipeline for each feed, that seems inelegant. The edge_detection and hough_lines nodes don't care whether the image is coming from the left or right camera, so why have one node that subscribes to the left_frame and publishes to left_edges, and a second node that subscribes to right_frame, does the exact same processing, and then publishes to right_edges?
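For reference, the per-feed approach I'd like to avoid would look something like the launch file below. The topic names are guesses on my part, since I don't know exactly what the ZED driver publishes:

```xml
<launch>
  <!-- One edge_detection instance per camera, differing only in remaps.
       Topic names are assumed, not taken from the actual ZED driver. -->
  <node pkg="opencv_apps" type="edge_detection" name="left_edge_detection">
    <remap from="image" to="/zed/left/image_rect_color"/>
  </node>
  <node pkg="opencv_apps" type="edge_detection" name="right_edge_detection">
    <remap from="image" to="/zed/right/image_rect_color"/>
  </node>
</launch>
```

This works, but it means every stage of the pipeline has to be duplicated in the launch file.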
However, the edge_detection node subscribes to only one topic and publishes to only one topic. My current idea is a relay node that merges the two streams and republishes their messages on a single topic (in a new message type that also contains the information a node at the other end would need to split them again). I could also combine the images themselves into one frame, but I would prefer to keep them separate, to simplify some processing later in the pipeline that involves a particular area of the feed. As a last resort, I could fork opencv_apps and add a second subscriber and publisher to the nodes I need, or forgo opencv_apps entirely and write an all-in-one processing node.
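To make the relay idea concrete, here is a plain-Python sketch of the merge/split bookkeeping with the ROS plumbing stripped out. `TaggedImage`, the `source` labels, and the `publishers` mapping are all made up for illustration; in a real node the wrapper would be a custom ROS message and the publishers would be `rospy` publishers:

```python
from collections import namedtuple

# Hypothetical combined message: the original image plus a tag
# recording which camera it came from, so a downstream node can
# split the merged stream again.
TaggedImage = namedtuple("TaggedImage", ["source", "image"])

def merge(source, image):
    """Relay side: wrap an incoming frame with its source label."""
    return TaggedImage(source=source, image=image)

def split(tagged, publishers):
    """Splitter side: route a frame back to the per-camera output
    topic based on its tag."""
    publishers[tagged.source](tagged.image)

# Toy usage: lists stand in for per-camera output topics.
left_out, right_out = [], []
publishers = {"left": left_out.append, "right": right_out.append}

split(merge("left", "frame_L0"), publishers)
split(merge("right", "frame_R0"), publishers)
# left_out now holds frame_L0; right_out holds frame_R0.
```

The catch is that the processing nodes in the middle would have to pass the tag through untouched, which stock opencv_apps nodes won't do for a custom message type.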
I imagine that this type of problem is not unique. What other solutions have people come up with? Is there a particular way of handling this that is considered standard/ideal/best practice?