Strictly speaking, what you're trying to do is not possible. If you convert the depth maps to point clouds you can easily combine them into a single super point cloud, but returning to a depth map representation has a few problems.
The issue is that a depth map represents a special subset of point clouds: those that are both structured and projectable. Structured means that the points are arranged in a 2D grid pattern from the point of view of the sensor, and projectable means that no point occludes any other point.
If your two depth sensors have a parallax baseline between them, the combined point cloud may lose the projectable property, meaning it cannot be represented as a depth map without some data loss. The data lost would be the parts of the point cloud that are occluded by other points.
However, if you're okay with this data loss, then converting back to a depth map could be the solution. Bear in mind that the combined point cloud will no longer be structured, so you'll have to project each point back into camera space to work out which pixel of the new depth map it corresponds to.
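To make that concrete, here's a minimal sketch of the projection step, assuming a simple pinhole model; the intrinsic values below are placeholders, not numbers from your setup:

```python
# Minimal pinhole projection sketch: camera-frame 3D point -> pixel coordinates.
# fx, fy, cx, cy are placeholder intrinsics; substitute your own calibration.
fx, fy, cx, cy = 525.0, 525.0, 319.5, 239.5

def project_point(X, Y, Z):
    """Return the (u, v) pixel a camera-frame point lands on, plus its depth."""
    u = int(round(fx * X / Z + cx))
    v = int(round(fy * Y / Z + cy))
    return u, v, Z
```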
Hope this helps.
UPDATE:
I don't know of any ROS nodes that can do the whole thing out of the box. You could use the depth_image_proc package to convert the two depth images into point clouds, then write your own node that transforms both clouds into the same coordinate system and merges them into a superset of the points from both.
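The core of that merge node could look something like this sketch, written in plain numpy for clarity; cloud1, cloud2, T_world_cam1 and T_world_cam2 are hypothetical names, and in practice you'd look the 4x4 transforms up via tf:

```python
import numpy as np

def transform_cloud(cloud, T):
    """Apply a 4x4 homogeneous transform T to an (N, 3) point cloud."""
    homogeneous = np.hstack([cloud, np.ones((cloud.shape[0], 1))])
    return (homogeneous @ T.T)[:, :3]

def merge_clouds(cloud1, cloud2, T_world_cam1, T_world_cam2):
    """Bring both clouds into a common frame and stack them into one array."""
    return np.vstack([
        transform_cloud(cloud1, T_world_cam1),
        transform_cloud(cloud2, T_world_cam2),
    ])
```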
To convert this superset back into a depth map you could follow the final answer on this stackoverflow question. The user shows some example code for projecting points back into camera space using a simple pinhole camera model. For this to do what you want, you'll need to define 'virtual' camera intrinsic and extrinsic parameters such that the virtual camera's FOV covers both of the cameras you're trying to merge, then use these parameters to project the points to image space.
To deal with the data loss you'll need a z-buffering style approach, where you only write a point into a pixel if it's closer than the depth value currently stored there.
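Putting the projection and the z-buffer together, a sketch along these lines should work; the intrinsics and image size are again placeholders for whatever virtual camera you define, and `points` is assumed to already be in the virtual camera's frame:

```python
import numpy as np

# Placeholder intrinsics and resolution for the virtual camera.
fx, fy, cx, cy = 525.0, 525.0, 319.5, 239.5
width, height = 640, 480

def render_depth_map(points):
    """Project an (N, 3) cloud (in the virtual camera's frame) into a depth
    map, keeping only the nearest depth per pixel (z-buffering)."""
    depth = np.full((height, width), np.inf)
    for X, Y, Z in points:
        if Z <= 0:  # skip points behind the camera
            continue
        u = int(round(fx * X / Z + cx))
        v = int(round(fy * Y / Z + cy))
        if 0 <= u < width and 0 <= v < height and Z < depth[v, u]:
            depth[v, u] = Z  # this point is the closest seen so far
    return depth
```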
Hope this makes sense; I'm guessing this has turned out more complicated than you were expecting!