Hi,
I have a volumetric dataset that contains, in essence, little glowing semi-rigid jelly beans that crawl around and occasionally merge or split, at quite high density. It spans 120 frames, which sample the real behaviour fairly contiguously in time. For the curious: the data is a lattice light-sheet microscopy acquisition of cellular mitochondria.
I can render this 3D volumetric set in almost any way imaginable, with as many cameras as required, to best aid the algorithms (e.g. a ring of 36 cameras, rendering depth or floating-point world position, naive global masks [e.g. alpha-cutting the background], etc.). I can supply exact camera specifics and their positions relative to the data. It's not so much a question of resolving where the beans are in space, but of tracking them through their dense environment, using as many cameras as required to help with that.
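To make the rig concrete, here is a minimal sketch (in Python/NumPy; the function name, radius, and camera count are just my illustrative assumptions) of how I would parameterise that ring of 36 cameras, each aimed at the volume centre:

```python
import numpy as np

def camera_ring(n_cams=36, radius=5.0, centre=np.zeros(3)):
    """Place n_cams cameras evenly on a circle around `centre`,
    returning their positions and unit forward (look-at) vectors."""
    angles = np.linspace(0.0, 2.0 * np.pi, n_cams, endpoint=False)
    positions = centre + radius * np.stack(
        [np.cos(angles), np.sin(angles), np.zeros(n_cams)], axis=1)
    # each camera's forward axis points back at the volume centre
    forward = centre - positions
    forward /= np.linalg.norm(forward, axis=1, keepdims=True)
    return positions, forward

positions, forward = camera_ring()
```

From each of these poses I can emit RGB, depth, or world-position passes as needed.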
I want to track many of these beans, say 30 out of a population of 100; that could be done as separate passes if required, and obviously real-time performance is not of particular interest. I could also work in multiple passes per tracked entity: first obtain a rough track, then re-render the volumetric data with any surrounding, occluding, non-interacting material removed, to enable better tracking.
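By "rough track" I mean something as simple as the following sketch (hypothetical helper, NumPy only): greedy nearest-neighbour linking of per-frame bean centroids with a gating distance, which would give me the coarse trajectories to drive the masked re-render.

```python
import numpy as np

def link_tracks(frames, max_dist=2.0):
    """Greedy nearest-neighbour linking of per-frame centroids.

    frames: list of (N_i, 3) arrays, one array of bean centroids
    per time step. Returns a list of tracks (lists of positions).
    A real solution would handle merges/splits; this is only the
    rough first pass.
    """
    tracks = [[p] for p in frames[0]]
    for pts in frames[1:]:
        remaining = list(pts)
        for tr in tracks:
            if not remaining:
                break
            last = tr[-1]
            dists = [np.linalg.norm(p - last) for p in remaining]
            i = int(np.argmin(dists))
            if dists[i] <= max_dist:       # gate: ignore far candidates
                tr.append(remaining.pop(i))
    return tracks

# two well-separated beans drifting along x
frames = [np.array([[0.0, 0, 0], [10.0, 0, 0]]),
          np.array([[0.5, 0, 0], [10.5, 0, 0]])]
tracks = link_tracks(frames)   # two tracks, each two points long
```

Even something this naive should survive the low-density case after occluders are masked out, which is the point of the second pass.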
Any and all advice is greatly appreciated. Thank you, Campbell.