I am interested in estimating the depth of a scene within 1 m of the sensor. For simplicity, I would like to use a single IR sensor and several (2-3) LED IR illumination sources. My plan is to combine the information from intensity fall-off with the differences between successive frames, each illuminated by a different source whose position relative to the sensor is known. I can capture in excess of 120 fps, and my hope is that this frame rate is sufficient to capture moving articulated objects (up to some reasonable object speed) by means of photometric stereo; intensity fall-off, of course, would work regardless of frame rate (up to the limits of sensor sensitivity).
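To fix notation for the photometric-stereo part, here is a minimal sketch of the classic Lambertian three-source solve I have in mind. The light directions, albedo, and normal below are made-up numbers for illustration, not measurements from my setup:

```python
import numpy as np

# Three assumed (hypothetical) unit light directions, non-collinear so the
# 3x3 light matrix L is invertible. Rows = one direction per frame/source.
L = np.array([
    [0.0,    0.5,   0.866],
    [0.433, -0.25,  0.866],
    [-0.433, -0.25, 0.866],
])

# Made-up ground truth: Lambertian surface patch with albedo * unit normal.
albedo = 0.8
n_true = np.array([0.1, 0.2, 0.97])
n_true /= np.linalg.norm(n_true)

# Simulated intensities, one frame per source: I_i = L_i . (albedo * n)
I = L @ (albedo * n_true)

# Recover the albedo-scaled normal by inverting the light matrix: g = L^-1 I
g = np.linalg.solve(L, I)
rho = np.linalg.norm(g)  # recovered albedo
n = g / rho              # recovered unit normal
```

This recovers the normal per pixel; depth would then come from integrating the normal field, which is where I hope the intensity fall-off cue can remove the integration ambiguity.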
My questions are the following:

1. If you have tried this approach, what are its strengths and limitations?
2. What frame rate is needed to capture moving objects if photometric stereo is used?
3. Do I need three sources of illumination, or can I use two? Can the three sources be collinear?
4. Can anyone point me to quality recent work on estimating depth from intensity, especially for close-range depth estimation?
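For context on question 3, here is the degenerate case I am worried about (a sketch with made-up LED positions): with the surface point at the origin, three LED positions on a straight line give light directions that all lie in one plane (the plane through the line and the point), so the 3x3 light matrix is rank-deficient and cannot be inverted.

```python
import numpy as np

# Hypothetical collinear LED positions (along x, at fixed y and z),
# viewed from a surface point at the origin.
p = np.array([
    [-0.2, 0.0, 0.5],
    [ 0.0, 0.0, 0.5],
    [ 0.2, 0.0, 0.5],
])
L = p / np.linalg.norm(p, axis=1, keepdims=True)  # unit light directions

# All directions share y = 0, so they span only a 2D plane.
print(np.linalg.matrix_rank(L))  # prints 2: the light matrix is singular
```

My reading is that this is why non-collinear placement is usually required, but I would appreciate confirmation or a workaround (e.g. adding an integrability constraint with only two sources).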