I'm starting on a multi-faceted project, and this is about the simplest thing I can reduce it down to. It's difficult for me to put this into words, so bear with me. I have a camera (60 or 120 FPS), a computer, and a USB laser pointer (facing the same direction as the camera) mounted on a panning and tilting platform. What I'd like to do is automatically aim the laser pointer ahead of anything moving against the background (objects with no rhyme or reason to their appearance) by predicting where the object is going to be. I could figure this out if the camera were stationary, but it's not. The whole assembly will be moving.
Here's what I think I have to do:
1. Compare image features between the most recent and previous frames (like a panorama-making program would when stitching photos together) to compensate for the camera's own motion.
2. Search for the moving object in the overlapping area between the two images (which would be most of the frame, given the camera's frame rate), just as you would if the camera were not moving, and estimate its next position (I already conceptually know how to do this step). I've put a rough sketch of both steps below.
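Here's roughly what I'm picturing, as a minimal sketch rather than a finished implementation. It assumes OpenCV and NumPy in Python; the function names, the thresholds, and the constant-velocity "aim one frame ahead" rule are just placeholders I made up for illustration.

```python
# Sketch: (1) register consecutive frames with ORB features + a homography
# (panorama-style alignment) to cancel the camera's own motion, then
# (2) difference the aligned frames, find the largest moving blob, and
# extrapolate its next position with a constant-velocity guess.
import cv2
import numpy as np

orb = cv2.ORB_create(nfeatures=500)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def align_previous_to_current(prev_gray, curr_gray):
    """Estimate a homography mapping the previous frame onto the current one."""
    kp1, des1 = orb.detectAndCompute(prev_gray, None)
    kp2, des2 = orb.detectAndCompute(curr_gray, None)
    if des1 is None or des2 is None:
        return None
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:100]
    if len(matches) < 4:
        return None
    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return H

def find_mover(prev_gray, curr_gray, H):
    """Warp the previous frame into the current one and look for what still moved."""
    h, w = curr_gray.shape
    warped_prev = cv2.warpPerspective(prev_gray, H, (w, h))
    diff = cv2.absdiff(curr_gray, warped_prev)
    _, mask = cv2.threshold(diff, 30, 255, cv2.THRESH_BINARY)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    # OpenCV 4 return signature (contours, hierarchy)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    x, y, bw, bh = cv2.boundingRect(max(contours, key=cv2.contourArea))
    return (x + bw / 2.0, y + bh / 2.0)  # centroid of the largest moving blob

# Toy loop: track the mover and lead it by one frame's worth of motion.
cap = cv2.VideoCapture(0)
prev_gray, prev_pos = None, None
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    if prev_gray is not None:
        H = align_previous_to_current(prev_gray, gray)
        if H is not None:
            pos = find_mover(prev_gray, gray, H)
            if pos is not None and prev_pos is not None:
                # Map the previous detection into the current frame's coordinates
                # first, so camera motion doesn't pollute the velocity estimate.
                p = cv2.perspectiveTransform(np.float32([[prev_pos]]), H)[0][0]
                vx, vy = pos[0] - p[0], pos[1] - p[1]
                aim = (pos[0] + vx, pos[1] + vy)  # naive one-frame lead
                print("aim pan/tilt toward pixel", aim)
            prev_pos = pos
    prev_gray = gray
```

The constant-velocity lead is obviously the crudest possible predictor; I'd expect to replace it with something like a Kalman filter once the registration part works.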
Does this sound like a feasible approach? Is it possible to perform feature detection at that speed on a modern computer (something like an Odroid or even an Alienware Alpha / mini PC)? Is there a better way to do it? It's been a while since I've done vision processing. I apologize if I'm not making sense; feel free to ask me to clarify. Thanks!