It might seem obvious, but the top of the page states exactly which paper the implementation is based on:
http://cs.bath.ac.uk/brown/papers/ijcv2007.pdf
About the feather blender: this isn't actually based on a paper. Basically, you first align both images using the detected feature points. Then you take a region near the border of the overlap where the stitching points need to match, say a strip about 2 inches wide. Inside that strip, image one gets an opacity of 50% and image two gets an opacity of 50%, so the combined result is a 100% image in which the two regions are blended into each other. Because this feather blending is so basic, ghosting can occur: elements that show up semi-transparently because they are only present in one of the two border regions, for example moving cars.
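To make the idea concrete, here is a minimal sketch of feather blending in Python/NumPy for two images that are assumed to be already aligned and to overlap horizontally by a known number of pixels. The function name, the image variables, and the simple linear weight ramp are illustrative assumptions for this sketch, not the actual cv::detail::FeatherBlender code.

```python
import numpy as np

def feather_blend(img_left, img_right, overlap):
    """Blend two same-height images whose edges overlap by `overlap` pixels.

    Assumes img_left's rightmost `overlap` columns cover the same scene
    area as img_right's leftmost `overlap` columns (i.e. already aligned).
    """
    h, w_l = img_left.shape[:2]
    w_r = img_right.shape[1]
    out_w = w_l + w_r - overlap
    out = np.zeros((h, out_w, 3), dtype=np.float32)

    # Copy the non-overlapping parts directly.
    out[:, :w_l - overlap] = img_left[:, :w_l - overlap]
    out[:, w_l:] = img_right[:, overlap:]

    # In the overlap strip, the weight ramps linearly from 1 -> 0 for the
    # left image and 0 -> 1 for the right image (exactly 50/50 in the middle).
    alpha = np.linspace(1.0, 0.0, overlap)[None, :, None]
    left_strip = img_left[:, w_l - overlap:].astype(np.float32)
    right_strip = img_right[:, :overlap].astype(np.float32)
    out[:, w_l - overlap:w_l] = alpha * left_strip + (1.0 - alpha) * right_strip

    return out.astype(np.uint8)
```

Anything that appears in only one of the two strips (a moving car, for instance) is averaged with the background from the other image, which is exactly the ghosting described above.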
About the seam finder, I would guess it works on the aligned features, so I'm not sure exactly what you want to know here. Have you read through the paper?