What options exist in OpenCV to improve the thresholding algorithm used for contour detection of fish arches on sonar images?

I am working on a fish detection algorithm for sonar images as an open source pet project using OpenCV, and I am looking for advice from someone with experience in computer vision on how to improve its accuracy, most likely by improving the thresholding/segmentation algorithm it uses.

Sonar images look a bit like the ones below, and the basic artifacts I want to find in them are:

  • Upside-down horizontal arches that are likely fish
  • Cloud/blob/ball-shaped artifacts that are likely schools of bait fish

I would really like to extract contours of these cloud and fish-arch artifacts.

The example code below uses threshold() and findContours(). The results are reasonable in this case because the threshold has been manually tuned for this image, but it does not work on other sonar images, which may require different thresholds or a different thresholding algorithm.

I have tried Otsu's method and it doesn't work very well for this use case. I think I need a thresholding/segmentation algorithm that somehow uses the contrast of localized blobs. Does such an algorithm exist in OpenCV, or is there some other technique I should look into?
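
To illustrate the kind of thing I mean by using the contrast of localized blobs, here is a rough sketch with cv2.adaptiveThreshold; the helper name, block size and offset are placeholder guesses I have not tuned. Is this the right direction, or is there something better suited?

import cv2

# Rough sketch only: LocalThreshold is a hypothetical helper; blockSize and C are guesses.
# adaptiveThreshold picks a separate threshold for each pixel from the (Gaussian-weighted)
# mean of its blockSize x blockSize neighbourhood minus C, so a bright blob only needs to
# stand out from its local surroundings rather than from the whole image.
def LocalThreshold(grey_image):
    return cv2.adaptiveThreshold(
        grey_image,                      # 8-bit single-channel sonar image
        255,                             # value for pixels above the local threshold
        cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
        cv2.THRESH_BINARY,
        blockSize=31,                    # neighbourhood size, must be odd (guess)
        C=-5)                            # negative C keeps only pixels noticeably
                                         # brighter than their neighbourhood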

Thanks, Brendon.


Original image I am searching for artifacts in: [image]


Example output:

[image]


import random
import cv2

MIN_AREA = 10
MIN_THRESHOLD = 90

def IsContourUseful(contour):
    # I have a much more complex version of this in my real code.
    # This is good enough to demonstrate the concept and easier to understand.

    # Filter out contours that are too small, for all items
    area = cv2.contourArea(contour)
    if area < MIN_AREA:
        return False

    # Reject any contour that comes close to the top of the image
    # (contour points are shaped [[x, y]], so [i][0][1] is the y coordinate)
    for i in range(contour.shape[0]):
        if contour[i][0][1] <= 10:
            return False

    return True

def FindFishContoursInImageWithoutBottom(image, file_name_base):
    ret, thresh = cv2.threshold(image, MIN_THRESHOLD, 255, cv2.THRESH_BINARY)
    cv2.imwrite(file_name_base + 'thresholded.png', thresh)

    # OpenCV 4.x returns (contours, hierarchy); OpenCV 3.x also returns the image first
    contours, hierarchy = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    contours = [c for c in contours if IsContourUseful(c)]
    print('Found %d interesting contours' % len(contours))

    # Draw each contour in a different colour so we can see them as separate items
    im_colour = cv2.cvtColor(image, cv2.COLOR_GRAY2BGR)
    um2 = cv2.UMat(im_colour)
    for contour in contours:
        colour = (random.randint(100, 255), random.randint(100, 255), random.randint(100, 255))
        um2 = cv2.drawContours(um2, [contour], -1, colour, 1)
    cv2.imwrite(file_name_base + 'contours.png', um2)

    return contours


# Load the png and convert it to greyscale, as that is what the original sonar data looks like
file_name_base = 'fish_image_cropped_erased_bottom'
image = cv2.imread(file_name_base + '.png')
image = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)  # imread returns BGR

FindFishContoursInImageWithoutBottom(image, file_name_base)


Some examples where thresholding failed to identify a large school of bait fish because it had a lower intensity:

[image]

[image]


An example where it picked up a bunch of things when there was nothing there, because the image had a slightly higher global intensity:

[image]
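
Both of these failure modes look like global versus local intensity issues, so one idea I have been wondering about is local contrast normalisation before thresholding, for example CLAHE. A rough sketch (the helper name, clip limit and tile size are just guesses):

import cv2

# Rough sketch only: EqualiseThenThreshold is a hypothetical helper; clipLimit and
# tileGridSize are guesses. CLAHE (contrast limited adaptive histogram equalisation)
# boosts contrast per tile, which might lift faint bait schools above a global threshold
# while toning down frames that are uniformly a bit brighter.
def EqualiseThenThreshold(grey_image, threshold=90):
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(16, 16))
    equalised = clahe.apply(grey_image)
    ret, thresh = cv2.threshold(equalised, threshold, 255, cv2.THRESH_BINARY)
    return thresh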


Overall, as a human I can see there are features, for example in the bait schools, that the thresholding doesn't pick up.

Also, there are some cases in the other examples I added with a number of false positives. I am not too worried about these as long as there are not too many of them, as I can probably do some post-processing to exclude them.

Another two issues I see occasionally are:

1) The segmentation "joins" very tenuously connected blobs.

I was thinking some of the steps in the watershed example (dilation/erosion) might be helpful here. One example is in the original image: there is a contour outlined in yellow, roughly in the middle at the bottom. It is really a few separate objects, one blob attached to the bottom and another a bit higher off the bottom, joined by a very thin line.
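
As a concrete sketch of what I was thinking for this case (the helper name and kernel size are guesses): a small morphological opening on the thresholded mask before findContours should break bridges that are only one or two pixels wide, so tenuously connected blobs come out as separate contours.

import cv2

# Rough sketch only: BreakThinConnections is a hypothetical helper; the 3x3 kernel is a guess.
# Opening = erosion followed by dilation, so thin bridges are erased while larger blobs
# keep roughly their original size and shape.
def BreakThinConnections(binary_mask):
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
    return cv2.morphologyEx(binary_mask, cv2.MORPH_OPEN, kernel)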

2) The segmentation "separates" some blobs that I think should really be joined.

I see this often on thin fish arches. The arch continues but has a slightly lower intensity in the middle and gets split in half. In this case the contour is roughly banana shaped and continues, so knowing this I can probably post-process and merge the contours. But I wonder if some kind of adaptive thresholding might help with this, joining blobs that have other blobs close around them.
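
A rough sketch of the opposite operation for this case (again, the helper name and kernel size are guesses): a morphological closing with a kernel that is wider than it is tall, since the arches are roughly horizontal, might bridge the small gap where the middle of an arch drops just below the threshold.

import cv2

# Rough sketch only: JoinSplitArches is a hypothetical helper; the 9x3 kernel is a guess.
# Closing = dilation followed by erosion, so nearby fragments along the arch are merged
# while the overall outline stays close to the original.
def JoinSplitArches(binary_mask):
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (9, 3))
    return cv2.morphologyEx(binary_mask, cv2.MORPH_CLOSE, kernel)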
