Convert Image to Occupancy Grid
I was wondering how I might turn a bird's eye view image of a map into an occupancy grid. I use an edge detection algorithm to detect obstacles in the bird's eye view image and would like to then translate this information into an occupancy grid (the black squares would be obstacles as detected by the edge detection algorithm, and the white squares would be free space).
The image I would like to convert:
I would like to turn the image above into something along the lines of this (the image below is just crudely hand drawn).
My code for edge detection:
import cv2
import numpy as np

# load the bird's eye view image and convert it to grayscale
roomimg = cv2.imread("/Users/2020shatgiskessell/Desktop/roomimage.jpg")
gray = cv2.cvtColor(roomimg, cv2.COLOR_BGR2GRAY)
gray = np.float32(gray)

# Harris corner detection: block size 2, Sobel aperture 3, k = 0.04
# (cornerHarris takes four arguments; the original call passed a stray 0)
dst = cv2.cornerHarris(gray, 2, 3, 0.04)
dst = cv2.dilate(dst, None)

# mark strong corner responses in red
roomimg[dst > 0.01 * dst.max()] = [0, 0, 255]

cv2.imshow('dst', roomimg)
if cv2.waitKey(0) & 0xff == 27:
    cv2.destroyAllWindows()
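Once you have a binary obstacle mask (1 = obstacle pixel, 0 = free), turning it into an occupancy grid is just a block-wise downsample: mark a cell occupied if any pixel inside it is an obstacle. Here is a minimal NumPy sketch of that idea; the function name `to_occupancy_grid` and the cell size are my own choices, and it assumes the mask's dimensions are multiples of the cell size.

```python
import numpy as np

def to_occupancy_grid(obstacle_mask, cell_size):
    """Downsample a binary obstacle mask into a coarse occupancy grid.

    A cell is 1 (occupied) if any pixel inside it is an obstacle.
    Assumes the mask's height and width are multiples of cell_size.
    """
    h, w = obstacle_mask.shape
    # split the mask into (cell_size x cell_size) blocks...
    blocks = obstacle_mask.reshape(h // cell_size, cell_size,
                                   w // cell_size, cell_size)
    # ...and take the max over each block: any obstacle pixel occupies the cell
    return blocks.max(axis=(1, 3))

# toy 4x4 mask with a single obstacle pixel in the top-left quadrant
mask = np.zeros((4, 4), dtype=np.uint8)
mask[1, 1] = 1
grid = to_occupancy_grid(mask, 2)  # 2x2 grid, only the top-left cell occupied
```

You can then render the grid as black/white squares, or feed it straight into a planner.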
Maybe replace the 1st image with an "uncluttered" one? People can still draw their own Harris dots if they want, using the code above.
By the way, the code above does corner detection, not edge detection (not the same thing!).
Your scene is synthetic, and your camera and lighting look static. If you have an image of the "background", i.e. with no obstacles, you can take the difference between that and the current view.
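That background-subtraction idea can be sketched in a few lines. This is an assumed setup (the function name, images, and threshold of 30 are mine, not from the question); with real frames loaded via OpenCV you could use cv2.absdiff in place of the NumPy subtraction.

```python
import numpy as np

def obstacle_mask(background, current, threshold=30):
    """Mark pixels that differ from the empty-scene background as obstacles.

    background, current: same-size grayscale uint8 images.
    Returns a uint8 mask where 1 = obstacle pixel.
    """
    # widen to int16 so the subtraction cannot wrap around at 0/255
    diff = np.abs(background.astype(np.int16) - current.astype(np.int16))
    return (diff > threshold).astype(np.uint8)

# synthetic example: flat background plus one bright "obstacle" patch
bg = np.full((8, 8), 100, dtype=np.uint8)
cur = bg.copy()
cur[2:4, 2:4] = 200
mask = obstacle_mask(bg, cur)  # 2x2 patch of obstacle pixels, rest free
```

The resulting mask plugs directly into the occupancy-grid downsampling step.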