So I have the following code:
import numpy as np
import cv2

fullbody_cascade = cv2.CascadeClassifier('haarcascade_fullbody.xml')

cap0 = cv2.VideoCapture('walking0.mp4')
cap1 = cv2.VideoCapture('walking1.mp4')

while True:
    ret0, frame0 = cap0.read()
    ret1, frame1 = cap1.read()
    if not ret0 or not ret1:  # stop when either video runs out of frames
        break

    gray0 = cv2.cvtColor(frame0, cv2.COLOR_BGR2GRAY)
    gray1 = cv2.cvtColor(frame1, cv2.COLOR_BGR2GRAY)

    fullbody0 = fullbody_cascade.detectMultiScale(gray0)
    fullbody1 = fullbody_cascade.detectMultiScale(gray1)

    for (x, y, w, h) in fullbody0:
        cv2.rectangle(frame0, (x, y), (x + w, y + h), (255, 0, 0), 2)

    for (x, y, w, h) in fullbody1:
        cv2.rectangle(frame1, (x, y), (x + w, y + h), (255, 0, 0), 2)

    # leftover from a single-video version:
    # roi_gray = gray[y:y+h, x:x+w]
    # roi_color = img[y:y+h, x:x+w]

    cv2.imshow('cam0', frame0)
    cv2.imshow('cam1', frame1)

    k = cv2.waitKey(30) & 0xff
    if k == 27:  # Esc key
        break

cap0.release()  # was cap.release(), which raises NameError
cap1.release()
cv2.destroyAllWindows()
This code uses a trained Haar cascade for a person's full body, draws a rectangle around each detection, and previews the result with cv2.imshow. It processes two videos, "walking0.mp4" and "walking1.mp4", and outlines the humans in both at the same time. What I actually want is to display the output on a webpage instead; localhost, a DDNS server, or a domain would all be fine. I initially used Motion on my Raspberry Pi 3 to stream to localhost, but it is not what I need. I've seen people use Python Flask, but I'm not sure it's compatible with my application; I may be wrong. Does anyone have suggestions on how I can modify this code to get my desired outcome?