Webcam Surveillance with Python and OpenCV

jopen, 9 years ago

Background:

My pitcher plant is being chewed on by some bug I can't identify. The newly grown leaves keep getting eaten before they can form pitchers. I can't stand it! The plant has kept me company for almost a year now, so I decided to catch the culprit.


Analysis: I'm out on the balcony often during the day and have never seen anything suspicious besides ants, so my guess is the bug comes out at night. Judging from the bite marks on the leaves, it looks like the work of an insect. I even asked the Taobao seller, hoping he'd run into something similar; he said it might be a black caterpillar. Honestly, I'm not sure what it is either, and I've never seen a caterpillar on the balcony.

Still, I had to take action. Since I can't be next to the pitcher plant all the time, I decided to build a surveillance camera so I could watch everything happening around it in real time. Sooner or later the culprit will show itself!

I didn't have much material on hand, just a webcam. I had planned to buy an infrared camera for night-time monitoring, but ordering one online takes a while, so I decided to start with the webcam and think about swapping it once the system was working. I then went searching online for a frog-eye-style motion detection approach and found the two articles linked in the comments at the top of the code below. It's really one article: the Chinese version is a translation of the English one.

I made a couple of changes to the author's code to fit my needs:

1. Refresh the first (reference) frame every so often, so the system quickly adapts even when the scene changes slightly in a static way.

2. Record video and save a snapshot whenever an intruder shows up, for later review and as evidence (recording everything would eat disk space, so only the abnormal segments are saved). A small sketch of both ideas follows this list.
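Here is a minimal standalone sketch of just those two ideas, separate from the full script below. Fair warning: this is not the original code. It assumes a newer OpenCV (3.x/4.x, hence cv2.VideoWriter_fourcc instead of the old cv2.cv module), and the refresh interval and the 500-changed-pixels motion threshold are numbers I picked for illustration.

import datetime
import cv2

REFRESH_EVERY = 2000                      # re-take the reference frame every N frames (illustrative)
MOTION_PIXELS = 500                       # how many changed pixels count as "motion" (illustrative)

camera = cv2.VideoCapture(0)
fourcc = cv2.VideoWriter_fourcc(*'XVID')  # OpenCV 3/4 spelling of the codec helper
reference, writer, counter = None, None, 0

while True:
    grabbed, frame = camera.read()
    if not grabbed:
        continue
    gray = cv2.GaussianBlur(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), (21, 21), 0)

    # (1) periodically refresh the reference frame so slow, static changes
    #     in the scene (lighting, a leaf that settled) stop reading as motion
    if reference is None or counter % REFRESH_EVERY == 0:
        reference, counter = gray, counter + 1
        continue
    counter += 1

    # plain frame differencing against the reference, as in the full script
    delta = cv2.absdiff(reference, gray)
    thresh = cv2.threshold(delta, 25, 255, cv2.THRESH_BINARY)[1]
    moving = cv2.countNonZero(thresh) > MOTION_PIXELS

    # (2) only write video (plus one snapshot per event) while something is moving
    if moving:
        if writer is None:
            stamp = datetime.datetime.now().strftime("%Y%m%d_%H%M%S")
            h, w = frame.shape[:2]
            writer = cv2.VideoWriter(stamp + ".avi", fourcc, 10.0, (w, h))
            cv2.imwrite(stamp + ".jpg", frame)
        writer.write(frame)
    elif writer is not None:
        writer.release()
        writer = None

    cv2.imshow("sketch", frame)
    if cv2.waitKey(1) & 0xFF == 27:       # ESC quits
        break

if writer is not None:
    writer.release()
camera.release()
cv2.destroyAllWindows()

The point is simply that the reference frame gets re-taken on a timer, and the VideoWriter only exists while motion is actually being seen.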

As usual, here's the code (the referenced articles already explain it in detail, and I'm too lazy to repeat that here):

# http://www.pyimagesearch.com/2015/05/25/basic-motion-detection-and-tracking-with-python-and-opencv/
# http://python.jobbole.com/81593/
# import the necessary packages
import argparse
import datetime
import imutils
import time
import cv2
import cv2.cv as cv
import numpy as np

# construct the argument parser and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument("-v", "--video", help="path to the video file")
ap.add_argument("-a", "--min-area", type=int, default=300, help="minimum area size")
args = vars(ap.parse_args())

# if the video argument is None, then we are reading from webcam
if args.get("video", None) is None:
    camera = cv2.VideoCapture(0)
    time.sleep(0.25)
# otherwise, we are reading from a video file
else:
    camera = cv2.VideoCapture(args["video"])

# initialize the first frame in the video stream
firstFrame = None

# define the codec
fourcc = cv.CV_FOURCC('X', 'V', 'I', 'D')
framecount = 0
frame = np.zeros((640, 480))
out = cv2.VideoWriter('calm_down_video_' + datetime.datetime.now().strftime("%A_%d_%B_%Y_%I_%M_%S%p") + '.avi', fourcc, 5.0, np.shape(frame))

# to begin with, the light is not stable, calm it down
tc = 40
while tc:
    ret, frame = camera.read()
    out.write(frame)
    #cv2.imshow("vw", frame)
    cv2.waitKey(10)
    tc -= 1
totalc = 2000
tc = totalc
out.release()

# loop over the frames of the video
while True:
    # grab the current frame and initialize the occupied/unoccupied text
    (grabbed, frame) = camera.read()
    text = "Unoccupied"

    # if the frame could not be grabbed, wait a bit and try again
    if not grabbed:
        time.sleep(0.25)
        continue

    # resize the frame, convert it to grayscale, and blur it
    frame = imutils.resize(frame, width=500)
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    gray = cv2.GaussianBlur(gray, (21, 21), 0)

    # refresh firstFrame every totalc frames
    if tc % totalc == 0:
        firstFrame = gray
        tc = (tc + 1) % totalc
        continue
    else:
        tc = (tc + 1) % totalc
    #print tc

    # compute the absolute difference between the current frame and first frame
    frameDelta = cv2.absdiff(firstFrame, gray)
    thresh = cv2.threshold(frameDelta, 25, 255, cv2.THRESH_BINARY)[1]

    # dilate the thresholded image to fill in holes, then find contours on it
    thresh = cv2.dilate(thresh, None, iterations=2)
    (cnts, _) = cv2.findContours(thresh.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

    # loop over the contours
    for c in cnts:
        # if the contour is too small, ignore it
        if cv2.contourArea(c) < args["min_area"]:
            continue

        # compute the bounding box for the contour, draw it on the frame,
        # and update the text
        (x, y, w, h) = cv2.boundingRect(c)
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        text = "Occupied"

    # draw the text and timestamp on the frame
    cv2.putText(frame, "Room Status: {}".format(text), (10, 20), cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 0, 255), 2)
    cv2.putText(frame, datetime.datetime.now().strftime("%A %d %B %Y %I:%M:%S%p"), (10, frame.shape[0] - 10), cv2.FONT_HERSHEY_SIMPLEX, 0.35, (0, 0, 255), 1)

    # show the frame, the threshold image and the frame delta
    cv2.imshow("Security Feed", frame)
    cv2.imshow("Thresh", thresh)
    cv2.imshow("Frame Delta", frameDelta)

    # save the detection result
    if text == "Occupied":
        if framecount == 0:
            # create a VideoWriter and take a snapshot for this event
            out = cv2.VideoWriter(datetime.datetime.now().strftime("%A_%d_%B_%Y_%I_%M_%S%p") + '.avi', fourcc, 10.0, np.shape(gray)[::-1])
            cv2.imwrite(datetime.datetime.now().strftime("%A_%d_%B_%Y_%I_%M_%S%p") + '.jpg', frame)
            # write the frame
            out.write(frame)
            framecount += 1
        else:
            # write the frame
            out.write(frame)
            framecount += 1
    elif framecount > 20 or framecount < 2:
        out.release()
        framecount = 0

    key = cv2.waitKey(1) & 0xFF
    # if the `ESC` key is pressed, break from the loop
    if key == 27:
        break

# cleanup the camera and close any open windows
camera.release()
cv2.destroyAllWindows()
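A couple of notes on running it: with no arguments it reads from the webcam; -v points it at a recorded video file instead, and -a / --min-area (default 300) sets how small a moving contour can be before it is ignored. The script targets Python 2 with OpenCV 2.x, which is why it can do import cv2.cv as cv and call cv.CV_FOURCC. On OpenCV 3.x/4.x those are gone and findContours returns a different tuple, so if you want to try it there, here is an untested sketch of the two substitutions I believe you would need (imutils.grab_contours papers over the version difference):

# Untested compatibility sketch for OpenCV 3.x/4.x; the rest of the script stays as it is.
import cv2
import imutils
import numpy as np

# cv.CV_FOURCC('X', 'V', 'I', 'D') becomes:
fourcc = cv2.VideoWriter_fourcc(*'XVID')

# dummy binary image, only here so the snippet runs on its own
thresh = np.zeros((100, 100), dtype=np.uint8)
cv2.rectangle(thresh, (20, 20), (60, 60), 255, -1)

# OpenCV 3.x returns (image, contours, hierarchy), 4.x returns (contours, hierarchy);
# imutils.grab_contours picks out the contour list in either case
cnts = cv2.findContours(thresh.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cnts = imutils.grab_contours(cnts)
print(len(cnts))   # 1 rectangle found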
