If You Liked It Then You Should Have Put a Ring On It...


My wedding ring is palladium and emits no IR heat at all. So cold and unyielding...

With the basic wiring and shell design complete, I've spent the past couple of days working on the software for this project, and that means digging heavily into cv2, the Python interface to OpenCV (the Open Source Computer Vision library), a really magical pile of image-manipulation software that runs happily on the Raspberry Pi.

In particular, I'm exploring the possibilities of object recognition, gesture recognition and vehicle tracking. Depending on the circles you run in, this is either old hat or something terrifying from a Jason Bourne movie. I didn't expect this, but essentially any filter or adjustment I'm accustomed to using in Photoshop can be applied to the raw image data in Python. For example, check me out with a couple of cool threshold and Gaussian blur filters:
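To give a flavor of how little code that takes, here's a minimal sketch of those two filters in cv2 (the file name is just a placeholder; any 8-bit grayscale image works):

import cv2

# load an 8-bit grayscale test image (placeholder path)
frame = cv2.imread("lepton_test.png", cv2.IMREAD_GRAYSCALE)

# Gaussian blur: kernel dimensions must be odd; bigger kernel = smoother ghost
blurred = cv2.GaussianBlur(frame, (11, 11), 0)

# binary threshold: pixels above 127 go white, everything else goes black
_, ghost = cv2.threshold(blurred, 127, 255, cv2.THRESH_BINARY)

cv2.imshow("ghost", ghost)
cv2.waitKey(0)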


Trust me, it's terrifying in live video mode. 


Imma ghost!!!!
 

The big advantage of using a Lepton thermal camera for this process is the naturally high contrast between subjects (people and cars) and everything else. Hot objects are clearly distinguishable from the background.
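That contrast makes segmentation almost trivial. As a rough sketch (my own throwaway names, assuming a normalized 8-bit frame like the one capture() in the listing below returns), pulling out the warm blobs is just a threshold plus a contour pass:

import cv2

def hot_blobs(frame_8bit, hot_cutoff=200):
    # anything above the cutoff counts as "hot"
    _, mask = cv2.threshold(frame_8bit, hot_cutoff, 255, cv2.THRESH_BINARY)
    # one contour per warm blob (person, car, coffee mug...); the [-2] index
    # keeps this working across OpenCV versions, whose return tuples differ
    contours = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[-2]
    return contours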

I've already tried out a couple of algorithms used to identify human shapes, but the big limit for this project is going to be how much image processing I can run directly on the Raspberry Pi. It's an amazing device, but you can only squeeze so much out of something that small, and I want to limit the data stream to the server so that everything could run off a 3G cellular link if necessary. (Wi-Fi has better bandwidth, but cellular is more portable.)
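A quick way to see how close to that ceiling I am is to time the processing stages on a dummy frame. A sketch like this (plain Python, using the Lepton's native 80x60 resolution) is enough to budget the per-frame cost:

import time
import cv2
import numpy as np

# fake 80x60 frame, the Lepton's native resolution
frame = (np.random.rand(60, 80) * 255).astype(np.uint8)

start = time.time()
for _ in range(100):
    big = cv2.resize(frame, (400, 300))
    big = cv2.GaussianBlur(big, (11, 11), 0)
print("per-frame cost: %.2f ms" % ((time.time() - start) * 10.0))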

I'm feeling hopeful. The cv2 library has a lot of options, so I'm sure I'll hit on a workable trade-off between object identification and speed.

My next big push will be to take object counts and push them along to the web server, and to lock in a couple of basic gesture-recognition controls for the Lepton. 
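The count-reporting half of that should be tiny. A sketch of what I have in mind (the URL and payload shape are placeholders, and it assumes the requests package is installed):

import json
import time
import requests

def report_count(people_count, url="http://example.com/api/counts"):
    # a count plus a timestamp is only a few dozen bytes: trivial over 3G
    payload = {"people": people_count, "ts": int(time.time())}
    try:
        requests.post(url, data=json.dumps(payload),
                      headers={"Content-Type": "application/json"}, timeout=5)
    except requests.RequestException:
        pass  # drop this sample rather than stall the capture loop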

Oh! Also, my NeoPixel LEDs should be arriving in a day or so - time to start thinking about the lighting effects to go along with the gesture recognition.
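The driving side should be simple once they arrive; something like this sketch, using Adafruit's CircuitPython neopixel library (the pin and pixel count are guesses until the strip shows up):

import board
import neopixel

NUM_PIXELS = 16  # guess; depends on the strip that actually arrives
pixels = neopixel.NeoPixel(board.D18, NUM_PIXELS, brightness=0.3)

def gesture_flash():
    # simple amber pulse when a gesture is recognized
    pixels.fill((255, 128, 0))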

Everything I'm doing so far is just for testing the Lepton, but here's today's test code to fiddle with the output parameters:

# import the necessary packages
#from __future__ import print_function
from imutils.object_detection import non_max_suppression
from imutils import paths
import numpy as np
import argparse
import imutils
import cv2
from pylepton import Lepton
from matplotlib import pyplot as plt

 

#create ironblack array
def applyCustomColorMap(im_gray):
    # the full 256-entry tables were cut off in the original listing
    # ("[0, 253, 251, 249, ..., 233, 2" and then nothing); the visible
    # descending-by-2 pattern is rebuilt programmatically to stay runnable
    ramp = np.clip(np.concatenate(([0], 253 - 2 * np.arange(255))), 0, 255).astype(np.uint8)
    lut = np.zeros((256, 1, 3), dtype=np.uint8)
    lut[:, 0, 2] = ramp
    lut[:, 0, 1] = ramp
    lut[:, 0, 0] = ramp
    im_color = cv2.LUT(im_gray, lut)
    return im_color

#setup the Lepton image buffer
def capture(device = "/dev/spidev0.1"):
    with Lepton(device) as l:
        a, _ = l.capture()    # grab the raw 16-bit frame buffer
    cv2.normalize(a, a, 0, 65535, cv2.NORM_MINMAX)  # stretch contrast
    np.right_shift(a, 8, a)  # fit data into 8 bits
    return np.uint8(a)

 

# initialize the HOG descriptor/person detector
hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

# initial display settings
windowX = 400
windowY = 300
frameRate = 1
blur = 125

# WINDOW_NORMAL (rather than WINDOW_AUTOSIZE) lets resizeWindow take effect
cv2.namedWindow('Lepton', cv2.WINDOW_NORMAL)
cv2.resizeWindow('Lepton', windowX, windowY)
cv2.moveWindow('Lepton', 1, 1)

 

def nothing(x):
    pass

# sliders for fiddling with the output parameters live
cv2.createTrackbar('WindowX','Lepton',400,640,nothing)
cv2.createTrackbar('WindowY','Lepton',300,480,nothing)
cv2.createTrackbar('frameRate','Lepton',10,100,nothing)
cv2.createTrackbar('blur','Lepton',125,255,nothing)

 

 

# loop over captured frames
while True:

 

    windowX = cv2.getTrackbarPos('WindowX','Lepton')
    windowY = cv2.getTrackbarPos('WindowY','Lepton')
    frameRate = cv2.getTrackbarPos('frameRate','Lepton')
    blur = cv2.getTrackbarPos('blur','Lepton')

    # grab a fresh frame from the Lepton
    image = capture()

    # threshold experiments, left commented out for fiddling
    #_ ,image = cv2.threshold(image,blur,255,cv2.THRESH_TOZERO)
    #image = cv2.adaptiveThreshold(image,255,cv2.ADAPTIVE_THRESH_GAUSSIAN_C,cv2.THRESH_BINARY,11,2)  # tail of this line rebuilt from the standard signature
    #image = cv2.Sobel(image,cv2.CV_8U,1,0,ksize=5)

    # light 3x3 smoothing pass
    kernel = np.ones((3,3),np.float32)/8
    image = cv2.filter2D(image,-1,kernel)

 

    #image = cv2.applyColorMap(image, cv2.COLORMAP_SUMMER)
    #im = cv2.imread("pluto.jpg", cv2.IMREAD_GRAYSCALE)

    # expand to 3 channels, then apply the ironblack palette
    image = cv2.cvtColor(image, cv2.COLOR_GRAY2BGR)
    image = applyCustomColorMap(image)

    # resize to the slider-selected window size
    image = cv2.resize(image, (windowX, windowY))

    #image = imutils.resize(image, width=min(400, image.shape[1]))
    #orig = image.copy()

    # detect people in the image (the padding/scale arguments were cut off
    # in the original listing; these are the usual HOG people-detector values)
    (rects, weights) = hog.detectMultiScale(image, winStride=(4, 4),
        padding=(8, 8), scale=1.05)

 

    # draw the original bounding boxes
    #for (x, y, w, h) in rects:
    #    cv2.rectangle(orig, (x, y), (x + w, y + h), (0, 0, 255), 2)

    # apply non-maxima suppression to the bounding boxes using a
    # fairly large overlap threshold to try to maintain overlapping
    # boxes that are still people
    rects = np.array([[x, y, x + w, y + h] for (x, y, w, h) in rects])
    pick = non_max_suppression(rects, probs=None, overlapThresh=0.65)

    # draw the final bounding boxes
    for (xA, yA, xB, yB) in pick:
        cv2.rectangle(image, (xA, yA), (xB, yB), (0, 128, 255), 2)

 

 

    # show the output image; waitKey's argument is the delay in ms,
    # so the frameRate slider is really an inter-frame delay
    #cv2.imshow("Before NMS", orig)
    cv2.imshow("Lepton", image)
    cv2.waitKey(max(1, frameRate))  # waitKey(0) would block until a keypress