Squeezing Data From a Thermal Image

This is the "core" of the project: Lepton + Battery + Raspberry Pi3 + AWS Webserver

Little did I know that Python would be back to haunt me later in life... no, not the famed British comedy troupe, but the Python coding language (which is, in fact, named after the famed British comedy troupe...)

Look Ma! I put a cutting-edge thermal sensor...in a cardboard box.

If you've read some of my earlier posts, then you'll know that this project has two major components, the "Shell" and the "Core". The Shell is purely the protective/decorative outside bit, made out of corrugated plastic, which I'm having printed up at a local printshop.

The Core is all the fancy tech bits, including a lithium-ion battery, a Raspberry Pi and the Lepton thermal camera. From the picture you can see I've packed it all into a handy cardboard box, but this is just a temporary housing until I get the Shell back from the printer. This way I can move it around safely and try it out in different settings to test the image recognition software.

For this project, I'm trying to count distinct objects that move past the Lepton, and also capture some gestures, so the first step is to grab the image buffer from the Lepton and dump it into an array so we can try different functions on the data.

After a couple of false starts yesterday, I now have a script that can display a feed from the camera, with a color map to indicate hot objects and a few sliders to adjust the fussiness of the edge detection. Check it out:

It's worth noting again that the Raspberry Pi has built in wifi, but in theory I could do all this with a 3G wireless card. All the image processing happens on-board, so the only real output is small pings to the cloud-hosted webserver. (More on that later.) A big part of the final plan is to have a device that can be adapted for any power/connectivity scenario.
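Those "small pings" are the whole point of keeping the processing on-board. As a rough sketch of what one could look like (the endpoint, field names, and `build_ping` helper here are all placeholders I've made up, not the final design):

```python
import json
import time

# Hypothetical payload builder -- the real endpoint and fields are still TBD.
def build_ping(count, device_id="lepton-01"):
    """Package a detection count as a small JSON ping for the webserver."""
    return json.dumps({
        "device": device_id,
        "count": count,
        "timestamp": int(time.time()),
    })

# Sending it would be one tiny HTTP POST (needs the `requests` package):
# import requests
# requests.post("https://example.com/ping", data=build_ping(3),
#               headers={"Content-Type": "application/json"})
```

A payload this small is why a 3G card would work just as well as wifi: the device never uploads images, only a few bytes per event.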

Here's the code for the script shown in the video above, written in Python (p.s. if you're a real programmer, cut me some slack, this is only my third day with Python):

import numpy as np
import cv2
from pylepton import Lepton

# Set up the Lepton image buffer
def capture(device="/dev/spidev0.0"):
    with Lepton(device) as l:
        a, _ = l.capture()  # grab the buffer
    cv2.normalize(a, a, 0, 65535, cv2.NORM_MINMAX)  # extend contrast
    np.right_shift(a, 8, a)  # fit data into 8 bits
    return np.uint8(a)

# Trackbars need a callback, even one that does nothing
def nothing(x):
    pass

# Create a window and give it features (sliders for the processing knobs)
cv2.namedWindow('flir', cv2.WINDOW_AUTOSIZE)
cv2.resizeWindow('flir', 320, 240)
cv2.createTrackbar('thresh', 'flir', 50, 255, nothing)
cv2.createTrackbar('erode', 'flir', 5, 15, nothing)
cv2.createTrackbar('dilate', 'flir', 7, 15, nothing)

# Start a continuous loop
while True:
    # update the image processing variables
    thresh = cv2.getTrackbarPos('thresh', 'flir')
    erodeSize = cv2.getTrackbarPos('erode', 'flir')
    dilateSize = cv2.getTrackbarPos('dilate', 'flir')

    # read the buffer into an object, keeping a grayscale copy
    frame = capture()
    frame_v = frame

    # apply some image processing
    frame = cv2.applyColorMap(frame, cv2.COLORMAP_JET)
    blurredBrightness = cv2.bilateralFilter(frame_v, 9, 150, 150)
    edges = cv2.Canny(blurredBrightness, thresh, thresh * 2, L2gradient=True)
    _, mask = cv2.threshold(blurredBrightness, 200, 1, cv2.THRESH_BINARY)

    # erodeSize and dilateSize must be >= 1
    erodeSize = max(erodeSize, 1)
    dilateSize = max(dilateSize, 1)

    eroded = cv2.erode(mask, np.ones((erodeSize, erodeSize)))
    mask = cv2.dilate(eroded, np.ones((dilateSize, dilateSize)))

    # display the image: hot-region edges overlaid on the colormapped frame,
    # upscaled so the tiny 80x60 Lepton frame is actually visible
    cv2.imshow('flir', cv2.resize(
        cv2.cvtColor(mask * edges, cv2.COLOR_GRAY2RGB) | frame,
        (640, 480), interpolation=cv2.INTER_CUBIC))

    # press 'q' to quit
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cv2.destroyAllWindows()