GW-Wang-thu / Temperature-Monitor-System-based-on-Infrared-Camera-and-Face-Detection

A body-temperature monitoring and alarm system based on an infrared camera and face detection

Couldn't find the class file #1

Open SaddamBInSyed opened 4 years ago

SaddamBInSyed commented 4 years ago

Hi @GW-Wang-thu ,

Thanks for your work.

I get the error below when I run FIITM.py:

"ModuleNotFoundError: No module named 'QTUI'"

Where can I find the module referenced by the code below? from QTUI.MainWindow import Ui_FIITM and from QTUI.Dialog import Ui_Dialog

Your help on this would be highly appreciated.

GW-Wang-thu commented 4 years ago

Thanks for reminding me! I'll upload the missing files later.

SaddamBInSyed commented 4 years ago

Waiting for the same.

I just want to try it to check the IR camera's temperature-detection accuracy during this lockdown.

GW-Wang-thu commented 4 years ago

I'm sorry, but this work is currently just a framework, as the thermal camera is still unavailable to me. Studies on the accuracy will be carried out in mid-May when I return to school.

SaddamBInSyed commented 4 years ago

Well noted.

It would be great if you could upload the missing files.

Thanks.

SaddamBInSyed commented 4 years ago

@GW-Wang-thu, thanks for adding the missing files.

I have tested them, but only the camera streaming is happening.

I think the provided code differs from the result shown in the posted UIDemo.jpg image.

But once again, thanks.

GW-Wang-thu commented 4 years ago

Thanks for raising this.

  1. Right now there is no temperature calculation; a new Temperature_Calculator_Class that performs a basic maximum-gray-value extraction will be uploaded later.

  2. I downloaded and re-tested the code, and no error occurred. The error probably occurs because you do not have a second camera that OpenCV can drive, or because you haven't changed the values correctly. I use my smartphone camera as the "IR Camera" in the code, driven by a smartphone network-camera app named "IP Camera", and here is the homepage. On line 21 of "Cameras_Class.py" you can see the IP address of my smartphone camera, which is reported by "IP Camera". You need to set the info of the two cameras correctly by changing the camera_id that is fed to cv2.VideoCapture(camera_id).
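
For illustration, a minimal sketch of how the two capture sources might be set up; the index 0 and the stream URL below are placeholders, not values taken from Cameras_Class.py:

import cv2

# Hypothetical sources: index 0 for the local RGB webcam, and an MJPEG
# stream URL of the kind the "IP Camera" smartphone app exposes.
rgb_camera_id = 0
ir_camera_id = "http://192.168.1.100:8080/video"  # placeholder address

rgb_cap = cv2.VideoCapture(rgb_camera_id)
ir_cap = cv2.VideoCapture(ir_camera_id)

if not (rgb_cap.isOpened() and ir_cap.isOpened()):
    raise RuntimeError("A camera failed to open; check the camera_id values.")

ok_rgb, frame_rgb = rgb_cap.read()
ok_ir, frame_ir = ir_cap.read()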

BabsD5 commented 4 years ago

Hello Wang, congratulations on the work you have done. But I have a question: while running your program, I saw that you call the TemCalculator function, and I would like to understand the "anchors" option you pass to it, FaceMaskDetector.outputs. Can you explain it to me? Thank you in advance.

GW-Wang-thu commented 4 years ago

Thanks for the question. Sorry, but I'm not quite clear about your problem... and I'm not an expert in detection. Did you mean how the anchors are generated, or how to change the anchor-generation parameters, such as density or zoom levels?

  1. The FaceMaskDetector was forked and simplified from @AIZOOTech. For details of the detection method, you can visit that homepage and refer to the code.

  2. In my code, a series of pre-generated anchors is saved in ./__model/anchors_exp.csv, and I simply reuse those anchors for each frame for the sake of simplicity. The anchors were generated with "anchor_generator.py" from AIZOOTech/FaceMaskDetection/utils/

Hope this helps.
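
As a hedged sketch of what loading and reusing those pre-generated anchors might look like; the CSV column layout here is an assumption, not confirmed against the repository:

import numpy as np

# Assumed layout: one anchor per row, e.g. [xmin, ymin, xmax, ymax] in
# normalized coordinates; the actual file format may differ.
anchors = np.loadtxt("./__model/anchors_exp.csv", delimiter=",")

# Reuse the same anchor set for every frame instead of regenerating it,
# which is the simplification described above.
anchors_exp = np.expand_dims(anchors, axis=0)  # [1, anchor_num, 4] for batch size 1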

BabsD5 commented 4 years ago

Thank you for your response, Wang. What I actually asked is: if we run your program with a thermal camera, will we get precise temperature values as output from the thermal camera? Thanks again.


BabsD5 commented 4 years ago

Besides, my last question, which you didn't understand correctly, is: in your main program you call the TemCalculator function and you pass "anchors = self.FaceDetector.outputs". I don't understand this... That's why I'm asking for more explanation, because I want to use it in another program.


GW-Wang-thu commented 4 years ago

Thanks again for following up.

Got it this time. An anchor indicates where a face is: it's a list of the four corner points of each area containing a face, and it's the output of the face detection, which can be found on lines 93 to 114 of FaceMaskDetection_Class.py. Again, for details you may refer to AIZOOTech.

In the code, the anchors are fed to TemCalculator to tell it where the faces (or foreheads, etc.) are in the corresponding thermal image.

Please import TemperatureCalculator_Class_1 instead of TemperatureCalculator_Class; then, when you run the program, you will get the maximum gray value of the detected forehead area in this temporary version, as shown in ./Images/Demo-2.png. If you feed the "infrared image" in as a temperature array and correctly register the RGB image with the thermal image, you'll get the maximum temperature of the forehead.
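
A rough sketch of the idea, assuming an anchor in the [class_id, conf, xmin, ymin, xmax, ymax] layout used by the detector's output list (see the code posted later in this thread) and a single-channel thermal frame; the 0.3 forehead fraction mirrors the ratio in that code:

import numpy as np

def forehead_max_gray(thermal_frame, anchor, forehead_frac=0.3):
    # anchor layout assumed: [class_id, conf, xmin, ymin, xmax, ymax]
    _, _, xmin, ymin, xmax, ymax = anchor
    # Take the top fraction of the face box as the forehead region
    # (numpy indexing is [row, col], i.e. [y, x]).
    y_forehead = int(ymin + forehead_frac * (ymax - ymin))
    region = thermal_frame[ymin:y_forehead, xmin:xmax]
    if region.size == 0:
        return None
    # If thermal_frame were a calibrated temperature array rather than
    # gray values, this maximum would be the forehead temperature.
    return float(np.max(region))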

BabsD5 commented 4 years ago

OK, thanks a lot, I understand now: the output comes from the line output.append(...), and that is what we send as the anchors to TemCalculator_Class1. Is it necessary to convert the thermal image to RGB, and if so, how can we do it? Because I understood from your remarks that we must register the thermal image with the RGB image... For detecting the face in the thermal image, what is the process? Thanks again for everything, you are really helping me.


BabsD5 commented 4 years ago

Besides, how do we get this gray value? In your program, you initialize the cameras, then read frames, then send frameDC to FaceMaskDetector and frameIC to TemCalculator. So is there a conversion of frameIC to grayscale? Thanks again.


BabsD5 commented 4 years ago

I will send you my code so that you can help me with calculating the temperature. Everything works fine except that part: I detect the person and their face to see whether they are wearing a mask, but the temperature is giving me trouble. I will send it to you.


BabsD5 commented 4 years ago

import argparse
import datetime
import imutils
import time
import cv2
import os
import glob
import math
import pandas as pd
import numpy as np
from imutils import paths
from scipy.spatial import distance as dist
from imutils import perspective
from imutils import contours
from imutils.object_detection import non_max_suppression
from PIL import Image
from utils.anchor_generator import generate_anchors
from utils.anchor_decode import decode_bbox
from utils.nms import single_class_non_max_suppression
from load_model.pytorch_loader import load_pytorch_model, pytorch_inference

model = load_pytorch_model('/home/babs/pythonmm/models/model360.pth')

# anchor configuration
# feature_map_sizes = [[33, 33], [17, 17], [9, 9], [5, 5], [3, 3]]
feature_map_sizes = [[45, 45], [23, 23], [12, 12], [6, 6], [4, 4]]
anchor_sizes = [[0.04, 0.056], [0.08, 0.11], [0.16, 0.22], [0.32, 0.45], [0.64, 0.72]]
anchor_ratios = [[1, 0.62, 0.42]] * 5

# generate anchors
anchors = generate_anchors(feature_map_sizes, anchor_sizes, anchor_ratios)

# for inference the batch size is 1 and the model output shape is [1, N, 4],
# so we expand the anchors to [1, anchor_num, 4]
anchors_exp = np.expand_dims(anchors, axis=0)

def Temperature(IRImage, Anchors, Alarm_Tem=37, DistanceCorrectionFlag=0, GestureCorrectionFlag=0):
    Frame = IRImage
    LabeledImage = IRImage
    CareAreas = []
    Foreheads = []
    Temperature = []
    AlarmFlags = []
    for i in range(len(Anchors)):
        ForeheadAnchor = int(Anchors[i][3] + 0.3 * (Anchors[i][5] - Anchors[i][3]))  # ymin + 0.3 * (ymax - ymin)
        Foreheads.append(np.array(Frame[Anchors[i][2]+5:Anchors[i][4]-5,
                                        Anchors[i][3]+5:ForeheadAnchor-5]))  # xmin:xmax, ymin:y1
        if DistanceCorrectionFlag == 1:
            pass
        if GestureCorrectionFlag == 1:
            pass
        Temperature.append([np.max(Foreheads[i]),  # Tem
                            (np.unravel_index(np.argmax(Foreheads[i]), Foreheads[i].shape)[0] + Anchors[i][2] + 5,
                             np.unravel_index(np.argmax(Foreheads[i]), Foreheads[i].shape)[1] + Anchors[i][3] + 5)])  # xmin+idx, ymin+idy
        Frame = cv2.rectangle(Frame, (Anchors[i][2], Anchors[i][3]), (Anchors[i][4], Anchors[i][5]), [255, 0, 0], 2)
        if Temperature[i][0] >= Alarm_Tem:
            color = [255, 0, 0]
            AlarmFlags.append(1)
        else:
            color = [0, 255, 0]
            AlarmFlags.append(0)
        Frame = cv2.rectangle(Frame, (Anchors[i][2], Anchors[i][3]), (Anchors[i][4], ForeheadAnchor), color, 2)
        Frame = cv2.circle(Frame, Temperature[i][1], 1, color, 4)
        Frame = cv2.putText(Frame, str(Temperature[i][0]), Temperature[i][1], cv2.FONT_HERSHEY_SIMPLEX, 0.8, color)
    return Frame

def merge_picture(img1, img2, dir=0):
    if img1.any() and img2.any():
        shape = img1.shape
        cols = shape[1]
        rows = shape[0]
        channels = shape[2]
        if dir == 0:
            dst = np.zeros((rows * 2 + 2, cols, channels), np.uint8)
            dst[0:rows, 0:cols, :] = img1[0:rows, 0:cols, :]
            dst[rows+2:rows*2+2, 0:cols, :] = img2[0:rows, 0:cols, :]
        if dir == 1:
            dst = np.zeros((rows, cols * 2 + 2, channels), np.uint8)
            dst[0:rows, 0:cols, :] = img1[0:rows, 0:cols, :]
            dst[0:rows, cols+2:cols*2+2, :] = img2[0:rows, 0:cols, :]
        return dst

id2class = {0: 'Mask', 1: 'NoMask'}

def inference(image,
              conf_thresh=0.5,
              iou_thresh=0.4,
              target_shape=(160, 160),
              draw_result=True,
              show_result=True):
    '''
    Main function of detection inference
    :param image: 3D numpy array of image
    :param conf_thresh: the min threshold of classification probability.
    :param iou_thresh: the IOU threshold of NMS
    :param target_shape: the model input size.
    :param draw_result: whether to draw bounding boxes on the image.
    :param show_result: whether to display the image.
    :return:
    '''

    image = np.copy(image)
    output_info = []
    height, width, _ = image.shape
    image_resized = cv2.resize(image, target_shape)
    image_np = image_resized / 255.0  # normalize to 0~1
    image_exp = np.expand_dims(image_np, axis=0)
    image_transposed = image_exp.transpose((0, 3, 1, 2))

    y_bboxes_output, y_cls_output = pytorch_inference(model, image_transposed)
    # remove the batch dimension, for batch is always 1 for inference.
    y_bboxes = decode_bbox(anchors_exp, y_bboxes_output)[0]
    y_cls = y_cls_output[0]
    # To speed up, do single-class NMS, not multi-class NMS.
    bbox_max_scores = np.max(y_cls, axis=1)
    bbox_max_score_classes = np.argmax(y_cls, axis=1)

    # keep_idxs are the surviving bounding boxes after NMS.
    keep_idxs = single_class_non_max_suppression(y_bboxes,
                                                 bbox_max_scores,
                                                 conf_thresh=conf_thresh,
                                                 iou_thresh=iou_thresh,
                                                 )

    for idx in keep_idxs:
        conf = float(bbox_max_scores[idx])
        class_id = bbox_max_score_classes[idx]
        bbox = y_bboxes[idx]
        # clip the coordinates to avoid values exceeding the image boundary.
        xmin = max(0, int(bbox[0] * width))
        ymin = max(0, int(bbox[1] * height))
        xmax = min(int(bbox[2] * width), width)
        ymax = min(int(bbox[3] * height), height)

        if draw_result:
            if class_id == 0:
                color = (0, 255, 0)
            else:
                color = (255, 0, 0)
            cv2.rectangle(image, (xmin, ymin), (xmax, ymax), color, 2)
            cv2.putText(image, "%s: %.2f" % (id2class[class_id], conf), (xmin + 2, ymin - 2),
                        cv2.FONT_HERSHEY_SIMPLEX, 0.8, color)
        output_info.append([class_id, conf, xmin, ymin, xmax, ymax])

    if show_result:
        Image.fromarray(image).show()
    return output_info

def detect_people(frame):
    """
    detect humans using HOG descriptor
    Args:
        frame:
    Returns:
        processed frame
    """
    (rects, weights) = hog.detectMultiScale(frame, winStride=(4, 4), padding=(8, 8), scale=1.05)
    rects = non_max_suppression(rects, probs=None, overlapThresh=0.65)
    for (x, y, w, h) in rects:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 0, 255), 2)
    return frame

def detect_face(frame):
    """
    detect human faces in image using haar-cascade
    Args:
        frame:
    Returns:
        coordinates of detected faces
    """
    faces = face_cascade.detectMultiScale(frame, 1.3, 5, 0, (20, 20))
    return faces

def draw_faces(frame, faces):
    """
    draw rectangle around detected faces
    Args:
        frame:
        faces:
    Returns:
        face-drawn processed frame
    """
    for (x, y, w, h) in faces:
        xA = x
        yA = y
        xB = x + w
        yB = y + h
        cv2.rectangle(frame, (xA, yA), (xB, yB), (255, 0, 0), 2)
    return frame

if __name__ == '__main__':

    conf_thresh = 0.5
    output_info = []
    Alarm = 37.2
    Distance = 0
    Gesture = 0
    subject_label = 1
    ap = argparse.ArgumentParser()
    ap.add_argument("-v", "--video", help="path to the video file")
    ap.add_argument("-a", "--min-area", type=int, default=500, help="minimum area size")
    args = vars(ap.parse_args())
    font = cv2.FONT_HERSHEY_SIMPLEX
    cascade_path = "/home/babs/Human-detection-and-Tracking/face_cascades/haarcascade_frontalface_default.xml"
    face_cascade = cv2.CascadeClassifier(cascade_path)
    hog = cv2.HOGDescriptor()
    hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())
    if args.get("video", None) is None:
        camera = cv2.VideoCapture(-1)
        #camera1 = cv2.VideoCapture(IDCamera)
        time.sleep(0.25)
    else:
        camera = cv2.VideoCapture(args["video"])
    firstFrame = None
    (grabbed, frame) = camera.read()
    print(frame.shape)
    frame_r = imutils.resize(frame, width=min(800, frame.shape[1]))
    gray = cv2.cvtColor(frame_r, cv2.COLOR_BGR2GRAY)
    gray = cv2.GaussianBlur(gray, (21, 21), 0)
    print(frame_r.shape)

    height = camera.get(cv2.CAP_PROP_FRAME_HEIGHT)
    #height1 = camera1.get(cv2.CAP_PROP_FRAME_HEIGHT)
    width = camera.get(cv2.CAP_PROP_FRAME_WIDTH)
    #width1 = camera1.get(cv2.CAP_PROP_FRAME_WIDTH)
    fps = camera.get(cv2.CAP_PROP_FPS)
    #fps1 = camera1.get(cv2.CAP_PROP_FPS)
    #fourcc = cv2.VideoWriter_fourcc(*'XVID')
    #writer = cv2.VideoWriter(output_video_name, fourcc, int(fps), (int(width), int(height)))
    total_frames = camera.get(cv2.CAP_PROP_FRAME_COUNT)
    #total_frames1 = camera1.get(cv2.CAP_PROP_FRAME_COUNT)
    min_area = (2000 / 800) * frame_r.shape[1]
    idx = 0
    status = True
    while status:
        (grabbed, frame) = camera.read()
        #statut, img = camera.read()
        #thermal_frame = camera.read()
        text = "Unoccupied"
        if not grabbed:
            break
        #img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
        frame = imutils.resize(frame, width=min(800, frame.shape[1]))
        frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        gray = cv2.GaussianBlur(gray, (21, 21), 0)
        if firstFrame is None:
            firstFrame = gray
            continue
        frameDelta = cv2.absdiff(firstFrame, gray)
        thresh = cv2.threshold(frameDelta, 25, 255, cv2.THRESH_BINARY)[1]
        thresh = cv2.dilate(thresh, None, iterations=2)
        (cnts, _) = cv2.findContours(thresh.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        temp = 0
        for c in cnts:
            if cv2.contourArea(c) > args["min_area"]:
                temp = 1
                #continue
            text = "Occupied"
        if temp == 1:
            inference(frame, conf_thresh, iou_thresh=0.5, target_shape=(360, 360), draw_result=True, show_result=False)
            #img = inference(frame, conf_thresh, iou_thresh=0.5, target_shape=(360, 360), draw_result=True, show_result=False)
            #Temperature(thermal_frame, Anchors=img, Alarm_Tem=Alarm, DistanceCorrectionFlag=Distance, GestureCorrectionFlag=Gesture)
            idx += 1
            frame_processed = detect_people(frame)
            faces = detect_face(gray)
            if len(faces) > 0:
                frame_processed = draw_faces(frame_processed, faces)
        cv2.putText(frame, "Room Status: {}".format(text), (10, 20), cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 0, 255), 2)
        cv2.putText(frame, datetime.datetime.now().strftime("%A %d %B %Y %I:%M:%S%p"),
                    (10, frame.shape[0] - 10), cv2.FONT_HERSHEY_SIMPLEX, 0.35,
                    (0, 0, 255), 1)
        temp1 = np.array(temp)
        #mergeframe = merge_picture(frame, img, dir=1)
        cv2.imshow("Security Feed", frame)
        cv2.imshow("Frame Delta", frameDelta)
        cv2.imshow("Thresh", thresh)
        #cv2.imshow('image', img[:, :, ::-1])
        key = cv2.waitKey(1) & 0xFF
        if key == ord("q"):
            break
    camera.release()
    cv2.destroyAllWindows()

It's my code.

BabsD5 commented 4 years ago

That's the code I was telling you about.