phamquangnam opened this issue 5 years ago
I am facing the same problem.
Did you solve it?
@phamquangnam you can use any off-the-shelf person/pedestrian detector (Faster R-CNN, YOLOv3, SSD, etc.) to generate the detections.
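For reference, a rough sketch of what running such a detector can look like with a TensorFlow 1.x frozen graph is below; the model path, the DetectorAPI wrapper, and the tensor handling are assumptions for illustration, not code from this repo:

import numpy as np
import tensorflow as tf  # TF 1.x API, matching the exported frozen graphs

class DetectorAPI:
    # Thin wrapper around a frozen object-detection graph (e.g. Faster R-CNN).
    def __init__(self, path_to_pb):
        self.graph = tf.Graph()
        with self.graph.as_default():
            graph_def = tf.GraphDef()
            with tf.gfile.GFile(path_to_pb, 'rb') as f:
                graph_def.ParseFromString(f.read())
            tf.import_graph_def(graph_def, name='')
        self.sess = tf.Session(graph=self.graph)

    def processFrame(self, image):
        # The exported graphs expect a batch of uint8 images.
        image_exp = np.expand_dims(image, axis=0)
        boxes, scores, classes, num = self.sess.run(
            [self.graph.get_tensor_by_name('detection_boxes:0'),
             self.graph.get_tensor_by_name('detection_scores:0'),
             self.graph.get_tensor_by_name('detection_classes:0'),
             self.graph.get_tensor_by_name('num_detections:0')],
            feed_dict={self.graph.get_tensor_by_name('image_tensor:0'): image_exp})
        # Boxes come back normalized as [ymin, xmin, ymax, xmax]; convert to pixels.
        h, w = image.shape[:2]
        boxes_px = [(int(b[0] * h), int(b[1] * w), int(b[2] * h), int(b[3] * w))
                    for b in boxes[0]]
        return boxes_px, scores[0].tolist(), classes[0].astype(int).tolist(), int(num[0])

odapi = DetectorAPI('faster_rcnn_inception_v2_coco_2018_01_28/frozen_inference_graph.pb')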
Hello, can you tell me how to save them in the required format? I am unable to get the right format. I am using a TensorFlow object detection model for the detections; please help.
@Knightfire1998 the format is as follows: for each frame there is an array of arrays containing the bounding-box coordinates for each person in the following format:
[top_right_x, top_right_y, bottom_left_x, bottom_left_y, confidence]
For example, in frame 0 there may be 2 persons, so the detection array will have shape (2, 5),
and in frame 10 there may be 1 person, so the detection array will have shape (1, 5).
I have used faster_rcnn_inception_v2_coco_2018_01_28
from the TensorFlow object detection model zoo. You can use this model if you are working with the person/pedestrian class.
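To make the expected structure concrete, here is a minimal sketch of the pickled object (coordinates and scores are made up): a plain Python list, as in the loop later in this thread, with one (N, 5) float array per frame, each row holding the four box coordinates in the order described above plus the confidence:

import pickle
import numpy as np

detections = [
    np.array([[100., 150., 180., 400., 0.98],    # frame 0: two persons -> shape (2, 5)
              [300., 140., 360., 390., 0.91]], dtype=np.float32),
    np.empty(shape=(0, 5), dtype=np.float32),     # frame 1: nobody detected -> shape (0, 5)
    np.array([[210., 160., 280., 410., 0.87]],    # frame 2: one person -> shape (1, 5)
             dtype=np.float32),
]

with open("detections.pickle", "wb") as f:
    pickle.dump(detections, f)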
@InzamamAnwar can you share a code snippet showing how you pickle it? I am using the box object to save the coordinates, but their array is of size (1, 5) only. Please help me with it, thank you, or email me if possible: reshikeshdhanrale@gmail.com
while True:
    r, img = cap.read()
    if img is not None:
        img = cv2.resize(img, (1280, 720))
        # print(img)
        boxes, scores, classes, num = odapi.processFrame(img)
    else:
        break

    # Visualization of the results of a detection.
    for i in range(len(boxes)):
        # Class 1 represents human
        if classes[i] == 1 and scores[i] > threshold:
            box = boxes[i]
            a = [box[0], box[1], box[2], box[3], 0]
            li.append(a)
            print("*****")
            cv2.rectangle(img, (box[1], box[0]), (box[3], box[2]), (255, 0, 0), 2)

    if len(li) != 0:
        ma.append(np.array(li[:], ndmin=2))
        print("-----")
        li = []

    cv2.imshow("preview", img)
    pickle_out = open("1.pickle", "wb")
    pickle.dump(ma[:], pickle_out)
    pickle_out.close()

    key = cv2.waitKey(1)
    if key & 0xFF == ord('q'):
        break

object1 = pd.read_pickle(r'1.pickle')
print(object1)
See the code and tell me what's wrong, please. After giving the pickle file 1.pickle to master.py as input, it shows a problem with the concat in the embeddings, because index[0] has dim (1, 5) and index[1] has (1, 6).
@Knightfire1998 I have written the while loop below and tested it with the data provided with this repo. Insert this loop instead of yours and it will work. If not, let me know.
det = []
while True:
    r, img = cap.read()
    if r:
        out = []
        boxes, scores, classes, num = odapi.processFrame(img)
        for i in range(len(boxes)):
            if classes[i] == 1 and scores[i] > threshold:
                box = boxes[i]
                out.append([box[1], box[0], box[3], box[2], scores[i]])
        if len(out) == 0:
            det.append(np.empty(shape=(0, 5), dtype=np.float32))
        else:
            det.append(np.asanyarray(out, dtype=np.float32))
    else:
        break
@InzamamAnwar I am getting the following error after running the loop. I did pickle.dump(det, pickle_out), then replaced 4p-c4 with my own video file and a pickle file with the same name, and I got the following error:
frame_id: 100
Traceback (most recent call last):
File "master.py", line 291, in
Have you used this type of snippet for pickle dumping?
with open("filename.pickle", "wb") as f:
    pickle.dump(det, f)
pickle_out = open("1.pickle","wb")
pickle.dump(det, pickle_out)
pickle_out.close()
I used this.
Won't it work? Should I change it?
I think it should work. Can you please use 4p-c4.avi, generate detections from it, and then run the code? If possible, you can upload your code to GitHub and I will look into it.
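A quick sanity check before feeding the file to master.py (a small sketch, assuming the detections were pickled as a list of per-frame arrays as in the loop above) is to reload it and confirm that every entry is a 2-D array with exactly 5 columns, which is what the concat in the embedding step expects:

import pickle
import numpy as np

with open("1.pickle", "rb") as f:
    det = pickle.load(f)

# Every frame must be an (N, 5) array; anything else will break the concat.
for frame_id, d in enumerate(det):
    assert isinstance(d, np.ndarray) and d.ndim == 2 and d.shape[1] == 5, \
        "frame %d has shape %s" % (frame_id, getattr(d, 'shape', None))
print("all %d frames look OK" % len(det))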
Okay, I am running that on the file 4p-c4. I guess it's a problem with the maximum number of entities; I am just checking that. I'll upload the code in a while. Thanks for helping, man @InzamamAnwar
I used 4p-c4 but got this error:
frame_id: 157
Traceback (most recent call last):
File "master.py", line 291, in
I have uploaded the code to my repo as "object detector with surveillance system"; please take a look. master.py is as it is, no major changes have been made.
@InzamamAnwar
@Knightfire1998 Can you email me the full version of this section? Thank you. hieus.lecong@gmail.com
Hi, how can I create the .pickle file for another demo video? Please help me.
Thanks!
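Pulling the pieces of this thread together, a minimal end-to-end sketch for another demo video would look like the following (it assumes the same DetectorAPI-style wrapper used above; the file names and threshold are placeholders):

import pickle
import cv2
import numpy as np

# odapi = DetectorAPI(...)  # detector wrapper, as in the snippets above
cap = cv2.VideoCapture("my_video.avi")   # any demo video
threshold = 0.7                          # detection confidence threshold

det = []
while True:
    r, img = cap.read()
    if not r:
        break
    boxes, scores, classes, num = odapi.processFrame(img)
    # Keep only the person class (id 1) above the threshold, one (N, 5) array per frame.
    out = [[boxes[i][1], boxes[i][0], boxes[i][3], boxes[i][2], scores[i]]
           for i in range(len(boxes))
           if classes[i] == 1 and scores[i] > threshold]
    det.append(np.asarray(out, dtype=np.float32) if out
               else np.empty(shape=(0, 5), dtype=np.float32))
cap.release()

with open("my_video.pickle", "wb") as f:
    pickle.dump(det, f)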