imsoo / fight_detection

Real time Fight Detection Based on 2D Pose Estimation and RNN Action Recognition

Is this repo working? #2

Open srika91 opened 4 years ago

srika91 commented 4 years ago

When I run the command ./worker cfg/openpose.cfg weights/openpose.weights -gpu 0 -pose

I get this error: "libdarknet.so: cannot open shared object file: No such file or directory". But I have compiled darknet the way described in the repo. It looks like the fight_detection Makefile is not updated with the correct libraries?

imsoo commented 4 years ago

@srika91 Hi,

Thank you for your interest in this repo.

Please try this and let me know if it doesn't work.

srika91 commented 4 years ago

Hello Imsoo,

Thanks for the quick reply. I was able to resolve the above error with some modifications. Below is the status of the sink, ventilator, worker, and client.

Worker:

    Worker | Send To Sink | SEQ : 557 LEN : 25606 forward fee: 13.2492ms
    Darknet | Detect | SEQ : 558 Time : 58.0658ms
    Worker | Send To Sink | SEQ : 558 LEN : 25668 forward fee: 13.2613ms
    Darknet | Detect | SEQ : 559 Time : 56.8428ms
    Worker | Send To Sink | SEQ : 559 LEN : 25744 forward fee: 13.2072ms
    Darknet | Detect | SEQ : 560 Time : 75.6282ms
    Worker | Send To Sink | SEQ : 560 LEN : 25666 forward fee: 13.2289ms
    Segmentation fault (core dumped)

Ventilator:

    Ventilator | Send To Worker | SEQ : 3007 LEN : 14819
    Ventilator | Recv From Client | SEQ : 3008 LEN : 14893
    Ventilator | Send To Worker | SEQ : 3008 LEN : 14893
    Ventilator | Recv From Client | SEQ : 3009 LEN : 14852
    Ventilator | Send To Worker | SEQ : 3009 LEN : 14852
    Ventilator | Recv From Client | SEQ : 3010 LEN : 14952
    Ventilator | Send To Worker | SEQ : 3010 LEN : 14952
    Ventilator | Recv From Client | SEQ : 3011 LEN : 14923
    Ventilator | Send To Worker | SEQ : 3011 LEN : 14923
    Ventilator | Recv From Client | SEQ : 3012 LEN : 14967
    Ventilator | Send To Worker | SEQ : 3012 LEN : 14967
    Ventilator | Recv From Client | SEQ : 3013 LEN : 15034
    Ventilator | Send To Worker | SEQ : 3013 LEN : 15034
    Ventilator | Recv From Client | SEQ : 3014 LEN : 15340
    Ventilator | Send To Worker | SEQ : 3014 LEN : 15340
    Ventilator | Recv From Client | SEQ : 3015 LEN : 15360
    Ventilator | Send To Worker | SEQ : 3015 LEN : 15360
    Ventilator | Recv From Client | SEQ : 3016 LEN : 14333
    Ventilator | Send To Worker | SEQ : 3016 LEN : 14333
    Ventilator | Recv From Client | SEQ : 3017 LEN : 14919
    Ventilator | Send To Worker | SEQ : 3017 LEN : 14919
    Ventilator | Recv From Client | SEQ : 3018 LEN : 15328
    Ventilator | Send To Worker | SEQ : 3018 LEN : 15328
    Ventilator | Recv From Client | SEQ : 3019 LEN : 15322
    Ventilator | Send To Worker | SEQ : 3019 LEN : 15322
    Ventilator | Recv From Client | SEQ : 3020 LEN : 15047
    Ventilator | Send To Worker | SEQ : 3020 LEN : 15047
    Ventilator | Recv From Client | SEQ : 3021 LEN : 14920
    Ventilator | Send To Worker | SEQ : 3021 LEN : 14920

Sink:

    Sink | Recv From Worker | SEQ : 553 LEN : 25138
    Sink | Pub To Client | SEQ : 553 LEN : 25138
    Sink | Recv From Worker | SEQ : 554 LEN : 25140
    Sink | Pub To Client | SEQ : 554 LEN : 25140
    Sink | Recv From Worker | SEQ : 555 LEN : 25295
    Sink | Pub To Client | SEQ : 555 LEN : 25295
    Sink | Recv From Worker | SEQ : 556 LEN : 25460
    Sink | Pub To Client | SEQ : 556 LEN : 25460
    Sink | Recv From Worker | SEQ : 557 LEN : 25606
    Sink | Pub To Client | SEQ : 557 LEN : 25606
    Sink | Recv From Worker | SEQ : 558 LEN : 25668
    Sink | Pub To Client | SEQ : 558 LEN : 25668
    Sink | Recv From Worker | SEQ : 559 LEN : 25744
    Sink | Pub To Client | SEQ : 559 LEN : 25744
    Sink | Recv From Worker | SEQ : 560 LEN : 25666
    Sink | Pub To Client | SEQ : 560 LEN : 25666

Client:

R : 1 | C : 1 | F : 0 | T : 3022 : 560 R : 1 | C : 1 | F : 0 | T : 3022 : 560 R : 1 | C : 1 | F : 0 | T : 3022 : 560 R : 1 | C : 1 | F : 0 | T : 3022 : 560 R : 1 | C : 1 | F : 0 | T : 3022 : 560 R : 1 | C : 1 | F : 0 | T : 3022 : 560 R : 1 | C : 1 | F : 0 | T : 3022 : 560 R : 1 | C : 1 | F : 0 | T : 3022 : 560 R : 1 | C : 1 | F : 0 | T : 3022 : 560 R : 1 | C : 1 | F : 0 | T : 3022 : 560 R : 1 | C : 1 | F : 0 | T : 3022 : 560 R : 1 | C : 1 | F : 0 | T : 3022 : 560 R : 1 | C : 1 | F : 0 | T : 3022 : 560 R : 1 | C : 1 | F : 0 | T : 3022 : 560 R : 1 | C : 1 | F : 0 | T : 3022 : 560 R : 1 | C : 1 | F : 0 | T : 3022 : 560 R : 1 | C : 1 | F : 0 | T : 3022 : 560

Commands I used:

./worker cfg/yolov3.cfg weights/yolov3.weights names/cooc.names -gpu 0 -thresh 0.2
./darknet_client -addr 0.0.0.0 -vid fight.mp4 -out_vid -dont_show

Any ideas about what is going on here?

imsoo commented 4 years ago

@srika91 Hi, this repo doesn't support YOLO. Could you try the command below?

./worker cfg/openpose.cfg weights/openpose.weights -gpu 0

If it doesn't work, please check the following and let me know; it would be helpful to figure out the problem.

  1. Please retry and check whether it gives the same result (fault at SEQ: 560).

  2. Try another video and check the output video (using the -out_vid option).

srika91 commented 4 years ago

To run fight detection, is simply running action.py enough?

imsoo commented 4 years ago

You need to run the worker, ventilator, sink, and action.py:

client
  ↓
ventilator
  ↓
worker
  ↓
sink  <-> action.py
  ↓
client
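
For reference, here is a minimal, runnable Python (pyzmq) sketch of the same ZeroMQ ventilator/worker/sink pattern, just to make the data flow concrete. The ports, JSON message format, and use of threads are illustrative assumptions; the actual server consists of separate C++ processes.

    # Minimal sketch of the ventilator -> worker -> sink pattern (illustration only).
    import threading
    import zmq

    ctx = zmq.Context()
    N_FRAMES = 9

    def ventilator():
        push = ctx.socket(zmq.PUSH)      # round-robins frames across connected workers
        push.bind("tcp://127.0.0.1:5557")
        for seq in range(N_FRAMES):
            push.send_json({"seq": seq, "frame": "<jpeg bytes>"})
        push.close()

    def worker(wid):
        pull = ctx.socket(zmq.PULL)      # frames from the ventilator
        pull.RCVTIMEO = 1000             # give up after 1 s of silence so the demo exits
        pull.connect("tcp://127.0.0.1:5557")
        push = ctx.socket(zmq.PUSH)      # results to the sink
        push.connect("tcp://127.0.0.1:5558")
        while True:
            try:
                msg = pull.recv_json()
            except zmq.Again:
                break                    # no more frames
            msg["pose"] = "keypoints from worker %d" % wid   # stand-in for pose estimation
            push.send_json(msg)
        pull.close()
        push.close()

    def sink():
        pull = ctx.socket(zmq.PULL)      # results arrive from workers, possibly out of order
        pull.bind("tcp://127.0.0.1:5558")
        results = {}
        while len(results) < N_FRAMES:
            msg = pull.recv_json()
            results[msg["seq"]] = msg
        for seq in sorted(results):      # re-sequence before sending on to the client
            print("Sink | SEQ :", seq, "|", results[seq]["pose"])
        pull.close()

    sink_t = threading.Thread(target=sink)
    workers = [threading.Thread(target=worker, args=(i,)) for i in range(3)]
    sink_t.start()
    for t in workers:
        t.start()
    ventilator()
    for t in workers:
        t.join()
    sink_t.join()
    ctx.term()
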
srika91 commented 4 years ago

Sure, I will check and revert in an hour or two.

srika91 commented 4 years ago

If I run like above ,I am not getting any hit in action.py. Client: R : 0 | C : 4458 | F : 0 | T : 4459 : 1 R : 0 | C : 4458 | F : 0 | T : 4459 : 1 R : 0 | C : 4458 | F : 0 | T : 4459 : 1 R : 0 | C : 4458 | F : 0 | T : 4459 : 1 R : 0 | C : 4458 | F : 0 | T : 4459 : 1 R : 0 | C : 4458 | F : 0 | T : 4459 : 1 R : 0 | C : 4458 | F : 0 | T : 4459 : 1 R : 0 | C : 4458 | F : 0 | T : 4459 : 1 R : 0 | C : 4458 | F : 0 | T : 4459 : 1 R : 0 | C : 4458 | F : 0 | T : 4459 : 1 R : 0 | C : 4458 | F : 0 | T : 4459 : 1 R : 0 | C : 4458 | F : 0 | T : 4459 : 1 R : 0 | C : 4458 | F : 0 | T : 4459 : 1 R : 0 | C : 4458 | F : 0 | T : 4459 : 1 R : 0 | C : 4458 | F : 0 | T : 4459 : 1 R : 0 | C : 4458 | F : 0 | T : 4459 : 1 R : 0 | C : 4458 | F : 0 | T : 4459 : 1

Ventilator:

    tor | Send To Worker | SEQ : 4448 LEN : 15618
    Ventilator | Recv From Client | SEQ : 4449 LEN : 15189
    Ventilator | Send To Worker | SEQ : 4449 LEN : 15189
    Ventilator | Recv From Client | SEQ : 4450 LEN : 15073
    Ventilator | Send To Worker | SEQ : 4450 LEN : 15073
    Ventilator | Recv From Client | SEQ : 4451 LEN : 15080
    Ventilator | Send To Worker | SEQ : 4451 LEN : 15080
    Ventilator | Recv From Client | SEQ : 4452 LEN : 15035
    Ventilator | Send To Worker | SEQ : 4452 LEN : 15035
    Ventilator | Recv From Client | SEQ : 4453 LEN : 15115
    Ventilator | Send To Worker | SEQ : 4453 LEN : 15115
    Ventilator | Recv From Client | SEQ : 4454 LEN : 15114
    Ventilator | Send To Worker | SEQ : 4454 LEN : 15114
    Ventilator | Recv From Client | SEQ : 4455 LEN : 15011
    Ventilator | Send To Worker | SEQ : 4455 LEN : 15011
    Ventilator | Recv From Client | SEQ : 4456 LEN : 14965
    Ventilator | Send To Worker | SEQ : 4456 LEN : 14965
    Ventilator | Recv From Client | SEQ : 4457 LEN : 14972
    Ventilator | Send To Worker | SEQ : 4457 LEN : 14972
    Ventilator | Recv From Client | SEQ : 4458 LEN : 14914
    Ventilator | Send To Worker | SEQ : 4458 LEN : 14914

Sink:

    Sink | Recv From Worker | SEQ : 2 LEN : 19244
    Sink | Recv From Worker | SEQ : 3 LEN : 19401
    Sink | Recv From Worker | SEQ : 4 LEN : 19185
    Sink | Recv From Worker | SEQ : 7 LEN : 18958
    Sink | Recv From Worker | SEQ : 8 LEN : 18915
    Sink | Recv From Worker | SEQ : 9 LEN : 18868
    Sink | Recv From Worker | SEQ : 8 LEN : 18516

Worker:

    WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/initializers.py:143: calling RandomNormal.__init__ (from tensorflow.python.ops.init_ops) with dtype is deprecated and will be removed in a future version.
    Instructions for updating: Call initializer instance with the dtype argument instead of passing it to the constructor
    WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/init_ops.py:97: calling GlorotUniform.__init__ (from tensorflow.python.ops.init_ops) with dtype is deprecated and will be removed in a future version.
    Instructions for updating: Call initializer instance with the dtype argument instead of passing it to the constructor
    WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/init_ops.py:1251: calling VarianceScaling.__init__ (from tensorflow.python.ops.init_ops) with dtype is deprecated and will be removed in a future version.
    Instructions for updating: Call initializer instance with the dtype argument instead of passing it to the constructor
    WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/init_ops.py:97: calling Orthogonal.__init__ (from tensorflow.python.ops.init_ops) with dtype is deprecated and will be removed in a future version.
    Instructions for updating: Call initializer instance with the dtype argument instead of passing it to the constructor
    WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/init_ops.py:97: calling Zeros.__init__ (from tensorflow.python.ops.init_ops) with dtype is deprecated and will be removed in a future version.
    Instructions for updating: Call initializer instance with the dtype argument instead of passing it to the constructor
    model loaded ...

CUDA_VISIBLE_DEVICES=2 python3 action.py
./darknet_client -addr 0.0.0.0 -vid fight.mp4 -out_vid -dont_show

srika91 commented 4 years ago

Is there test code for single-image or video inference?

imsoo commented 4 years ago

Sorry, there is no test code.

I think the connection between sink and action is not established.

  1. Check the ventilator -> worker -> sink pipeline.

Please change sink.cpp as shown below and retry. If you can get a result, then something is wrong in the sink <-> action pipeline.

From https://github.com/imsoo/fight_detection/blob/4eb493056c744a88b2255054130c9b299fb7ad0b/server/src/sink.cpp#L252-L261

To

        std::string hists = p_ss.str();
        if (hists.size() > 0) {
          // skip the round trip to action.py (RNN) for this test
          // zmq_send(sock_rnn, hists.c_str(), hists.size(), 0); /* here */
          // zmq_recv(sock_rnn, rnn_buf, 100, 0);  /* here */

          // action update: hard-code action index 0 for every track
          for (unsigned int i = 0; i < track_data.size(); i++) {
            track_data[i].p->set_action(0);     /* here */
          }
        }

  2. Check the ventilator -> worker -> sink <-> action pipeline.

First, restore the connection between sink and action (undo the change above). Then insert a print statement before and after line 56 of action.py (msg = socket.recv()) and check the standard output.

https://github.com/imsoo/fight_detection/blob/4eb493056c744a88b2255054130c9b299fb7ad0b/server/action.py#L55-L63
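
For reference, the check above amounts to something like the following sketch. The REP socket type, endpoint, and reply handling are assumptions about action.py (the linked lines are authoritative); only the two print() calls around socket.recv() are the actual suggestion.

    # Illustrative only: a ZMQ receive loop with debug prints around socket.recv().
    import zmq

    context = zmq.Context()
    socket = context.socket(zmq.REP)     # assumed socket type; see action.py for the real setup
    socket.bind("tcp://*:5570")          # hypothetical sink <-> action endpoint

    while True:
        print("action.py | waiting for pose histories from sink ...")
        msg = socket.recv()              # the recv around line 56 of action.py
        print("action.py | received %d bytes from sink" % len(msg))
        # ... run the RNN on the received pose histories ...
        socket.send(b"0")                # placeholder reply (action index)

If nothing is printed at all while the client is running, the sink is most likely not reaching action.py.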

srika91 commented 4 years ago

Hello Imsoo,

I will try your suggestions. Keras: 2.2.4-tf, TensorFlow: 1.14.0.

srika91 commented 4 years ago

Can you explain more about point 1 and point 2? I am not familiar with these ventilator and sink concepts. Is there any order to be maintained when starting them?

Trying the changes you mentioned, I am not seeing anything in the standard output of sink. I am running everything on a single machine.

Also, please clarify everything we have to run to get fight detection.

I am running:

  1. ./ventilator
  2. python3 action.py
  3. ./sink
  4. ./darknet_client

imsoo commented 4 years ago

For example,

  1. The client sends the webcam video stream to the server (Ventilator).

    Client -- Frame 4 -- Frame 3 -- Frame 2 -- Frame 1 --> Server (Ventilator)
  2. The Ventilator receives frames and distributes them to Workers. (In this case, assume that three Worker processes are running.)

    Ventilator -- Frame 1 --> Worker 1   
    Ventilator -- Frame 2 --> Worker 2
    Ventilator -- Frame 3 --> Worker 3
    Ventilator -- Frame 4 --> Worker 1 
    (Repeat)
  3. Workers process frames (neural network computation) and send the results to the Sink. (Note that Worker 3 could finish faster than Worker 2.)

    Worker 1 -- Processed Frame 1 --> Sink   
    Worker 3 -- Processed Frame 3 --> Sink   
    Worker 2 -- Processed Frame 2 --> Sink   
  4. The Sink collects results back from the workers and sends them to the client in sequence.

    Workers -- Frame 2 -- Frame 3 -- Frame 1 --> Sink -- Frame 3 -- Frame 2 -- Frame 1 --> Client

  5. Check the ventilator -> worker -> sink pipeline.

First, we need to check this pipeline to figure out the problem. For this, we have to modify some code in sink.cpp (worker, ventilator, and action don't need to be modified); this change removes action.py from the pipeline.

After modifying, rebuild and run the server processes (ventilator, worker, sink), and then start the client program:

  1. run ventilator
  2. run worker
  3. run sink
  4. run darknet_client

If you can get a result (Pose Estimation), move on to step 2 from my earlier comment (checking the sink <-> action pipeline).
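
As an extra check once you move to step 2 (this sketch is an addition to the thread, not repo code; the endpoint and socket type are assumptions), you can poke action.py directly with a tiny ZMQ REQ client and watch whether the recv print added above fires. The dummy payload will not produce a meaningful action; it only shows whether the connection is established.

    # Connectivity probe for the sink <-> action link (assumed REQ/REP over TCP).
    import zmq

    ctx = zmq.Context()
    req = ctx.socket(zmq.REQ)
    req.SNDTIMEO = 3000                  # don't block forever if nothing is listening
    req.RCVTIMEO = 3000
    req.LINGER = 0
    req.connect("tcp://127.0.0.1:5570")  # hypothetical action.py endpoint

    try:
        req.send(b"ping")                # dummy payload, just to trigger socket.recv()
        print("reply from action.py:", req.recv())
    except zmq.Again:
        print("no reply within 3 s -- check whether the recv print fired on the action.py side")
    finally:
        req.close()
        ctx.term()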

If you need more explanation, feel free to add comments.

WooXinyi commented 2 years ago
  • Ventilator : A ventilator that distributes tasks that can be done in parallel
  • Worker : A set of workers that process tasks (Pose Estimation)
  • Sink : A sink that collects results back from the worker processes

Please let me know your tensorflow and keras version, thanks!