divSivasankaran / Context-Weighted-Majority-Algorithm

Context-aware fusion for multimodal biometrics
https://div1090.github.io/Context-Weighted-Majority-Algorithm/

How to generate the file lists (such as pie_files) in config.ini? #1

Open rlinhw opened 6 years ago

rlinhw commented 6 years ago

I want to run the demo, but I cannot find the files listed in config.ini (pie_file/sitw_file/pubfig_file/Samples_Paired_pie_full.csv/pubfig_epa.csv). How can I generate those files from the original datasets, and where can I find the predefined ones?
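For illustration, a minimal sketch of how such a file list could be generated, assuming the CSVs are simply per-sample image paths plus subject labels; the actual schema expected by config.ini is undocumented, so the column names and directory layout below are guesses:

```python
# Hypothetical sketch: build a Samples_Paired-style CSV from a dataset
# directory. The real schema used by config.ini is undocumented, so the
# columns below (path, subject_id) are assumptions for illustration only.
import csv
from pathlib import Path

def build_file_list(dataset_root: str, out_csv: str) -> None:
    """Walk <dataset_root>/<subject_id>/*.jpg and write one row per image."""
    rows = []
    for img in sorted(Path(dataset_root).glob("*/*.jpg")):
        rows.append([str(img), img.parent.name])  # parent dir = subject label
    with open(out_csv, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["path", "subject_id"])
        writer.writerows(rows)

build_file_list("datasets/PIE", "Samples_Paired_pie_full.csv")
```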

divSivasankaran commented 6 years ago

@rlinhw - I'm really sorry, but the C++ code base was not well structured, so I have been porting my code over to Python and haven't quite finished.

Give me a few days to resolve this? I'll upload setup instructions, along with a link to the datasets and notes on how to preprocess them!

In the meantime, if you don't mind working with the original code base (part of which is in the c++ branch of this repo), I can share it with you privately.

rlinhw commented 6 years ago

@div1090, thanks! I have just downloaded the c++ branch code, so please share the rest with me privately if possible. I'm currently investigating how to run biometrics-based multimodal (face, voice, touch, etc.) continuous authentication in a real mobile phone environment, so I'm very interested in your work!

divSivasankaran commented 6 years ago

Oh @rlinhw, this repo only contains the code for context-aware fusion; the experiments were run on a desktop (mainly because the experts were desktop-based). You can check the presentation deck here to see whether this is relevant for your real deployment.

Check this repository for a web-server-based Android app demoing continuous authentication using face alone (VGGFace descriptors). Ideally, if you can port the experts to run on the device, you can have any modality (gait, voice, face) authenticate the user, as in the sketch below.
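As a rough illustration of that idea, here is a minimal sketch of per-modality experts fused by a weighted vote; the Expert interface, the stubbed scores, and the hand-picked weights are placeholders, not this repo's actual fusion code:

```python
# Minimal sketch of multi-expert fusion for continuous authentication.
# The Expert interface and the weighted vote below are illustrative
# placeholders, not the actual implementation in this repo.
from typing import Callable, List, Tuple

# An expert maps a raw sample to a match score in [0, 1].
Expert = Callable[[object], float]

def fuse(experts: List[Tuple[Expert, float]], sample: object,
         threshold: float = 0.5) -> bool:
    """Accept the user if the weight-averaged expert score clears threshold."""
    total_w = sum(w for _, w in experts)
    score = sum(w * expert(sample) for expert, w in experts) / total_w
    return score >= threshold

# Example: three stubbed modalities with hand-picked weights.
face_expert = lambda s: 0.9   # e.g. a VGGFace descriptor match score
voice_expert = lambda s: 0.6
gait_expert = lambda s: 0.4
print(fuse([(face_expert, 0.5), (voice_expert, 0.3), (gait_expert, 0.2)],
           sample=None))  # True
```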

rlinhw commented 6 years ago

@div1090, yeah, I understand this repo only contains the context-aware fusion part, but I think that is the core of the multimodal mechanism. I also hope to build a real multimodal model on a mobile phone based on your BioSecure project, with some modifications. Moreover, I'm wondering whether it is possible to build a multimodal deep learning network by introducing additional layers, such as a fusion layer, on top of the Caffe framework. If so, we could translate it to a mobile framework such as ncnn easily.
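For concreteness, a "fusion layer" in this sense usually means concatenating per-modality embeddings and learning a joint classifier on top; in Caffe that would map to a Concat layer followed by an InnerProduct layer. A minimal NumPy sketch of the forward pass, only meant to show the shapes involved:

```python
# Sketch of feature-level fusion: concatenate per-modality embeddings and
# feed them through one fully connected scoring layer. This NumPy version
# stands in for a Caffe Concat + InnerProduct pair and is illustration only.
import numpy as np

def fusion_forward(face_emb, voice_emb, touch_emb, W, b):
    """Concatenate modality embeddings and apply a dense scoring layer."""
    fused = np.concatenate([face_emb, voice_emb, touch_emb])  # (d1+d2+d3,)
    return W @ fused + b  # logits over {genuine, impostor}

rng = np.random.default_rng(0)
face, voice, touch = rng.normal(size=128), rng.normal(size=64), rng.normal(size=32)
W, b = rng.normal(size=(2, 224)), np.zeros(2)  # 224 = 128 + 64 + 32
print(fusion_forward(face, voice, touch, W, b))
```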

divSivasankaran commented 6 years ago

That sounds like a very interesting approach! I'm already curious to hear more about what you find :)

The only reason I didn't try using a deep network for fusion was that it would make real-time online learning hard, if not impossible. But if there is a big enough dataset to cover real-world mobile usage, I'm sure a pre-trained deep model could work well in real life without any need for online learning!
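For context, what keeps online learning cheap here is the classic Weighted Majority update (Littlestone & Warmuth): a per-sample multiplicative penalty on experts that vote wrong, with no retraining. The sketch below is the textbook algorithm, not this repo's context-weighted variant:

```python
# Classic Weighted Majority update: each round, experts that predicted
# wrongly get their weight multiplied by beta < 1. This per-sample update
# is why online learning stays cheap compared with retraining a deep
# fusion network. The repo's context-weighted variant builds on this idea.

def weighted_majority_step(weights, predictions, truth, beta=0.5):
    """One online round: vote, then penalize the experts that were wrong."""
    yes = sum(w for w, p in zip(weights, predictions) if p)
    no = sum(w for w, p in zip(weights, predictions) if not p)
    vote = yes >= no
    new_weights = [w * (beta if p != truth else 1.0)
                   for w, p in zip(weights, predictions)]
    return vote, new_weights

# Example: three experts; the one that votes wrong gets down-weighted.
w = [1.0, 1.0, 1.0]
vote, w = weighted_majority_step(w, [True, True, False], truth=True)
print(vote, w)  # True [1.0, 1.0, 0.5]
```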