MIT IEEE URTC 2023. GSET 2023. Repository for "SeBRUS: Mitigating Data Poisoning in Crowdsourced Datasets with Blockchain". Uses Ethereum smart contracts to mitigate data-poisoning attacks on crowdsourced datasets.
This is a somewhat complicated feature, so we will need a few people to work on it. We will be following this example from the Adversarial Robustness Toolbox (ART). To perform the Activation Defence, we need a few things: `ActivationDefence(classifier, x_train, y_train)` requires a model and a dataset, where `x_train` holds the images and `y_train` holds the labels. It returns a list identifying which training examples are poisoned.
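For reference, here is a minimal sketch of that call, assuming a saved Keras model and NumPy arrays already on disk. The file paths are placeholders, and the `detect_poison` keyword arguments shown are ART's documented defaults, not requirements of our feature:

```python
import numpy as np
from tensorflow import keras
from art.estimators.classification import KerasClassifier
from art.defences.detector.poison import ActivationDefence

# Hypothetical paths; in our app the model and data come from the frontend.
model = keras.models.load_model("model.h5")
x_train = np.load("x_train.npy")   # images, shape (N, H, W, C)
y_train = np.load("y_train.npy")   # one-hot labels, shape (N, num_classes)

# ART's KerasClassifier wraps a compiled Keras model.
# (Under TF2 this wrapper may require tf.compat.v1.disable_eager_execution().)
classifier = KerasClassifier(model=model)

defence = ActivationDefence(classifier, x_train, y_train)

# detect_poison clusters the model's activations per class and flags the
# anomalous cluster as suspicious; these kwargs are ART's defaults.
report, is_clean = defence.detect_poison(nb_clusters=2, nb_dims=10, reduce="PCA")
# is_clean[i] == 1 means example i looks clean; 0 means suspected poison.
```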
[ ] (frontend) Add a UI for model uploads so users can upload a saved copy of their model. We will also need a boolean field on the `Image` component indicating whether the image is poisoned.
[ ] (backend) The endpoint will also receive a list of image objects from the frontend so they can be passed into the model. There may be ways to optimize this in the future, such as keeping a local copy of the images stored on the blockchain, but for now assume we need to convert base64 strings into image files. You will also need to construct a custom dataloader for the images downloaded from the frontend (see the sketch after this list): https://keras.io/api/data_loading/
[ ] (backend) `/api/detect` will receive the serialized model from the frontend file upload and load it into ART's `KerasClassifier` class (see https://www.tensorflow.org/guide/keras/serialization_and_saving#how_to_save_and_load_a_model). Pass the loaded model into `ActivationDefence` and return the list of results to the frontend; a rough endpoint sketch follows below.
[ ] (frontend) Using the list of results from the backend, set the poisoned boolean to true on each flagged `Image` object so the UI clearly indicates which examples are poisoned.
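A minimal sketch of the base64-to-array conversion and dataset construction mentioned above. The data-URL handling, the image size, and the `data`/`label` field names are assumptions about our payload, not a fixed API:

```python
import base64
import io

import numpy as np
from PIL import Image
from tensorflow import keras

def b64_to_array(b64_string, size=(32, 32)):
    """Decode one base64-encoded image into a float32 array in [0, 1]."""
    # Strip a data-URL prefix like "data:image/png;base64," if present.
    if "," in b64_string:
        b64_string = b64_string.split(",", 1)[1]
    raw = base64.b64decode(b64_string)
    img = Image.open(io.BytesIO(raw)).convert("RGB").resize(size)
    return np.asarray(img, dtype=np.float32) / 255.0

def build_dataset(images, num_classes=10):
    """Turn the frontend's list of image objects into (x, y) arrays."""
    x = np.stack([b64_to_array(img["data"]) for img in images])
    y = keras.utils.to_categorical([img["label"] for img in images], num_classes)
    return x, y
```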
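Putting the backend pieces together, here is a rough sketch of the `/api/detect` endpoint, assuming Flask, a multipart upload with a `model` file and a JSON-encoded `images` form field, and the `build_dataset` helper sketched above; all of those names are our assumptions, not a finished design:

```python
import json
import tempfile

from flask import Flask, request, jsonify
from tensorflow import keras
from art.estimators.classification import KerasClassifier
from art.defences.detector.poison import ActivationDefence

app = Flask(__name__)

@app.route("/api/detect", methods=["POST"])
def detect():
    # The model arrives as a multipart file upload; write it to a temp
    # file so Keras can deserialize it (see the TF saving guide above).
    upload = request.files["model"]          # assumed form field name
    with tempfile.NamedTemporaryFile(suffix=".h5") as tmp:
        upload.save(tmp.name)
        model = keras.models.load_model(tmp.name)

    # Image objects ride along as a JSON-encoded form field (assumed
    # name "images"); build_dataset is the helper sketched earlier.
    x, y = build_dataset(json.loads(request.form["images"]))

    classifier = KerasClassifier(model=model)
    defence = ActivationDefence(classifier, x, y)
    _, is_clean = defence.detect_poison(nb_clusters=2, nb_dims=10, reduce="PCA")

    # 1 = looks clean, 0 = suspected poison; the frontend flips the
    # poisoned boolean on each flagged Image object accordingly.
    return jsonify({"is_clean": [int(v) for v in is_clean]})
```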