This repository contains the replication package of our ICSE'23 paper:
S M Hasan Mansur, Sabiha Salma, Damilola Awofisayo, and Kevin Moran, “AidUI: Toward Automated Recognition of Dark Patterns in User Interfaces,” in Proceedings of the 45th IEEE/ACM International Conference on Software Engineering (ICSE 2023), 2023, to appear
This replication package includes three main parts, which we discuss in detail in later sections:
There has been a wealth of work from the general HCI community constructing Dark Pattern taxonomies. Given the complementary yet disparate nature of these existing taxonomies, we aimed to create a unified taxonomy that merges similar categories and provides a larger landscape of patterns for mobile and web apps, toward which we can design and evaluate our automated detection approach. Our unified taxonomy is primarily a fusion of the various categories and subcategories derived by Gray et al. [1], Mathur et al. [2], and Brignull et al. [3]. Our final unified taxonomy, illustrated in the following figure, spans 7 parent categories that organize a total of 27 classes describing different Dark Patterns.
We aimed to prioritize AidUI's detection strategy toward patterns that carry distinct visual and textual cues, both of which manifest on a single screen. Thus, we identified a final set of 10 target Dark Patterns toward which we oriented AidUI's analysis. The targeted Dark Pattern categories are marked in the figure above. We provide descriptions and examples of each Dark Pattern in this document.
Based on the observations gained during the taxonomy study, we developed AidUI, the research prototype of our proposed automated approach to detect UI dark patterns.
The architecture of AidUI, depicted in the figure above, is designed around four main phases: (1) the Visual Cue Detection phase, which leverages a deep learning based object detection model to identify UI objects representing visual cues for DPs; (2) the UI & Text Content Detection phase, which extracts UI segments containing both text and non-text content; (3) the DP Analysis phase, which employs text pattern matching, as well as color and spatial analysis techniques, to analyze the extracted UI segments and identify a set of potential DPs; and (4) the DP Resolution phase, which uses the results of both the Visual Cue Detection and DP Analysis phases to predict a final set of underlying DPs in the given UI. It is important to note that AidUI operates purely on pixel data from UI screenshots, making it extensible to different software domains.
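To make the flow between these phases concrete, the following is a minimal Python sketch of how they compose. Every function here is a stub with an assumed, hypothetical name; it is not AidUI's actual API:

```python
# Minimal sketch of AidUI's four-phase flow. All names are hypothetical
# stand-ins for the real modules, and every body is a stub.

def detect_visual_cues(screenshot_path):
    """Phase 1: a DL object detector localizes icons/visual cues for DPs."""
    return []  # would return bounding boxes of detected visual cues

def extract_ui_segments(screenshot_path):
    """Phase 2: segment the screenshot into text and non-text UI areas."""
    return []  # would return UI segments with text content and bounds

def analyze_segments(segments):
    """Phase 3: lexical pattern matching plus color and spatial analysis."""
    return []  # would return candidate dark patterns per segment

def resolve_dark_patterns(visual_cues, candidate_dps):
    """Phase 4: fuse phase 1 and phase 3 evidence into final predictions."""
    return []  # would return the final set of detected DPs

def detect_dark_patterns(screenshot_path):
    visual_cues = detect_visual_cues(screenshot_path)
    segments = extract_ui_segments(screenshot_path)
    candidates = analyze_segments(segments)
    return resolve_dark_patterns(visual_cues, candidates)

if __name__ == "__main__":
    # The stubs ignore the path, so this runs without a real screenshot.
    print(detect_dark_patterns("screenshot.png"))
```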
We set up the directory structure of the project to closely follow the architecture of the tool presented above. The following subsections present the directory structure of the source code of AidUI as well as the instructions to set it up.
├── AidUI
│   ├── UIED --> module to extract UI area segments (text/non-text)
│   │
│   ├── object_detection --> DL model to detect visual cues (i.e., icons)
│   │   ├── object_detection_frcnn_mscoco_boilerplate
│   │
│   ├── text_analysis --> module to detect lexical patterns (see the sketch below)
│   │   ├── pattern_matching
│   │
│   ├── visual_analysis --> module to analyze brightness of neighboring UI segments
│   │   ├── histogram_analysis
│   │
│   ├── spatial_analysis --> module to analyze relative size and proximity of neighboring UI segments
│   │   ├── size_analysis
│   │   ├── proximity_analysis
│   │
│   ├── dp_resolver --> module to identify potential underlying dark patterns in UIs
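To give a flavor of what the text_analysis module's lexical pattern matching looks like, here is a minimal, self-contained sketch; the regular expressions and category names are illustrative assumptions, not AidUI's actual pattern set:

```python
import re

# Illustrative lexical cues for two Dark Pattern categories
# (assumed examples, not AidUI's actual patterns).
LEXICAL_PATTERNS = {
    "scarcity": re.compile(r"\bonly\s+\d+\s+left\b", re.IGNORECASE),
    "urgency": re.compile(r"\b(hurry|offer ends|limited time)\b", re.IGNORECASE),
}

def match_dark_pattern_text(segment_text):
    """Return the categories whose lexical patterns match a UI segment's text."""
    return [name for name, pattern in LEXICAL_PATTERNS.items()
            if pattern.search(segment_text)]

print(match_dark_pattern_text("Hurry! Only 3 left in stock."))
# -> ['scarcity', 'urgency']
```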
NOTE: We have bundled AidUI in a Docker image so that the replication process runs smoothly on every major OS. The Docker image of AidUI is available here on Docker Hub, or here as a direct download. Additionally, you can find the Dockerfile here.
To set up and run AidUI with Docker, the following steps need to be completed.
To install Docker, please follow the instructions at this link.
Pull the Docker image:

docker pull smhasanmansur/aidui-img
Run the image:

docker run -it smhasanmansur/aidui-img
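The -it flags attach an interactive terminal, so this command drops you into a shell inside the running container.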
Inside the container, move to the AidUI directory and start the detection script:

cd AidUI/
./run_dp_detection.sh
The script will ask whether to enable evaluation mode:

turn on evaluation mode? answer with y/n
Once the process is complete, we can expect the following output files in the directory AidUI/output/
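Note that these files are produced inside the container. If you need them on the host, they can be copied out with docker cp, e.g., docker cp <container-id>:<path-to>/AidUI/output ./output (the container-side path is a placeholder; adjust it to the actual location).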
NOTE: Our provided instructions for installing AidUI are currently only applicable to Ubuntu 20.04.2 LTS (although other recent versions of Ubuntu should also work).
To set up and run AidUI locally, the following steps need to be completed.
Clone this repository using the git clone command. If git is not already installed, please follow the installation instructions provided here.
To install Anaconda (via the Miniconda installer), run these commands in your terminal:
wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh -O ~/miniconda.sh
bash ~/miniconda.sh
After this is done, restart your terminal.
The Anaconda installation comes with a default conda environment, "base". We can check the available environments using the following command:
conda info --envs
For AidUI, two conda environments need to be set up: _"dl_dp_obj_det_env"_ and _"dp_uied3"_.
We provide specification files to build conda environments identical to ours:
The following commands can be used to create the required environments from the root of the cloned repository:
conda env create -f env_specification_files/dl_environment.yml
conda env create -f env_specification_files/dp_environment.yml
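Running conda info --envs again should now list both new environments alongside base.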
The environments can then be activated as needed with the following commands:

conda activate dl_dp_obj_det_env
conda activate dp_uied3
Download the required spaCy language model:

python -m spacy download en_core_web_trf
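Here, en_core_web_trf is spaCy's transformer-based English language pipeline; it is a sizable download, so this step may take a while.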
The pretrained object detection model checkpoints are expected in the following directories:

/root/.cache/torch/hub/checkpoints/
AidUI/object_detection/object_detection_frcnn_mscoco_boilerplate/
Create an OCR API key from here and replace the existing API key in UIED/detect_text/ocr.py.
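The change amounts to swapping in your own key string. As a sketch only (the variable name below is hypothetical; locate the actual key string in UIED/detect_text/ocr.py):

```python
# UIED/detect_text/ocr.py (sketch; the real variable name may differ).
# Replace the bundled key string with the OCR API key you created.
API_KEY = "YOUR_OCR_API_KEY_HERE"
```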
Activate this environment using the following command:
conda activate dl_dp_obj_det_env
Execute the following command to run AidUI. When run for the first time, it will create the directories needed for the project:

python main.py

Place the UI screenshots to be analyzed in the input directory.
Then execute the following command again to run AidUI on the provided input:

python main.py