
AidUI: Toward Automated Recognition of Dark Patterns in User Interfaces


Overview

This repository contains the replication package of our ICSE'23 paper:

S M Hasan Mansur, Sabiha Salma, Damilola Awofisayo, and Kevin Moran, “AidUI: Toward Automated Recognition of Dark Patterns in User Interfaces,” in Proceedings of the 45th IEEE/ACM International Conference on Software Engineering (ICSE 2023), 2023, to appear.

This replication package includes three main parts, which we discuss in detail in the following sections:

Part 1: Unified Taxonomy of UI Dark Patterns

There has been a wealth of work from the general HCI community constructing Dark Pattern taxonomies. Given the somewhat complementary, yet disparate nature of existing taxonomies of Dark Patterns, we aimed to create a unified taxonomy that merges similar categories and provides a larger landscape of patterns for mobile and web apps toward which we can design and evaluate our automated detection approach. Our unified taxonomy is primarily a fusion of the various categories and subcategories derived by Gray et al. [1], Mathur et al. [2], and Brignull et al. [3]. Our final unified taxonomy, illustrated in the following figure, spans 7 parent categories that organize a total of 27 classes describing different Dark Patterns.

We aimed to prioritize the detection strategy of AidUI toward patterns that carry distinct visual and textual cues, both of which manifest on a single screen. Thus, we identified a final set of 10 target Dark Patterns toward which we oriented AidUI's analysis. The targeted Dark Pattern categories are marked in the figure above. We provide descriptions and examples of each Dark Pattern in this document.

Part 2: Source code and setup instructions for AidUI

Based on the observations gained during the taxonomy study, we developed AidUI, a research prototype of our proposed automated approach to detect UI dark patterns.

The architecture of AidUI, depicted in the figure above, is designed around four main phases: (1) the Visual Cue Detection phase, which leverages a deep learning based object detection model to identify UI objects representing visual cues for DPs; (2) the UI & Text Content Detection phase, which extracts UI segments containing both text and non-text content; (3) the DP Analysis phase, which employs text pattern matching, as well as color and spatial analysis techniques, to analyze the extracted UI segments and identify a set of potential DPs; and (4) the DP Resolution phase, which uses the results of both the Visual Cue Detection and DP Analysis phases to predict the final set of underlying DPs in the given UI. It is important to note that AidUI operates purely on pixel data from UI screenshots, making it extensible to different software domains.
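
As a rough sketch of this flow, the following Python outline shows how the four phases fit together. All names and stub logic here are hypothetical placeholders, not AidUI's actual API; the real module layout is shown in the directory structure below.

    # Illustrative outline of AidUI's four-phase pipeline.
    # All names and stub logic are hypothetical placeholders, not the actual AidUI API.

    def detect_visual_cues(screenshot):
        """Phase 1: deep-learning object detection of DP visual cues (e.g., icons)."""
        return []  # stub: would return detected cue objects with bounding boxes

    def extract_ui_segments(screenshot):
        """Phase 2: segment the screen into text and non-text UI areas."""
        return []  # stub: would return UI segments and their extracted text

    def analyze_segments(segments):
        """Phase 3: lexical pattern matching plus color (brightness) and
        spatial (size/proximity) analysis over neighboring segments."""
        return []  # stub: would return candidate dark patterns

    def resolve_dark_patterns(cues, candidates):
        """Phase 4: combine evidence from Phases 1 and 3 into the final
        set of predicted dark patterns for the UI."""
        return candidates  # stub: would fuse both sources of evidence

    def detect_dark_patterns(screenshot):
        cues = detect_visual_cues(screenshot)
        segments = extract_ui_segments(screenshot)
        candidates = analyze_segments(segments)
        return resolve_dark_patterns(cues, candidates)

Note that the pipeline's only input is the screenshot itself, which is what makes the approach independent of any particular UI framework.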

We set up the directory structure of the project to closely follow the architecture of the tool presented above. The following subsections present the directory structure of the source code of AidUI as well as the instructions to set it up.

Source Code Directory structure

├── AidUI
│   ├── UIED                 --> module to extract UI area segments (text/non-text)
│   │
│   ├── object_detetion      --> DL model to detect visual cues (i.e., icons)
│   │   ├── object_detection_frcnn_mscoco_boilerplate
│   │
│   ├── text_analysis        --> module to detect lexical patterns
│   │   ├── pattern_matching
│   │
│   ├── visual_analysis      --> module to analyze brightness of neighboring UI segments
│   │   ├── histogram_analysis
│   │
│   ├── spatial_analysis     --> module to analyze relative size and proximity of neighboring UI segments
│   │   ├── size_analysis
│   │   ├── proximity_analysis
│   │
│   └── dp_resolver          --> module to identify potential underlying dark patterns in UIs
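
To make the visual analysis step more concrete, here is a minimal sketch, assuming Pillow and NumPy, of how the brightness of two neighboring UI segments might be compared. This illustrates the general technique, not AidUI's actual histogram_analysis code; the file name, segment boxes, and threshold are hypothetical.

    # Minimal brightness-comparison sketch (illustrative only; not the
    # actual histogram_analysis implementation).
    import numpy as np
    from PIL import Image

    def mean_brightness(image, box):
        """Mean grayscale intensity (0-255) of the segment cropped by
        box = (left, upper, right, lower) in pixels."""
        gray = image.crop(box).convert("L")
        return float(np.asarray(gray).mean())

    ui = Image.open("screenshot.png")      # hypothetical input screenshot
    accept_box = (100, 600, 300, 660)      # hypothetical neighboring segments
    decline_box = (320, 600, 520, 660)

    gap = abs(mean_brightness(ui, accept_box) - mean_brightness(ui, decline_box))

    # A large brightness gap between adjacent choices is one possible signal
    # that an option has been visually de-emphasized.
    if gap > 60:                           # hypothetical threshold
        print("Possible visual de-emphasis between neighboring segments")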

Setup AidUI Using Docker (Recommended)

NOTE: We have bundled AidUI in a Docker image so that the replication process can run smoothly on every major OS. The Docker image of AidUI is available here on Docker Hub, or here as a direct download. Additionally, you can find the Dockerfile here.

To set up and run AidUI, follow these steps:

  1. Install Docker

    To install Docker, please follow the instructions at this link.

  2. Download the AidUI Docker image by executing the following command:

    docker pull smhasanmansur/aidui-img
  3. Run the container by executing the following command:

    docker run -it smhasanmansur/aidui-img
  4. Use the following command to move to the root directory of AidUI:

    cd AidUI/
  5. Execute the following command to run AidUI:

    ./run_dp_detection.sh
  6. You should see the following prompt:

    turn on evaluation mode? answer with y/n
    • Type y and press ENTER
    • The process usually takes around 1.5 to 2 hours
  7. Output

    Once the process is complete, the output files can be found in the directory AidUI/output/
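
In summary, the entire Docker-based run condenses to the following commands, all taken verbatim from the steps above (answer y at the evaluation-mode prompt):

    docker pull smhasanmansur/aidui-img
    docker run -it smhasanmansur/aidui-img
    cd AidUI/
    ./run_dp_detection.sh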


Install and Run AidUI without Docker (Ubuntu Only)

NOTE: Our installation instructions currently target Ubuntu 20.04.2 LTS (although other recent versions of Ubuntu should also work)

To set up and run AidUI, follow these steps:

  1. Clone AidUI

    Clone this repository using the git clone command. If git is not already installed, please follow the installation instructions provided here.
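
    For example (the placeholder below stands in for this repository's actual URL):

    git clone <repository-url>
    cd AidUI/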

  2. Install Anaconda

    To install Anaconda, please run these commands in your terminal:

    wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh -O ~/miniconda.sh
    bash ~/miniconda.sh

    After this is done, restart your terminal.

  3. Setup the conda environments

    The Anaconda installation comes with a default conda environment, "base". We can check the available environments using the following command:

    conda info --envs

    For AidUI, two conda environments need to be set up: "dl_dp_obj_det_env" and "dp_uied3"

We provide specification files to build conda environments identical to ours.

The following commands, run from the root of the cloned repository, create the required environments:

conda env create -f env_specification_files/dl_environment.yml
conda env create -f env_specification_files/dp_environment.yml
  4. Activate each environment in turn and download the spaCy model in both of them:

    conda activate dl_dp_obj_det_env
    python -m spacy download en_core_web_trf
    conda deactivate
    conda activate dp_uied3
    python -m spacy download en_core_web_trf
  5. Download and set up the CNN Rico model

    • Download the CNN Rico model from here.
  6. Download and set up the Visual Cue Detection model

    • Download the pretrained Visual Cue Detection model from here.
  7. Create an OCR API key from here and replace the existing API key in UIED/detect_text/ocr.py
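
    For illustration only, the edit amounts to replacing the key string; the actual variable name in UIED/detect_text/ocr.py may differ:

    API_KEY = "your-ocr-api-key"  # hypothetical variable name; edit the real key in ocr.py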

  8. Run AidUI

    • Move to the root directory of AidUI and execute the following command:

    ./run_dp_detection.sh

References

  1. C. M. Gray, Y. Kou, B. Battles, J. Hoggatt, and A. L. Toombs. The dark (patterns) side of ux design. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, pages 1–14, 2018.
  2. A. Mathur, G. Acar, M. J. Friedman, E. Lucherini, J. Mayer, M. Chetty, and A. Narayanan. Dark patterns at scale: Findings from a crawl of 11k shopping websites. Proceedings of the ACM on Human-Computer Interaction, 3(CSCW):1–32, 2019.
  3. H. Brignull, M. Miquel, J. Rosenberg, and J. Offer. Dark patterns - user interfaces designed to trick people. 2010.